WorldWideScience

Sample records for samples showed faster

  1. A faster sample preparation method for determination of polonium-210 in fish

    International Nuclear Information System (INIS)

    Sadi, B.B.; Jing Chen; Kochermin, Vera; Godwin Tung; Sorina Chiorean

    2016-01-01

    In order to facilitate Health Canada’s study on background radiation levels in country foods, an in-house radio-analytical method has been developed for the determination of polonium-210 (210Po) in fish samples. The method was validated by measurement of 210Po in a certified reference material. It was also evaluated by comparing 210Po concentrations in a number of fish samples with those obtained by another method. The in-house method offers faster sample dissolution, using an automated digestion system in place of the currently used wet-ashing on a hot plate. It also utilizes pre-packed Sr-resin® cartridges for rapid and reproducible separation of 210Po, replacing time-consuming, manually packed Sr-resin® columns. (author)

  2. Writing faster Python

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Did you know that Python preallocates integers from -5 to 256? Reusing them 1000 times, instead of allocating memory for a bigger integer, can save you a couple of milliseconds of your code’s execution time. If you want to learn more about this kind of optimization then, … well, this presentation is probably not for you :) Instead of going into such small details, I will talk about more "sane" ideas for writing faster code. After a very brief overview of how to optimize Python code (rule 1: don’t do this; rule 2: don’t do this yet; rule 3: ok, but what if I really want to do this?), I will show simple and fast ways of measuring the execution time and, finally, discuss examples of how some code structures could be improved. You will see: - What is the fastest way of removing duplicates from a list - How much faster your code is when you reuse the built-in functions instead of trying to reinvent the wheel - What is faster than the good ol’ for loop - If the lookup is faster in a list or a set (and w...
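    The comparisons the talk promises can be sketched in a few lines. This snippet is an illustration in the spirit of the abstract, not material from the talk itself:

```python
import timeit

data = list(range(10_000))
data_set = set(data)

# Lookup: a list is scanned element by element, a set is a hash lookup.
t_list = timeit.timeit(lambda: 9_999 in data, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in data_set, number=1_000)
assert t_set < t_list  # the set wins by orders of magnitude

# Removing duplicates while preserving order (relies on Python 3.7+ dict ordering).
items = [3, 1, 3, 2, 1]
deduped = list(dict.fromkeys(items))
assert deduped == [3, 1, 2]

# Reusing a built-in instead of reinventing the wheel: sum() vs an explicit loop.
t_loop = timeit.timeit("s = 0\nfor x in data: s += x",
                       globals={"data": data}, number=200)
t_sum = timeit.timeit("sum(data)", globals={"data": data}, number=200)
assert t_sum < t_loop
```

    The exact millisecond figures vary by machine; the orderings asserted above do not.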

  3. Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging.

    Science.gov (United States)

    Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves

    2014-01-01

    Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general.

  4. The impact of accelerating faster than exponential population growth on genetic variation.

    Science.gov (United States)

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.

  5. New analytical approaches for faster or greener phytochemical analyses

    NARCIS (Netherlands)

    Shen, Y.

    2015-01-01

    Summary

    Chapter 1 provides a short introduction into the constraints of phytochemical analysis. In order to make them faster, less laborious and greener, there is a clear scope for miniaturized and simplified sample preparation, solvent-free extractions

  6. Faster Increases in Human Life Expectancy Could Lead to Slower Population Aging

    Science.gov (United States)

    2015-01-01

    Counterintuitively, faster increases in human life expectancy could lead to slower population aging. The conventional view that faster increases in human life expectancy would lead to faster population aging is based on the assumption that people become old at a fixed chronological age. A preferable alternative is to base measures of aging on people’s time left to death, because this is more closely related to the characteristics that are associated with old age. Using this alternative interpretation, we show that faster increases in life expectancy would lead to slower population aging. Among other things, this finding affects the assessment of the speed at which countries will age. PMID:25876033

  7. Payload specialist Reinhard Furrer shows evidence of previous blood sampling

    Science.gov (United States)

    1985-01-01

    Payload specialist Reinhard Furrer shows evidence of previous blood sampling while Wubbo J. Ockels, Dutch payload specialist (only partially visible), extends his right arm after a sample has been taken. Both men show bruises on their arms.

  8. Compressing bitmap indexes for faster search operations

    International Nuclear Information System (INIS)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-01-01

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.

  9. Compressing bitmap indexes for faster search operations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
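    The query-processing model both records describe (answering predicates with bitwise logical operations over per-value bitmaps) can be illustrated with a toy, uncompressed bitmap index. Plain Python integers stand in for bit vectors; the WAH codec itself is not reproduced here, and the column names are invented for the example:

```python
# Toy bitmap index: one bitmap (a Python int) per distinct column value.
# Queries then reduce to bitwise AND/OR/NOT, which is exactly the operation
# WAH is designed to keep fast on compressed data.

def build_index(column):
    """Map each distinct value to a bitmap with bit i set iff row i holds it."""
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

city = ["NY", "LA", "NY", "SF", "LA", "NY"]
tier = ["gold", "gold", "silver", "gold", "silver", "gold"]
city_idx = build_index(city)
tier_idx = build_index(tier)

# WHERE city IN ('NY', 'SF') AND tier = 'gold'
hits = (city_idx["NY"] | city_idx["SF"]) & tier_idx["gold"]
rows = [i for i in range(len(city)) if hits >> i & 1]
print(rows)                   # → [0, 3, 5]
print(bin(hits).count("1"))   # matching row count → 3
```

    Counting set bits answers aggregate queries without ever materializing row lists, which is why logical-operation speed dominates query time.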

  10. Dedicated workspaces: Faster resumption times and reduced cognitive load in sequential multitasking

    DEFF Research Database (Denmark)

    Jeuris, Steven; Bardram, Jakob Eyvind

    2016-01-01

    Studies show that virtual desktops have become a widespread approach to window management within desktop environments. However, despite their success, there is no experimental evidence of their effect on multitasking. In this paper, we present an experimental study incorporating 16 participants...... to perform the same tasks. Results show that adopting virtual desktops as dedicated workspaces allows for faster task resumption (10 s faster on average) and reduced cognitive load during sequential multitasking. Within our experiment the majority of users already benefited from using dedicated workspaces...

  11. Hexagonal undersampling for faster MRI near metallic implants.

    Science.gov (United States)

    Sveinsson, Bragi; Worters, Pauline W; Gold, Garry E; Hargreaves, Brian A

    2015-02-01

    Slice encoding for metal artifact correction acquires a three-dimensional image of each excited slice, with view-angle tilting, to reduce slice- and readout-direction artifacts, respectively, but requires additional imaging time. The purpose of this study was to provide a technique for faster imaging around metallic implants by undersampling k-space. Assuming that areas of slice distortion are localized, hexagonal sampling can reduce imaging time by 50% compared with conventional scans. This work demonstrates this technique by comparisons of fully sampled images with undersampled images, either from simulations from fully acquired data or from data actually undersampled during acquisition, in patients and phantoms. Hexagonal sampling is also shown to be compatible with parallel imaging and partial Fourier acquisitions. Image quality was evaluated using a structural similarity (SSIM) index. Images acquired with hexagonal undersampling had no visible difference in artifact suppression from fully sampled images. The SSIM index indicated high similarity to fully sampled images in all cases. The study demonstrates the ability to reduce scan time by undersampling without compromising image quality. © 2014 Wiley Periodicals, Inc.
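    A 50% hexagonal undersampling pattern can be sketched as a mask over the two phase-encode axes. The construction below (keep the points whose index sum is even, which yields a hexagonal nearest-neighbour lattice on a rectangular grid) is an illustrative assumption, not the authors' exact trajectory:

```python
import numpy as np

# Checkerboard-lattice mask over the ky-kz phase-encode plane: keeping only
# points with ky + kz even retains exactly half of k-space, and the retained
# points form a hexagonal nearest-neighbour lattice.
ny, nz = 8, 6
ky, kz = np.meshgrid(np.arange(ny), np.arange(nz), indexing="ij")
mask = (ky + kz) % 2 == 0

print(mask.mean())  # fraction of k-space sampled → 0.5
```

    Such a mask halves the number of phase encodes, and hence scan time, independently of any additional parallel-imaging or partial-Fourier acceleration.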

  12. FASTER Test Reactor Preconceptual Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Grandy, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Belch, H. [Argonne National Lab. (ANL), Argonne, IL (United States); Brunett, A. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Heidet, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Hill, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Hoffman, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Jin, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Mohamed, W. [Argonne National Lab. (ANL), Argonne, IL (United States); Moisseytsev, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Passerini, S. [Argonne National Lab. (ANL), Argonne, IL (United States); Sienicki, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Sumner, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Vilim, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Hayes, S. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-03-31

    The FASTER test reactor plant is a sodium-cooled fast spectrum test reactor that provides high levels of fast and thermal neutron flux for scientific research and development. The 120 MWe FASTER reactor plant has a superheated steam power conversion system which provides electrical power to a local grid allowing for recovery of operating costs for the reactor plant.

  13. Increasing the Capital Income Tax Leads to Faster Growth

    NARCIS (Netherlands)

    Uhlig, H.F.H.V.S.; Yanagawa, N.

    1994-01-01

    This paper shows that under rather mild conditions, higher capital income taxes lead to faster growth in an overlapping generations economy with endogenous growth. Government expenditures are financed with labor income taxes as well as capital income taxes. Since capital income accrues to the old,

  14. Simultaneous development of laparoscopy and robotics provides acceptable perioperative outcomes and shows robotics to have a faster learning curve and to be overall faster in rectal cancer surgery: analysis of novice MIS surgeon learning curves.

    Science.gov (United States)

    Melich, George; Hong, Young Ki; Kim, Jieun; Hur, Hyuk; Baik, Seung Hyuk; Kim, Nam Kyu; Sender Liberman, A; Min, Byung Soh

    2015-03-01

    Laparoscopy offers some evidence of benefit compared to open rectal surgery. Robotic rectal surgery is evolving into an accepted approach. The objective was to analyze and compare the laparoscopic and robotic rectal surgery learning curves, with respect to operative times and perioperative outcomes, for a novice minimally invasive colorectal surgeon. One hundred and six laparoscopic and 92 robotic LAR rectal surgery cases were analyzed. All surgeries were performed by a surgeon who was primarily trained in open rectal surgery. Patient characteristics and perioperative outcomes were analyzed. Operative time and CUSUM plots were used for evaluating the learning curve for laparoscopic versus robotic LAR. Laparoscopic versus robotic LAR outcomes feature initial group operative times of 308 (291-325) min versus 397 (373-420) min and last group times of 220 (212-229) min versus 204 (196-211) min, reversed in favor of robotics; major complications of 4.7 versus 6.5% (NS), resection margin involvement of 2.8 versus 4.4% (NS), conversion rate of 3.8 versus 1.1% (NS), lymph node harvest of 16.3 versus 17.2 (NS), and estimated blood loss of 231 versus 201 cc (NS). Due to faster learning curves for the extracorporeal phase and the total mesorectal excision phase, robotic surgery was observed to be faster than laparoscopic surgery after the initial 41 cases. CUSUM plots demonstrate acceptable perioperative surgical outcomes from the beginning of the study. Initial robotic operative times improved rapidly with practice and eventually became faster than those for laparoscopy. Developing both laparoscopic and robotic skills simultaneously can provide acceptable perioperative outcomes in rectal surgery. It might be suggested that in the current milieu of clashing interests between evolving technology and economic constraints, there might be advantages in embracing both approaches.

  15. Faster and Energy-Efficient Signed Multipliers

    Directory of Open Access Journals (Sweden)

    B. Ramkumar

    2013-01-01

    We demonstrate faster and energy-efficient column compression multiplication with very small area overheads by using a combination of two techniques: partition of the partial products into two parts for independent parallel column compression and acceleration of the final addition using new hybrid adder structures proposed here. Based on the proposed techniques, 8-b, 16-b, 32-b, and 64-b Wallace (W), Dadda (D), and HPM (H) reduction tree based Baugh-Wooley multipliers are developed and compared with the regular W, D, H based Baugh-Wooley multipliers. The performances of the proposed multipliers are analyzed by evaluating the delay, area, and power, with 65 nm process technologies on interconnect and layout using industry standard design and layout tools. The result analysis shows that the 64-bit proposed multipliers are as much as 29%, 27%, and 21% faster than the regular W, D, H based Baugh-Wooley multipliers, respectively, with a maximum of only 2.4% power overhead. Also, the power-delay products (energy consumption of the proposed 16-b, 32-b, and 64-b multipliers are significantly lower than those of the regular Baugh-Wooley multiplier. Applicability of the proposed techniques to the Booth-Encoded multipliers is also discussed.

  16. Preparing for faster filling

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Following the programmed technical stop last week, operators focussed on preparing the machine for faster filling, which includes multibunch injection and a faster pre-cycle phase.   The LHC1 screen shot during the first multibunch injection operation. The LHC operational schedule incorporates a technical stop for preventive maintenance roughly every six weeks of stable operation, during which several interventions on the various machines are carried out. Last week these included the replacement of a faulty magnet in the SPS pre-accelerator, which required the subsequent re-setting of the system of particle extraction and transfer to the LHC. At the end of last week, all the machines were handed back for operation and work could start on accommodating all the changes made into the complex systems in order for normal operation to be resumed. These ‘recovery’ operations continued through the weekend and into this week. At the beginning of this week, operators succeeded in pro...

  17. FASTER test reactor preconceptual design report summary

    Energy Technology Data Exchange (ETDEWEB)

    Grandy, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Belch, H. [Argonne National Lab. (ANL), Argonne, IL (United States); Brunett, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Heidet, F. [Argonne National Lab. (ANL), Argonne, IL (United States); Hill, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Hoffman, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Jin, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Mohamed, W. [Argonne National Lab. (ANL), Argonne, IL (United States); Moisseytsev, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Passerini, S. [Argonne National Lab. (ANL), Argonne, IL (United States); Sienicki, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Sumner, T. [Argonne National Lab. (ANL), Argonne, IL (United States); Vilim, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Hayes, Steven [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-29

    The FASTER reactor plant is a sodium-cooled fast spectrum test reactor that provides high levels of fast and thermal neutron flux for scientific research and development. The 120 MWe FASTER reactor plant has a superheated steam power conversion system which provides electrical power to a local grid allowing for recovery of operating costs for the reactor plant.

  18. Process Fragment Libraries for Easier and Faster Development of Process-based Applications

    Directory of Open Access Journals (Sweden)

    David Schumm

    2011-01-01

    The term “process fragment” has recently been gaining momentum in business process management research. We understand a process fragment as a connected and reusable process structure, which has relaxed completeness and consistency criteria compared to executable processes. We claim that process fragments allow for an easier and faster development of process-based applications. As evidence for this claim we present a process fragment concept and show a sample collection of concrete, real-world process fragments. We present advanced application scenarios for using such fragments in the development of process-based applications. Process fragments are typically managed in a repository, forming a process fragment library. On top of a process fragment library from previous work, we discuss the potential impact of using process fragment libraries in cross-enterprise collaboration and application integration.

  19. Multiple Object Tracking Using the Shortest Path Faster Association Algorithm

    Directory of Open Access Journals (Sweden)

    Zhenghao Xi

    2014-01-01

    To solve the problem of persistent multiple object tracking in cluttered environments, this paper presents a novel tracking association approach based on the shortest path faster algorithm. First, multiple object tracking is formulated as an integer programming problem on a flow network. Then we relax the integer program to a standard linear programming problem, so that the global optimum can be obtained quickly using the shortest path faster algorithm. The proposed method avoids the difficulties of integer programming, and it has a lower worst-case complexity than competing methods but better robustness and tracking accuracy in complex environments. Simulation results show that the proposed algorithm takes less time than other state-of-the-art methods and can operate in real time.
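    The shortest path faster algorithm (SPFA) named in the abstract is the queue-based refinement of Bellman-Ford: only vertices whose distance just improved are re-examined. A standard implementation on a toy graph follows; the paper's flow-network formulation is not reproduced, only the underlying shortest-path routine:

```python
from collections import deque

def spfa(n, edges, source):
    """Shortest Path Faster Algorithm (queue-based Bellman-Ford).
    Nodes are 0..n-1, edges are (u, v, weight) triples; negative weights
    are allowed, assuming no negative cycles."""
    INF = float("inf")
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = [INF] * n
    dist[source] = 0
    in_queue = [False] * n
    q = deque([source])
    in_queue[source] = True
    while q:
        u = q.popleft()
        in_queue[u] = False
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:      # relax edge u -> v
                dist[v] = dist[u] + w
                if not in_queue[v]:        # re-enqueue only improved vertices
                    q.append(v)
                    in_queue[v] = True
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 2), (2, 3, 5)]
print(spfa(4, edges, 0))  # → [0, -1, 1, 1]
```

    Tolerance of negative edge weights is what makes SPFA usable on the cost networks that arise from LP relaxations of tracking association, where plain Dijkstra would not apply.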

  20. Withholding response to self-face is faster than to other-face.

    Science.gov (United States)

    Zhu, Min; Hu, Yinying; Tang, Xiaochen; Luo, Junlong; Gao, Xiangping

    2015-01-01

    The self-face advantage refers to the finding that adults respond faster to their own face than to another person's face. A stop-signal task was used to explore how the self-face advantage interacts with response inhibition. The results showed that reaction times to the self-face were faster than those to the other-face in the stop-response trials but not in the go task. The novel finding was that the self-face had a shorter stop-signal reaction time than the other-face in the successful inhibition trials. These results indicate that the processing of the self-face may be characterized by a strong response tendency and a correspondingly strong inhibitory control.

  1. Breaking the Myth That Relay Swimming Is Faster Than Individual Swimming.

    Science.gov (United States)

    Skorski, Sabrina; Etxebarria, Naroa; Thompson, Kevin G

    2016-04-01

    To investigate whether swimming performance is better in a relay race than in the corresponding individual race. The authors analyzed 166 elite male swimmers from 15 nations in the same competitions (downloaded from www.swimrankings.net). Of 778 observed races, 144 were Olympic Games performances (2000, 2004, 2012), with the remaining 634 performed in national or international competitions. The races were 100-m (n = 436) and 200-m (n = 342) freestyle events. Relay performance times for the 2nd-4th swimmers were adjusted (+0.73 s) to allow for the "flying start." Without any adjustment, mean relay performances were significantly faster than individual performances for the first 50 m and overall time in the 100-m events. Furthermore, the first 100 m of the 200-m relay was significantly faster (P < .001). During relays, swimmers competing in 1st position did not show any difference compared with their corresponding individual performance (P > .16). However, swimmers competing in 2nd-4th relay-team positions demonstrated significantly faster times than in their individual events. Once times for the 2nd-4th team positions were adjusted for the flying start, no differences were detected between relay and individual race performance for any event or split time (P > .17). Highly trained swimmers do not swim (or turn) faster in relay events than in their individual races. Relay exchange times account for the difference observed in individual vs relay performance.

  2. Faster magnet sorting with a threshold acceptance algorithm

    International Nuclear Information System (INIS)

    Lidia, S.; Carr, R.

    1995-01-01

    We introduce here a new technique for sorting magnets to minimize the field errors in permanent magnet insertion devices. Simulated annealing has been used in this role, but we find the technique of threshold acceptance produces results of equal quality in less computer time. Threshold accepting would be of special value in designing very long insertion devices, such as long free electron lasers (FELs). Our application of threshold acceptance to magnet sorting showed that it converged to equivalently low values of the cost function, but that it converged significantly faster. We present typical cases showing time to convergence for various error tolerances, magnet numbers, and temperature schedules

  3. Faster magnet sorting with a threshold acceptance algorithm

    International Nuclear Information System (INIS)

    Lidia, S.

    1994-08-01

    The authors introduce here a new technique for sorting magnets to minimize the field errors in permanent magnet insertion devices. Simulated annealing has been used in this role, but they find the technique of threshold acceptance produces results of equal quality in less computer time. Threshold accepting would be of special value in designing very long insertion devices, such as long FELs. Their application of threshold acceptance to magnet sorting showed that it converged to equivalently low values of the cost function, but that it converged significantly faster. They present typical cases showing time to convergence for various error tolerances, magnet numbers, and temperature schedules
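    Threshold accepting differs from simulated annealing only in its acceptance rule: a candidate move is taken whenever its cost increase falls below a decreasing threshold, with no Metropolis exponential to evaluate. A minimal sketch follows; the stand-in magnet-sorting cost function and move set are illustrative assumptions, not the authors' actual field-error model:

```python
import random

def threshold_accepting(cost, neighbor, x0, thresholds, iters_per_threshold=200, seed=0):
    """Threshold accepting: accept any move whose cost increase is below the
    current threshold. Thresholds decrease over time, like an annealing
    temperature schedule, but no exp() is computed per step."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    for T in thresholds:
        for _ in range(iters_per_threshold):
            y = neighbor(x, rng)
            cy = cost(y)
            if cy - c < T:          # small uphill moves pass while T > 0
                x, c = y, cy
                if c < best_c:
                    best, best_c = x, c
    return best, best_c

# Stand-in sorting problem: order "magnets" so signed errors cancel early;
# cost = sum of squared running sums of the errors along the device.
errors = [3, -1, 4, -4, 1, -3, 2, -2]

def cost(order):
    s, total = 0, 0
    for i in order:
        s += errors[i]
        total += s * s
    return total

def neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    y = list(order)
    y[i], y[j] = y[j], y[i]    # swap two magnets
    return y

x0 = list(range(len(errors)))
best, best_c = threshold_accepting(cost, neighbor, x0, thresholds=[8, 4, 2, 1, 0])
assert best_c <= cost(x0)
print(best_c)
```

    Skipping the per-step exponential and random acceptance draw is the source of the speedup the two records report over simulated annealing.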

  4. Bribes for Faster Delivery

    OpenAIRE

    Sanyal, Amal

    2000-01-01

    The paper models the practice of charging bribes for faster delivery of essential services in third-world countries. It then examines the possibility of curbing corruption by supervision and, secondly, by introducing competition among delivery agents. It is argued that a supervisory solution eludes the problem because no hard evidence of the reduction of corruption can be established for this type of offense. It is also shown that using more than one supplier cannot eliminate the practice, a...

  5. On sampling social networking services

    OpenAIRE

    Wang, Baiyang

    2012-01-01

    This article aims at summarizing the existing methods for sampling social networking services and proposing a faster confidence interval for related sampling methods. It also includes comparisons of common network sampling techniques.

  6. The faster-X effect: integrating theory and data.

    Science.gov (United States)

    Meisel, Richard P; Connallon, Tim

    2013-09-01

    Population genetics theory predicts that X (or Z) chromosomes could play disproportionate roles in speciation and evolutionary divergence, and recent genome-wide analyses have identified situations in which X or Z-linked divergence exceeds that on the autosomes (the so-called 'faster-X effect'). Here, we summarize the current state of both the theory and data surrounding the study of faster-X evolution. Our survey indicates that the faster-X effect is pervasive across a taxonomically diverse array of evolutionary lineages. These patterns could be informative of the dominance or recessivity of beneficial mutations and the nature of genetic variation acted upon by natural selection. We also identify several aspects of disagreement between these empirical results and the population genetic models used to interpret them. However, there are clearly delineated aspects of the problem for which additional modeling and collection of genomic data will address these discrepancies and provide novel insights into the population genetics of adaptation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Functional effects of treadmill-based gait training at faster speeds in stroke survivors: a prospective, single-group study.

    Science.gov (United States)

    Mohammadi, Roghayeh; Ershad, Navid; Rezayinejad, Marziyeh; Fatemi, Elham; Phadke, Chetan P

    2017-09-01

    To examine the functional effects of walking retraining at faster than self-selected speed (SSS). Ten individuals with chronic stroke participated in 4 weeks of treadmill training at walking speeds 40% faster than SSS, three times per week, 30 min/session. Outcome measures assessed before, after, and 2 months after the end of the intervention were the Timed Up and Go, the 6-Minute Walk, the 10-Meter Walk test, the Modified Ashworth Scale, SSS, and fastest comfortable speed. After 4 weeks of training, all outcome measures showed clinically meaningful and statistically significant improvements, which were maintained after training. The results showed that a strategy of training at a speed 40% faster than SSS can improve functional activity in individuals with chronic stroke, with effects lasting up to 2 months after the intervention.

  8. Object Detection Based on Fast/Faster RCNN Employing Fully Convolutional Architectures

    Directory of Open Access Journals (Sweden)

    Yun Ren

    2018-01-01

    Modern object detectors, like traditional ones, comprise two major parts: a feature extractor and a feature classifier. Deeper and wider convolutional architectures are now adopted as the feature extractor, yet many notable object detection systems such as Fast/Faster RCNN use only simple fully connected layers as the feature classifier. In this paper, we argue that it is beneficial for detection performance to carefully design deep convolutional networks (ConvNets) of various depths for feature classification, especially using fully convolutional architectures. In addition, this paper demonstrates how to employ fully convolutional architectures in Fast/Faster RCNN. Experimental results show that a classifier based on convolutional layers is more effective for object detection than one based on fully connected layers, and that better detection performance can be achieved by employing deeper ConvNets as the feature classifier.

  9. Robustness of a bisimulation-type faster-than preorder

    Directory of Open Access Journals (Sweden)

    Katrin Iltgen

    2009-11-01

    TACS is an extension of CCS in which upper time bounds for delays can be specified. Luettgen and Vogler defined three variants of bisimulation-type faster-than relations and showed that all three lead to the same preorder, demonstrating the robustness of their approach. In the present paper, the operational semantics of TACS is extended; it is shown that two of the variants still give the same preorder as before, underlining robustness. An explanation is given why this result fails for the third variant. It is also shown that another variant, which mixes the old and new operational semantics, can lead to smaller relations that prove the same preorder.

  10. Faster than Nyquist signaling algorithms to silicon

    CERN Document Server

    Dasalukunte, Deepak; Rusek, Fredrik; Anderson, John B

    2014-01-01

    This book addresses the challenges and design trade-offs arising during the hardware design of Faster-than-Nyquist (FTN) signaling transceivers. The authors describe how to design for coexistence between the FTN system described and Orthogonal frequency-division multiplexing (OFDM) systems, enabling readers to design FTN specific processing blocks as add-ons to the conventional transceiver chain.   • Provides a comprehensive introduction to Faster-than-Nyquist (FTN) signaling transceivers, covering both theory and hardware implementation; • Enables readers to design systems that achieve bandwidth efficiency by making better use of the available spectrum resources; • Describes design techniques to achieve 2x improvement in bandwidth usage with similar performance as that of an OFDM system.  
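The core FTN trade-off described in the book can be illustrated numerically: transmitting orthogonal (sinc) pulses closer together than the Nyquist interval buys bandwidth efficiency at the price of inter-symbol interference, which the transceiver must then equalize. A minimal sketch with illustrative parameters (not taken from the book):

```python
import numpy as np

def isi_at_sampling_instants(tau, n_symbols=9):
    """Total |interference| from neighbouring sinc pulses at the centre
    symbol's sampling instant, for a symbol spacing of tau * T
    (tau = 1 is Nyquist signalling; tau < 1 is faster-than-Nyquist)."""
    center = n_symbols // 2
    # neighbour pulse centres, in units of the Nyquist interval T
    offsets = (np.arange(n_symbols) - center) * tau
    contributions = np.sinc(offsets)       # np.sinc(x) = sin(pi x) / (pi x)
    interference = np.delete(contributions, center)
    return float(np.abs(interference).sum())

print(isi_at_sampling_instants(1.0))  # ~0: orthogonal pulses, no ISI
print(isi_at_sampling_instants(0.8))  # > 0: FTN packing introduces ISI
```

At tau = 1 every neighbour's sinc pulse crosses zero at the sampling instant; at tau = 0.8 (a 25% rate increase) the crossings no longer align, and the resulting ISI is what the FTN-specific processing blocks must remove.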

  11. Faster-X evolution: Theory and evidence from Drosophila.

    Science.gov (United States)

    Charlesworth, Brian; Campos, José L; Jackson, Benjamin C

    2018-02-12

    A faster rate of adaptive evolution of X-linked genes compared with autosomal genes can be caused by the fixation of recessive or partially recessive advantageous mutations, due to the full expression of X-linked mutations in hemizygous males. Other processes, including recombination rate and mutation rate differences between X chromosomes and autosomes, may also cause faster evolution of X-linked genes. We review population genetics theory concerning the expected relative values of variability and rates of evolution of X-linked and autosomal DNA sequences. The theoretical predictions are compared with data from population genomic studies of several species of Drosophila. We conclude that there is evidence for adaptive faster-X evolution of several classes of functionally significant nucleotides. We also find evidence for potential differences in mutation rates between X-linked and autosomal genes, due to differences in mutational bias towards GC to AT mutations. Many aspects of the data are consistent with the male hemizygosity model, although not all possible confounding factors can be excluded. © 2018 John Wiley & Sons Ltd.

  12. Faster than light motion does not imply time travel

    International Nuclear Information System (INIS)

    Andréka, Hajnal; Madarász, Judit X; Németi, István; Székely, Gergely; Stannett, Mike

    2014-01-01

    Seeing the many examples in the literature of causality violations based on faster than light (FTL) signals one naturally thinks that FTL motion leads inevitably to the possibility of time travel. We show that this logical inference is invalid by demonstrating a model, based on (3+1)-dimensional Minkowski spacetime, in which FTL motion is permitted (in every direction without any limitation on speed) yet which does not admit time travel. Moreover, the Principle of Relativity is true in this model in the sense that all observers are equivalent. In short, FTL motion does not imply time travel after all. (paper)
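The causality worry the authors address is easy to reproduce with a standard Lorentz transformation: a spacelike (FTL) signal can be received before it is emitted in a suitably boosted frame. The numbers below are illustrative; the paper's point is that such frame-dependent reorderings need not add up to time travel.

```python
import math

def lorentz_t(t, x, v, c=1.0):
    """Time coordinate of the event (t, x) in a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2)

# Illustrative numbers (c = 1): a signal emitted at the origin at t = 0
# and received at x = 2 at t = 1 travels at twice the speed of light.
t_emit = lorentz_t(0.0, 0.0, v=0.8)
t_recv = lorentz_t(1.0, 2.0, v=0.8)
print(t_emit, t_recv)  # in the boosted frame, reception precedes emission
```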

  13. Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

    NARCIS (Netherlands)

    Gayet, S.; van Maanen, L.; Heilbron, M.; Paffen, C.L.E.; Van Der Stigchel, S.

    2016-01-01

    The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate

  14. Suppression dampens unpleasant emotion faster than reappraisal: Neural dynamics in a Chinese sample.

    Science.gov (United States)

    Yuan, JiaJin; Long, QuanShan; Ding, NanXiang; Lou, YiXue; Liu, YingYing; Yang, JieMin

    2015-05-01

The timing dynamics of regulating negative emotion with expressive suppression and cognitive reappraisal were investigated in a Chinese sample. Event-Related Potentials were recorded while subjects were required to view, suppress emotion expression to, or reappraise emotional pictures. The results showed a similar reduction in self-reported negative emotion during both strategies. Additionally, expressive suppression elicited larger amplitudes than reappraisal in the central-frontal P3 component (340-480 ms). More importantly, the Late Positive Potential (LPP) amplitudes were decreased in each 200 ms window of the 800-1600 ms interval during suppression vs. viewing conditions. In contrast, LPP amplitudes were similar for reappraisal and viewing conditions in all time windows, except for decreased amplitudes during reappraisal in the 1400-1600 ms window. The LPP (but not P3) amplitudes were positively related to negative mood ratings, whereas the amplitudes of P3, rather than LPP, predicted self-reported expressive suppression. These results suggest that expressive suppression decreases emotion responding more rapidly than reappraisal, at the cost of greater cognitive resource involvement in Chinese individuals.

  15. Paying more for faster care? Individuals' attitude toward price-based priority access in health care.

    Science.gov (United States)

    Benning, Tim M; Dellaert, Benedict G C

    2013-05-01

    Increased competition in the health care sector has led hospitals and other health care institutions to experiment with new access allocation policies that move away from traditional expert based allocation of care to price-based priority access (i.e., the option to pay more for faster care). To date, little is known about individuals' attitude toward price-based priority access and the evaluation process underlying this attitude. This paper addresses the role of individuals' evaluations of collective health outcomes as an important driver of their attitude toward (price-based) allocation policies in health care. The authors investigate how individuals evaluate price-based priority access by means of scenario-based survey data collected in a representative sample from the Dutch population (N = 1464). They find that (a) offering individuals the opportunity to pay for faster care negatively affects their evaluations of both the total and distributional collective health outcome achieved, (b) however, when health care supply is not restricted (i.e., when treatment can be offered outside versus within the regular working hours of the hospital) offering price-based priority access affects total collective health outcome evaluations positively instead of negatively, but it does not change distributional collective health outcome evaluations. Furthermore, (c) the type of health care treatment (i.e., life saving liver transplantation treatment vs. life improving cosmetic ear correction treatment - priced at the same level to the individual) moderates the effect of collective health outcome evaluations on individuals' attitude toward allocation policies. For policy makers and hospital managers the results presented in this article are helpful because they provide a better understanding of what drives individuals' preferences for health care allocation policies. In particular, the results show that policies based on the "paying more for faster care" principle are more

  16. Dyslexics' faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation.

    Science.gov (United States)

    Jaffe-Dax, Sagi; Frenkel, Or; Ahissar, Merav

    2017-01-24

    Dyslexia is a prevalent reading disability whose underlying mechanisms are still disputed. We studied the neural mechanisms underlying dyslexia using a simple frequency-discrimination task. Though participants were asked to compare the two tones in each trial, implicit memory of previous trials affected their responses. We hypothesized that implicit memory decays faster among dyslexics. We tested this by increasing the temporal intervals between consecutive trials, and by measuring the behavioral impact and ERP responses from the auditory cortex. Dyslexics showed a faster decay of implicit memory effects on both measures, with similar time constants. Finally, faster decay of implicit memory also characterized the impact of sound regularities in benefitting dyslexics' oral reading rate. Their benefit decreased faster as a function of the time interval from the previous reading of the same non-word. We propose that dyslexics' shorter neural adaptation paradoxically accounts for their longer reading times, since it reduces their temporal window of integration of past stimuli, resulting in noisier and less reliable predictions for both simple and complex stimuli. Less reliable predictions limit their acquisition of reading expertise.

  17. Elastic coupling of limb joints enables faster bipedal walking

    Science.gov (United States)

    Dean, J.C.; Kuo, A.D.

    2008-01-01

    The passive dynamics of bipedal limbs alone are sufficient to produce a walking motion, without need for control. Humans augment these dynamics with muscles, actively coordinated to produce stable and economical walking. Present robots using passive dynamics walk much slower, perhaps because they lack elastic muscles that couple the joints. Elastic properties are well known to enhance running gaits, but their effect on walking has yet to be explored. Here we use a computational model of dynamic walking to show that elastic joint coupling can help to coordinate faster walking. In walking powered by trailing leg push-off, the model's speed is normally limited by a swing leg that moves too slowly to avoid stumbling. A uni-articular spring about the knee allows faster but uneconomical walking. A combination of uni-articular hip and knee springs can speed the legs for improved speed and economy, but not without the swing foot scuffing the ground. Bi-articular springs coupling the hips and knees can yield high economy and good ground clearance similar to humans. An important parameter is the knee-to-hip moment arm that greatly affects the existence and stability of gaits, and when selected appropriately can allow for a wide range of speeds. Elastic joint coupling may contribute to the economy and stability of human gait. PMID:18957360

  18. Adrenaline in cardiac arrest: Prefilled syringes are faster.

    Science.gov (United States)

    Helm, Claire; Gillett, Mark

    2015-08-01

    Standard ampoules and prefilled syringes of adrenaline are widely available in Australasian EDs for use in cardiac arrest. We hypothesise that prefilled syringes can be administered more rapidly and accurately when compared with the two available standard ampoules. This is a triple arm superiority study comparing the time to i.v. administration and accuracy of dosing of three currently available preparations of adrenaline. In their standard packaging, prefilled syringes were on average more than 12 s faster to administer than the 1 mL 1:1000 ampoules and more than 16 s faster than the 10 mL 1:10,000 ampoules (P adrenaline utilising a Minijet (CSL Limited, Parkville, Victoria, Australia) is faster than using adrenaline in glass ampoules presented in their plastic packaging. Removing the plastic packaging from the 1 mL (1 mg) ampoule might result in more rapid administration similar to the Minijet. Resuscitation personnel requiring rapid access to adrenaline should consider storing it as either Minijets or ampoules devoid of packaging. These results might be extrapolatable to other clinical scenarios, including pre-hospital and anaesthesia, where other drugs are required for rapid use. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  19. Longer - Faster - Purer

    CERN Multimedia

    Caroline Duc

    2013-01-01

    The MR-ToF-MS, a new ion trap, has been integrated into ISOLTRAP, the experiment that performs accurate mass measurements on short-lived nuclides produced at ISOLDE. When used as a mass separator and spectrometer, it extends ISOLTRAP’s experimental reach towards the limits of nuclear stability.   Susanne Kreim, the ISOLTRAP local group leader at CERN in front of a part of the ISOLTRAP device. When mass measurement experiments like ISOLTRAP* are placed in an on-line radioactive ion-beam facility they face a major challenge: the efficient and fast transfer of the nuclide of interest to the location where the mass measurement is performed. The biggest yield of one selected nuclide, without contaminants, needs to be transferred to the set-up as quickly as possible in order to measure its mass with the greatest precision. Recently, the ISOLTRAP collaboration installed a new device that provides a faster separation of isobars.** It has significantly improved ISOLTRAP’s purificat...

  20. Faster and timing-attack resistant AES-GCM

    NARCIS (Netherlands)

    Käsper, E.; Schwabe, P.; Clavier, C.; Gaj, K.

    2009-01-01

    We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only
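Bitslicing, the technique behind both the speed and the timing-attack resistance, transposes the data so that bit i of several independent values lands in word i; every bitwise instruction then acts on all values in parallel, and S-boxes become branch-free logic instead of secret-indexed table lookups. A toy 8-byte sketch (not the paper's 128-bit AES state layout):

```python
def bitslice(block):
    """Transpose 8 bytes into 8 'slice' words: bit j of slice i is
    bit i of byte j. Bitwise ops on slices then act on all bytes at once."""
    assert len(block) == 8
    slices = [0] * 8
    for i in range(8):              # bit position within a byte
        for j, byte in enumerate(block):
            slices[i] |= ((byte >> i) & 1) << j
    return slices

def unbitslice(slices):
    """Inverse transform: recover the original 8 bytes."""
    return [sum(((slices[i] >> j) & 1) << i for i in range(8)) for j in range(8)]

data = [0x00, 0xFF, 0x3C, 0xA5, 0x01, 0x80, 0x55, 0xAA]
assert unbitslice(bitslice(data)) == data

# A constant-time NOT of every byte is a single XOR per slice:
inverted = unbitslice([s ^ 0xFF for s in bitslice(data)])
print([hex(b) for b in inverted])
```

Because the slices are combined only with fixed sequences of XOR/AND/OR, execution time is independent of the secret data, which is the property that defeats cache-timing attacks.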

  1. Faster than light, slower than time

    International Nuclear Information System (INIS)

    Rucker, R.

    1981-01-01

    The problem with faster-than-light travel is that, in the framework of Special Relativity, it is logically equivalent to time-travel. The problem with time-travel is that it leads to two types of paradoxes. The paradoxes, and the various means of skirting them, are all discussed here. Virtually all the examples are drawn from science-fiction novels, which are a large and neglected source of thought-experiments. (Auth.)

  2. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
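The two schemes being compared can be stated in a few lines of Python (toy sequence and parameters, not the paper's human-genome index):

```python
def fixed_sampling(seq, k, s):
    """Index every s-th k-mer starting position (fixed sampling)."""
    return {i: seq[i:i + k] for i in range(0, len(seq) - k + 1, s)}

def minimizer_sampling(seq, k, w):
    """Index the lexicographically smallest k-mer (the minimizer) in every
    window of w consecutive k-mers; adjacent windows often share one."""
    picked = {}
    n_kmers = len(seq) - k + 1
    for start in range(n_kmers - w + 1):
        kmer, i = min((seq[i:i + k], i) for i in range(start, start + w))
        picked[i] = kmer
    return picked

seq = "ACGTACGTTGCA"
print(fixed_sampling(seq, k=4, s=3))      # {0: 'ACGT', 3: 'TACG', 6: 'GTTG'}
print(minimizer_sampling(seq, k=4, w=3))  # sampled positions cluster unevenly
```

Fixed sampling picks exactly one position per stride, which is why its index is predictably small; minimizer sampling's advantage is that a query can be sampled the same way, at the cost of the uneven, often denser, set of indexed positions seen above.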

  3. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  4. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  5. ZKBoo: Faster Zero-Knowledge for Boolean Circuits

    DEFF Research Database (Denmark)

    Giacomelli, Irene; Madsen, Jesper; Orlandi, Claudio

    2016-01-01

variants of IKOS, which highlights their pros and cons for practically relevant soundness parameters; ◦ A generalization and simplification of their approach, which leads to faster Σ-protocols (that can be made non-interactive using the Fiat-Shamir heuristic) for statements of the form “I know x...

  6. Cortex Matures Faster in Youths With Highest IQ

    Science.gov (United States)

NIH, Past Issues / Summer 2006. Youths with superior IQ are distinguished by how fast the thinking part ...

  7. A piece of paper falling faster than free fall

    International Nuclear Information System (INIS)

    Vera, F; Rivera, R

    2011-01-01

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment where a small piece of paper is put over a book and the system is let fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls with acceleration g. To test if the paper falls behind the book in a nearly free fall motion or if it is dragged by the book, we designed a version of this experiment that includes a ball and a piece of paper over a book that is forced to fall using elastic cords. We recorded a video of our experiment using a high-speed video camera at 300 frames per second that shows that the book and the paper fall faster than the ball, which falls well behind the book with an acceleration approximately equal to g. Our experiment shows that the piece of paper is dragged behind the book and therefore the paper and book demonstration should not be used to show that all objects fall with acceleration g independently of their mass.
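The acceleration extracted from such footage follows from a second difference of the digitised positions. A sketch with synthetic free-fall data at the experiment's 300 frames per second (a real analysis would substitute positions tracked from the video):

```python
# Synthetic free-fall positions at the experiment's 300 frames per second.
# In the real analysis, `frames` would hold positions digitised from video.

FPS = 300.0        # camera frame rate used in the experiment
G = 9.81           # expected free-fall acceleration, m/s^2

dt = 1.0 / FPS
frames = [0.5 * G * (n * dt) ** 2 for n in range(20)]  # fall distance per frame

# A central second difference of three consecutive positions gives acceleration
accels = [(frames[n - 1] - 2.0 * frames[n] + frames[n + 1]) / dt ** 2
          for n in range(1, len(frames) - 1)]
estimate = sum(accels) / len(accels)
print(f"estimated acceleration: {estimate:.2f} m/s^2")  # ~9.81
```

Applied to the book, the paper, and the ball separately, the same second difference reveals which objects exceed g and which fall freely.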

  8. A piece of paper falling faster than free fall

    Energy Technology Data Exchange (ETDEWEB)

    Vera, F; Rivera, R, E-mail: fvera@ucv.cl [Instituto de Fisica, Pontificia Universidad Catolica de ValparaIso, Av. Universidad 330, Curauma, ValparaIso (Chile)

    2011-09-15

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment where a small piece of paper is put over a book and the system is let fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls with acceleration g. To test if the paper falls behind the book in a nearly free fall motion or if it is dragged by the book, we designed a version of this experiment that includes a ball and a piece of paper over a book that is forced to fall using elastic cords. We recorded a video of our experiment using a high-speed video camera at 300 frames per second that shows that the book and the paper fall faster than the ball, which falls well behind the book with an acceleration approximately equal to g. Our experiment shows that the piece of paper is dragged behind the book and therefore the paper and book demonstration should not be used to show that all objects fall with acceleration g independently of their mass.

  9. Pigeons home faster through polluted air

    OpenAIRE

    Zhongqiu Li; Franck Courchamp; Daniel T. Blumstein

    2016-01-01

    Air pollution, especially haze pollution, is creating health issues for both humans and other animals. However, remarkably little is known about how animals behaviourally respond to air pollution. We used multiple linear regression to analyse 415 pigeon races in the North China Plain, an area with considerable air pollution, and found that while the proportion of pigeons successfully homed was not influenced by air pollution, pigeons homed faster when the air was especially polluted. Our resu...

  10. Zero bugs and program faster

    CERN Document Server

    Thompson, Kate

    2015-01-01

    A book about programming, improving skill, and avoiding mistakes. The author spent two years researching every bug avoidance technique she could find. This book contains the best of them. If you want to program faster, with fewer bugs, and write more secure code, buy this book! "This is the best book I have ever read." - Anonymous reviewer "Four score and seven years ago this book helped me debug my server code." -Abraham Lincoln "Would my Javascript have memory leaks without this book? Would fishes fly without water?" -Socrates "This book is the greatest victory since the Spanish Armada, and the best about programming." -Queen Elizabeth

  11. Size matters: bigger is faster.

    Science.gov (United States)

    Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E

    2009-06-01

    A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.

  12. Pedestrian crowd dynamics in merging sections: Revisiting the "faster-is-slower" phenomenon

    Science.gov (United States)

    Shahhoseini, Zahra; Sarvi, Majid; Saberi, Meead

    2018-02-01

The study of the discharge of active or self-driven matter through narrow passages has attracted growing interest in a variety of fields. The question has particularly important practical applications for the safety of pedestrian flows, notably in emergency scenarios. It has been suggested, predominantly through simulation in theoretical studies as well as through a few experiments, that under certain circumstances an elevated urgency to escape may impair the outflow and cause further delay, although the experimental evidence is rather mixed. The dimensions of this complex phenomenon, known as the "faster-is-slower" effect, are crucial to understand owing to its potential practical implications for emergency management. The contextual requirements for observing this phenomenon are yet to be identified. It is not clear whether a "do not speed up" policy is universally beneficial and advisable in an evacuation scenario. Here, for the first time, we experimentally examine this phenomenon in relation to pedestrian flows at merging sections, a common geometric feature of crowd egress. Various merging angles and three different speed regimes were examined in high-density laboratory experiments. The measurements of flow interruptions and egress efficiency all indicated that the pedestrians were discharged faster when moving at elevated speed levels. We also observed clear dependencies between the discharge rate and the physical layout of the merging section, with certain designs clearly outperforming others. But regardless of the design, we observed faster throughput and greater avalanche sizes when we instructed pedestrians to run. Our results suggest that observing the faster-is-slower effect may require certain critical conditions, including passages that are overly narrow relative to the size of the particles (pedestrians), so that long-lasting blockages can form. The faster-is-slower assumption may not be universal and there may be

  13. Motivational salience signal in the basal forebrain is coupled with faster and more precise decision speed.

    Science.gov (United States)

    Avila, Irene; Lin, Shih-Chieh

    2014-03-01

    The survival of animals depends critically on prioritizing responses to motivationally salient stimuli. While it is generally believed that motivational salience increases decision speed, the quantitative relationship between motivational salience and decision speed, measured by reaction time (RT), remains unclear. Here we show that the neural correlate of motivational salience in the basal forebrain (BF), defined independently of RT, is coupled with faster and also more precise decision speed. In rats performing a reward-biased simple RT task, motivational salience was encoded by BF bursting response that occurred before RT. We found that faster RTs were tightly coupled with stronger BF motivational salience signals. Furthermore, the fraction of RT variability reflecting the contribution of intrinsic noise in the decision-making process was actively suppressed in faster RT distributions with stronger BF motivational salience signals. Artificially augmenting the BF motivational salience signal via electrical stimulation led to faster and more precise RTs and supports a causal relationship. Together, these results not only describe for the first time, to our knowledge, the quantitative relationship between motivational salience and faster decision speed, they also reveal the quantitative coupling relationship between motivational salience and more precise RT. Our results further establish the existence of an early and previously unrecognized step in the decision-making process that determines both the RT speed and variability of the entire decision-making process and suggest that this novel decision step is dictated largely by the BF motivational salience signal. Finally, our study raises the hypothesis that the dysregulation of decision speed in conditions such as depression, schizophrenia, and cognitive aging may result from the functional impairment of the motivational salience signal encoded by the poorly understood noncholinergic BF neurons.

  14. Better Faster Noise with the GPU

    DEFF Research Database (Denmark)

    Wyvill, Geoff; Frisvad, Jeppe Revall

Filtered noise [Perlin 1985] has, for twenty years, been a fundamental tool for creating functional texture and it has many other applications; for example, animating water waves or the motion of grass waving in the wind. Perlin noise suffers from a number of defects and there have been many attempts to create better or faster noise, but Perlin’s ‘Gradient Noise’ has consistently proved to be the best compromise between speed and quality. Our objective was to create a better noise cheaply by use of the GPU.
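For reference, the 'Gradient Noise' being improved upon hashes a pseudo-random gradient to each integer lattice point and smoothly interpolates the resulting ramps; by construction it is zero at every lattice point. A minimal 1D sketch (the hash below is a stand-in for Perlin's permutation table):

```python
import math

def gradient(i):
    """Pseudo-random gradient in [-1, 1] at integer lattice point i
    (a tiny multiplicative hash; Perlin uses a permutation table)."""
    h = (i * 2654435761) & 0xFFFFFFFF
    return (h / 0xFFFFFFFF) * 2.0 - 1.0

def fade(t):
    """Perlin's quintic smoothstep: zero 1st and 2nd derivative at 0 and 1."""
    return t * t * t * (t * (t * 6 - 15) + 10)

def gradient_noise(x):
    """1D gradient noise: blend the ramps of the two surrounding gradients."""
    i = math.floor(x)
    t = x - i
    left = gradient(i) * t             # ramp from the left lattice point
    right = gradient(i + 1) * (t - 1)  # ramp from the right lattice point
    return left + (right - left) * fade(t)

print(gradient_noise(3.0))  # 0.0: gradient noise vanishes at lattice points
print(gradient_noise(3.5))
```

The per-sample cost is a handful of multiplies and one hash per lattice neighbour, which is what makes the function a natural fit for evaluation on the GPU.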

  15. A sub-sampled approach to extremely low-dose STEM

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, A. [OptimalSensing, Southlake, Texas 76092, USA; Duke University, ECE, Durham, North Carolina 27708, USA; Luzi, L. [Rice University, ECE, Houston, Texas 77005, USA; Yang, H. [Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Kovarik, L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Mehdi, B. L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom; Liyu, A. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Gehm, M. E. [Duke University, ECE, Durham, North Carolina 27708, USA; Browning, N. D. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom

    2018-01-22

The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
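The random sub-sampling and inpainting pipeline can be mimicked on synthetic data: measure a random subset of pixels, then reconstruct the rest from local context. The sketch below uses a crude neighbourhood mean purely for illustration; the paper's reconstruction relies on a more sophisticated sparse-inpainting method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth synthetic "micrograph" standing in for a STEM image
yy, xx = np.mgrid[0:64, 0:64]
image = np.sin(xx / 10.0) + np.cos(yy / 12.0)

# Random (non-adaptive) sub-sampling: measure only 20% of the pixels
mask = rng.random(image.shape) < 0.2
measured = np.where(mask, image, np.nan)

# Crude inpainting: fill each missing pixel with the mean of the measured
# pixels in its 5x5 neighbourhood (stand-in for the real sparse inpainting)
inpainted = measured.copy()
r = 2
for i, j in zip(*np.where(~mask)):
    patch = measured[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
    inpainted[i, j] = np.nanmean(patch)

# Any pixel whose whole neighbourhood was unmeasured falls back to the global mean
inpainted[np.isnan(inpainted)] = np.nanmean(measured)

err = np.mean(np.abs(inpainted - image))
print(f"mean absolute reconstruction error: {err:.3f}")
```

Even this naive reconstruction recovers a smooth image from a fifth of the dose; the benefit of an adaptive scheme is choosing *where* those measured pixels go.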

  16. Comparative Analysis of Clinical Samples Showing Weak Serum Reaction on AutoVue System Causing ABO Blood Typing Discrepancies.

    Science.gov (United States)

    Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung Hoi

    2017-03-01

    ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks and therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results with the reference manual method to confirm ABO blood typing. In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to the laboratory protocol. The AutoVue system confirmed ABO blood typing of 12,816 samples (97.7%), and these results were concordant with those of the manual method. The remaining 297 samples (2.3%) showed discrepant results in the AutoVue system and were confirmed by the manual method. The discrepant results involved weak serum reactions (<2+ reaction grade), no serum reactions, samples from patients who had received stem cell transplants, ABO subgroups, and specific system error messages. Among the 98 samples showing ≤1+ reaction grade in the AutoVue system, 70 samples (71.4%) showed a normal serum reaction (≥2+ reaction grade) with the manual method, and 28 samples (28.6%) showed a weak serum reaction in both methods. ABO blood typing of 97.7% of samples could be confirmed by the AutoVue system, and a small proportion (2.3%) needed to be re-evaluated by the manual method. Samples with a 2+ reaction grade in serum typing do not need to be evaluated manually, while those with a ≤1+ reaction grade do.

  17. Quantum mechanics and faster-than-light communication: methodological considerations

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Weber, T.

    1983-06-01

    A detailed quantum mechanical analysis of a recent proposal of faster than light communication through wave packet reduction is performed. The discussion allows us to focus on some methodological problems about critical investigations in physical theories. (author)

  18. Slow light brings faster communications

    International Nuclear Information System (INIS)

    Gauthier, D.

    2006-01-01

    Two teams of researchers have managed to significantly reduce the speed of light in an optical fibre, which could open the door to all-optical routers for telecommunications, as Daniel Gauthier explains. Optical engineers around the globe are working hard to meet the ever-growing demand for higher-speed information networks, and the latest systems being developed operate at rates close to 160 GB per second - which is over 100 times quicker than the fastest broadband services currently available and a world away from the 56 kb per second dial-up connections of the early years of the Internet. Paradoxically, it seems that making light travel slower rather than faster might be the best way to meet these high-speed challenges. (U.K.)

  19. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    To lessen the problem of insufficient conformational sampling in biomolecular simulations is still a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
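    The two-state reference Hamiltonian described above can be sketched with the standard EDS expression (following the Christ and van Gunsteren formulation; the smoothness parameter s and the energy offsets E_i^R are the "EDS parameters" the abstract says must be optimized):

```latex
V_R(\mathbf{r}) \;=\; -\frac{1}{\beta s}\,
  \ln \sum_{i=1}^{N} \exp\!\left[-\beta s \left(V_i(\mathbf{r}) - E_i^{R}\right)\right],
\qquad \beta = \frac{1}{k_B T}
```

    Here N = 2: V_1 is the physical peptide Hamiltonian and V_2 the soft-core variant. Because V_R is a smoothed minimum of the two end-state surfaces, a single simulation under V_R can visit low-energy regions of both, which is what lets the soft-core state carry the system across the barrier between the helices.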

  20. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
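    As a concrete baseline, the Nussinov recurrence that all of these algorithms accelerate can be sketched in a few lines of Python (illustrative only: it maximizes non-crossing complementary pairings and omits refinements such as a minimum hairpin-loop length):

```python
# Minimal sketch of Nussinov's O(n^3) dynamic program for maximizing
# non-crossing base pairings (Watson-Crick plus G-U wobble pairs).
# Illustrative; production folding tools add loop-length constraints
# and thermodynamic scoring.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq: str) -> int:
    n = len(seq)
    # dp[i][j] = maximum number of pairings within seq[i..j]
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                   # base i left unpaired
            for k in range(i + 1, j + 1):         # base i paired with base k
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

    The two-vector and Four-Russians variants speed up exactly the inner maximization over k by tabulating small blocks of this dp table in advance.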

  1. Faster dissolution of PuO2 in nitrous media by means of electrolytic oxidation

    International Nuclear Information System (INIS)

    Baumgaertner, F.; Kim, J.I.; Luckner, N.; Brueckl, N.; Lieberer, E.

    1984-03-01

    The contribution shows that the dissolution of PuO 2 in HNO 3 can be accelerated considerably by means of electrolytic oxidation. A glass apparatus has been developed which uses platinum electrodes providing for sufficient contact between electrodes and solids. Increase of temperature, acid concentration, and electrode current density, and a good contact between electrode and metal oxide will improve the dissolution kinetics. The reaction could be made even faster by addition of Ce 4+ . (orig.) [de

  2. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  3. Faster native vowel discrimination learning in musicians is mediated by an optimization of mnemonic functions.

    Science.gov (United States)

    Elmer, Stefan; Greber, Marielle; Pushparaj, Arethy; Kühnis, Jürg; Jäncke, Lutz

    2017-09-01

    The ability to discriminate phonemes varying in spectral and temporal attributes constitutes one of the most basic intrinsic elements underlying language learning mechanisms. Since previous work has consistently shown that professional musicians are characterized by perceptual and cognitive advantages in a variety of language-related tasks, and since vowels can be considered musical sounds within the domain of speech, here we investigated the behavioral and electrophysiological correlates of native vowel discrimination learning in a sample of professional musicians and non-musicians. We evaluated the contribution of both the neurophysiological underpinnings of perceptual (i.e., N1/P2 complex) and mnemonic functions (i.e., N400 and P600 responses) while the participants were instructed to judge whether pairs of native consonant-vowel (CV) syllables manipulated in the first formant transition of the vowel (i.e., from /tu/ to /to/) were identical or not. Results clearly demonstrated faster learning in musicians, compared to non-musicians, as reflected by shorter reaction times and higher accuracy. Most notably, in terms of morphology, time course, and voltage strength, this steeper learning curve was accompanied by distinctive N400 and P600 manifestations between the two groups. In contrast, we did not reveal any group differences during the early stages of auditory processing (i.e., N1/P2 complex), suggesting that faster learning was mediated by an optimization of mnemonic but not perceptual functions. Based on a clear taxonomy of the mnemonic functions involved in the task, results are interpreted as pointing to a relationship between faster learning mechanisms in musicians and an optimization of echoic (i.e., N400 component) and working memory (i.e., P600 component) functions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. GSD: An SPSS extension command for sub-sampling and bootstrapping datasets

    Directory of Open Access Journals (Sweden)

    Harding, Bradley

    2016-09-01

    Statistical analyses have grown immensely since the inception of computational methods. However, many quantitative methods classes teach sampling and sub-sampling at a very abstract level despite the fact that, with the faster computers of today, these notions could be demonstrated live to the students. For this reason, we have created a simple extension module for SPSS that can sub-sample and bootstrap data: GSD (Generator of Sub-sampled Data). In this paper, we describe and show how to use the GSD module, and provide short descriptions of both the sub-sampling and bootstrap methods. In addition, as this article aims to inspire instructors to introduce these concepts in their statistics classes at all levels, we provide three short exercises that are ready for curriculum implementation.
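    GSD itself is an SPSS extension, but the two resampling schemes it exposes can be demonstrated in any language; a minimal Python sketch (function names are ours, not GSD's) makes the distinction concrete:

```python
import random

def subsample(data, k, rng=random):
    """Draw k observations WITHOUT replacement: a sub-sample.
    Each observation appears at most once."""
    return rng.sample(list(data), k)

def bootstrap(data, rng=random):
    """Draw len(data) observations WITH replacement: a bootstrap resample.
    Observations may repeat; some may be absent."""
    data = list(data)
    return [rng.choice(data) for _ in data]
```

    Repeating either draw many times and recomputing a statistic on each resample is what lets students watch a sampling distribution emerge live.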

  5. Professional Music Training and Novel Word Learning: From Faster Semantic Encoding to Longer-lasting Word Representations.

    Science.gov (United States)

    Dittinger, Eva; Barbaroux, Mylène; D'Imperio, Mariapaola; Jäncke, Lutz; Elmer, Stefan; Besson, Mireille

    2016-10-01

    On the basis of previous results showing that music training positively influences different aspects of speech perception and cognition, the aim of this series of experiments was to test the hypothesis that adult professional musicians would learn the meaning of novel words through picture-word associations more efficiently than controls without music training (i.e., fewer errors and faster RTs). We also expected musicians to show faster changes in brain electrical activity than controls, in particular regarding the N400 component that develops with word learning. In line with these hypotheses, musicians outperformed controls in the most difficult semantic task. Moreover, although a frontally distributed N400 component developed in both groups of participants after only a few minutes of novel word learning, in musicians this frontal distribution rapidly shifted to parietal scalp sites, as typically found for the N400 elicited by known words. Finally, musicians showed evidence for better long-term memory for novel words 5 months after the main experimental session. Results are discussed in terms of cascading effects from enhanced perception to memory as well as in terms of multifaceted improvements of cognitive processing due to music training. To our knowledge, this is the first report showing that music training influences semantic aspects of language processing in adults. These results open new perspectives for education in showing that early music training can facilitate later foreign language learning. Moreover, the design used in the present experiment can help to specify the stages of word learning that are impaired in children and adults with word learning difficulties.

  6. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    Science.gov (United States)

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.
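    For readers unfamiliar with the underlying recurrence, a scalar (non-SIMD) sketch of the Smith-Waterman score computation may help; the scoring parameters below are illustrative, and this per-cell update is what SWIPE vectorizes across sixteen database sequences at once:

```python
def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    """Scalar Smith-Waterman local-alignment score with a linear gap
    penalty. Keeps only two dp rows; returns the best score over all
    local alignments (the quantity database searches rank hits by)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            score = max(
                0,                                                # restart
                prev[j - 1] + (match if ca == cb else mismatch),  # diagonal
                prev[j] + gap,                                    # gap in b
                curr[j - 1] + gap,                                # gap in a
            )
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best
```

    Real tools use substitution matrices such as BLOSUM50/BLOSUM62 and affine gap penalties rather than the simple match/mismatch scores assumed here.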

  7. High levels of the type III inorganic phosphate transporter PiT1 (SLC20A1) can confer faster cell adhesion

    DEFF Research Database (Denmark)

    Kongsfelt, Iben Boutrup; Byskov, Kristina; Pedersen, Lasse Ebdrup

    2014-01-01

    The inorganic phosphate transporter PiT1 (SLC20A1) is ubiquitously expressed in mammalian cells. We recently showed that overexpression of human PiT1 was sufficient to increase proliferation of two strict density-inhibited cell lines, murine fibroblastic NIH3T3 and pre-osteoblastic MC3T3-E1 cells, and allowed the cultures to grow to higher cell densities. In addition, upon transformation NIH3T3 cells showed increased ability to form colonies in soft agar. The cellular regulation of PiT1 expression supports that cells utilize PiT1 levels to control proliferation, with non-proliferating cells showing (…) overexpression led to faster cell spreading. The final total numbers of attached cells did, however, not differ between cultures of PiT1-overexpressing cells and control cells for either cell type. We suggest that the PiT1-mediated fast adhesion allows the cells to go faster out of G0/G1 and thereby (…)

  8. Skin graft donor site: a procedure for a faster healing.

    Science.gov (United States)

    Cuomo, Roberto; Grimaldi, Luca; Brandi, Cesare; Nisi, Giuseppe; D'Aniello, Carlo

    2017-10-23

    The authors evaluate the efficacy of fibrillary Tabotamp dressing on the skin graft donor site, compared with Vaseline gauzes. Tabotamp is an absorbable haemostatic product of Ethicon (Johnson & Johnson) made of sterile, oxidized regenerated cellulose (Rayon) and used for mild to moderate bleeding. 276 patients underwent skin grafting and were divided into two groups: the donor site of patients in Group A was dressed with fibrillary Tabotamp, while patients in Group B were dressed with Vaseline gauze only. We recorded infections, time to healing, number of dressing changes, and the pain felt during and after dressing changes, using a visual analogue scale (VAS) and a questionnaire. Patients in Group A healed faster than those in Group B. Questionnaire and VAS analysis showed less pain, lower intake of pain medication, and a lower infection rate in Group A than in Group B. Cost analysis showed fewer dressing changes in Group A than in Group B. We believe that the use of Tabotamp is a very viable alternative to improve healing.

  9. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
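    All of the RE-style methods compared here rest on the same Metropolis criterion for exchanging configurations between neighbouring temperatures; a minimal sketch (the constant and units are illustrative assumptions, not values from the paper):

```python
import math
import random

K_B = 0.0083145  # Boltzmann constant in kJ/(mol*K) (assumed units)

def swap_accepted(E_i, E_j, T_i, T_j, rng=random):
    """Metropolis criterion for swapping the configurations of two
    replicas at temperatures T_i and T_j with potential energies
    E_i and E_j: accept with probability min(1, exp(delta)), where
    delta = (beta_i - beta_j) * (E_i - E_j)."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_i - E_j)
    return delta >= 0 or rng.random() < math.exp(delta)
```

    A walk accepted by this rule produces the random walk in temperature, and hence in potential energy, that lets simulations cross energetic barriers; ST, SREM, STDR, and VREX differ mainly in how (and whether) the partner replica's energy is obtained.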

  10. No evidence for faster male hybrid sterility in population crosses of an intertidal copepod (Tigriopus californicus).

    Science.gov (United States)

    Willett, Christopher S

    2008-06-01

    Two different forces are thought to contribute to the rapid accumulation of hybrid male sterility that has been observed in many inter-specific crosses, namely the faster male and the dominance theories. For male heterogametic taxa, both faster male and dominance would work in the same direction to cause the rapid evolution of male sterility; however, for taxa lacking differentiated sex chromosomes only the faster male theory would explain the rapid evolution of male hybrid sterility. It is currently unknown what causes the faster evolution of male sterility, but increased sexual selection on males and the sensitivity of genes involved in male reproduction are two hypotheses that could explain the observation. Here, patterns of hybrid sterility in crosses of genetically divergent copepod populations are examined to test potential mechanisms of faster male evolution. The study species, Tigriopus californicus, lacks differentiated, hemizygous sex chromosomes and appears to have low levels of divergence caused by sexual selection acting upon males. Hybrid sterility does not accumulate more rapidly in males than females in these crosses suggesting that in this taxon male reproductive genes are not inherently more prone to disruption in hybrids.

  11. The Faster, Better, Cheaper Approach to Space Missions: An Engineering Management Assessment

    Science.gov (United States)

    Hamaker, Joe

    2000-01-01

    This paper describes, in viewgraph form, the faster, better, cheaper approach to space missions. The topics include: 1) What drives "Faster, Better, Cheaper"? 2) Why Space Programs are Costly; 3) Background; 4) Aerospace Project Management (Old Culture); 5) Aerospace Project Management (New Culture); 6) Scope of Analysis Limited to Engineering Management Culture; 7) Qualitative Analysis; 8) Some Basic Principles of the New Culture; 9) Cause and Effect; 10) "New Ways of Doing Business" Survey Results; 11) Quantitative Analysis; 12) Recent Space System Cost Trends; 13) Spacecraft Dry Weight Trend; 14) Complexity Factor Trends; 15) Cost Normalization; 16) Cost Normalization Algorithm; 17) Unnormalized Cost vs. Normalized Cost; and 18) Concluding Observations.

  12. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    Directory of Open Access Journals (Sweden)

    Rognes Torbjørn

    2011-06-01

    Background: The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results: A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions: Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  13. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small and moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
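    The multinomial-weighting idea translates directly to other matrix-oriented languages; a NumPy sketch for the simplest moment statistic, the sample mean, illustrates it (the paper itself benchmarks Pearson's correlation in R):

```python
import numpy as np

def bootstrap_means_vectorized(x, n_boot, seed=0):
    """Bootstrap replications of the sample mean via multinomial weights.
    Instead of materializing n_boot resampled datasets, draw how many
    times each observation would appear in each resample and compute
    every replication with a single matrix-vector product."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # (n_boot, n) matrix of counts; each row sums to n
    counts = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)
    return counts @ x / n
```

    The same weight matrix can feed any statistic expressible in sample moments (variances, correlations), which is what makes the formulation general rather than mean-specific.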

  14. Real-time vehicle detection and tracking in video based on faster R-CNN

    Science.gov (United States)

    Zhang, Yongjie; Wang, Jian; Yang, Xin

    2017-08-01

    Vehicle detection and tracking is a significant part of driver-assistance systems. Traditional detection methods based on image information encounter enormous difficulties, especially against complex backgrounds. To solve this problem, a detection method based on deep learning, Faster R-CNN, which has very high detection accuracy and flexibility, is introduced. An algorithm combining Camshift and a Kalman filter is proposed for vehicle tracking. Because the computation time of Faster R-CNN alone cannot achieve real-time detection, we use a multi-thread technique to detect and track vehicles in parallel for real-time application.
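    The Kalman-filter half of such a tracker can be sketched with a standard constant-velocity model; the matrices and noise levels below are illustrative assumptions, not values from the paper:

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for a 2-D image position.
    State: [x, y, vx, vy]; measurement: [x, y] (e.g. a detection or
    Camshift centroid). Noise levels q and r are illustrative."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],         # constant-velocity motion
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],          # observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                    # process noise
        self.R = np.eye(2) * r                    # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

    In a tracker of the kind described, predict() bridges frames where the detector has not yet returned, and update() fuses each new detection or Camshift position into the smoothed track.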

  15. Recruitment of faster motor units is associated with greater rates of fascicle strain and rapid changes in muscle force during locomotion.

    Science.gov (United States)

    Lee, Sabrina S M; de Boef Miara, Maria; Arnold, Allison S; Biewener, Andrew A; Wakeling, James M

    2013-01-15

    Animals modulate the power output needed for different locomotor tasks by changing muscle forces and fascicle strain rates. To generate the necessary forces, appropriate motor units must be recruited. Faster motor units have faster activation-deactivation rates than slower motor units, and they contract at higher strain rates; therefore, recruitment of faster motor units may be advantageous for tasks that involve rapid movements or high rates of work. This study identified motor unit recruitment patterns in the gastrocnemii muscles of goats and examined whether faster motor units are recruited when locomotor speed is increased. The study also examined whether locomotor tasks that elicit faster (or slower) motor units are associated with increased (or decreased) in vivo tendon forces, force rise and relaxation rates, fascicle strains and/or strain rates. Electromyography (EMG), sonomicrometry and muscle-tendon force data were collected from the lateral and medial gastrocnemius muscles of goats during level walking, trotting and galloping and during inclined walking and trotting. EMG signals were analyzed using wavelet and principal component analyses to quantify changes in the EMG frequency spectra across the different locomotor conditions. Fascicle strain and strain rate were calculated from the sonomicrometric data, and force rise and relaxation rates were determined from the tendon force data. The results of this study showed that faster motor units were recruited as goats increased their locomotor speeds from level walking to galloping. Slow inclined walking elicited EMG intensities similar to those of fast level galloping but different EMG frequency spectra, indicating that recruitment of the different motor unit types depended, in part, on characteristics of the task. For the locomotor tasks and muscles analyzed here, recruitment patterns were generally associated with in vivo fascicle strain rates, EMG intensity and tendon force. 
Together, these data provide

  16. Recruitment of faster motor units is associated with greater rates of fascicle strain and rapid changes in muscle force during locomotion

    Science.gov (United States)

    Lee, Sabrina S. M.; de Boef Miara, Maria; Arnold, Allison S.; Biewener, Andrew A.; Wakeling, James M.

    2013-01-01

    SUMMARY Animals modulate the power output needed for different locomotor tasks by changing muscle forces and fascicle strain rates. To generate the necessary forces, appropriate motor units must be recruited. Faster motor units have faster activation–deactivation rates than slower motor units, and they contract at higher strain rates; therefore, recruitment of faster motor units may be advantageous for tasks that involve rapid movements or high rates of work. This study identified motor unit recruitment patterns in the gastrocnemii muscles of goats and examined whether faster motor units are recruited when locomotor speed is increased. The study also examined whether locomotor tasks that elicit faster (or slower) motor units are associated with increased (or decreased) in vivo tendon forces, force rise and relaxation rates, fascicle strains and/or strain rates. Electromyography (EMG), sonomicrometry and muscle-tendon force data were collected from the lateral and medial gastrocnemius muscles of goats during level walking, trotting and galloping and during inclined walking and trotting. EMG signals were analyzed using wavelet and principal component analyses to quantify changes in the EMG frequency spectra across the different locomotor conditions. Fascicle strain and strain rate were calculated from the sonomicrometric data, and force rise and relaxation rates were determined from the tendon force data. The results of this study showed that faster motor units were recruited as goats increased their locomotor speeds from level walking to galloping. Slow inclined walking elicited EMG intensities similar to those of fast level galloping but different EMG frequency spectra, indicating that recruitment of the different motor unit types depended, in part, on characteristics of the task. For the locomotor tasks and muscles analyzed here, recruitment patterns were generally associated with in vivo fascicle strain rates, EMG intensity and tendon force. Together, these

  17. Faster quantum chemistry simulation on fault-tolerant quantum computers

    International Nuclear Information System (INIS)

    Cody Jones, N; McMahon, Peter L; Yamamoto, Yoshihisa; Whitfield, James D; Yung, Man-Hong; Aspuru-Guzik, Alán; Van Meter, Rodney

    2012-01-01

Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay–Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ϵ, arbitrary single-qubit gates can be produced fault-tolerantly and using a restricted set of gates in time which is O(log(1/ϵ)) or O(log log(1/ϵ)); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride. (paper)
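The gate-synthesis scalings quoted above can be made concrete with a toy cost model. This is an illustrative sketch only: the prefactors `k_sk` and `k_fast` and the Solovay-Kitaev exponent `c` are assumed nominal values, not figures from the paper.

```python
import math

# Toy cost model (assumed constants, for illustration only):
# Solovay-Kitaev synthesis costs roughly k_sk * log(1/eps)^c gates,
# while the faster constructions cost roughly k_fast * log(1/eps) gates.

def solovay_kitaev_gates(eps, c=3.97, k_sk=100.0):
    return k_sk * math.log(1.0 / eps) ** c

def fast_rotation_gates(eps, k_fast=10.0):
    return k_fast * math.log(1.0 / eps)

for eps in (1e-3, 1e-6, 1e-9):
    ratio = solovay_kitaev_gates(eps) / fast_rotation_gates(eps)
    print(f"eps = {eps:.0e}: advantage grows to ~{ratio:.0f}x")
```

As the target error shrinks, the ratio between the two counts grows polylogarithmically in 1/ϵ, which is the point of the faster constructions.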

  18. Semantic size does not matter: "bigger" words are not recognized faster.

    Science.gov (United States)

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.

  19. The hard-won benefits of familiarity in visual search: naturally familiar brand logos are found faster.

    Science.gov (United States)

    Qin, Xiaoyan Angela; Koutstaal, Wilma; Engel, Stephen A

    2014-05-01

    Familiar items are found faster than unfamiliar ones in visual search tasks. This effect has important implications for cognitive theory, because it may reveal how mental representations of commonly encountered items are changed by experience to optimize performance. It remains unknown, however, whether everyday items with moderate levels of exposure would show benefits in visual search, and if so, what kind of experience would be required to produce them. Here, we tested whether familiar product logos were searched for faster than unfamiliar ones, and also familiarized subjects with previously unfamiliar logos. Subjects searched for preexperimentally familiar and unfamiliar logos, half of which were familiarized in the laboratory, amongst other, unfamiliar distractor logos. In three experiments, we used an N-back-like familiarization task, and in four others we used a task that asked detailed questions about the perceptual aspects of the logos. The number of familiarization exposures ranged from 30 to 84 per logo across experiments, with two experiments involving across-day familiarization. Preexperimentally familiar target logos were searched for faster than were unfamiliar, nonfamiliarized logos, by 8 % on average. This difference was reliable in all seven experiments. However, familiarization had little or no effect on search speeds; its average effect was to improve search times by 0.7 %, and its effect was significant in only one of the seven experiments. If priming, mere exposure, episodic memory, or relatively modest familiarity were responsible for familiarity's effects on search, then performance should have improved following familiarization. Our results suggest that the search-related advantage of familiar logos does not develop easily or rapidly.

  20. Drought evolution: greater and faster impacts on blue water than on green water

    Science.gov (United States)

    Destouni, G.; Orth, R.

    2017-12-01

Drought propagates through the terrestrial water cycle, affecting different interlinked geospheres that have so far mostly been investigated separately and without direct comparison. Using comprehensive multi-decadal data from >400 near-natural catchments along a steep climate gradient across Europe, we analyze drought propagation from precipitation (deficits) through soil moisture to runoff (blue water) and evapotranspiration (green water). We show that soil-moisture droughts reduce runoff more strongly and faster than they reduce evapotranspiration. While runoff responds within weeks, evapotranspiration can be unaffected for months, or even entirely, as in central and northern Europe. Understanding these different drought pathways towards blue and green water resources can help improve food and water security and offers early-warning potential to mitigate (future) drought impacts on society and ecosystems.

  1. How to Elect a Leader Faster than a Tournament

    OpenAIRE

    Alistarh, Dan; Gelashvili, Rati; Vladu, Adrian

    2014-01-01

The problem of electing a leader from among $n$ contenders is one of the fundamental questions in distributed computing. In its simplest formulation, the task is as follows: given $n$ processors, all participants must eventually return a win or lose indication, such that a single contender may win. Despite a considerable amount of work on leader election, the following question is still open: can we elect a leader in an asynchronous fault-prone system faster than just running a $\Theta(\log n...

  2. The effect of Ni and Fe doping on Hall anomaly in vortex state of doped YBCO samples

    Directory of Open Access Journals (Sweden)

    M Nazarzadeh

    2010-09-01

We have investigated the Hall effect in YBa2Cu3-xMxO7-δ (M = Ni, Fe) bulk samples, with dopant amounts 0 ≤ x ≤ 0.045 for Ni and 0 ≤ x ≤ 0.03 for Fe, under magnetic fields (H = 2.52, 4.61, 6.27 kOe) applied perpendicular to the sample surface at a constant current of 100 mA. Our study shows that TC decreases as either dopant increases, and it decreases faster with Ni. Within these ranges of dopant and magnetic field, the Hall sign reversal was observed once in all samples, and ∆max occurred at lower temperatures; its magnitude increases with Ni, whereas in the Fe-doped samples, except the one with x = 0.03, it slightly decreases, which may indicate an effect of magnetic doping on the Hall effect.

  3. Statistical Symbolic Execution with Informed Sampling

    Science.gov (United States)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
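The statistical core of the approach, Monte Carlo sampling of executions plus a Bayesian estimate of the probability of reaching a target event, can be sketched in a few lines. The program below is a hypothetical stand-in (not Symbolic PathFinder), and the Beta-prior update is the standard conjugate scheme for a hit/miss count:

```python
import random

# Hypothetical program under analysis: the "assert violation" fires only
# on a narrow slice of the input space (true probability 0.1 * 0.1 = 0.01).
def program_hits_target(x, y):
    return x > 0.9 and y > 0.9

random.seed(0)
n, hits = 20000, 0
for _ in range(n):
    if program_hits_target(random.random(), random.random()):
        hits += 1

# Beta(1, 1) prior updated with the observed hits and misses.
a, b = 1 + hits, 1 + n - hits
posterior_mean = a / (a + b)
print(f"estimated P(target) ~ {posterior_mean:.4f} (true value 0.01)")
```

Informed sampling goes further by pruning high-probability paths after an exact analysis and steering sampling toward the rare remainder, which this plain estimator does not attempt.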

  4. Report: Independent Environmental Sampling Shows Some Properties Designated by EPA as Available for Use Had Some Contamination

    Science.gov (United States)

    Report #15-P-0221, July 21, 2015. Some OIG sampling results showed contamination was still present at sites designated by the EPA as ready for reuse. This was unexpected and could signal a need to implement changes to ensure human health protection.

  5. CSRtrack Faster Calculation of 3-D CSR Effects

    CERN Document Server

    Dohlus, Martin

    2004-01-01

CSRtrack is a new code for the simulation of coherent synchrotron radiation (CSR) effects on the beam dynamics of linear accelerators. It incorporates the physics of our previous code, TraFiC4, and adds new algorithms for the calculation of the CSR fields. A one-dimensional projected method allows quick estimates, and a Green's function method allows 3D calculations about ten times faster than with the 'direct' method. The tracking code is written in standard FORTRAN77 and has its own parser for comfortable input of calculation parameters and geometry. Phase space input and the analysis of the traced particle distribution are done with MATLAB interface programs.

  6. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on the socio-economic status (SES) and risk behaviors of the samples, comparing them to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  7. Gram-typing of mastitis bacteria in milk samples using flow cytometry

    DEFF Research Database (Denmark)

    Langerhuus, Sine Nygaard; Ingvartsen, Klaus Lønne; Bennedsgaard, Torben Werner

    2013-01-01

Fast identification of pathogenic bacteria in milk samples from cows with clinical mastitis is central to proper treatment. In Denmark, time to bacterial diagnosis is typically 24 to 48 h when using traditional culturing methods. The PCR technique provides a faster and highly sensitive identification... ...cytometry-based method, which can detect and distinguish gram-negative and gram-positive bacteria in mastitis milk samples. The differentiation was based on bacterial fluorescence intensities upon labeling with biotin-conjugated wheat germ agglutinin and acridine orange. Initially 19 in-house bacterial... characteristic curves for the 19 bacterial cultures. The method was then tested on 53 selected mastitis cases obtained from the department biobank (milk samples from 6 gram-negative and 47 gram-positive mastitis cases). Gram-negative bacteria in milk samples were detected with a sensitivity of 1...

  8. Bird on Your Smartphone: How to make identification faster?

    Science.gov (United States)

    Hidayat, T.; Kurniawan, I. S.; Tapilow, F. S.

    2018-01-01

Identification skills are needed by students in the field activities of an animal ecology course. Good identification skills help students understand traits and determine differences and similarities in order to name bird species. This study describes students' identification skills when using a smartphone application designed to support field activities. The research method was a quasi-experiment involving 60 students divided into two groups: one using the smartphone application (SA) and the other using a guidebook (GB). The study was carried out both in the classroom and outside in the field. The instruments used in this research included tests and a questionnaire. Identification skills were measured by tests and indicated by an average score (AS). The results showed that the identification skills of SA students were higher (AS = 3.12) than those of GB students (AS = 2.91). These results are in accordance with the students' responses: most students (90.08%) said that using the smartphone application to identify birds is helpful, more effective, and convenient, making identification faster. For wider implementation, however, the performance of the smartphone application needs to be enhanced to further improve students' identification skills.

  9. Vehicle parts detection based on Faster - RCNN with location constraints of vehicle parts feature point

    Science.gov (United States)

    Yang, Liqin; Sang, Nong; Gao, Changxin

    2018-03-01

Vehicle parts detection plays an important role in public transportation safety and mobility. The task is to detect the position of each vehicle part. We propose a new approach that combines Faster R-CNN with a three-level cascaded deep convolutional neural network (DCNN). The output of Faster R-CNN is a series of bounding boxes with coordinate information, from which we can locate vehicle parts. The DCNN precisely predicts the feature-point position, which is the center of a vehicle part. We design an output strategy that combines these two results. This has two advantages: the quality of the bounding boxes is greatly improved, meaning vehicle-part feature points can be located more precisely, and the position relationship between vehicle parts is preserved, which effectively improves the validity and reliability of the result. Using our algorithm, the performance of vehicle parts detection improves noticeably compared with Faster R-CNN alone.
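One plausible way to combine the two outputs, sketched here with hypothetical detections (the abstract does not specify the exact fusion rule): keep a detector box only if the DCNN-predicted feature point falls inside it, then re-center the box on that point.

```python
def contains(box, pt):
    # box is (x1, y1, x2, y2); pt is (px, py).
    x1, y1, x2, y2 = box
    px, py = pt
    return x1 <= px <= x2 and y1 <= py <= y2

def recenter(box, pt):
    # Shift the box so the predicted feature point becomes its center.
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    px, py = pt
    return (px - w / 2, py - h / 2, px + w / 2, py + h / 2)

def combine(boxes_with_scores, point):
    # boxes_with_scores: [(box, score), ...] from the detector;
    # point: the feature-point prediction for the same part.
    candidates = [(b, s) for b, s in boxes_with_scores if contains(b, point)]
    if not candidates:
        return None  # reject: box and feature point disagree
    best, _ = max(candidates, key=lambda bs: bs[1])
    return recenter(best, point)

boxes = [((0, 0, 10, 10), 0.9), ((40, 40, 60, 60), 0.8)]
print(combine(boxes, (50, 52)))  # -> (40.0, 42.0, 60.0, 62.0)
```

The consistency check is what enforces the location constraint; the re-centering step is what tightens box quality around the part center.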

  10. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  11. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  12. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
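The model above is simple enough to sketch directly: draw n vertices uniformly at random, join them in order into a closed polygon, and count crossings in a 2D projection by testing all pairs of non-adjacent edges. This is a minimal illustration of the O(n^2) crossing growth, not the paper's determinant or primality computations; the z-coordinate is dropped since only the projection matters for crossing counts.

```python
import random

def segments_cross(p, q, r, s):
    # Proper-intersection test via orientation signs (collinear edge cases,
    # which occur with probability zero for random reals, are ignored).
    def orient(a, b, c):
        v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        return (v > 0) - (v < 0)
    return (orient(p, q, r) != orient(p, q, s)
            and orient(r, s, p) != orient(r, s, q))

def projected_crossings(n, rng):
    # Uniform random polygon: n vertices uniform in the unit square
    # (the projection of vertices uniform in the unit cube).
    verts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(verts[i], verts[(i+1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # adjacent around the closed cycle
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count

rng = random.Random(1)
for n in (20, 40, 80):
    avg = sum(projected_crossings(n, rng) for _ in range(50)) / 50
    print(n, round(avg, 1))  # average crossings grow roughly quadratically
```

Doubling n roughly quadruples the average crossing count, consistent with the O(n^2) result quoted in the abstract.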

  13. Earlier time to aerobic exercise is associated with faster recovery following acute sport concussion.

    Science.gov (United States)

    Lawrence, David Wyndham; Richards, Doug; Comper, Paul; Hutchison, Michael G

    2018-01-01

To determine whether earlier time to initiation of aerobic exercise following acute concussion is associated with time to full return to (1) sport and (2) school or work. A retrospective stratified propensity score survival analysis of acute (≤14 days) concussion was used to determine whether time (days) to initiation of aerobic exercise post-concussion was associated with both time (days) to full return to (1) sport and (2) school or work. A total of 253 acute concussions [median (IQR) age, 17.0 (15.0-20.0) years; 148 (58.5%) males] were included in this study. Multivariate Cox regression models identified that earlier time to aerobic exercise was associated with faster return to sport and school/work, adjusting for other covariates including quintile propensity strata. For each successive day of delay in initiating aerobic exercise, individuals had a less favourable recovery trajectory. Initiating aerobic exercise at 3 and 7 days following injury was associated with a respective 36.5% (HR, 0.63; 95% CI, 0.53-0.76) and 73.2% (HR, 0.27; 95% CI, 0.16-0.45) reduced probability of faster full return to sport compared to within 1 day, and a respective 45.9% (HR, 0.54; 95% CI, 0.44-0.66) and 83.1% (HR, 0.17; 95% CI, 0.10-0.30) reduced probability of faster full return to school/work. Additionally, concussion history, symptom severity, and loss of consciousness (LOC) deleteriously influenced concussion recovery. Earlier initiation of aerobic exercise was associated with faster full return to sport and school or work. This study provides greater insight into the benefits and safety of aerobic exercise within the first week of the injury.
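The percent figures in the abstract are the hazard ratios re-expressed as reduced probabilities via reduction = (1 - HR) x 100; small mismatches (e.g. 37% here versus the reported 36.5%) come from the published HRs being rounded to two decimals. A minimal sketch of the conversion:

```python
def percent_reduction(hazard_ratio):
    # Reduction in the probability of faster recovery vs. the reference group.
    return (1.0 - hazard_ratio) * 100.0

# Hazard ratios as reported in the abstract (rounded to two decimals).
for label, hr in [("sport, exercise at day 3", 0.63),
                  ("sport, exercise at day 7", 0.27),
                  ("school/work, exercise at day 3", 0.54),
                  ("school/work, exercise at day 7", 0.17)]:
    print(f"{label}: HR = {hr} -> ~{percent_reduction(hr):.0f}% reduction")
```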

  14. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  15. Faster gastric emptying of a liquid meal in rats after hypothalamic dorsomedial nucleus lesion

    Directory of Open Access Journals (Sweden)

    Denofre-Carvalho S.

    1997-01-01

The effects of dorsomedial hypothalamic (DMH) nucleus lesion on body weight, plasma glucose levels, and the gastric emptying of a liquid meal were investigated in male Wistar rats (170-250 g). DMH lesions were produced stereotaxically by delivering a 2.0-mA current for 20 s through nichrome electrodes (0.3-mm tip exposure). In a second set of experiments, the DMH and the ventromedial hypothalamic (VMH) nucleus were lesioned with a 1.0-mA current for 10 s (0.1-mm tip exposure). The medial hypothalamus (MH) was also lesioned separately using a nichrome electrode (0.3-mm tip exposure) with a 2.0-mA current for 20 s. Gastric emptying was measured following the orogastric infusion of a liquid test meal consisting of physiological saline (0.9% NaCl, w/v) plus phenol red dye (6 mg/dl) as a marker. Plasma glucose levels were determined after an 18-h fast before the lesion and on the 7th and 15th postoperative day. Body weight was determined before lesioning and before sacrificing the rats. The DMH-lesioned rats showed a significantly faster (P<0.05) gastric emptying (24.7% gastric retention, N = 11) than control (33.0% gastric retention, N = 8) and sham-lesioned (33.5% gastric retention, N = 12) rats, with a transient hypoglycemia on the 7th postoperative day which returned to normal by the 15th postoperative day. In all cases, weight gain was slower among lesioned rats. Additional experiments using a smaller current to induce lesions confirmed that DMH-lesioned rats had a faster gastric emptying (25.1% gastric retention, N = 7) than control (33.4% gastric retention, N = 17) and VMH-lesioned (34.6% gastric retention, N = 7) rats. MH lesions resulted in an even slower gastric emptying (43.7% gastric retention, N = 7) than in the latter two groups. We conclude that although DMH lesions reduce weight gain, they do not produce consistent changes in plasma glucose levels.
These lesions also promote faster gastric emptying of an inert liquid meal, thus suggesting a role for

  16. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

The Monte Carlo method is commonly used to observe an overall distribution and to determine lower or upper bound values statistically when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, 'Two-Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling at points separated by large intervals, and second, sampling at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that results obtained from the new method converge to the analytic value faster for the same number of calculations. The new method is applied to the DNBR (departure from nucleate boiling ratio) prediction problem in the design of a pressurized light-water nuclear reactor.
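The two-step idea can be illustrated with a toy discretization, loosely in the spirit of the abstract rather than the paper's actual DNBR scheme: a coarse scan over the whole support locates the tail, and the fine-interval sampling is then spent only there.

```python
import math

# Hypothetical continuous response, standing in for the quantity of interest:
# a standard normal density, discretized on a grid (midpoint rule).
def density(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def grid_mass(lo, hi, step):
    n = int(round((hi - lo) / step))
    return sum(density(lo + (i + 0.5) * step) * step for i in range(n))

threshold = 2.0
# Step 1: coarse scan (step 0.5) over the whole support; cheap, and enough
# to verify the overall distribution and locate where the tail begins.
coarse = grid_mass(-8.0, 8.0, 0.5)        # total mass, should be ~1
# Step 2: fine-interval sampling (step 0.001) restricted to the tail only.
tail = grid_mass(threshold, 8.0, 0.001)   # P(X >= 2), exact value ~0.02275
print(round(coarse, 3), round(tail, 4))
```

The fine grid never touches the bulk of the distribution, which is where the efficiency gain over plain Monte Carlo sampling of the whole support comes from.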

  17. Faster recovery of a diatom from UV damage under ocean acidification.

    Science.gov (United States)

    Wu, Yaping; Campbell, Douglas A; Gao, Kunshan

    2014-11-01

Diatoms are the most important group of primary producers in marine ecosystems. As oceanic pH declines and increased stratification leads to a shallower upper mixing layer, diatoms are interactively affected by both lower pH and higher average exposures to solar ultraviolet radiation. The photochemical yields of a model diatom, Phaeodactylum tricornutum, were inhibited by ultraviolet radiation under both growth and excess light levels, while the functional absorbance cross sections of the remaining photosystem II increased. Cells grown under ocean acidification (OA) were less affected during UV exposure. The recovery of PSII under low photosynthetically active radiation was much faster than in the dark, indicating that photosynthetic processes were essential for the full recovery of photosystem II. This light-dependent recovery required de novo synthesized protein. Cells grown under ocean acidification recovered faster, possibly attributable to higher CO₂ availability for the Calvin cycle producing more resources for repair. The lower UV inhibition combined with the higher recovery rate under ocean acidification could benefit species such as P. tricornutum and change their competitiveness in the future ocean.

  18. Cone-Beam Computed Tomography (CBCT) Versus CT in Lung Ablation Procedure: Which is Faster?

    Science.gov (United States)

    Cazzato, Roberto Luigi; Battistuzzi, Jean-Benoit; Catena, Vittorio; Grasso, Rosario Francesco; Zobel, Bruno Beomonte; Schena, Emiliano; Buy, Xavier; Palussiere, Jean

    2015-10-01

To compare cone-beam CT (CBCT) versus computed tomography (CT) guidance in terms of time needed to target and place the radiofrequency ablation (RFA) electrode on lung tumours. Patients at our institution who received CBCT- or CT-guided RFA for primary or metastatic lung tumours were retrospectively included. Time required to target and place the RFA electrode within the lesion was registered and compared across the two groups. Lesions were stratified into three groups according to their size. Occurrences of electrode repositioning, repositioning time, RFA complications, and local recurrence after RFA were also reported. Forty tumours (22 under CT, 18 under CBCT guidance) were treated in 27 patients (19 male, 8 female, median age 67.25 ± 9.13 years). Thirty RFA sessions (16 under CBCT and 14 under CT guidance) were performed. Multivariable linear regression analysis showed that CBCT was faster than CT in targeting and placing the electrode within the tumour, independently of its size (β = -9.45, t = -3.09, p = 0.004). Electrode repositioning was required in 10/22 (45.4%) tumours under CT guidance and 5/18 (27.8%) tumours under CBCT guidance. Pneumothoraces occurred in 6/14 (42.8%) sessions under CT guidance and in 6/16 (37.5%) sessions under CBCT guidance. Two recurrences were noted for tumours receiving CBCT-guided RFA (2/17, 11.7%) and three after CT-guided RFA (3/19, 15.8%). CBCT with live 3D needle guidance is a useful technique for percutaneous lung ablation. Regardless of lesion size, CBCT allows faster lung RFA than CT.

  19. Preparation of samples for leaf architecture studies, a method for mounting cleared leaves.

    Science.gov (United States)

    Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C

    2014-09-01

    Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration.

  20. Experimental study of electromagnetic radiation from a faster-than-light vacuum macroscopic source

    Energy Technology Data Exchange (ETDEWEB)

    Bessarab, A.V. [Russian Federal Nuclear Center-All-Russia Scientific Research Institute of Experimental Physics, Sarov, Nizhni Novgorod region, 607188 (Russian Federation); Martynenko, S.P. [Russian Federal Nuclear Center-All-Russia Scientific Research Institute of Experimental Physics, Sarov, Nizhni Novgorod region, 607188 (Russian Federation); Prudkoi, N.A. [Russian Federal Nuclear Center-All-Russia Scientific Research Institute of Experimental Physics, Sarov, Nizhni Novgorod region, 607188 (Russian Federation); Soldatov, A.V. [Russian Federal Nuclear Center-All-Russia Scientific Research Institute of Experimental Physics, Sarov, Nizhni Novgorod region, 607188 (Russian Federation)]. E-mail: soldatov@vniief.ru; Terekhin, V.A. [Russian Federal Nuclear Center-All-Russia Scientific Research Institute of Experimental Physics, Sarov, Nizhni Novgorod region, 607188 (Russian Federation)

    2006-08-15

    The effect, which manifests itself in the form of directed electromagnetic pulses (EMP) initiated by an X-ray pulse incident obliquely upon a conducting surface, has been confirmed and investigated experimentally in detail. A planar accelerating diode comprising a metallic cathode and grid anode was initiated with an oblique short soft-X-ray pulse from a point laser-plasma source. A source of directed EMP, a current of accelerated photoelectrons, was then formed whose boundary ran along the anode external surface at a faster-than-light velocity. The plasma was formed when short-pulse (~0.3 ns) laser radiation from the ISKRA-5 facility was focused on a plane Au target. The amplitude-in-time and spatial characteristics of radiation emitted by the faster-than-light source have been measured, as have parameters of the accelerated electron current.

  1. The new LLNL AMS sample changer

    International Nuclear Information System (INIS)

    Roberts, M.L.; Norman, P.J.; Garibaldi, J.L.; Hornady, R.S.

    1993-01-01

    The Center for Accelerator Mass Spectrometry at LLNL has installed a new 64 position AMS sample changer on our spectrometer. This new sample changer has the capability of being controlled manually by an operator or automatically by the AMS data acquisition computer. Automatic control of the sample changer by the data acquisition system is a necessary step towards unattended AMS operation in our laboratory. The sample changer uses a fiber optic shaft encoder for rough rotational indexing of the sample wheel and a series of sequenced pneumatic cylinders for final mechanical indexing of the wheel and insertion and retraction of samples. Transit time from sample to sample varies from 4 s to 19 s, depending on distance moved. Final sample location can be set to within 50 microns in the x and y axes and within 100 microns in the z axis. Changing sample wheels on the new sample changer is also easier and faster than was possible on our previous sample changer and does not require the use of any tools.

  2. Information needs at the beginning of foraging: grass-cutting ants trade off load size for a faster return to the nest.

    Directory of Open Access Journals (Sweden)

    Martin Bollazzi

    2011-03-01

    Acquisition of information about food sources is essential for animals that forage collectively like social insects. Foragers deliver two commodities to the nest, food and information, and they may favor the delivery of one at the expense of the other. We predict that information needs should be particularly high at the beginning of foraging: the decision to return faster to the nest will motivate a grass-cutting ant worker to reduce its loading time, and so to leave the source with a partial load. Field results showed that at the initial foraging phase, most grass-cutting ant foragers (Acromyrmex heyeri) returned unladen to the nest, and experienced head-on encounters with outgoing workers. Ant encounters were not simply collisions in a probabilistic sense: outgoing workers contacted on average 70% of the returning foragers at the initial foraging phase, and only 20% at the established phase. At the initial foraging phase, workers cut fragments that were shorter, narrower, lighter and tenderer than those harvested at the established one. Foragers walked at the initial phase significantly faster than expected for the observed temperatures, yet not at the established phase. Moreover, when controlling for differences in the fragment-size carried, workers still walked faster at the initial phase. Despite the higher speed, their individual transport rate of vegetable tissue was lower than that of similarly-sized workers foraging later at the same patch. At the initial foraging phase, workers compromised their individual transport rates of material in order to return faster to the colony. We suggest that the observed flexible cutting rules and the selection of partial loads at the beginning of foraging are driven by the need for information transfer, crucial for the establishment and maintenance of a foraging process to monopolize a discovered resource.

  3. Faster Movement Speed Results in Greater Tendon Strain during the Loaded Squat Exercise

    Science.gov (United States)

    Earp, Jacob E.; Newton, Robert U.; Cormie, Prue; Blazevich, Anthony J.

    2016-01-01

    Introduction: Tendon dynamics influence movement performance and provide the stimulus for long-term tendon adaptation. As tendon strain increases with load magnitude and decreases with loading rate, changes in movement speed during exercise should influence tendon strain. Methods: Ten resistance-trained men [squat one repetition maximum (1RM) to body mass ratio: 1.65 ± 0.12] performed parallel-depth back squat lifts with 60% of 1RM load at three different speeds: slow fixed-tempo (TS: 2-s eccentric, 1-s pause, 2-s concentric), volitional-speed without a pause (VS) and maximum-speed jump (JS). In each condition joint kinetics, quadriceps tendon length (LT), patellar tendon force (FT), and rate of force development (RFDT) were estimated using integrated ultrasonography, motion-capture, and force platform recordings. Results: Peak LT, FT, and RFDT were greater in JS than TS (p < 0.05), however no differences were observed between VS and TS. Thus, moving at faster speeds resulted in both greater tendon stress and strain despite an increased RFDT, as would be predicted of an elastic, but not a viscous, structure. Temporal comparisons showed that LT was greater in TS than JS during the early eccentric phase (10–14% movement duration) where peak RFDT occurred, demonstrating that the tendon's viscous properties predominated during initial eccentric loading. However, during the concentric phase (61–70 and 76–83% movement duration) differing FT and similar RFDT between conditions allowed for the tendon's elastic properties to predominate such that peak tendon strain was greater in JS than TS. Conclusions: Based on our current understanding, there may be an additional mechanical stimulus for tendon adaptation when performing large range-of-motion isoinertial exercises at faster movement speeds. PMID:27630574

  4. The Faster, Better, Cheaper Approach to Space Missions: An Engineering Management Assessment

    Science.gov (United States)

    Hamaker, Joseph W.

    1999-01-01

    NASA was chartered as an independent civilian space agency in 1958 following the Soviet Union's dramatic launch of Sputnik 1 (1957). In his address to Congress in May of 1961, President Kennedy issued to the fledgling organization his famous challenge for a manned lunar mission by the end of the decade. The Mercury, Gemini and Apollo programs that followed put the utmost value on high quality, low risk (as low as possible within the context of space flight), and quick results, all with little regard for cost. These circumstances essentially melded NASA's culture as an organization capable of great technological achievement but at extremely high cost. The Space Shuttle project, the next major agency endeavor, was put under severe annual budget constraints in the 1970's. NASA's response was to hold to the high quality standards, low risk and annual cost and let schedule suffer. The result was a significant delay in the introduction of the Shuttle as well as overall total cost growth. By the early 1990's, because NASA's budget was declining, the number of projects was also declining. Holding the same cost and schedule productivity levels as before was essentially causing NASA to price itself out of business. In 1992, the helm of NASA was turned over to a new Administrator. Dan Goldin's mantra was "faster, better, cheaper" and his enthusiasm and determination to change the NASA culture was not to be ignored. This research paper documents the various implementations of "faster, better, cheaper" that have been attempted, analyzes their impact and compares the cost performance of these new projects to previous NASA benchmarks. Fundamentally, many elements of "faster, better, cheaper" are found to be working well, especially on smaller projects. Some of the initiatives are found to apply only to smaller or experimental projects, however, so that extrapolation to "flagship" projects may be problematic.

  5. DOE translation tool: Faster and better than ever

    International Nuclear Information System (INIS)

    El-Chakieh, T.; Vincent, C.

    2006-01-01

    CAE's constant push to advance power plant simulation practices involves continued investment in our technologies. This commitment has yielded many advances in our simulation technologies and tools to provide faster maintenance updates, easier process updates and higher fidelity models for power plant simulators. Through this quest, a comprehensive, self-contained and user-friendly DCS translation tool for plant control system emulation was created. The translation tool converts an ABB Advant AC160 and/or AC450 control system, used in gas turbine-based, fossil and nuclear power plants, into Linux or Windows-based ROSE® simulation schematics. The translation for a full combined-cycle gas turbine (CCGT) plant that comprises more than 5,300 function plans distributed over 15 nodes is processed in less than five hours on a dual 2.8 GHz Xeon Linux platform, compared to the 12 hours required by CAE's previous translation tool. The translation process, using the plant configuration files, includes the parsing of the control algorithms, the databases, the graphics and the interconnections between nodes. A Linux or Windows API is then used to automatically populate the ROSE® database. Without such a translation tool, or if 'stimulation' of the real control system is not feasible or is too costly, manually simulating the DCS takes months of error-prone coding. The translation can be performed for all the nodes constituting the configuration files of the whole plant DCS, or, to provide faster maintenance updates and easier process updates, partial builds are possible at three levels: (a) single schematic updates, (b) multi-schematic updates, and (c) single node updates based on the user inputs into the Graphical User Interface. Improvements include: process time reduction of over 60%; fully automated communication connections between nodes; a new partial build for one schematic, a group of schematics or a single node; availability on PC

  6. Faster and more accurate transport procedures for HZETRN

    International Nuclear Information System (INIS)

    Slaba, T.C.; Blattnig, S.R.; Badavi, F.F.

    2010-01-01

    The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ≤ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm² in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm² of aluminum, polyethylene, and water. The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times

  7. They all like it hot: faster cleanup of contaminated soil and groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Newmark, R., LLNL

    1998-03-01

    Clean up a greasy kitchen spill with cold water and the going is slow. Use hot water instead and progress improves markedly. So it makes sense that cleanup of greasy underground contaminants such as gasoline might go faster if hot water or steam were somehow added to the process. The Environmental Protection Agency named hundreds of sites to the Superfund list - sites that have been contaminated with petroleum products or solvents. Elsewhere across the country, thousands of properties not identified on federal cleanup lists are contaminated as well. Given that under current regulations, underground accumulations of solvent and hydrocarbon contaminants (the most serious cause of groundwater pollution) must be cleaned up, finding a rapid and effective method of removing them is imperative. In the early 1990s, in collaboration with the School of Engineering at the University of California at Berkeley, Lawrence Livermore developed dynamic underground stripping. This method for treating underground contaminants with heat is much faster and more effective than traditional treatment methods.

  8. 20 Recipes for Programming MVC 3 Faster, Smarter Web Development

    CERN Document Server

    Munro, Jamie

    2011-01-01

    There's no need to reinvent the wheel every time you run into a problem with ASP.NET's Model-View-Controller (MVC) framework. This concise cookbook provides recipes to help you solve tasks many web developers encounter every day. Each recipe includes the C# code you need, along with a complete working example of how to implement the solution. Learn practical techniques for applying user authentication, providing faster page reloads, validating user data, filtering search results, and many other issues related to MVC3 development. These recipes help you: Restrict access to views with password

  9. Preparation of Samples for Leaf Architecture Studies, A Method for Mounting Cleared Leaves

    Directory of Open Access Journals (Sweden)

    Alejandra Vasco

    2014-09-01

    Premise of the study: Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. Methods and Results: Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. Conclusions: The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration.

  10. Sequence-based heuristics for faster annotation of non-coding RNA families.

    Science.gov (United States)

    Weinberg, Zasha; Ruzzo, Walter L

    2006-01-01

    Non-coding RNAs (ncRNAs) are functional RNA molecules that do not code for proteins. Covariance Models (CMs) are a useful statistical tool to find new members of an ncRNA gene family in a large genome database, using both sequence and, importantly, RNA secondary structure information. Unfortunately, CM searches are extremely slow. Previously, we created rigorous filters, which provably sacrifice none of a CM's accuracy, while making searches significantly faster for virtually all ncRNA families. However, these rigorous filters make searches slower than heuristics could be. In this paper we introduce profile HMM-based heuristic filters. We show that their accuracy is usually superior to heuristics based on BLAST. Moreover, we compared our heuristics with those used in tRNAscan-SE, whose heuristics incorporate a significant amount of work specific to tRNAs, where our heuristics are generic to any ncRNA. Performance was roughly comparable, so we expect that our heuristics provide a high-quality solution that--unlike family-specific solutions--can scale to hundreds of ncRNA families. The source code is available under GNU Public License at the supplementary web site.
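    The two-stage search this abstract describes, a cheap sequence-based filter that discards most of the database before the expensive covariance-model scan, can be illustrated with a toy k-mer prefilter. This is a hypothetical stand-in (the paper's actual filters are profile HMMs); `kmer_filter`, the example sequences, and the thresholds are all invented for illustration:

```python
def kmer_filter(db_windows, query_kmers, k=4, min_hits=2):
    """Keep only windows sharing at least `min_hits` k-mers with the query,
    so an expensive model (a covariance model in the paper) only has to
    score the few surviving windows. Toy illustration, not the paper's method."""
    survivors = []
    for window in db_windows:
        hits = sum(1 for i in range(len(window) - k + 1)
                   if window[i:i + k] in query_kmers)
        if hits >= min_hits:
            survivors.append(window)
    return survivors

# Hypothetical ncRNA query and three database windows.
query = "GGCUAAGCGU"
query_kmers = {query[i:i + 4] for i in range(len(query) - 3)}
windows = ["GGCUAAGCGU", "AAAAUUUUCC", "UUGGCUAAGC"]
print(kmer_filter(windows, query_kmers))  # -> ['GGCUAAGCGU', 'UUGGCUAAGC']
```

    Only the survivors would then be passed to the slow, structure-aware scoring stage; the filter's job is to be cheap while losing as few true family members as possible.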

  11. Investigating the Mpemba Effect: When Hot Water Freezes Faster than Cold Water

    Science.gov (United States)

    Ibekwe, R. T.; Cullerne, J. P.

    2016-01-01

    Under certain conditions a body of hot liquid may cool faster and freeze before a body of colder liquid, a phenomenon known as the Mpemba Effect. An initial difference in temperature of 3.2 °C enabled warmer water to reach 0 °C in 14% less time than colder water. Convection currents in the liquid generate a temperature gradient that causes more…
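    For context, the simplest lumped model, Newton's law of cooling, can never produce the effect on its own: a body that starts hotter always takes longer to reach 0 °C, so extra mechanisms such as the convection currents mentioned in the abstract are needed. A minimal check, where the rate constant `k`, the ambient temperature `T_env`, and the absolute starting temperatures are assumed illustrative values (only the 3.2 °C head start mirrors the abstract):

```python
import math

def time_to_reach(T0, T_target, T_env=-10.0, k=0.001):
    """Time for Newton's-law cooling, dT/dt = -k * (T - T_env), to fall from
    T0 to T_target; closed form t = (1/k) * ln((T0 - T_env) / (T_target - T_env)).
    k (1/s) and T_env (deg C) are illustrative assumptions, not study data."""
    return math.log((T0 - T_env) / (T_target - T_env)) / k

t_hot = time_to_reach(33.2, 0.0)   # sample starting 3.2 deg C hotter
t_cold = time_to_reach(30.0, 0.0)
print(t_hot > t_cold)  # -> True: in this model the hotter sample always takes longer
```

    Because the model's reach-time is monotone in the starting temperature, any observed Mpemba crossover must come from physics outside it, e.g. convection, evaporation, or dissolved-gas effects.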

  12. Even Faster Web Sites Performance Best Practices for Web Developers

    CERN Document Server

    Souders, Steve

    2009-01-01

    Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax. In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is on the client side. In Even Faster Web Sites, Souders and eight exp

  13. Preparation of samples for leaf architecture studies, a method for mounting cleared leaves1

    Science.gov (United States)

    Vasco, Alejandra; Thadeo, Marcela; Conover, Margaret; Daly, Douglas C.

    2014-01-01

    • Premise of the study: Several recent waves of interest in leaf architecture have shown an expanding range of approaches and applications across a number of disciplines. Despite this increased interest, examination of existing archives of cleared and mounted leaves shows that current methods for mounting, in particular, yield unsatisfactory results and deterioration of samples over relatively short periods. Although techniques for clearing and staining leaves are numerous, published techniques for mounting leaves are scarce. • Methods and Results: Here we present a complete protocol and recommendations for clearing, staining, and imaging leaves, and, most importantly, a method to permanently mount cleared leaves. • Conclusions: The mounting protocol is faster than other methods, inexpensive, and straightforward; moreover, it yields clear and permanent samples that can easily be imaged, scanned, and stored. Specimens mounted with this method preserve well, with leaves that were mounted more than 35 years ago showing no signs of bubbling or discoloration. PMID:25225627

  14. Effects of surface roughness, texture and polymer degradation on cathodic delamination of epoxy coated steel samples

    International Nuclear Information System (INIS)

    Khun, N.W.; Frankel, G.S.

    2013-01-01

    Highlights: ► Cathodic delamination of epoxy coated steel samples was studied using SKP. ► Delamination of the coating decreased with increased substrate surface roughness. ► Delamination of the coating was faster on the substrate with parallel surface scratches. ► Delamination of the coating exposed to weathering conditions increased with prolonged exposure. - Abstract: The Scanning Kelvin Probe (SKP) technique was used to investigate the effects of surface roughness, texture and polymer degradation on cathodic delamination of epoxy coated steel. The cathodic delamination rate of the epoxy coatings dramatically decreased with increased surface roughness of the underlying steel substrate. The surface texture of the steel substrates also had a significant effect, in that samples with parallel abrasion lines exhibited faster cathodic delamination in the direction of the lines compared to the direction perpendicular to the lines. The cathodic delamination kinetics of epoxy coatings previously exposed to weathering conditions increased with prolonged exposure due to pronounced polymer degradation. SEM observation confirmed that the cyclic exposure to UV radiation and water condensation caused severe deterioration in the polymer structures with surface cracking and erosion. The SKP results clearly showed that the cathodic delamination of the epoxy coatings was significantly influenced by the surface features of the underlying steel substrates and the degradation of the coatings.

  15. Cone-Beam Computed Tomography (CBCT) Versus CT in Lung Ablation Procedure: Which is Faster?

    Energy Technology Data Exchange (ETDEWEB)

    Cazzato, Roberto Luigi, E-mail: r.cazzato@unicampus.it; Battistuzzi, Jean-Benoit, E-mail: j.battistuzzi@bordeaux.unicancer.fr; Catena, Vittorio, E-mail: vittoriocatena@gmail.com [Institut Bergonié, Department of Radiology (France); Grasso, Rosario Francesco, E-mail: r.grasso@unicampus.it; Zobel, Bruno Beomonte, E-mail: b.zobel@unicampus.it [Università Campus Bio-Medico di Roma, Department of Radiology and Diagnostic Imaging (Italy); Schena, Emiliano, E-mail: e.schena@unicampus.it [Università Campus Bio-Medico di Roma, Unit of Measurements and Biomedical Instrumentations, Biomedical Engineering Laboratory (Italy); Buy, Xavier, E-mail: x.buy@bordeaux.unicancer.fr; Palussiere, Jean, E-mail: j.palussiere@bordeaux.unicancer.fr [Institut Bergonié, Department of Radiology (France)

    2015-10-15

    Aim: To compare cone-beam CT (CBCT) versus computed tomography (CT) guidance in terms of time needed to target and place the radiofrequency ablation (RFA) electrode on lung tumours. Materials and Methods: Patients at our institution who received CBCT- or CT-guided RFA for primary or metastatic lung tumours were retrospectively included. Time required to target and place the RFA electrode within the lesion was registered and compared across the two groups. Lesions were stratified into three groups according to their size (<10, 10–20, >20 mm). Occurrences of electrode repositioning, repositioning time, RFA complications, and local recurrence after RFA were also reported. Results: Forty tumours (22 under CT, 18 under CBCT guidance) were treated in 27 patients (19 male, 8 female, median age 67.25 ± 9.13 years). Thirty RFA sessions (16 under CBCT and 14 under CT guidance) were performed. Multivariable linear regression analysis showed that CBCT was faster than CT to target and place the electrode within the tumour independently from its size (β = −9.45, t = −3.09, p = 0.004). Electrode repositioning was required in 10/22 (45.4 %) tumours under CT guidance and 5/18 (27.8 %) tumours under CBCT guidance. Pneumothoraces occurred in 6/14 (42.8 %) sessions under CT guidance and in 6/16 (37.5 %) sessions under CBCT guidance. Two recurrences were noted for tumours receiving CBCT-guided RFA (2/17, 11.7 %) and three after CT-guided RFA (3/19, 15.8 %). Conclusion: CBCT with live 3D needle guidance is a useful technique for percutaneous lung ablation. Regardless of lesion size, CBCT allows faster lung RFA than CT.

  16. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    Science.gov (United States)

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  17. Total Decomposition of Environmental Radionuclide Samples with a Microwave Oven

    International Nuclear Information System (INIS)

    Ramon Garcia, Bernd Kahn

    1998-01-01

    Closed-vessel microwave-assisted acid decomposition was investigated as an alternative to traditional methods of sample dissolution/decomposition. This technique, used in analytical chemistry, has some potential advantages over other procedures. It requires fewer reagents, it is faster, and it has the potential of achieving total dissolution because of higher temperatures and pressures.

  18. Revisit the faster-is-slower effect for an exit at a corner

    Science.gov (United States)

    Chen, Jun Min; Lin, Peng; Wu, Fan Yu; Li Gao, Dong; Wang, Guo Yuan

    2018-02-01

    The faster-is-slower effect (FIS), whereby a crowd moving at a high enough velocity can significantly increase the time needed to escape through an exit, is an interesting phenomenon in pedestrian dynamics. This phenomenon has been studied widely and experimentally verified in different systems of discrete particles flowing through a centre exit. Experimentally validating it with people under high pressure is difficult due to ethical issues. A mouse, similar to a human, is a self-driven, soft-bodied creature with competitive behaviour under stressed conditions. Therefore, mice were used to escape through an exit at a corner. A number of repeated tests were conducted and the average escape time per mouse at different levels of stimulus was analysed. The escape times do not increase obviously with the level of stimulus for the corner exit, which is contrary to experiments with a centre exit. The experimental results show that the FIS effect is not necessarily a universal law for any discrete system. This observation could help the design of buildings: relocating exits to the corners of rooms may avoid the formation of the FIS effect.

  19. Two-ply channels for faster wicking in paper-based microfluidic devices.

    Science.gov (United States)

    Camplisson, Conor K; Schilling, Kevin M; Pedrotti, William L; Stone, Howard A; Martinez, Andres W

    2015-12-07

    This article describes the development of porous two-ply channels for paper-based microfluidic devices that wick fluids significantly faster than conventional, porous, single-ply channels. The two-ply channels were made by stacking two single-ply channels on top of each other and were fabricated entirely out of paper, wax and toner using two commercially available printers, a convection oven and a thermal laminator. The wicking in paper-based channels was studied and modeled using a modified Lucas-Washburn equation to account for the effect of evaporation, and a paper-based titration device incorporating two-ply channels was demonstrated.
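    The baseline relation the authors modify is the classic Lucas-Washburn law, in which the wicked length grows with the square root of time, L(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu)). A sketch with assumed parameter values for water in a paper-like pore (the paper's evaporation correction is not reproduced here):

```python
import math

def lucas_washburn_length(t, gamma=0.0728, r=5e-6, theta=0.0, mu=1.0e-3):
    """Classic Lucas-Washburn wicking length L(t) = sqrt(gamma*r*cos(theta)*t / (2*mu)).
    gamma: surface tension (N/m), r: effective pore radius (m), theta: contact
    angle (rad), mu: viscosity (Pa*s). All parameter values are illustrative
    assumptions, not measurements from the paper."""
    return math.sqrt(gamma * r * math.cos(theta) * t / (2.0 * mu))

# Square-root kinetics: quadrupling the elapsed time only doubles the length,
# which is why wicking slows down and why faster channel designs matter.
ratio = lucas_washburn_length(40.0) / lucas_washburn_length(10.0)
print(round(ratio, 6))  # -> 2.0
```

    The evaporation term the authors add effectively removes fluid along the channel, so real devices fall below this square-root curve; two-ply channels counter the slowdown by offering a larger effective cross-section.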

  20. Semantic Size Does Not Matter: “Bigger” Words Are Not Recognised Faster

    OpenAIRE

    Kang, Sean H.K.; Yap, Melvin J.; Tse, Chi-Shing; Kurby, Christopher A.

    2011-01-01

    Sereno, O’Donnell, and Sereno (2009) reported that words are recognised faster in a lexical decision task when their referents are physically large rather than small, suggesting that “semantic size” might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for “big” words (cf. “small” words), even though we used the same word stimul...

  1. Innovations for competitiveness: European views on "better-faster-cheaper"

    Science.gov (United States)

    Atzei, A.; Groepper, P.; Novara, M.; Pseiner, K.

    1999-09-01

    The paper elaborates on "lessons learned" from two recent ESA workshops, one focussing on the role of innovation in the competitiveness of the space sector and the second on technology and engineering aspects conducive to better, faster and cheaper space programmes. The paper focuses primarily on four major aspects, namely: a) the adaptations of industrial and public organisations to the global market needs; b) the understanding of the bottleneck factors limiting competitiveness; c) the trends toward new system architectures and new engineering and production methods; d) the understanding of the role of new technology in the future applications. Under the pressure of market forces and the influence of many global and regional players, applications of space systems and technology are becoming more and more competitive. It is well recognised that without major effort for innovation in industrial practices, organisations, R&D, marketing and financial approaches the European space sector will stagnate and lose its competence as well as its competitiveness. It is also recognised that a programme run according to the "better, faster, cheaper" philosophy relies on much closer integration of system design, development and verification, and draws heavily on a robust and comprehensive programme of technology development, which must run in parallel and off-line with respect to flight programmes. A company's innovation capabilities will determine its future competitive advantage (in time, cost, performance or value) and overall growth potential. Innovation must be a process that can be counted on to provide repetitive, sustainable, long-term performance improvements. As such, it needs not depend on great breakthroughs in technology and concepts (which are accidental and rare). Rather, it could be based on bold evolution through the establishment of know-how, application of best practices, process effectiveness and high standards, performance measurement, and attention to

  2. A faster ordered-subset convex algorithm for iterative reconstruction in a rotation-free micro-CT system

    International Nuclear Information System (INIS)

    Quan, E; Lalush, D S

    2009-01-01

    We present a faster iterative reconstruction algorithm based on the ordered-subset convex (OSC) algorithm for transmission CT. The OSC algorithm was modified to calculate the normalization term before the iterative process in order to save computational cost. The modified version requires only one backprojection per iteration, compared with the two required by the original OSC. We applied the modified OSC (MOSC) algorithm to a rotation-free micro-CT system that we proposed previously, observed its performance, and compared it with the OSC algorithm for 3D cone-beam reconstruction. Measurements on the reconstructed images as well as the point spread functions show that MOSC is quite similar to OSC; in the noise-resolution trade-off, MOSC is comparable with OSC in a regular-noise situation and slightly worse than OSC in an extremely high-noise situation. The timing record shows that MOSC saves 25-30% CPU time, depending on the number of iterations used. We conclude that the MOSC algorithm is more efficient than OSC and provides comparable images.
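    The saving the abstract describes — hoisting the normalization backprojection out of the iteration loop so each sub-iteration needs one backprojection instead of two — can be sketched generically. The update below is an OSEM-style multiplicative stand-in with illustrative names, not the paper's transmission OSC formulas:

    ```python
    import numpy as np

    def osc_like(image, A, data, subsets, n_iter=20):
        # The MOSC idea in skeleton form: the normalization backprojection
        # (A^T 1 over each subset) does not depend on the current image, so
        # it is computed once up front instead of once per sub-iteration.
        norms = [A[s].T @ np.ones(len(s)) for s in subsets]
        for _ in range(n_iter):
            for k, s in enumerate(subsets):
                ratio = data[s] / np.maximum(A[s] @ image, 1e-12)
                image = image * (A[s].T @ ratio) / np.maximum(norms[k], 1e-12)
        return image
    ```

    With noiseless, consistent data the true image is a fixed point of this multiplicative update, which makes the skeleton easy to sanity-check.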

  3. Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening

    OpenAIRE

    He, Frank S.; Liu, Yang; Schwing, Alexander G.; Peng, Jian

    2016-01-01

    We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and...

  4. National health expenditure projections, 2013-23: faster growth expected with expanded coverage and improving economy.

    Science.gov (United States)

    Sisko, Andrea M; Keehan, Sean P; Cuckler, Gigi A; Madison, Andrew J; Smith, Sheila D; Wolfe, Christian J; Stone, Devin A; Lizonitz, Joseph M; Poisal, John A

    2014-10-01

    In 2013 health spending growth is expected to have remained slow, at 3.6 percent, as a result of the sluggish economic recovery, the effects of sequestration, and continued increases in private health insurance cost-sharing requirements. The combined effects of the Affordable Care Act's coverage expansions, faster economic growth, and population aging are expected to fuel health spending growth this year and thereafter (5.6 percent in 2014 and 6.0 percent per year for 2015-23). However, the average rate of increase through 2023 is projected to be slower than the 7.2 percent average growth experienced during 1990-2008. Because health spending is projected to grow 1.1 percentage points faster than the average economic growth during 2013-23, the health share of the gross domestic product is expected to rise from 17.2 percent in 2012 to 19.3 percent in 2023. Project HOPE—The People-to-People Health Foundation, Inc.
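    The projected rise in the health share of GDP follows from compounding the stated 1.1-percentage-point growth differential over the projection window. A back-of-the-envelope check (not the authors' actuarial model):

    ```python
    # If health spending grows ~1.1 percentage points faster than GDP each
    # year, the health share of GDP compounds by roughly that differential.
    share = 17.2                      # health share of GDP in 2012, percent
    for _ in range(2013, 2024):       # the 2013-23 projection window
        share *= 1.011                # assumed ~1.1 pp annual differential
    print(round(share, 1))            # roughly 19.4, near the projected 19.3
    ```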

  5. Faster self-paced rate of drinking for alcohol mixed with energy drinks versus alcohol alone.

    Science.gov (United States)

    Marczinski, Cecile A; Fillmore, Mark T; Maloney, Sarah F; Stamates, Amy L

    2017-03-01

    The consumption of alcohol mixed with energy drinks (AmED) has been associated with higher rates of binge drinking and impaired driving when compared with alcohol alone. However, it remains unclear why the risks of use of AmED are heightened compared with alcohol alone even when the doses of alcohol consumed are similar. Therefore, the purpose of this laboratory study was to investigate if the rate of self-paced beverage consumption was faster for a dose of AmED versus alcohol alone using a double-blind, within-subjects, placebo-controlled study design. Participants (n = 16; equal numbers of men and women) who were social drinkers attended 4 separate test sessions that involved consumption of alcohol (1.97 ml/kg vodka) and energy drinks, alone and in combination. On each test day, the dose assigned was divided into 10 cups. Participants were informed that they would have a 2-h period to consume the 10 drinks. After the self-paced drinking period, participants completed a cued go/no-go reaction time (RT) task and subjective ratings of stimulation and sedation. The results indicated that participants consumed the AmED dose significantly faster (by ∼16 min) than the alcohol dose. For the performance task, participants' mean RTs were slower in the alcohol conditions and faster in the energy-drink conditions. In conclusion, alcohol consumers should be made aware that rapid drinking might occur for AmED beverages, thus heightening alcohol-related safety risks. The fast rate of drinking may be related to the generalized speeding of responses after energy-drink consumption. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
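    The cutpoint method credited above with an order-of-magnitude speedup can be sketched in a few lines: a precomputed table of starting indices turns the sequential CDF search into an O(1) jump plus a short local scan. A minimal sketch with illustrative names:

    ```python
    import random

    def build_cutpoints(pdf, m):
        """Precompute cumulative probabilities and the Chen-Asau cutpoint
        table: cut[j] is the first index whose cumulative probability
        reaches j/m."""
        cdf, total = [], 0.0
        for p in pdf:
            total += p
            cdf.append(total)
        cut, i = [], 0
        for j in range(m):
            while cdf[i] < j / m:
                i += 1
            cut.append(i)
        return cdf, cut

    def sample(cdf, cut, m):
        """Draw one index: jump straight to the right neighbourhood of the
        CDF, then finish with a short scan (instead of searching from the
        start as in conventional sequential sampling)."""
        u = random.random()
        i = cut[int(u * m)]
        while cdf[i] < u:
            i += 1
        return i
    ```

    A larger table (bigger `m`) shortens the residual scan at the cost of memory, which is the trade-off that makes the method attractive for the wide discrete distributions found in particle transport.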

  7. Fast Physics Testbed for the FASTER Project

    Energy Technology Data Exchange (ETDEWEB)

    Lin, W.; Liu, Y.; Hogan, R.; Neggers, R.; Jensen, M.; Fridlind, A.; Lin, Y.; Wolf, A.

    2010-03-15

    This poster describes the Fast Physics Testbed for the new FAst-physics System Testbed and Research (FASTER) project. The overall objective is to provide a convenient and comprehensive platform for fast turn-around model evaluation against ARM observations and to facilitate development of parameterizations for cloud-related fast processes represented in global climate models. The testbed features three major components: a single column model (SCM) testbed, an NWP-Testbed, and high-resolution modeling (HRM). The web-based SCM-Testbed features multiple SCMs from major climate modeling centers and aims to maximize the potential of the SCM approach to enhance and accelerate the evaluation and improvement of fast physics parameterizations through continuous evaluation of existing and evolving models against historical as well as new/improved ARM and other complementary measurements. The NWP-Testbed aims to capitalize on the large pool of operational numerical weather prediction products. Continuous evaluations of NWP forecasts against observations at ARM sites are carried out to systematically identify the biases and skills of physical parameterizations under all weather conditions. The high-resolution modeling (HRM) activities aim to simulate the fast processes at high resolution to aid in the understanding of the fast processes and their parameterizations. A four-tier HRM framework is established to augment the SCM- and NWP-Testbeds towards eventual improvement of the parameterizations.

  8. Faster-than-real-time robot simulation for plan development and robot safety

    International Nuclear Information System (INIS)

    Crane, C.D. III; Dalton, R.; Ogles, J.; Tulenko, J.S.; Zhou, X.

    1990-01-01

    The University of Florida, in cooperation with the Universities of Texas, Tennessee, and Michigan and Oak Ridge National Laboratory (ORNL), is developing an advanced robotic system for the US Department of Energy under the University Program for Robotics for Advanced Reactors. As part of this program, the University of Florida has been pursuing the development of a faster-than-real-time robotic simulation program for planning and control of mobile robotic operations to ensure the efficient and safe operation of mobile robots in nuclear power plants and other hazardous environments

  9. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable-density (VD) sampling patterns have been developed. To improve on these further, we propose a novel energy-preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
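    The abstract's ITA, like the SISTA and FISTA baselines, belongs to the iterative shrinkage-thresholding family for l1-regularized reconstruction: a gradient step on the data-fit term followed by soft-thresholding. A generic sketch of that family (plain ISTA, not the authors' improved cost function):

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Soft-thresholding, the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, n_iter=200):
        """Iterative shrinkage-thresholding for
        min 0.5*||Ax - y||^2 + lam*||x||_1 -- the algorithm family to which
        ITA, SISTA, and FISTA belong, not the paper's exact method."""
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x
    ```

    FISTA adds a momentum (extrapolation) step to this loop, which is where its faster convergence comes from; the comparison in the abstract is about exactly this kind of per-iteration cost and convergence-rate trade-off.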

  10. The Development of Future Orientation is Associated with Faster Decline in Hopelessness during Adolescence.

    Science.gov (United States)

    Mac Giollabhui, Naoise; Nielsen, Johanna; Seidman, Sam; Olino, Thomas M; Abramson, Lyn Y; Alloy, Lauren B

    2018-01-05

    Hopelessness is implicated in multiple psychological disorders. Little is known, however, about the trajectory of hopelessness during adolescence or how emergent future orientation may influence its trajectory. Parallel process latent growth curve modelling tested whether (i) trajectories of future orientation and hopelessness and (ii) within-individual change in future orientation and hopelessness were related. The study comprised 472 adolescents [52% female, 47% Caucasian, 47% received free lunch] recruited at ages 12-13 who completed measures of future orientation and hopelessness at five annual assessments. The results indicate that the general decline in hopelessness across adolescence occurs more quickly for those experiencing faster development of future orientation, controlling for age, sex, and low socio-economic status, as well as stressful life events in childhood and adolescence. Stressful childhood life events were associated with worse future orientation at baseline, and negative life events experienced during adolescence were associated with both an increase in the trajectory of hopelessness and a decrease in the trajectory of future orientation. This study provides compelling evidence that the development of future orientation during adolescence is associated with a faster decline in hopelessness.

  11. Ear Detection under Uncontrolled Conditions with Multiple Scale Faster Region-Based Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Yi Zhang

    2017-04-01

    Ear detection is an important step in ear recognition approaches. Most existing ear detection techniques are based on manually designed features or shallow learning algorithms. However, researchers have found that pose variation, occlusion, and imaging conditions pose a great challenge to traditional ear detection methods under uncontrolled conditions. This paper proposes an efficient technique involving Multiple Scale Faster Region-based Convolutional Neural Networks (Faster R-CNN) to detect ears from 2D profile images in natural images automatically. Firstly, three regions of different scales are detected to infer information about the ear location context within the image. Then an ear region filtering approach is proposed to extract the correct ear region and eliminate the false positives automatically. In an experiment with a test set of 200 web images (with variable photographic conditions), 98% of ears were accurately detected. Experiments were likewise conducted on the Collection J2 of the University of Notre Dame Biometrics Database (UND-J2) and the University of Beira Interior Ear dataset (UBEAR), which contain large occlusion, scale, and pose variations. Detection rates of 100% and 98.22%, respectively, demonstrate the effectiveness of the proposed approach.

  12. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY.

    Science.gov (United States)

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
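    The key bookkeeping idea of Rejection Sampling with Memory — a rejected step's Brownian increment is split with a Brownian bridge and both halves are kept on a stack, so nothing already sampled about the path is discarded — can be shown in miniature. The sketch below uses Euler-Maruyama with a crude step-halving error estimate; the paper itself uses embedded SRK pairs of strong order 1.5/1.0, and the names here are illustrative:

    ```python
    import math
    import random

    def em_adaptive(f, g, x0, t_end, tol=1e-3, seed=0):
        """Adaptive Euler-Maruyama for dx = f(x) dt + g(x) dW with
        RSwM-style rejection: rejected increments are bridged and pushed
        back on a stack, keeping the Brownian path consistent."""
        rng = random.Random(seed)
        x = x0
        # the whole path starts as one pending (duration, increment) piece
        stack = [(t_end, rng.gauss(0.0, math.sqrt(t_end)))]
        while stack:
            dt, dW = stack.pop()
            full = x + f(x) * dt + g(x) * dW              # one coarse step
            # Brownian bridge: midpoint increment consistent with total dW
            dW1 = rng.gauss(dW / 2.0, math.sqrt(dt / 4.0))
            xm = x + f(x) * (dt / 2) + g(x) * dW1
            fine = xm + f(xm) * (dt / 2) + g(xm) * (dW - dW1)
            if abs(fine - full) <= tol or dt < 1e-8:
                x = fine                                  # accept finer result
            else:
                # reject: retry both halves, reusing the bridged increments
                stack.append((dt / 2, dW - dW1))          # second half (later)
                stack.append((dt / 2, dW1))               # first half (next)
        return x
    ```

    The stack is what the abstract means by "memory": a rejection costs only a bridge draw and two pushes, never a resampling of the path already generated.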

  13. New trends in sample preparation techniques for environmental analysis.

    Science.gov (United States)

    Ribeiro, Cláudia; Ribeiro, Ana Rita; Maia, Alexandra S; Gonçalves, Virgínia M F; Tiritan, Maria Elizabeth

    2014-01-01

    Environmental samples include a wide variety of complex matrices, with low concentrations of analytes and presence of several interferences. Sample preparation is a critical step and the main source of uncertainty in the analysis of environmental samples, and it is usually laborious, costly, time-consuming, and polluting. In this context, there is increasing interest in developing faster, cost-effective, and environmentally friendly sample preparation techniques. Recently, new methods have been developed and optimized in order to miniaturize extraction steps, to reduce solvent consumption or become solventless, and to automate systems. This review attempts to present an overview of the fundamentals, procedure, and application of the most recently developed sample preparation techniques for the extraction, cleanup, and concentration of organic pollutants from environmental samples. These techniques include: solid phase microextraction, on-line solid phase extraction, microextraction by packed sorbent, dispersive liquid-liquid microextraction, and QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe).

  14. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. A new way of controlling NesCOPOs (nested cavity doubly resonant OPO) for faster and more efficient high resolution spectrum measurement

    Science.gov (United States)

    Georges des Aulnois, Johann; Szymanski, Benjamin; Grimieau, Axel; Sillard, Léo.

    2018-02-01

    The optical parametric oscillator (OPO) is a well-known solution when wide tunability in the mid-infrared is needed. A specific design called NesCOPO (Nested Cavity doubly resonant OPO) is currently integrated in the X-FLR8 portable gas analyzer from Blue Industry and Science. Thanks to its low threshold, this OPO can be pumped by a micro-chip nanosecond YAG laser (4 kHz repetition rate and a 30 GHz bandwidth). To achieve very high resolution spectra (10 pm resolution or better), the emitted wavelength has to be finely controlled. Commercial wavemeters do not meet the price and compactness requirements of an affordable, portable gas analyzer. To overcome this issue, Blue first integrated an active wavelength controller using multiple tunable Fabry-Perot (FP) interferometers; the required resolution was achieved at a 10 Hz measurement rate. We now present an enhanced wavemeter architecture, based on fixed FP etalons, that is 100 times faster and 2 times smaller. We avoid FP 'blind zones' thanks to one characteristic of the source: the free spectral range (FSR) of the OPO is known, so only discrete wavelengths can be emitted. First results are presented showing faster measurement for spectroscopic applications, and potential future improvements of the device are discussed.
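    The "blind zone" removal can be illustrated with a toy calculation: a fixed etalon reading is periodic and therefore ambiguous on its own, but because the OPO emits only on a discrete comb of modes separated by its known FSR, usually a single candidate matches the reading. All names and numbers below are hypothetical, not the instrument's actual parameters:

    ```python
    def resolve_mode(fringe, etalon_fsr, modes, tol=0.02):
        """Pick out which OPO mode matches a periodic etalon reading.
        fringe: measured fractional position within one etalon period (0-1).
        modes: candidate emission frequencies, the OPO's discrete comb."""
        def frac(nu):
            return (nu / etalon_fsr) % 1.0
        return [nu for nu in modes
                if min(abs(frac(nu) - fringe),
                       1.0 - abs(frac(nu) - fringe)) < tol]

    # Hypothetical numbers: etalon period 30 GHz, OPO modes every 7 GHz.
    modes = [150000.0 + 7.0 * k for k in range(10)]   # frequencies in GHz
    print(resolve_mode((modes[4] / 30.0) % 1.0, 30.0, modes))  # [150028.0]
    ```

    Because 7 and 30 share no common factor over this span, each mode lands at a distinct fringe position, so the ambiguous reading maps back to exactly one emitted wavelength.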

  16. Faster proton transfer dynamics of water on SnO2 compared to TiO2.

    Science.gov (United States)

    Kumar, Nitin; Kent, Paul R C; Bandura, Andrei V; Kubicki, James D; Wesolowski, David J; Cole, David R; Sofo, Jorge O

    2011-01-28

    Proton jump processes in the hydration layer on the iso-structural TiO2 rutile (110) and SnO2 cassiterite (110) surfaces were studied with density functional theory molecular dynamics. We find that the proton jump rate is more than three times faster on cassiterite than on rutile. A local analysis based on the correlation between the stretching band of the O-H vibrations and the strength of H-bonds indicates that the faster proton jump activity on cassiterite is produced by stronger H-bond formation between the surface and the hydration layer above it. The origin of the increased H-bond strength on cassiterite is a combined effect of stronger covalent bonding and stronger electrostatic interactions due to differences in its electronic structure. The bridging oxygens form the strongest H-bonds between the surface and the hydration layer. This higher proton jump rate is likely to affect reactivity and catalytic activity on the surface. A better understanding of its origins will enable methods to control these rates.

  17. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations of rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; b) the sample size influences the angle of the formed shear band; and c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  18. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
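    The combination step at the heart of Multiple Importance Sampling can be shown generically: each sample is weighted by its strategy's pdf relative to the sum of all strategies' pdfs (Veach's balance heuristic). This is a generic 1D sketch of MIS, not the paper's SSAO-specific stratified/importance combination; all names are illustrative:

    ```python
    import math
    import random

    def mis_estimate(f, sample_a, pdf_a, sample_b, pdf_b, n, seed=0):
        """Combine two sampling strategies with the balance heuristic:
        weight = pdf_of_generating_strategy / sum_of_both_pdfs."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n):
            xa = sample_a(rng)
            wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))   # balance heuristic
            total += wa * f(xa) / pdf_a(xa)
            xb = sample_b(rng)
            wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
            total += wb * f(xb) / pdf_b(xb)
        return total / n

    # Estimate the integral of f(x) = x over [0, 1] (exact value 0.5) by
    # mixing a uniform strategy with a linear-pdf strategy p(x) = 2x.
    est = mis_estimate(
        f=lambda x: x,
        sample_a=lambda rng: rng.random(),                   # uniform, pdf 1
        pdf_a=lambda x: 1.0,
        sample_b=lambda rng: math.sqrt(1.0 - rng.random()),  # pdf 2x, x > 0
        pdf_b=lambda x: 2.0 * x,
        n=20000,
    )
    print(round(est, 2))  # ≈ 0.5
    ```

    The weighting keeps the combined estimator unbiased while letting whichever strategy fits the integrand locally dominate, which is why fewer samples suffice than with either strategy alone.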

  19. Automated Cellient(™) cytoblocks: better, stronger, faster?

    Science.gov (United States)

    Prendeville, S; Brosnan, T; Browne, T J; McCarthy, J

    2014-12-01

    Cytoblocks (CBs), or cell blocks, provide additional morphological detail and a platform for immunocytochemistry (ICC) in cytopathology. The Cellient(™) system produces CBs in 45 minutes using methanol fixation, compared with traditional CBs, which require overnight formalin fixation. This study compares Cellient and traditional CB methods in terms of cellularity, morphology and immunoreactivity, evaluates the potential to add formalin fixation to the Cellient method for ICC studies and determines the optimal sectioning depth for maximal cellularity in Cellient CBs. One hundred and sixty CBs were prepared from 40 cytology samples (32 malignant, eight benign) using four processing methods: (A) traditional; (B) Cellient (methanol fixation); (C) Cellient using additional formalin fixation for 30 minutes; (D) Cellient using additional formalin fixation for 60 minutes. Haematoxylin and eosin-stained sections were assessed for cellularity and morphology. ICC was assessed on 14 cases with a panel of antibodies. Three additional Cellient samples were serially sectioned to determine the optimal sectioning depth. Scoring was performed by two independent, blinded reviewers. For malignant cases, morphology was superior with Cellient relative to traditional CBs (P < 0.05), and adding formalin fixation to the Cellient process did not influence the staining quality. Serial sectioning through Cellient CBs showed optimum cellularity at 30-40 μm, with at least 27 sections obtainable. Cellient CBs provide superior morphology to traditional CBs and, if required, formalin fixation may be added to the Cellient process for ICC. Optimal Cellient CB cellularity is achieved at 30-40 μm, which will impact on the handling of cases in daily practice. © 2014 John Wiley & Sons Ltd.

  20. Faster Simulation Methods for the Nonstationary Random Vibrations of Non-linear MDOF Systems

    DEFF Research Database (Denmark)

    Askar, A.; Köylüo, U.; Nielsen, Søren R.K.

    1996-01-01

    subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step. This assumption is tantamount to various linearizations of the equations. ... Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth order Runge-Kutta scheme with the white noise process approximated by a broad band Ruiz-Penzien broken line process...

  1. Faster in-plane switching and reduced rotational viscosity characteristics in a graphene-nematic suspension

    Science.gov (United States)

    Basu, Rajratan; Kinnamon, Daniel; Skaggs, Nicole; Womack, James

    2016-05-01

    The in-plane switching (IPS) for a nematic liquid crystal (LC) was found to be considerably faster when the LC was doped with dilute concentrations of monolayer graphene flakes. Additional studies revealed that the presence of graphene reduced the rotational viscosity of the LC, permitting the nematic director to respond quicker in IPS mode on turning the electric field on. The studies were carried out with several graphene concentrations in the LC, and the experimental results coherently suggest that there exists an optimal concentration of graphene, allowing a reduction in the IPS response time and rotational viscosity in the LC. Above this optimal graphene concentration, the rotational viscosity was found to increase, and consequently, the LC no longer switched faster in IPS mode. The presence of graphene suspension was also found to decrease the LC's pretilt angle significantly due to the π-π electron stacking between the LC molecules and graphene flakes. To understand the π-π stacking interaction, the anchoring mechanism of the LC on a CVD grown monolayer graphene film on copper substrate was studied by reflected crossed polarized microscopy. Optical microphotographs revealed that the LC alignment direction depended on monolayer graphene's hexagonal crystal structure and its orientation.

  2. Faster in-plane switching and reduced rotational viscosity characteristics in a graphene-nematic suspension

    International Nuclear Information System (INIS)

    Basu, Rajratan; Kinnamon, Daniel; Skaggs, Nicole; Womack, James

    2016-01-01

    The in-plane switching (IPS) for a nematic liquid crystal (LC) was found to be considerably faster when the LC was doped with dilute concentrations of monolayer graphene flakes. Additional studies revealed that the presence of graphene reduced the rotational viscosity of the LC, permitting the nematic director to respond quicker in IPS mode on turning the electric field on. The studies were carried out with several graphene concentrations in the LC, and the experimental results coherently suggest that there exists an optimal concentration of graphene, allowing a reduction in the IPS response time and rotational viscosity in the LC. Above this optimal graphene concentration, the rotational viscosity was found to increase, and consequently, the LC no longer switched faster in IPS mode. The presence of graphene suspension was also found to decrease the LC's pretilt angle significantly due to the π-π electron stacking between the LC molecules and graphene flakes. To understand the π-π stacking interaction, the anchoring mechanism of the LC on a CVD grown monolayer graphene film on copper substrate was studied by reflected crossed polarized microscopy. Optical microphotographs revealed that the LC alignment direction depended on monolayer graphene's hexagonal crystal structure and its orientation.

  3. Faster in-plane switching and reduced rotational viscosity characteristics in a graphene-nematic suspension

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Rajratan, E-mail: basu@usna.edu; Kinnamon, Daniel; Skaggs, Nicole; Womack, James [Soft Matter and Nanomaterials Laboratory, Department of Physics, The United States Naval Academy, Annapolis, Maryland 21402 (United States)

    2016-05-14

    The in-plane switching (IPS) for a nematic liquid crystal (LC) was found to be considerably faster when the LC was doped with dilute concentrations of monolayer graphene flakes. Additional studies revealed that the presence of graphene reduced the rotational viscosity of the LC, permitting the nematic director to respond quicker in IPS mode on turning the electric field on. The studies were carried out with several graphene concentrations in the LC, and the experimental results coherently suggest that there exists an optimal concentration of graphene, allowing a reduction in the IPS response time and rotational viscosity in the LC. Above this optimal graphene concentration, the rotational viscosity was found to increase, and consequently, the LC no longer switched faster in IPS mode. The presence of graphene suspension was also found to decrease the LC's pretilt angle significantly due to the π-π electron stacking between the LC molecules and graphene flakes. To understand the π-π stacking interaction, the anchoring mechanism of the LC on a CVD grown monolayer graphene film on copper substrate was studied by reflected crossed polarized microscopy. Optical microphotographs revealed that the LC alignment direction depended on monolayer graphene's hexagonal crystal structure and its orientation.

  4. Learning Faster by Discovering and Exploiting Object Similarities

    Directory of Open Access Journals (Sweden)

    Tadej Janež

    2013-03-01

    In this paper we explore the question: “Is it possible to speed up the learning process of an autonomous agent by performing experiments in a more complex environment (i.e., an environment with a greater number of different objects)?” To this end, we use a simple robotic domain, where the robot has to learn a qualitative model predicting the change in the robot's distance to an object. To quantify the environment's complexity, we defined cardinal complexity as the number of objects in the robot's world, and behavioural complexity as the number of objects' distinct behaviours. We propose Error reduction merging (ERM), a new learning method that automatically discovers similarities in the structure of the agent's environment. ERM identifies different types of objects solely from the measured data and merges the observations of objects that behave in the same or a similar way in order to speed up the agent's learning. We performed a series of experiments in worlds of increasing complexity. The results in our simple domain indicate that ERM was capable of discovering structural similarities in the data which indeed made the learning faster, clearly superior to conventional learning. This observed trend occurred with various machine learning algorithms used inside the ERM method.

  5. In a warmer Arctic, mosquitoes avoid increased mortality from predators by growing faster.

    Science.gov (United States)

    Culler, Lauren E; Ayres, Matthew P; Virginia, Ross A

    2015-09-22

    Climate change is altering environmental temperature, a factor that influences ectothermic organisms by controlling rates of physiological processes. Demographic effects of warming, however, are determined by the expression of these physiological effects through predator-prey and other species interactions. Using field observations and controlled experiments, we measured how increasing temperatures in the Arctic affected development rates and mortality rates (from predation) of immature Arctic mosquitoes in western Greenland. We then developed and parametrized a demographic model to evaluate how temperature affects survival of mosquitoes from the immature to the adult stage. Our studies showed that warming increased development rate of immature mosquitoes (Q10 = 2.8) but also increased daily mortality from increased predation rates by a dytiscid beetle (Q10 = 1.2-1.5). Despite increased daily mortality, the model indicated that faster development and fewer days exposed to predators resulted in an increased probability of mosquito survival to the adult stage. Warming also advanced mosquito phenology, bringing mosquitoes into phenological synchrony with caribou. Increases in biting pests will have negative consequences for caribou and their role as a subsistence resource for local communities. Generalizable frameworks that account for multiple effects of temperature are needed to understand how climate change impacts coupled human-natural systems. © 2015 The Author(s).
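    The survival trade-off described above can be sketched with a minimal Q10-scaled demographic model. The Q10 values come from the abstract (2.8 for development, roughly 1.2-1.5 for predation mortality); the baseline rates and reference temperature are hypothetical:

    ```python
    import math

    def q10_rate(rate_ref, q10, temp, t_ref=10.0):
        """Scale a biological rate from a reference temperature with a Q10
        coefficient: the rate multiplies by q10 for every 10 degrees C."""
        return rate_ref * q10 ** ((temp - t_ref) / 10.0)

    def survival_to_adult(temp, dev_ref=0.05, mort_ref=0.05,
                          q10_dev=2.8, q10_mort=1.35, t_ref=10.0):
        """P(survive the immature stage) = exp(-daily mortality * days exposed).
        Baseline rates dev_ref and mort_ref (per day) are hypothetical."""
        dev_rate = q10_rate(dev_ref, q10_dev, temp, t_ref)    # stage fraction per day
        mortality = q10_rate(mort_ref, q10_mort, temp, t_ref) # predation deaths per day
        days_exposed = 1.0 / dev_rate
        return math.exp(-mortality * days_exposed)

    # Development accelerates faster with warming (Q10 = 2.8) than predation
    # does (Q10 ~ 1.35), so fewer days of exposure outweigh higher daily risk:
    print(survival_to_adult(10.0), survival_to_adult(15.0))
    ```

    Under these assumptions, warming raises overall survival to the adult stage even though daily mortality increases, which is the mechanism the model in the study formalizes.
    
    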

  6. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    Science.gov (United States)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We propose an approach to reduce the test time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.

  7. A new sampling technique for surface exposure dating using a portable electric rock cutter

    Directory of Open Access Journals (Sweden)

    Yusuke Suganuma

    2012-07-01

    Full Text Available Surface exposure dating using in situ cosmogenic nuclides has contributed to our understanding of Earth-surface processes. The precision of the ages estimated by this method is affected by the sample geometry; therefore, high-accuracy measurement of the thickness and shape of the rock sample is crucial. However, it is sometimes difficult to meet these requirements with conventional sampling methods using a hammer and chisel. Here, we propose a new sampling technique using a portable electric rock cutter. This sampling technique is faster, produces more precisely shaped samples, and allows for a more precise age interpretation. A simple theoretical model demonstrates that the age error due to defective sample geometry increases as the total sample thickness increases, indicating the importance of precise sampling for surface exposure dating.
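    The thickness effect follows from the exponential attenuation of cosmogenic production with depth. A minimal sketch, using the standard attenuation form with typical illustrative values for rock density and attenuation length (not the paper's parameters):

    ```python
    import math

    def thickness_factor(thickness_cm, density=2.65, attenuation=160.0):
        """Mean cosmogenic production in a sample slab relative to the surface
        rate.  Production decays as exp(-rho*z/L) with depth z, so averaging
        over a slab of thickness d gives (1 - exp(-rho*d/L)) / (rho*d/L).
        density is in g/cm^3; attenuation length L is in g/cm^2."""
        x = density * thickness_cm / attenuation
        return (1.0 - math.exp(-x)) / x

    # Mismeasured geometry biases the inferred age more for thicker samples,
    # because the mean production deviates further from the surface rate:
    for d in (2.0, 5.0, 10.0):
        print(f"{d:4.1f} cm: mean production = {thickness_factor(d):.3f} x surface rate")
    ```

    A thin, precisely cut slab keeps this correction factor close to 1, which is why the cleanly shaped samples from the rock cutter tighten the age interpretation.
    
    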

  8. Evolution and plasticity: Divergence of song discrimination is faster in birds with innate song than in song learners in Neotropical passerine birds.

    Science.gov (United States)

    Freeman, Benjamin G; Montgomery, Graham A; Schluter, Dolph

    2017-09-01

    Plasticity is often thought to accelerate trait evolution and speciation. For example, plasticity in birdsong may partially explain why clades of song learners are more diverse than related clades with innate song. This "song learning" hypothesis predicts that (1) differences in song traits evolve faster in song learners, and (2) behavioral discrimination against allopatric song (a proxy for premating reproductive isolation) evolves faster in song learners. We tested these predictions by analyzing acoustic traits and conducting playback experiments in allopatric Central American sister pairs of song-learning oscines (N = 42) and nonlearning suboscines (N = 27). We found that nonlearners evolved mean acoustic differences slightly faster than did learners, and that the mean evolutionary rate of song discrimination was 4.3 times faster in nonlearners than in learners. These unexpected results may be a consequence of significantly greater variability in song traits in song learners (by 54-79%) that requires song-learning oscines to evolve greater absolute differences in song before achieving the same level of behavioral song discrimination as nonlearning suboscines. This points to "a downside of learning" for the evolution of species discrimination, and represents an important example of plasticity reducing the rate of evolution and diversification by increasing variability. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  9. Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers

    Science.gov (United States)

    Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille

    This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers and the combination thereof, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which amount to a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
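    For context, the single-size primitive that double-size techniques build on can be sketched as textbook Montgomery multiplication (REDC). This is not the paper's double-size construction, and the small parameters are illustrative only:

    ```python
    def montgomery_mult(a, b, n, r_bits):
        """Textbook Montgomery multiplication (REDC): returns a*b*R^-1 mod n
        for odd modulus n and R = 2^r_bits > n.  Hardware multipliers expose
        exactly this operation; double-size techniques compose 2k-bit modular
        multiplications out of k-bit primitives like this one."""
        R = 1 << r_bits
        n_prime = (-pow(n, -1, R)) % R   # n * n_prime == -1 (mod R)
        t = a * b
        m = (t * n_prime) & (R - 1)      # reduction mod R is just a bit mask
        u = (t + m * n) >> r_bits        # exact division by R
        return u - n if u >= n else u

    # Computing a*b mod n via the Montgomery domain (toy parameters):
    n, r_bits = 101, 8
    a, b = 55, 77
    R = 1 << r_bits
    a_bar, b_bar = (a * R) % n, (b * R) % n           # enter Montgomery form
    p_bar = montgomery_mult(a_bar, b_bar, n, r_bits)  # a*b*R mod n, in-domain
    result = montgomery_mult(p_bar, 1, n, r_bits)     # leave Montgomery form
    print(result == (a * b) % n)
    ```

    Note the modulus-dependent precomputation n_prime: the paper's contribution is precisely avoiding fresh precomputations of this kind when simulating double-size multiplications.
    
    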

  10. Causal events enter awareness faster than non-causal events

    Directory of Open Access Journals (Sweden)

    Pieter Moors

    2017-01-01

    Full Text Available Philosophers have long argued that causality cannot be directly observed but requires a conscious inference (Hume, 1967). Albert Michotte, however, developed numerous visual phenomena in which people seemed to perceive causality akin to primary visual properties like colour or motion (Michotte, 1946). Michotte claimed that the perception of causality did not require a conscious, deliberate inference but, working over 70 years ago, he did not have access to the experimental methods to test this claim. Here we employ Continuous Flash Suppression (CFS), an interocular suppression technique to render stimuli invisible (Tsuchiya & Koch, 2005), to test whether causal events enter awareness faster than non-causal events. We presented observers with ‘causal’ and ‘non-causal’ events, and found consistent evidence that participants become aware of causal events more rapidly than non-causal events. Our results suggest that, whilst causality must be inferred from sensory evidence, this inference might be computed at low levels of perceptual processing, and does not depend on a deliberative conscious evaluation of the stimulus. This work therefore supports Michotte’s contention that, like colour or motion, causality is an immediate property of our perception of the world.

  11. N-Terminal Domains in Two-Domain Proteins Are Biased to Be Shorter and Predicted to Fold Faster Than Their C-Terminal Counterparts

    Directory of Open Access Journals (Sweden)

    Etai Jacob

    2013-04-01

    Full Text Available Computational analysis of proteomes in all kingdoms of life reveals a strong tendency for N-terminal domains in two-domain proteins to have shorter sequences than their neighboring C-terminal domains. Given that folding rates are affected by chain length, we asked whether the tendency for N-terminal domains to be shorter than their neighboring C-terminal domains reflects selection for faster-folding N-terminal domains. Calculations of absolute contact order, another predictor of folding rate, provide additional evidence that N-terminal domains tend to fold faster than their neighboring C-terminal domains. A possible explanation for this bias, which is more pronounced in prokaryotes than in eukaryotes, is that faster folding of N-terminal domains reduces the risk for protein aggregation during folding by preventing formation of nonnative interdomain interactions. This explanation is supported by our finding that two-domain proteins with a shorter N-terminal domain are much more abundant than those with a shorter C-terminal domain.
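    Absolute contact order, the folding-rate predictor used in the analysis, can be computed directly from a contact map. A minimal sketch with hypothetical toy contact lists (not data from the study):

    ```python
    def absolute_contact_order(contacts):
        """Absolute contact order: the mean sequence separation |i - j| over
        all residue pairs (i, j) in contact in the native structure.
        Lower values predict faster folding."""
        return sum(abs(i - j) for i, j in contacts) / len(contacts)

    # Toy contact maps: a mostly local topology vs a long-range one.
    local_domain = [(1, 3), (2, 5), (4, 7), (6, 9)]
    longrange_domain = [(1, 20), (2, 18), (5, 25), (8, 30)]
    print(absolute_contact_order(local_domain))      # small: predicted fast folder
    print(absolute_contact_order(longrange_domain))  # large: predicted slow folder
    ```

    Unlike relative contact order, the absolute form is not normalized by chain length, which is why it complements the length-based argument made for the N-terminal domains.
    
    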

  12. REDESIGN OF CERN'S QUADRUPOLE RESONATOR FOR TESTING OF SUPERCONDUCTING SAMPLES

    CERN Document Server

    Del Pozo Romano, Veronica

    2017-01-01

    The quadrupole resonator (QPR) was constructed in 1997 to measure the surface resistance of niobium samples at 400 MHz, the technology and RF frequency chosen for the LHC. It allows measurement of the RF properties of superconducting films deposited on disk-shaped metallic substrates. The samples are used to study different coatings, which is much faster than the coating, stripping and re-coating of sample cavities. An electromagnetic and mechanical re-design of the existing QPR has been done with the goal of doubling the peak magnetic fields on the samples. Electromagnetic simulations were carried out on a completely parametrized model, using the existing QPR as a baseline and modifying its dimensions. The aim was to optimize the measurement range and resolution by increasing the ratio between the peak magnetic field on the sample and that in the cavity. Increasing the average magnetic field on the sample leads to a more homogeneous field distribution over the sample. Some of the modifications were based on t...

  13. Myopes show increased susceptibility to nearwork aftereffects.

    Science.gov (United States)

    Ciuffreda, K J; Wallis, D M

    1998-09-01

    Some aspects of accommodation may be slightly abnormal (or different) in myopes, compared with accommodation in emmetropes and hyperopes. For example, the initial magnitude of accommodative adaptation in the dark after nearwork is greatest in myopes. However, the critical test is to assess this initial accommodative aftereffect and its subsequent decay in the light under more natural viewing conditions with blur-related visual feedback present, if a possible link between this phenomenon and clinical myopia is to be considered. Subjects consisted of adult late- (n = 11) and early-onset (n = 13) myopes, emmetropes (n = 11), and hyperopes (n = 9). The distance-refractive state was assessed objectively using an autorefractor immediately before and after a 10-minute binocular near task at 20 cm (5 diopters [D]). Group results showed that myopes were most susceptible to the nearwork aftereffect. It averaged 0.35 D in initial magnitude, with considerably faster posttask decay to baseline in the early-onset (35 seconds) versus late-onset (63 seconds) myopes. There was no myopic aftereffect in the remaining two refractive groups. The myopes showed particularly striking accommodatively related nearwork aftereffect susceptibility. As has been speculated and found by many others, transient pseudomyopia may cause or be a precursor to permanent myopia or myopic progression. Time-integrated increased retinal defocus causing axial elongation is proposed as a possible mechanism.

  14. Faster Simulation Methods for the Non-Stationary Random Vibrations of Non-Linear MDOF Systems

    DEFF Research Database (Denmark)

    Askar, A.; Köylüoglu, H. U.; Nielsen, Søren R. K.

    subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step. This assumption is tantamount to various linearizations of the equations. ... Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth-order Runge-Kutta scheme with the white noise process approximated by a broad-band Ruiz-Penzien broken line process...

  15. Poisson-Box Sampling algorithms for three-dimensional Markov binary mixtures

    Science.gov (United States)

    Larmier, Coline; Zoia, Andrea; Malvagi, Fausto; Dumonteil, Eric; Mazzolo, Alain

    2018-02-01

    Particle transport in Markov mixtures can be addressed by the so-called Chord Length Sampling (CLS) methods, a family of Monte Carlo algorithms taking into account the effects of stochastic media on particle propagation by generating on-the-fly the material interfaces crossed by the random walkers during their trajectories. Such methods enable a significant reduction of computational resources as opposed to reference solutions obtained by solving the Boltzmann equation for a large number of realizations of random media. CLS solutions, which neglect correlations induced by the spatial disorder, are faster albeit approximate, and might thus show discrepancies with respect to reference solutions. In this work we propose a new family of algorithms (called 'Poisson Box Sampling', PBS) aimed at improving the accuracy of the CLS approach for transport in d-dimensional binary Markov mixtures. In order to probe the features of PBS methods, we will focus on three-dimensional Markov media and revisit the benchmark problem originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]: for these configurations we will compare reference solutions, standard CLS solutions and the new PBS solutions for scalar particle flux, transmission and reflection coefficients. PBS will be shown to perform better than CLS at the expense of a reasonable increase in computational time.
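    The CLS idea of generating material interfaces on the fly can be sketched in one dimension. This is a deliberately simplified toy (purely absorbing materials, hypothetical cross sections and chord lengths, not the Adams-Larsen-Pomraning benchmark configurations):

    ```python
    import random

    def cls_transmission(slab_len, mean_chords, sigma_abs, n_particles=20000, seed=1):
        """Minimal 1D Chord Length Sampling sketch for a binary Markov mixture:
        interfaces are drawn on-the-fly from exponential chord-length
        distributions instead of building full realizations of the medium.
        Both materials are pure absorbers with total cross sections sigma_abs."""
        rng = random.Random(seed)
        transmitted = 0
        for _ in range(n_particles):
            x, mat = 0.0, rng.randrange(2)
            to_interface = rng.expovariate(1.0 / mean_chords[mat])
            while True:
                to_collision = rng.expovariate(sigma_abs[mat])
                step = min(to_collision, to_interface, slab_len - x)
                if step == to_collision:      # collided inside the slab: absorbed
                    break
                x += step
                if x >= slab_len:             # escaped through the far face
                    transmitted += 1
                    break
                mat = 1 - mat                 # crossed an interface: switch material
                to_interface = rng.expovariate(1.0 / mean_chords[mat])  # fresh chord
        return transmitted / n_particles

    # Hypothetical parameters: long chords of a weak absorber mixed with
    # shorter chords of a strong absorber.
    print(cls_transmission(5.0, (1.0, 0.5), (0.05, 1.0)))
    ```

    Because each history samples its own interface sequence, no ensemble of medium realizations is ever stored, which is the source of the method's speed and of its neglect of spatial correlations.
    
    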

  16. Having your cake and eating it - Staphylococcus aureus small colony variants can evolve faster growth rate without losing their antibiotic resistance

    Directory of Open Access Journals (Sweden)

    Gerrit Brandis

    2017-08-01

    Full Text Available Staphylococcus aureus can produce small colony variants (SCVs) during infections. These cause significant clinical problems because they are difficult to detect in standard microbiological screening and are associated with persistent infections. The major causes of the SCV phenotype are mutations that inhibit respiration by inactivation of genes of the menadione or hemin biosynthesis pathways. This reduces the production of ATP required to support fast growth. Importantly, it also decreases the cross-membrane potential in SCVs, resulting in decreased uptake of cationic compounds, with reduced susceptibility to aminoglycoside antibiotics as a consequence. Because SCVs are slow-growing (mutations in men genes are associated with growth rates in rich medium ~30% of the wild-type growth rate), bacterial cultures are very susceptible to rapid takeover by faster-growing mutants (revertants or suppressors). In the case of reversion, the resulting fast growth is obviously associated with the loss of antibiotic resistance. However, direct reversion is relatively rare due to the very small genetic target size for such mutations. We explored the phenotypic consequences of SCVs evolving faster growth by routes other than direct reversion, and in particular whether any of those routes allowed for the maintenance of antibiotic resistance. In a recent paper (mBio 8: e00358-17) we demonstrated the existence of several different routes of SCV evolution to faster growth, one of which maintained the antibiotic resistance phenotype. This discovery suggests that SCVs might be more adaptable and problematic than previously thought. They are capable of surviving as a slow-growing persistent form, before evolving into a significantly faster-growing form without sacrificing their antibiotic resistance phenotype.

  17. Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition

    Science.gov (United States)

    Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert

    2017-07-01

    Modern biomedical imaging modalities aim to provide researchers a multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with a very high specificity. Although the signal intensities are enhanced manifold to achieve a faster acquisition of images if compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as the tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity; the spectrometer approach records whole spectra (including energy differences where no Raman information is present), which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO-sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum while the specificity and the signal-to-noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limit even faster acquisition.

  18. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
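    The analytic baseline such methods build on is the expected site-frequency spectrum under the standard neutral coalescent with constant population size, E[xi_i] = theta / i. A minimal sketch of that baseline (the inference machinery itself, with piecewise-exponential epochs and automatic differentiation, is beyond this sketch):

    ```python
    def expected_sfs(n, theta=1.0):
        """Expected site-frequency spectrum for a sample of n sequences under
        the standard neutral coalescent with constant population size:
        E[xi_i] = theta / i for i = 1..n-1.  Demographic inference fits
        departures of the observed spectrum from this baseline shape."""
        return [theta / i for i in range(1, n)]

    sfs = expected_sfs(10)
    # Singletons dominate the neutral spectrum; recent rapid growth skews it
    # even further toward rare variants, the signal that needs large samples.
    print(sfs[0], sfs[-1])
    ```
    
    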

  19. Bone Shaft Revascularization After Marrow Ablation Is Dramatically Accelerated in BSP-/- Mice, Along With Faster Hematopoietic Recolonization.

    Science.gov (United States)

    Bouleftour, Wafa; Granito, Renata Neves; Vanden-Bossche, Arnaud; Sabido, Odile; Roche, Bernard; Thomas, Mireille; Linossier, Marie Thérèse; Aubin, Jane E; Lafage-Proust, Marie-Hélène; Vico, Laurence; Malaval, Luc

    2017-09-01

    The bone organ integrates the activity of bone tissue, bone marrow, and blood vessels, and the factors ensuring this coordination remain ill defined. Bone sialoprotein (BSP) is, with osteopontin (OPN), a member of the small integrin-binding ligand N-linked glycoprotein (SIBLING) family, involved in bone formation, hematopoiesis and angiogenesis. In rodents, bone marrow ablation induces a rapid formation of medullary bone which peaks by ∼8 days (d8) and is blunted in BSP-/- mice. We investigated the coordinate hematopoietic and vascular recolonization of the bone shaft after marrow ablation of 2-month-old BSP+/+ and BSP-/- mice. At d3, the ablated area in BSP-/- femurs showed higher vessel density (×4) and vascular volume (×7) than BSP+/+. Vessel numbers in the shaft of ablated BSP+/+ mice reached BSP-/- values only by d8, but with a vascular volume twice the value in BSP-/-, reflecting smaller vessel size in ablated mutants. At d6, much higher numbers of Lin− cells (×3) as well as LSK (Lin− IL-7Rα− Sca-1^hi c-Kit^hi, ×2) and hematopoietic stem cells (HSC: Flt3− LSK, ×2) were counted in BSP-/- marrow, indicating a faster recolonization. However, the proportion of LSK and HSC within the Lin− fraction was lower in BSP-/- and more differentiated stages were more abundant, as also observed in unablated bone, suggesting that hematopoietic differentiation is favored in the absence of BSP. Interestingly, unablated BSP-/- femur marrow also contains more blood vessels than BSP+/+, and in both intact and ablated shafts expression of VEGF and OPN is higher, and DMP1 lower, in the mutants. In conclusion, bone marrow ablation in BSP-/- mice is followed by a faster vascular and hematopoietic recolonization, along with lower medullary bone formation. Thus, lack of BSP affects the interplay between hematopoiesis, angiogenesis, and osteogenesis, maybe in part through higher expression of VEGF and the angiogenic SIBLING, OPN. J. Cell. Physiol. 232: 2528-2537, 2017. © 2016

  20. An Adaptive Tuning Mechanism for Phase-Locked Loop Algorithms for Faster Time Performance of Interconnected Renewable Energy Sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2015-01-01

    Interconnected renewable energy sources (RES) require fast and accurate fault ride through (FRT) operation, in order to support the power grid, when faults occur. This paper proposes an adaptive phase-locked loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response...

  1. Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

    International Nuclear Information System (INIS)

    Flammia, Steven T; Gross, David; Liu, Yi-Kai; Eisert, Jens

    2012-01-01

    Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations

  2. Correlated sampling added to the specific purpose Monte Carlo code McPNL for neutron lifetime log responses

    International Nuclear Information System (INIS)

    Mickael, M.; Verghese, K.; Gardner, R.P.

    1989-01-01

    The specific purpose neutron lifetime oil well logging simulation code, McPNL, has been rewritten for greater user-friendliness and faster execution. Correlated sampling has been added to the code to enable studies of relative changes in the tool response caused by environmental changes. The absolute responses calculated by the code have been benchmarked against laboratory test pit data. The relative responses from correlated sampling are not directly benchmarked, but they are validated using experimental and theoretical results
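    The variance-reduction idea behind correlated sampling can be sketched with a toy response (a hypothetical stand-in, not the McPNL tool response): run the reference and the perturbed configuration with identical random-number streams, so their difference is driven only by the histories whose outcome actually changes, not by independent statistical noise.

    ```python
    import random

    def response(thickness, rng, n=5000):
        """Toy 'tool response': the fraction of particles whose exponential
        path length exceeds a slab of the given thickness."""
        return sum(1 for _ in range(n) if rng.expovariate(1.0) > thickness) / n

    # Correlated sampling: seed both runs identically.  Only samples falling
    # between the two thicknesses change outcome, so the difference estimate
    # is far less noisy than subtracting two independent runs.
    seed = 42
    r_ref = response(1.00, random.Random(seed))   # reference environment
    r_pert = response(1.05, random.Random(seed))  # slightly perturbed environment
    rel_change = (r_pert - r_ref) / r_ref
    print(rel_change)  # close to exp(-0.05) - 1, i.e. about -5%
    ```

    With independent seeds the two ~1% statistical errors would swamp a 5% relative change; with a shared stream the perturbation estimate is usable even at modest sample counts, which is the point of adding correlated sampling to the code.
    
    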

  3. Fast multi-dimensional NMR by minimal sampling

    Science.gov (United States)

    Kupče, Ēriks; Freeman, Ray

    2008-03-01

    A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
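    The reconstruction principle can be sketched schematically (hypothetical peak parameters; this is an illustration of model-based reconstruction, not the authors' processing code): once each component's evolution frequencies and amplitude are known from the first planes, the full evolution grid is computed rather than measured.

    ```python
    import cmath

    def reconstruct_grid(peaks, t1_grid, t2_grid):
        """Compute the modelled evolution-domain signal
        S(t1, t2) = sum_k a_k * exp(i * (w1_k * t1 + w2_k * t2))
        from known peak parameters (w1, w2, a) instead of measuring
        every (t1, t2) point of the evolution space."""
        return [[sum(a * cmath.exp(1j * (w1 * t1 + w2 * t2)) for w1, w2, a in peaks)
                 for t2 in t2_grid]
                for t1 in t1_grid]

    # Two hypothetical peaks: a 64 x 64 evolution grid generated from six numbers.
    peaks = [(2.0, 5.0, 1.0), (3.5, 1.5, 0.6)]
    t_axis = [i * 0.01 for i in range(64)]
    grid = reconstruct_grid(peaks, t_axis, t_axis)
    print(abs(grid[0][0]))  # at t1 = t2 = 0 the signal is the sum of amplitudes
    ```

    The single extra measurement at (t1∗, t2∗) mentioned above plays the role of disambiguating which frequencies pair up across dimensions; here that pairing is simply assumed known.
    
    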

  4. High-resolution, 2- and 3-dimensional imaging of uncut, unembedded tissue biopsy samples.

    Science.gov (United States)

    Torres, Richard; Vesuna, Sam; Levene, Michael J

    2014-03-01

    Despite continuing advances in tissue processing automation, traditional embedding, cutting, and staining methods limit our ability for rapid, comprehensive visual examination. These limitations are particularly relevant to biopsies for which immediate therapeutic decisions are most necessary, faster feedback to the patient is desired, and preservation of tissue for ancillary studies is most important. The recent development of improved tissue clearing techniques has made it possible to consider use of multiphoton microscopy (MPM) tools in clinical settings, which could address difficulties of established methods. To demonstrate the potential of MPM of cleared tissue for the evaluation of unembedded and uncut pathology samples. Human prostate, liver, breast, and kidney specimens were fixed and dehydrated by using traditional histologic techniques, with or without incorporation of nucleic acid fluorescent stains into dehydration steps. A benzyl alcohol/benzyl benzoate clearing protocol was substituted for xylene. Multiphoton microscopy was performed on a home-built system. Excellent morphologic detail was achievable with MPM at depths greater than 500 μm. Pseudocoloring produced images analogous to hematoxylin-eosin-stained images. Concurrent second-harmonic generation detection allowed mapping of collagen. Subsequent traditional section staining with hematoxylin-eosin did not reveal any detrimental morphologic effects. Sample immunostains on renal tissue showed preservation of normal reactivity. Complete reconstructions of 1-mm cubic samples elucidated 3-dimensional architectural organization. Multiphoton microscopy on cleared, unembedded, uncut biopsy specimens shows potential as a practical clinical tool with significant advantages over traditional histology while maintaining compatibility with gold standard techniques. Further investigation to address remaining implementation barriers is warranted.

  5. The effectiveness of cooling conditions on temperature of canine EDTA whole blood samples.

    Science.gov (United States)

    Tobias, Karen M; Serrano, Leslie; Sun, Xiaocun; Flatland, Bente

    2016-01-01

    Preanalytic factors such as time and temperature can have significant effects on laboratory test results. For example, ammonium concentration will increase 31% in blood samples stored at room temperature for 30 min before centrifugation. To reduce preanalytic error, blood samples may be placed in precooled tubes and chilled on ice or in ice water baths; however, the effectiveness of these modalities in cooling blood samples has not been formally evaluated. The purpose of this study was to evaluate the effectiveness of various cooling modalities in reducing the temperature of EDTA whole blood samples. Pooled samples of canine EDTA whole blood were divided into two aliquots. Saline was added to one aliquot to produce a packed cell volume (PCV) of 40% and to the second aliquot to produce a PCV of 20% (simulated anemia). Thirty samples from each aliquot were warmed to 37.7 °C and cooled in 2 ml allotments under one of three conditions: in ice, in ice after transfer to a precooled tube, or in an ice water bath. Temperature of each sample was recorded at one-minute intervals for 15 min. Within treatment conditions, sample PCV had no significant effect on cooling. Cooling in ice water was significantly faster than cooling in ice only or transferring the sample to a precooled tube and cooling it on ice. Mean temperature of samples cooled in ice water was significantly lower at 15 min than mean temperatures of those cooled in ice, whether or not the tube was precooled. By 4 min, samples cooled in an ice water bath had reached mean temperatures less than 4 °C (refrigeration temperature), while samples cooled in other conditions remained above 4.0 °C for at least 11 min. For samples with a PCV of 40%, precooling the tube had no significant effect on rate of cooling on ice. For samples with a PCV of 20%, transfer to a precooled tube resulted in a significantly faster rate of cooling than direct placement of the warmed tube onto ice. Canine EDTA whole blood samples cool most
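    The observed behavior is consistent with Newton's law of cooling, where the bath's effective heat-transfer constant sets the time to reach refrigeration temperature. The rate constants below are hypothetical, chosen only to illustrate the ice-water vs ice contrast:

    ```python
    import math

    def time_to_reach(target, t0=37.7, t_env=0.0, k=0.3):
        """Newton's-law-of-cooling estimate of the minutes needed to cool from
        t0 to target in a bath at t_env: T(t) = t_env + (t0 - t_env) * exp(-k*t),
        solved for t.  k (per minute) lumps together contact area and heat
        transfer of the cooling medium."""
        return math.log((t0 - t_env) / (target - t_env)) / k

    # Hypothetical rate constants: ice water wets the whole tube surface, so
    # its effective k is larger than that of loosely packed ice.
    print(time_to_reach(4.0, k=0.6))  # ice water bath: a few minutes to 4 degrees C
    print(time_to_reach(4.0, k=0.2))  # tube on ice: roughly three times longer
    ```

    Because the time to target scales as 1/k, even a modest difference in effective contact between ice water and ice produces the large gap in time-to-4 °C seen in the data.
    
    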

  6. When to Blink and when to Think: Preference for Intuitive Decisions Results in Faster and Better Tactical Choices

    Science.gov (United States)

    Raab, Markus; Laborde, Sylvain

    2011-01-01

    Intuition is often considered an effective manner of decision making in sports. In this study we investigated whether a preference for intuition over deliberation results in faster and better lab-based choices in team handball attack situations with 54 male and female handball players of different expertise levels. We assumed that intuitive…

  7. Higher Resolution and Faster MRI of 31Phosphorus in Bone

    Science.gov (United States)

    Frey, Merideth; Barrett, Sean; Sethna, Zachary; Insogna, Karl; Vanhouten, Joshua

    2013-03-01

Probing the internal composition of bone on the sub-100 μm length scale is important to study normal features and to look for signs of disease. However, few useful non-destructive techniques are available to evaluate changes in the bone mineral chemical structure and functional micro-architecture on the interior of bones. MRI would be an excellent candidate, but bone is a particularly challenging tissue to study given the relatively low water density, wider linewidths of its solid components leading to low spatial resolution, and the long imaging time compared to conventional ¹H MRI. Our lab has recently made advances in obtaining high spatial resolution (sub-(400 μm)³) three-dimensional ³¹P MRI of bone through use of the quadratic echo line-narrowing sequence (1). In this talk, we describe our current results using proton decoupling to push this technique even further towards the factor of 1000 increase in spatial resolution imposed by fundamental limits. We also discuss our work to speed up imaging through novel, faster reconstruction algorithms that can reconstruct the desired image from very sparse data sets. (1) M. Frey, et al. PNAS 109: 5190 (2012).

  8. Stimulus- and goal-driven control of eye movements: action videogame players are faster but not better.

    Science.gov (United States)

    Heimler, Benedetta; Pavani, Francesco; Donk, Mieke; van Zoest, Wieske

    2014-11-01

    Action videogame players (AVGPs) have been shown to outperform nongamers (NVGPs) in covert visual attention tasks. These advantages have been attributed to improved top-down control in this population. The time course of visual selection, which permits researchers to highlight when top-down strategies start to control performance, has rarely been investigated in AVGPs. Here, we addressed specifically this issue through an oculomotor additional-singleton paradigm. Participants were instructed to make a saccadic eye movement to a unique orientation singleton. The target was presented among homogeneous nontargets and one additional orientation singleton that was more, equally, or less salient than the target. Saliency was manipulated in the color dimension. Our results showed similar patterns of performance for both AVGPs and NVGPs: Fast-initiated saccades were saliency-driven, whereas later-initiated saccades were more goal-driven. However, although AVGPs were faster than NVGPs, they were also less accurate. Importantly, a multinomial model applied to the data revealed comparable underlying saliency-driven and goal-driven functions for the two groups. Taken together, the observed differences in performance are compatible with the presence of a lower decision bound for releasing saccades in AVGPs than in NVGPs, in the context of comparable temporal interplay between the underlying attentional mechanisms. In sum, the present findings show that in both AVGPs and NVGPs, the implementation of top-down control in visual selection takes time to come about, and they argue against the idea of a general enhancement of top-down control in AVGPs.

  9. Validation of a semi-automated multi-component method using protein precipitation LC-MS-MS for the analysis of whole blood samples

    DEFF Research Database (Denmark)

    Slots, Tina

BACKGROUND: Solid phase extraction (SPE) is one of many multi-component methods, but can be very time-consuming and labour-intensive. Protein precipitation is, on the other hand, a much simpler and faster sample pre-treatment than SPE, and protein precipitation also has the ability to cover a wi......-mortem whole blood sample preparation for toxicological analysis; from the primary sample tube to a 96-deepwell plate ready for injection on the liquid chromatography mass spectrometry (LC-MS/MS)....

  10. High Fidelity, “Faster than Real-Time” Simulator for Predicting Power System Dynamic Behavior - Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Flueck, Alex [Illinois Inst. of Technology, Chicago, IL (United States)

    2017-07-14

The “High Fidelity, Faster than Real-Time Simulator for Predicting Power System Dynamic Behavior” was designed and developed by Illinois Institute of Technology with critical contributions from Electrocon International, Argonne National Laboratory, Alstom Grid and McCoy Energy. Also essential to the project were our two utility partners: Commonwealth Edison and AltaLink. The project was a success due to several major breakthroughs in the area of large-scale power system dynamics simulation, including (1) a validated faster-than-real-time simulation of both stable and unstable transient dynamics in a large-scale positive sequence transmission grid model, (2) a three-phase unbalanced simulation platform for modeling new grid devices, such as independently controlled single-phase static var compensators (SVCs), (3) the world’s first high fidelity three-phase unbalanced dynamics and protection simulator based on Electrocon’s CAPE program, and (4) a first-of-its-kind implementation of a single-phase induction motor model with stall capability. The simulator results will aid power grid operators in their true time of need, when there is a significant risk of cascading outages. The simulator will accelerate performance and enhance accuracy of dynamics simulations, enabling operators to maintain reliability and steer clear of blackouts. In the long term, the simulator will form the backbone of the newly conceived hybrid real-time protection and control architecture that will coordinate local controls, wide-area measurements, wide-area controls and advanced real-time prediction capabilities. The nation’s citizens will benefit in several ways, including (1) less down time from power outages due to the faster-than-real-time simulator’s predictive capability, (2) higher levels of reliability due to the detailed dynamics plus protection simulation capability, and (3) more resiliency due to the three-phase unbalanced simulator’s ability to

  11. Faster but not smarter: effects of caffeine and caffeine withdrawal on alertness and performance.

    Science.gov (United States)

    Rogers, Peter J; Heatherley, Susan V; Mullings, Emma L; Smith, Jessica E

    2013-03-01

    Despite 100 years of psychopharmacological research, the extent to which caffeine consumption benefits human functioning remains unclear. To measure the effects of overnight caffeine abstinence and caffeine administration as a function of level of habitual caffeine consumption. Medium-high (n = 212) and non-low (n = 157) caffeine consumers completed self-report measures and computer-based tasks before (starting at 10:30 AM) and after double-blind treatment with either caffeine (100 mg, then 150 mg) or placebo. The first treatment was given at 11:15 AM and the second at 12:45 PM, with post-treatment measures repeated twice between 1:45 PM and 3:30 PM. Caffeine withdrawal was associated with some detrimental effects at 10:30 AM, and more severe effects, including greater sleepiness, lower mental alertness, and poorer performance on simple reaction time, choice reaction time and recognition memory tasks, later in the afternoon. Caffeine improved these measures in medium-high consumers but, apart from decreasing sleepiness, had little effect on them in non-low consumers. The failure of caffeine to increase mental alertness and improve mental performance in non-low consumers was related to a substantial caffeine-induced increase in anxiety/jitteriness that offset the benefit of decreased sleepiness. Caffeine enhanced physical performance (faster tapping speed and faster simple and choice reaction times) in both medium-high and non-low consumers. While caffeine benefits motor performance and tolerance develops to its tendency to increase anxiety/jitteriness, tolerance to its effects on sleepiness means that frequent consumption fails to enhance mental alertness and mental performance.

  12. Faster radiotoxicological analyses of alpha emitters

    International Nuclear Information System (INIS)

    Willemot, J.M.; Verry, M.

    1989-10-01

Ion-exchange resins allow efficient separation of actinides likely to be found in human biological samples. However, saving time during the analysis is of particular interest to the biologist, especially if the quality of the separation and of the spectra is not affected. Such a result is possible with urine or feces using a macroporous resin.

  13. Faster N Release, but Not C Loss, From Leaf Litter of Invasives Compared to Native Species in Mediterranean Ecosystems

    Directory of Open Access Journals (Sweden)

    Guido Incerti

    2018-04-01

Plant invasions can have relevant impacts on biogeochemical cycles, but their extent in Mediterranean ecosystems has not yet been systematically assessed by comparing litter carbon (C) and nitrogen (N) dynamics between invasive plants and native communities. We carried out a 1-year litterbag experiment in 4 different plant communities (grassland, sand dune, riparian and mixed forests) on 8 invasive and 24 autochthonous plant species, used as controls. Plant litter was characterized for mass loss, N release, proximate lignin and litter chemistry by ¹³C CPMAS NMR. Native and invasive species showed significant differences in litter chemical traits, with invaders generally showing higher N concentration and lower lignin/N ratio. Mass loss data revealed no consistent differences between native and invasive species, although some woody and vine invaders showed exceptionally high decomposition rates. In contrast, N release from litter was faster for invasive plants compared to native species. N concentration, lignin content and the relative abundance of the methoxyl and N-alkyl C regions of the ¹³C CPMAS NMR spectra were the parameters that best explained mass loss and N mineralization rates. Our findings demonstrate that litter of invasive species decomposes at rates similar to those of natives but releases N faster. Accordingly, invasives are expected to affect the N cycle in Mediterranean plant communities, possibly promoting a shift of plant assemblages.
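Litterbag mass-loss series of the kind described above are conventionally summarised by Olson's single-exponential model, M(t) = M₀·e^(−kt), with the decay constant k estimated from a log-linear least-squares fit. A sketch with made-up retrieval data (the numbers are illustrative, not from this experiment):

```python
import math

def decay_constant(times_yr, mass_fraction):
    """Least-squares fit of ln(M/M0) = -k*t (Olson's single-exponential model)."""
    xs = times_yr
    ys = [math.log(m) for m in mass_fraction]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -slope  # k, in 1/year

# Illustrative litterbag data: fraction of initial mass remaining at each retrieval.
t = [0.25, 0.5, 0.75, 1.0]
remaining = [0.80, 0.62, 0.50, 0.40]
print(round(decay_constant(t, remaining), 2))  # → 0.92
```

Species with higher k (faster mass loss) would correspond to the fast-decomposing woody and vine invaders mentioned in the abstract.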

  14. A faster and more reliable data acquisition system for the full performance of the SciCRT

    International Nuclear Information System (INIS)

    Sasai, Y.; Matsubara, Y.; Itow, Y.; Sako, T.; Kawabata, T.; Lopez, D.; Hikimochi, R.; Tsuchiya, A.; Ikeno, M.; Uchida, T.; Tanaka, M.; Munakata, K.; Kato, C.; Nakamura, Y.; Oshima, T.; Koike, T.; Kozai, M.; Shibata, S.; Oshima, A.; Takamaru, H.

    2017-01-01

    The SciBar Cosmic Ray Telescope (SciCRT) is a massive scintillator tracker to observe cosmic rays at a very high-altitude environment in Mexico. The fully active tracker is based on the Scintillator Bar (SciBar) detector developed as a near detector for the KEK-to-Kamioka long-baseline neutrino oscillation experiment (K2K) in Japan. Since the data acquisition (DAQ) system was developed for the accelerator experiment, we determined to develop a new robust DAQ system to optimize it to our cosmic-ray experiment needs at the top of Mt. Sierra Negra (4600 m). One of our special requirements is to achieve a 10 times faster readout rate. We started to develop a new fast readout back-end board (BEB) based on 100 Mbps SiTCP, a hardware network processor developed for DAQ systems for high energy physics experiments. Then we developed the new BEB which has a potential of 20 times faster than the current one in the case of observing neutrons. Finally we installed the new DAQ system including the new BEBs to a part of the SciCRT in July 2015. The system has been operating since then. In this paper, we describe the development, the basic performance of the new BEB, the status after the installation in the SciCRT, and the future performance.
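SiTCP exposes the front-end data as an ordinary TCP byte stream, so a minimal host-side reader is just a socket client that reassembles fixed-size frames. The host, port and 64-byte frame size below are illustrative assumptions, not the SciCRT back-end board's actual interface:

```python
import socket

def read_events(host, port, frame_bytes=64, max_frames=10):
    """Minimal TCP reader for a SiTCP-style DAQ stream.

    SiTCP presents front-end data as a plain TCP byte stream; the frame
    size here is a hypothetical placeholder for the real event format.
    """
    frames = []
    with socket.create_connection((host, port), timeout=5.0) as sock:
        buf = b""
        while len(frames) < max_frames:
            chunk = sock.recv(4096)
            if not chunk:  # stream closed by the board
                break
            buf += chunk
            # Reassemble complete fixed-size frames from the byte stream.
            while len(buf) >= frame_bytes and len(frames) < max_frames:
                frames.append(buf[:frame_bytes])
                buf = buf[frame_bytes:]
    return frames
```

A faster back-end board raises the rate at which such frames arrive; the host-side reader only needs to keep `recv` draining the stream quickly enough.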

  16. Development of Novel Faster-Dissolving Microneedle Patches for Transcutaneous Vaccine Delivery.

    Science.gov (United States)

    Ono, Akihiko; Ito, Sayami; Sakagami, Shun; Asada, Hideo; Saito, Mio; Quan, Ying-Shu; Kamiyama, Fumio; Hirobe, Sachiko; Okada, Naoki

    2017-08-03

Microneedle (MN) patches are promising for transcutaneous vaccination because they enable vaccine antigens to physically penetrate the stratum corneum via low-invasive skin puncturing, and to be effectively delivered to antigen-presenting cells in the skin. In second-generation MN patches, the dissolving MNs release the loaded vaccine antigen into the skin. To shorten skin application time for clinical practice, this study aims to develop novel faster-dissolving MNs. We designed two types of MNs made from a single thickening agent, carboxymethylcellulose (CMC) or hyaluronan (HN). Both CMC-MN and HN-MN completely dissolved in rat skin after a 5-min application. In pre-clinical studies, both MNs demonstrably increased antigen-specific IgG levels after vaccination and prolonged antigen deposition compared with conventional injections, and delivered antigens into resected human dermal tissue. In clinical research, we demonstrated that both MNs could reliably and safely puncture human skin without significant skin irritation, as judged from transepidermal water loss measurements and ICDRG (International Contact Dermatitis Research Group) evaluation results.

  17. PET/CT Biograph trademark Sensation 16. Performance improvement using faster electronics

    International Nuclear Information System (INIS)

    Martinez, M.J.; Schwaiger, M.; Ziegler, S.I.; Bercier, Y.

    2006-01-01

Aim: the new PET/CT Biograph Sensation 16 (BS16) tomographs have faster detector electronics which allow a reduced timing coincidence window and an increased lower energy threshold (from 350 to 400 keV). This paper evaluates the performance of the BS16 PET scanner before and after the Pico-3D electronics upgrade. Methods: four NEMA NU 2-2001 protocols, (i) spatial resolution, (ii) scatter fraction, count losses and randoms measurement, (iii) sensitivity, and (iv) image quality, were performed. Results: a considerable change in both PET count-rate performance and image quality is observed after the electronics upgrade. The new scatter fraction obtained using Pico-3D electronics showed a 14% decrease compared to that obtained with the previous electronics. At the typical patient background activity (5.3 kBq/ml), the new scatter fraction was approximately 0.42. The noise-equivalent count-rate (R_NEC) performance was also improved: the value at which the R_NEC curve peaked increased from 3.7 × 10⁴ s⁻¹ at 14 kBq/ml to 6.4 × 10⁴ s⁻¹ at 21 kBq/ml (2R-NEC rate). Likewise, the peak true count rate increased from 1.9 × 10⁵ s⁻¹ at 22 kBq/ml to 3.4 × 10⁵ s⁻¹ at 33 kBq/ml. An average increase of 45% in contrast was observed for hot spheres when using AW-OSEM (4i×8s) as the reconstruction algorithm. For cold spheres, the average increase was 12%. Conclusion: the performance of the PET scanners in the BS16 tomographs is improved by the optimization of the signal processing. The narrower energy and timing coincidence windows lead to a considerable increase in signal-to-noise ratio. The combination of fast detectors and adapted electronics in the BS16 tomographs allows imaging protocols with reduced acquisition time, providing higher patient throughput. (orig.)
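The noise-equivalent count rate quoted above is conventionally defined (NEMA NU 2) as NEC = T²/(T + S + kR), where T, S and R are the true, scatter and random coincidence rates; k = 2 corresponds to the "2R-NEC" convention mentioned in the abstract (delayed-window randoms subtraction). A minimal sketch with illustrative rates:

```python
def nec_rate(trues, scatter, randoms, randoms_factor=2.0):
    """Noise-equivalent count rate: NEC = T**2 / (T + S + k * R).

    randoms_factor=2.0 gives the 2R-NEC convention (online randoms
    subtraction with a delayed window); 1.0 gives the 1R variant.
    """
    return trues ** 2 / (trues + scatter + randoms_factor * randoms)

# Illustrative count rates in counts/s (not the measured BS16 values):
print(nec_rate(trues=1.9e5, scatter=1.4e5, randoms=0.5e5))
```

The formula makes the trade-off explicit: raising the energy threshold cuts the scatter rate S, which increases NEC even when the true rate T is unchanged.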

  18. A Systematic Review and Meta-Analysis of the Effects of Transcranial Direct Current Stimulation (tDCS) Over the Dorsolateral Prefrontal Cortex in Healthy and Neuropsychiatric Samples: Influence of Stimulation Parameters.

    Science.gov (United States)

    Dedoncker, Josefien; Brunoni, Andre R; Baeken, Chris; Vanderhasselt, Marie-Anne

    2016-01-01

Research into the effects of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) on cognitive functioning is increasing rapidly. However, methodological heterogeneity in prefrontal tDCS research is also increasing, particularly in the technical stimulation parameters that might influence tDCS effects. To systematically examine the influence of technical stimulation parameters on DLPFC-tDCS effects, we performed a systematic review and meta-analysis of tDCS studies targeting the DLPFC published from the first data available to February 2016. Only single-session, sham-controlled, within-subject studies reporting the effects of tDCS on cognition in healthy controls and neuropsychiatric patients were included. Evaluation of 61 studies showed that after single-session anodal tDCS (a-tDCS), but not cathodal tDCS (c-tDCS), participants responded faster and more accurately on cognitive tasks. Sub-analyses specified that following a-tDCS, healthy subjects responded faster, while neuropsychiatric patients responded more accurately. Importantly, different stimulation parameters affected a-tDCS effects, but not c-tDCS effects, on accuracy: in healthy samples, increased current density and charge density resulted in improved accuracy, most prominently in females; for neuropsychiatric patients, task performance during a-tDCS resulted in stronger increases in accuracy rates compared to task performance following a-tDCS. In sum, healthy participants respond faster, but not more accurately, on cognitive tasks after a-tDCS; however, increasing the current density and/or charge might enhance response accuracy, particularly in females. In contrast, online task performance leads to greater increases in response accuracy than offline task performance in neuropsychiatric patients. Possible implications and practical recommendations are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Rapid extraction and assay of uranium from environmental surface samples

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Christopher A.; Chouyyok, Wilaiwan; Speakman, Robert J.; Olsen, Khris B.; Addleman, Raymond Shane

    2017-10-01

Extraction methods enabling faster removal and concentration of uranium compounds for improved trace and low-level assay are demonstrated for standard surface sampling material in support of nuclear safeguards efforts, health monitoring, and other nuclear analysis applications. A key problem with the existing surface sampling swipes is the requirement for complete digestion of the sample and sampling matrix. This is a time-consuming and labour-intensive process that limits laboratory throughput, elevates costs, and increases background levels. Various extraction methods are explored for their potential to quickly and efficiently remove different chemical forms of uranium from standard surface sampling material. A combination of carbonate and peroxide solutions is shown to give the most rapid and complete form of uranyl compound extraction and dissolution. This rapid extraction process is demonstrated to be compatible with standard inductively coupled plasma mass spectrometry methods for uranium isotopic assay as well as screening techniques such as x-ray fluorescence. The general approach described has application beyond uranium to other analytes of nuclear forensic interest (e.g., rare earth elements and plutonium) as well as heavy metals for environmental and industrial hygiene monitoring.

  20. Resonant Drag Instabilities in protoplanetary disks: the streaming instability and new, faster-growing instabilities

    Science.gov (United States)

    Squire, Jonathan; Hopkins, Philip F.

    2018-04-01

    We identify and study a number of new, rapidly growing instabilities of dust grains in protoplanetary disks, which may be important for planetesimal formation. The study is based on the recognition that dust-gas mixtures are generically unstable to a Resonant Drag Instability (RDI), whenever the gas, absent dust, supports undamped linear modes. We show that the "streaming instability" is an RDI associated with epicyclic oscillations; this provides simple interpretations for its mechanisms and accurate analytic expressions for its growth rates and fastest-growing wavelengths. We extend this analysis to more general dust streaming motions and other waves, including buoyancy and magnetohydrodynamic oscillations, finding various new instabilities. Most importantly, we identify the disk "settling instability," which occurs as dust settles vertically into the midplane of a rotating disk. For small grains, this instability grows many orders of magnitude faster than the standard streaming instability, with a growth rate that is independent of grain size. Growth timescales for realistic dust-to-gas ratios are comparable to the disk orbital period, and the characteristic wavelengths are more than an order of magnitude larger than the streaming instability (allowing the instability to concentrate larger masses). This suggests that in the process of settling, dust will band into rings then filaments or clumps, potentially seeding dust traps, high-metallicity regions that in turn seed the streaming instability, or even overdensities that coagulate or directly collapse to planetesimals.

  1. PIXE analysis of hair samples from artisanal mining communities in the Acupan region, Benguet, Philippines

    International Nuclear Information System (INIS)

    Clemente, Eligia; Sera, K.; Futatsugawa, S.; Murao, S.

    2004-01-01

of the 70 respondents showing no traces of mercury, while nine had levels beyond the 5 ppm limit set by the Human Biomonitor II [Bundesgesundheitsblatt 39 (1996) 221]. Further studies using PIXE analysis of hair are recommended on the same communities with a wider area base, to establish PIXE analysis of hair samples as an alternative procedure that is faster without sacrificing reliability

  2. Will small energy consumers be faster in transition? Evidence from the early shift from coal to oil in Latin America

    International Nuclear Information System (INIS)

    Rubio, M.d.Mar; Folchi, Mauricio

    2012-01-01

This paper provides evidence of the early transition from coal to oil for 20 Latin American countries over the first half of the 20th century, which does not fit the transition experiences of large energy consumers. These small energy consumers had earlier and faster transitions than leading nations. We also provide evidence for alternative sequences (inverse, revertible) in the transition from coal to oil. Furthermore, we demonstrate that ‘leapfrogging’ allowed a set of follower economies to reach the next rung of the energy ladder (oil domination) 30 years in advance of the most developed economies. We examine these follower economies, where transition took place earlier and faster than the cases historically known, in order to understand variation within energy transitions and to expand the array of feasible pathways of future energy transitions. We find that being a small energy consumer makes a difference for the way the energy transition takes place; but path dependence (including trade and technological partnerships), domestic energy endowment (which dictates relative prices) and policy decisions also seem to be the variables that shaped past energy transitions. - Highlights: ► We provide evidence of the early transition from coal to oil in 20 Latin American countries. ► We find that being a small energy consumer makes a difference for the way the energy transition takes place. ► Followers had earlier and faster transitions than leading nations. ► ‘Leapfrogging’ allowed extremely fast energy transitions. ► Alternative forms (revertible, inverse) of energy transition also exist.

  3. Simple DNA extraction of urine samples: Effects of storage temperature and storage time.

    Science.gov (United States)

    Ng, Huey Hian; Ang, Hwee Chen; Hoe, See Ying; Lim, Mae-Lynn; Tai, Hua Eng; Soh, Richard Choon Hock; Syn, Christopher Kiu-Choong

    2018-06-01

Urine samples are commonly analysed in cases of suspected illicit drug consumption. In the event of alleged sample mishandling, urine sample source identification may be necessary. A simple DNA extraction procedure suitable for STR typing of urine samples was established on the Promega Maxwell® 16 paramagnetic silica bead platform. A small sample volume of 1.7 mL was used. Samples were stored at room temperature, 4 °C and -20 °C for 100 days to investigate the influence of storage temperature and time on extracted DNA quantity and the success rate of STR typing. Samples stored at room temperature exhibited a faster decline in DNA yield with time and lower typing success rates than those at 4 °C and -20 °C. This trend can likely be attributed to DNA degradation. In conclusion, this study presents a quick and effective DNA extraction protocol from a small urine volume stored for up to 100 days at 4 °C and -20 °C. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Developments of solid materials for UF6 sampling

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Nicholas [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hebden, Andrew [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Savina, Joseph [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2017-11-15

This project demonstrated that a device built mostly from Commercial Off-the-Shelf (COTS) components could be used to collect uranium hexafluoride samples safely from gaseous or solid sources. The device was based on the successful Cristallini method developed by ABACC over the past 10 years. The system was designed to capture and store the UF6 as an inert fluoride salt to ease transportation regulations. In addition, the method was considerably faster than traditional cryogenic methods, collected enough material to perform analyses without undue waste, and could be used either inside a facility or in the storage yard.

  5. Faster eating rates are associated with higher energy intakes during an ad libitum meal, higher BMI and greater adiposity among 4·5-year-old children: results from the Growing Up in Singapore Towards Healthy Outcomes (GUSTO) cohort.

    Science.gov (United States)

    Fogel, Anna; Goh, Ai Ting; Fries, Lisa R; Sadananthan, Suresh A; Velan, S Sendhil; Michael, Navin; Tint, Mya-Thway; Fortier, Marielle V; Chan, Mei Jun; Toh, Jia Ying; Chong, Yap-Seng; Tan, Kok Hian; Yap, Fabian; Shek, Lynette P; Meaney, Michael J; Broekman, Birit F P; Lee, Yung Seng; Godfrey, Keith M; Chong, Mary F F; Forde, Ciarán G

    2017-04-01

    Faster eating rates are associated with increased energy intake, but little is known about the relationship between children's eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesised that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (n 386) from the Growing Up in Singapore Towards Healthy Outcomes cohort participated in a video-recorded ad libitum lunch at 4·5 years to measure acute energy intake. Videos were coded for three eating-behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). BMI and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n 153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r 0·61, P<0·001) and a positive linear relationship between eating rate and children's BMI status. Faster eaters consumed 75 % more energy content than slower eating children (Δ548 kJ (Δ131 kcal); 95 % CI 107·6, 154·4, P<0·001), and had higher whole-body (P<0·05) and subcutaneous abdominal adiposity (Δ118·3 cc; 95 % CI 24·0, 212·7, P=0·014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b 13·59; 95 % CI 7·48, 21·83). Children who ate faster had higher energy intake, and this was associated with increased BMI z-score and adiposity.
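The derived exposure above, eating rate in g/min, and the median split into slower vs. faster eaters can be sketched as follows (the meal data are invented for illustration, not GUSTO values):

```python
from statistics import median

def eating_rate(grams_consumed, meal_minutes):
    """Eating rate in g/min, the exposure measure used in the abstract."""
    return grams_consumed / meal_minutes

def median_split(rates):
    """Label each child 'faster' or 'slower' relative to the cohort median."""
    m = median(rates)
    return ["faster" if r > m else "slower" for r in rates]

# Illustrative values: (grams eaten, meal duration in minutes) per child.
meals = [(120, 15), (200, 12), (90, 18), (160, 10)]
rates = [eating_rate(g, t) for g, t in meals]
print(median_split(rates))  # → ['slower', 'faster', 'slower', 'faster']
```

In the study itself the rate was derived from coded bites, chews and swallows in video recordings; the split above only mirrors the above/below-median categorisation.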

  6. Oak ridge national laboratory automated clean chemistry for bulk analysis of environmental swipe samples

    Energy Technology Data Exchange (ETDEWEB)

    Bostick, Debra A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hexel, Cole R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ticknor, Brian W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tevepaugh, Kayron N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Metzger, Shalina C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

Automated chemical separations showed a significant decrease in hands-on time compared with manual separations, from 9.8 hours to 35 minutes for seven samples. This documented time savings and reduced labor translates to a significant cost savings per sample. Overall, the system will enable faster sample reporting times at reduced costs by limiting personnel hours dedicated to the chemical separation.

  7. The use of mini-samples in palaeomagnetism

    Science.gov (United States)

    Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez

    2009-10-01

    Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally, smaller diameters have been used as well, which presents distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site-mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k are on average slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-cores per site appears to outweigh the lower k values, also yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard-size samples. This is also interpreted to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site-mean directions. Apart from this, mini-samples do present clear advantages during the field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of
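    The paired Wilcoxon signed-rank comparison used above is standard (in practice `scipy.stats.wilcoxon`). As an illustration, here is a minimal normal-approximation version applied to invented per-site dispersion parameters k; for small samples an exact table should be used instead.

    ```python
    import math

    def wilcoxon_signed_rank(x, y):
        """Paired Wilcoxon signed-rank test (normal approximation).
        Returns (W+, two-sided p). Zero differences are dropped;
        tied absolute differences receive average ranks."""
        diffs = [a - b for a, b in zip(x, y) if a != b]
        n = len(diffs)
        order = sorted(range(n), key=lambda i: abs(diffs[i]))
        ranks = [0.0] * n
        i = 0
        while i < n:
            j = i
            while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
                j += 1
            avg_rank = (i + j) / 2 + 1          # average rank for this tie group
            for t in range(i, j + 1):
                ranks[order[t]] = avg_rank
            i = j + 1
        w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
        mu = n * (n + 1) / 4
        sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
        z = (w_plus - mu) / sigma
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return w_plus, p

    # Hypothetical paired site statistics (standard vs mini-samples), not the study's data
    k_standard = [42.0, 55.1, 38.2, 61.0, 47.3, 52.8, 44.9, 58.6, 40.7, 49.5, 36.4, 57.2]
    k_mini     = [39.5, 52.0, 39.1, 57.4, 44.0, 50.2, 45.8, 54.1, 38.9, 47.0, 35.2, 53.8]
    w, p = wilcoxon_signed_rank(k_standard, k_mini)
    ```
    
    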

  8. An inexpensive and portable microvolumeter for rapid evaluation of biological samples.

    Science.gov (United States)

    Douglass, John K; Wcislo, William T

    2010-08-01

    We describe an improved microvolumeter (MVM) for rapidly measuring volumes of small biological samples, including live zooplankton, embryos, and small animals and organs. Portability and low cost make this instrument suitable for widespread use, including at remote field sites. Beginning with Archimedes' principle, which states that immersing an arbitrarily shaped sample in a fluid-filled container displaces an equivalent volume, we identified procedures that maximize measurement accuracy and repeatability across a broad range of absolute volumes. Crucial steps include matching the overall configuration to the size of the sample, using reflected light to monitor fluid levels precisely, and accounting for evaporation during measurements. The resulting precision is at least 100 times higher than in previous displacement-based methods. Volumes are obtained much faster than by traditional histological or confocal methods and without shrinkage artifacts due to fixation or dehydration. Calibrations using volume standards confirmed accurate measurements of volumes as small as 0.06 microL. We validated the feasibility of evaluating soft-tissue samples by comparing volumes of freshly dissected ant brains measured with the MVM and by confocal reconstruction.
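    The displacement principle above converts a fluid-level rise into a volume once the container geometry is known. The helper below is a hypothetical sketch for a cylindrical measurement channel; the function name, channel geometry and the linear evaporation correction are illustrative assumptions, not the instrument's actual calibration.

    ```python
    import math

    def displaced_volume_ul(channel_diameter_mm, level_rise_mm,
                            evaporation_rate_mm_per_min=0.0, elapsed_min=0.0):
        """Sample volume in microlitres from the fluid-level rise in a cylindrical
        channel, adding back the level lost to evaporation during the measurement.
        1 mm^3 == 1 uL, so no further unit conversion is needed."""
        cross_section_mm2 = math.pi * (channel_diameter_mm / 2.0) ** 2
        corrected_rise_mm = level_rise_mm + evaporation_rate_mm_per_min * elapsed_min
        return cross_section_mm2 * corrected_rise_mm

    # A 0.1 mm rise in a 2 mm channel corresponds to ~0.31 uL, illustrating why
    # matching the channel diameter to the sample size matters for sub-microlitre work.
    v = displaced_volume_ul(2.0, 0.1)
    ```
    
    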

  9. Shampoo-clay heals diaper rash faster than calendula officinalis.

    Science.gov (United States)

    Adib-Hajbaghery, Mohsen; Mahmoudi, Mansoreh; Mashaiekhi, Mahdi

    2014-06-01

    Diaper rash is one of the most common skin disorders of infancy and childhood. Some studies have shown that shampoo-clay was effective in treating chronic dermatitis; it was therefore supposed that it might also be effective for diaper rash, although no published studies were found in this regard. This study aimed to compare the effects of shampoo-clay (S.C) and Calendula officinalis (C.O) in improving infantile diaper rash. A randomized, double-blind, parallel controlled, non-inferiority trial was conducted on 60 outpatient infants referred to health care centers or pediatric clinics in Khomein city and diagnosed with diaper rash. Patients were randomly assigned into two treatment groups, the S.C group (n = 30) and the C.O group (n = 30), using a one-to-one allocation ratio. The rate of complete recovery in three days was the primary outcome. Data were collected using a checklist and analyzed using the t-test, Chi-square and Fisher's exact tests and risk ratio. In total, 93.3% of lesions in the S.C group healed in the first 6 hours, while this rate was 40% in the C.O group (P < 0.001). The healing ratio for improvement in the first 6 hours was 7 times higher in the S.C group. In addition, 90% of infants in the S.C group and 36.7% in the C.O group improved completely in the first 3 days (P < 0.001). S.C was effective in healing diaper rash and acted faster than C.O.

  10. Slower Perception Followed by Faster Lexical Decision in Longer Words: A Diffusion Model Analysis.

    Science.gov (United States)

    Oganian, Yulia; Froehlich, Eva; Schlickeiser, Ulrike; Hofmann, Markus J; Heekeren, Hauke R; Jacobs, Arthur M

    2015-01-01

    Effects of stimulus length on reaction times (RTs) in the lexical decision task are the topic of extensive research. While slower RTs are consistently found for longer pseudo-words, a finding coined the word length effect (WLE), some studies found no effects for words, and yet others reported faster RTs for longer words. Moreover, the WLE depends on the orthographic transparency of a language, with larger effects in more transparent orthographies. Here we investigate processes underlying the WLE in lexical decision in German-English bilinguals using a diffusion model (DM) analysis, which we compared to a linear regression approach. In the DM analysis, RT-accuracy distributions are characterized using parameters that reflect latent sub-processes, in particular evidence accumulation and decision-independent perceptual encoding, instead of typical parameters such as mean RT and accuracy. The regression approach showed a decrease in RTs with length for pseudo-words, but no length effect for words. However, DM analysis revealed that the null effect for words resulted from opposing effects of length on perceptual encoding and rate of evidence accumulation. Perceptual encoding times increased with length for words and pseudo-words, whereas the rate of evidence accumulation increased with length for real words but decreased for pseudo-words. A comparison between DM parameters in German and English suggested that orthographic transparency affects perceptual encoding, whereas effects of length on evidence accumulation are likely to reflect contextual information and the increase in available perceptual evidence with length. These opposing effects may account for the inconsistent findings on WLEs.
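    The diffusion-model decomposition described above, a decision-independent encoding time plus noisy evidence accumulation to a bound, can be illustrated with a toy simulation. All parameter values below are invented (real diffusion-model fitting uses dedicated tools); the point is to show how a longer perceptual encoding time and a higher accumulation rate can offset each other, producing a null effect on mean RT.

    ```python
    import math
    import random

    def mean_rt(drift, t_encode, bound=1.0, dt=0.001, n_trials=500, seed=7):
        """Mean RT (s) of a basic diffusion process: evidence starts at 0 and
        drifts with Gaussian noise until it hits +/-bound; t_encode models
        decision-independent perceptual encoding (non-decision time)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < bound:
                x += drift * dt + rng.gauss(0.0, math.sqrt(dt))  # Euler step
                t += dt
            total += t_encode + t
        return total / n_trials

    # Invented parameters: longer words get slower encoding but faster accumulation
    short_words = mean_rt(drift=0.5, t_encode=0.25)
    long_words = mean_rt(drift=2.0, t_encode=0.70)
    # The two opposing length effects roughly cancel in mean RT,
    # which is the pattern the DM analysis disentangles.
    ```
    
    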

  11. Considering Time in Orthophotography Production: from a General Workflow to a Shortened Workflow for a Faster Disaster Response

    Science.gov (United States)

    Lucas, G.

    2015-08-01

    orthophoto production faster. The shortened workflow reduces production time by more than a factor of three, while the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation through the production process shows that it is worth saving time in the post-processing phase.

  12. In situ characterization of delamination and crack growth of a CGO–LSM multi-layer ceramic sample investigated by X-ray tomographic microscopy

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Esposito, Vincenzo; Lauridsen, Erik Mejdal

    2014-01-01

    The densification, delamination and crack growth behavior in a Ce0.9Gd0.1O1.95 (CGO) and (La0.85Sr0.15)0.9MnO3 (LSM) multi-layer ceramic sample was studied using in situ X-ray tomographic microscopy (microtomography) to investigate the critical dynamics of crack propagation and delamination...... in a multilayered sample. Naturally occurring defects, caused by the sample preparation process, are shown not to be critical in sample degradation. Instead defects are nucleated during the debinding step. Crack growth is significantly faster along the material layers than perpendicular to them, and crack growth...

  13. Bell violation using entangled photons without the fair-sampling assumption.

    Science.gov (United States)

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  14. The Development of Functional Overreaching Is Associated with a Faster Heart Rate Recovery in Endurance Athletes.

    Directory of Open Access Journals (Sweden)

    Anaël Aubry

    The aim of the study was to investigate whether heart rate recovery (HRR) may represent an effective marker of functional overreaching (f-OR) in endurance athletes. Thirty-one experienced male triathletes were tested (10 control and 21 overload subjects) before (Pre), immediately after an overload training period (Mid), and after a 2-week taper (Post). Physiological responses were assessed during an incremental cycling protocol to exhaustion, including heart rate, catecholamine release and blood lactate concentration. Ten participants from the overload group developed signs of f-OR at Mid (i.e. -2.1 ± 0.8% change in performance associated with concomitant high perceived fatigue). Additionally, only the f-OR group demonstrated a 99% chance of an increase in HRR during the overload period (+8 ± 5 bpm, large effect size). Concomitantly, this group also revealed a >80% chance of decreasing blood lactate (-11 ± 14%, large), plasma norepinephrine (-12 ± 37%, small) and plasma epinephrine peak concentrations (-51 ± 22%, moderate). These blood measures returned to baseline levels at Post. HRR change was negatively correlated to changes in performance, peak HR and peak blood metabolite concentrations. These findings suggest that (i) a faster HRR is not systematically associated with improved physical performance, (ii) changes in HRR should be interpreted in the context of the specific training phase, the athlete's perceived level of fatigue and the performance response, and (iii) the faster HRR associated with f-OR may be induced by a decreased central command and by a lower chemoreflex activity.

  15. With medium-chain triglycerides, higher and faster oxygen radical production by stimulated polymorphonuclear leukocytes occurs.

    Science.gov (United States)

    Kruimel, J W; Naber, A H; Curfs, J H; Wenker, M A; Jansen, J B

    2000-01-01

    Parenteral lipid emulsions are suspected of suppressing the immune function. However, study results are contradictory and mainly concern the conventional long-chain triglyceride emulsions. Polymorphonuclear leukocytes were preincubated with parenteral lipid emulsions. The influence of the lipid emulsions on the production of oxygen radicals by these stimulated leukocytes was studied by measuring chemiluminescence. Three different parenteral lipid emulsions were tested: long-chain triglycerides, a physical mixture of medium- and long-chain triglycerides, and structured triglycerides. Structured triglycerides consist of triglycerides where the medium- and long-chain fatty acids are attached to the same glycerol molecule. Stimulated polymorphonuclear leukocytes preincubated with the physical mixture of medium- and long-chain triglycerides showed significantly higher levels of oxygen radicals than those preincubated with long-chain triglycerides or structured triglycerides. Additional studies indicated that differences in the results of the various lipid emulsions were not caused by differences in emulsifier. The overall production of oxygen radicals was significantly lower after preincubation with the three lipid emulsions compared with controls without lipid emulsion. A physical mixture of medium- and long-chain triglycerides induced faster production of oxygen radicals, resulting in higher levels of oxygen radicals, compared with long-chain triglycerides or structured triglycerides. This can be detrimental in cases where oxygen radicals play either a pathogenic role or a beneficial one, such as when rapid phagocytosis and killing of bacteria is needed. The observed lower production of oxygen radicals by polymorphonuclear leukocytes in the presence of parenteral lipid emulsions may result in immunosuppression by these lipids.

  16. Level of headaches after surgical aneurysm clipping decreases significantly faster compared to endovascular coiled patients

    Directory of Open Access Journals (Sweden)

    Athanasios K. Petridis

    2017-04-01

    In incidental aneurysms, endovascular treatment can lead to post-procedural headaches. We studied the difference between surgical clipping and endovascular coiling with respect to post-procedural headaches in patients with ruptured aneurysms. Sixty-seven patients with aneurysmal subarachnoid haemorrhage were treated in our department from September 1st 2015 to September 1st 2016. Forty-three patients were included in the study; the rest were excluded because of late recovery or high-grade subarachnoid bleedings. Twenty-two were surgically treated and twenty-one were interventionally treated. We compared post-procedural headaches at the time points of 24 h, 21 days, and 3 months after treatment using the visual analog scale (VAS) for pain. After surgical clipping the headache score decreased by 8.8 points on the VAS, whereas the endovascular treated population showed a decrease of 3.3 points. This difference was highly statistically significant and remained significant even after 3 weeks, when the pain score was 0.68 for the surgically treated patients and 1.8 for the endovascular treated patients. After 3 months the pain was less than 1 for both groups, with surgically treated patients scoring 0.1 and endovascular treated patients 0.9 (not significant). Clipping relieves the headaches of patients with aneurysm rupture faster and more effectively than endovascular coiling. This effect stays significant for at least 3 weeks and plays a crucial role in stress relief during the acute and subacute ICU care of such patients.

  17. Experiments of the EPR-type involving CP-violation do not allow faster-than-light communication between distant observers

    International Nuclear Information System (INIS)

    Ghirardi, G.C.; Grassi, R.; Weber, T.; Rimini, A.

    1987-11-01

    The proof that faster-than-light communication is not permitted by quantum mechanics, derived some years ago by three of us, is extended to cover the case of measurements which do not fit within the standard scheme based on sets of orthogonal projections. A detailed discussion of a recent proposal of superluminal transmission resorting to a CP-violating interaction is presented. It is shown that such a proposal cannot work. (author). 8 refs

  18. The importance of moral construal: moral versus non-moral construal elicits faster, more extreme, universal evaluations of the same actions.

    Directory of Open Access Journals (Sweden)

    Jay J Van Bavel

    Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions (the belief that absolutely nobody or everybody should engage in an action) than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggest that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.

  19. SWIFT X-RAY OBSERVATIONS OF CLASSICAL NOVAE. II. THE SUPER SOFT SOURCE SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, Greg J. [American Astronomical Society, 2000 Florida Avenue, NW, Suite 400, Washington, DC 20009-1231 (United States); Ness, Jan-Uwe [XMM-Newton Science Operations Centre, ESAC, Apartado 78, 28691 Villanueva de la Canada, Madrid (Spain); Osborne, J. P.; Page, K. L.; Evans, P. A.; Beardmore, A. P. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); Walter, Frederick M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Andrew Helton, L. [SOFIA Science Center, USRA, NASA Ames Research Center, M.S. N211-3, Moffett Field, CA 94035 (United States); Woodward, Charles E. [Minnesota Institute of Astrophysics, 116 Church Street S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Bode, Mike [Astrophysics Research Institute, Liverpool John Moores University, Birkenhead CH41 1LD (United Kingdom); Starrfield, Sumner [School of Earth and Space Exploration, Arizona State University, P.O. Box 871404, Tempe, AZ 85287-1404 (United States); Drake, Jeremy J., E-mail: Greg.Schwarz@aas.org [Smithsonian Astrophysical Observatory, 60 Garden Street, MS 3, Cambridge, MA 02138 (United States)

    2011-12-01

    The Swift gamma-ray burst satellite is an excellent facility for studying novae. Its rapid response time and sensitive X-ray detector provide an unparalleled opportunity to investigate the previously poorly sampled evolution of novae in the X-ray regime. This paper presents Swift observations of 52 Galactic/Magellanic Cloud novae. We included the X-Ray Telescope (0.3-10 keV) instrument count rates and the UltraViolet and Optical Telescope (1700-8000 Å) filter photometry. Also included in the analysis are the publicly available pointed observations of 10 additional novae in the X-ray archives. This is the largest X-ray sample of Galactic/Magellanic Cloud novae yet assembled and consists of 26 novae with Super Soft X-ray emission, 19 from Swift observations. The data set shows that the faster novae have an early hard X-ray phase that is usually missing in slower novae. The Super Soft X-ray phase occurs earlier and does not last as long in fast novae compared to slower novae. All the Swift novae with sufficient observations show that novae are highly variable with rapid variability and different periodicities. In the majority of cases, nuclear burning ceases less than three years after the outburst begins. Previous relationships, such as the nuclear burning duration versus t2 or the expansion velocity of the ejecta, and nuclear burning duration versus the orbital period, are shown to be poorly correlated with the full sample, indicating that additional factors beyond the white dwarf mass and binary separation play important roles in the evolution of a nova outburst. Finally, we confirm two optical phenomena that are correlated with strong, soft X-ray emission, which can be used to further increase the efficiency of X-ray campaigns.

  20. SWIFT X-RAY OBSERVATIONS OF CLASSICAL NOVAE. II. THE SUPER SOFT SOURCE SAMPLE

    International Nuclear Information System (INIS)

    Schwarz, Greg J.; Ness, Jan-Uwe; Osborne, J. P.; Page, K. L.; Evans, P. A.; Beardmore, A. P.; Walter, Frederick M.; Andrew Helton, L.; Woodward, Charles E.; Bode, Mike; Starrfield, Sumner; Drake, Jeremy J.

    2011-01-01

    The Swift gamma-ray burst satellite is an excellent facility for studying novae. Its rapid response time and sensitive X-ray detector provide an unparalleled opportunity to investigate the previously poorly sampled evolution of novae in the X-ray regime. This paper presents Swift observations of 52 Galactic/Magellanic Cloud novae. We included the X-Ray Telescope (0.3-10 keV) instrument count rates and the UltraViolet and Optical Telescope (1700-8000 Å) filter photometry. Also included in the analysis are the publicly available pointed observations of 10 additional novae in the X-ray archives. This is the largest X-ray sample of Galactic/Magellanic Cloud novae yet assembled and consists of 26 novae with Super Soft X-ray emission, 19 from Swift observations. The data set shows that the faster novae have an early hard X-ray phase that is usually missing in slower novae. The Super Soft X-ray phase occurs earlier and does not last as long in fast novae compared to slower novae. All the Swift novae with sufficient observations show that novae are highly variable with rapid variability and different periodicities. In the majority of cases, nuclear burning ceases less than three years after the outburst begins. Previous relationships, such as the nuclear burning duration versus t2 or the expansion velocity of the ejecta, and nuclear burning duration versus the orbital period, are shown to be poorly correlated with the full sample, indicating that additional factors beyond the white dwarf mass and binary separation play important roles in the evolution of a nova outburst. Finally, we confirm two optical phenomena that are correlated with strong, soft X-ray emission, which can be used to further increase the efficiency of X-ray campaigns.

  1. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a complex numerical evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
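    The core idea, replacing the expensive forward model with a fast approximation and folding the quantified modeling error into the likelihood, can be sketched with a toy 1-D Metropolis sampler. The cubic "accurate" forward, the polynomial surrogate and all noise levels below are invented stand-ins for the full-waveform simulator and the trained network; this is a sketch of the technique, not the paper's implementation.

    ```python
    import math
    import random

    def metropolis(log_post, m0, n_steps=5000, step=0.1, seed=1):
        # Random-walk Metropolis sampler over a single model parameter
        rng = random.Random(seed)
        m, lp = m0, log_post(m0)
        chain = []
        for _ in range(n_steps):
            cand = m + rng.gauss(0.0, step)
            lp_cand = log_post(cand)
            if math.log(rng.random()) < lp_cand - lp:   # accept/reject
                m, lp = cand, lp_cand
            chain.append(m)
        return chain

    forward_accurate = lambda m: m**3             # expensive physics (toy)
    forward_fast = lambda m: m**3 + 0.05 * m      # cheap surrogate with a small error

    d_obs = forward_accurate(1.2)                 # synthetic observation, true m = 1.2
    sigma_data, sigma_model = 0.10, 0.10          # data noise + quantified surrogate error
    sigma = math.sqrt(sigma_data**2 + sigma_model**2)

    # Gaussian likelihood evaluated with the FAST forward; N(0, 2) prior on m.
    # Inflating sigma by the modeling error is what keeps the surrogate honest.
    log_post = lambda m: -0.5*((d_obs - forward_fast(m))/sigma)**2 - 0.5*(m/2.0)**2

    chain = metropolis(log_post, m0=0.0)
    posterior_mean = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
    ```

    Because every posterior evaluation now costs a surrogate call instead of a full simulation, the chain can be run orders of magnitude longer for the same budget, which is the speedup the abstract reports.
    
    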

  2. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    DEFF Research Database (Denmark)

    Abma, Femke I.; Bültmann, Ute; Amick, Benjamin C.

    2017-01-01

    Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands...

  3. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    NARCIS (Netherlands)

    Abma, F.I.; Bultmann, U.; Amick III, B.C.; Arends, I.; Dorland, P.A.; Flach, P.A.; Klink, J.J.L van der; Ven H.A., van de; Bjørner, J.B.

    2017-01-01

    Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with

  4. Convolutional neural network using generated data for SAR ATR with limited samples

    Science.gov (United States)

    Cong, Longjian; Gao, Lei; Zhang, Hui; Sun, Peng

    2018-03-01

    Because it can operate in all weather conditions at any time of day, Synthetic Aperture Radar (SAR) has become a hot research topic in remote sensing. Despite all the well-known advantages of SAR, it is hard to extract features because of its unique imaging methodology, and this challenge attracts the research interest of traditional Automatic Target Recognition (ATR) methods. With the development of deep learning technologies, convolutional neural networks (CNNs) give us another way to detect and recognize targets when a huge number of samples is available, but this premise often does not hold when it comes to monitoring a specific type of ship. In this paper, we propose a method to enhance the performance of Faster R-CNN with limited samples to detect and recognize ships in SAR images.

  5. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We will show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  6. Green sample preparation for liquid chromatography and capillary electrophoresis of anionic and cationic analytes.

    Science.gov (United States)

    Wuethrich, Alain; Haddad, Paul R; Quirino, Joselito P

    2015-04-21

    A sample preparation device for the simultaneous enrichment and separation of cationic and anionic analytes was designed and implemented in an eight-channel configuration. The device is based on the use of an electric field to transfer the analytes from a large volume of sample into small volumes of electrolyte that was suspended into two glass micropipettes using a conductive hydrogel. This simple, economical, fast, and green (no organic solvent required) sample preparation scheme was evaluated using cationic and anionic herbicides as test analytes in water. The analytical figures of merit and ecological aspects were evaluated against the state-of-the-art sample preparation, solid-phase extraction. A drastic reduction in both sample preparation time (94% faster) and resources (99% fewer consumables used) was observed. Finally, the technique in combination with high-performance liquid chromatography and capillary electrophoresis was applied to the analysis of quaternary ammonium and phenoxypropionic acid herbicides in fortified river water as well as drinking water (at levels relevant to Australian guidelines). The presented sustainable sample preparation approach could easily be applied to other charged analytes or adopted by other laboratories.

  7. Faster processing of multiple spatially-heterodyned direct to digital holograms

    Science.gov (United States)

    Hanson, Gregory R [Clinton, TN; Bingham, Philip R [Knoxville, TN

    2008-09-09

    Systems and methods are described for faster processing of multiple spatially-heterodyned direct to digital holograms. A method of obtaining multiple spatially-heterodyned holograms includes: digitally recording a first spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; digitally recording a second spatially-heterodyned hologram including spatial heterodyne fringes for Fourier analysis; Fourier analyzing the recorded first spatially-heterodyned hologram by shifting a first original origin of the recorded first spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a first angle between a first reference beam and a first object beam; applying a first digital filter to cut off signals around the first original origin and performing an inverse Fourier transform on the result; Fourier analyzing the recorded second spatially-heterodyned hologram by shifting a second original origin of the recorded second spatially-heterodyned hologram including spatial heterodyne fringes in Fourier space to sit on top of a spatial-heterodyne carrier frequency defined as a second angle between a second reference beam and a second object beam; and applying a second digital filter to cut off signals around the second original origin and performing an inverse Fourier transform on the result, wherein digitally recording the first spatially-heterodyned hologram is completed before digitally recording the second spatially-heterodyned hologram and a single digital image includes both the first spatially-heterodyned hologram and the second spatially-heterodyned hologram.
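    The Fourier steps in the claim (shift the sideband at the spatial-heterodyne carrier frequency onto the origin, apply a digital filter to cut off signals around the original origin, then inverse transform) can be sketched for one synthetic hologram with NumPy. The frame size, carrier frequency and Gaussian test phase below are all invented for illustration.

    ```python
    import numpy as np

    N, kx = 256, 32                     # frame size; carrier frequency (cycles/frame)
    y, x = np.mgrid[0:N, 0:N]

    # Synthetic spatially-heterodyned hologram: a smooth object phase on carrier fringes
    obj_phase = 1.0 * np.exp(-((x - N/2)**2 + (y - N/2)**2) / (2 * 40.0**2))
    hologram = 1.0 + np.cos(2*np.pi*kx*x/N + obj_phase)

    # Fourier analyze: move the +carrier sideband onto the origin of Fourier space
    F = np.roll(np.fft.fft2(hologram), -kx, axis=1)

    # Digital low-pass filter cuts off the DC term and the -carrier sideband,
    # which now sit away from the origin
    fy = np.fft.fftfreq(N)[:, None]
    fx = np.fft.fftfreq(N)[None, :]
    F = F * ((fx**2 + fy**2) < (kx / (2*N))**2)

    # The inverse transform leaves ~0.5*exp(i*obj_phase); its angle is the object phase
    recovered = np.angle(np.fft.ifft2(F))
    ```

    Recording two holograms with different carrier angles in one digital image, as the claim describes, simply puts the two sidebands at different locations in Fourier space, so each can be shifted and filtered out independently by repeating these steps.
    
    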

  8. Comparative Analysis of Clinical Samples Showing Weak Serum Reaction on AutoVue System Causing ABO Blood Typing Discrepancies

    OpenAIRE

    Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung-Hoi

    2016-01-01

    Background: ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks and therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results with the reference manual method to confirm ABO blood typing. Methods: In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to...

  9. Better, faster and cheaper energy facades for transformation of multi-storey blocks built between 1960 and 1976

    DEFF Research Database (Denmark)

    Vestergaard, Inge

    2013-01-01

    an innovative process. The aim is to inspire clients and consultants to think smart through optimizing the planning and design process as well as the building process. Through this way of thinking we can secure a better, cheaper and faster energy renovation of the existing building stock. The project is under...... scale projects have been driven by the housing association AL2bolig, Tilst, Denmark. Author was part of the architectural discussion through planning process and the evaluation of the first frame competition. The methods used are: - participation through the development process - comparable research...

  10. Osmium Atoms and Os2 Molecules Move Faster on Selenium-Doped Compared to Sulfur-Doped Boronic Graphenic Surfaces.

    Science.gov (United States)

    Barry, Nicolas P E; Pitto-Barry, Anaïs; Tran, Johanna; Spencer, Simon E F; Johansen, Adam M; Sanchez, Ana M; Dove, Andrew P; O'Reilly, Rachel K; Deeth, Robert J; Beanland, Richard; Sadler, Peter J

    2015-07-28

    We deposited Os atoms on S- and Se-doped boronic graphenic surfaces by electron bombardment of micelles containing 16e complexes [Os(p-cymene)(1,2-dicarba-closo-dodecarborane-1,2-diselenate/dithiolate)] encapsulated in a triblock copolymer. The surfaces were characterized by energy-dispersive X-ray (EDX) analysis and electron energy loss spectroscopy of energy filtered TEM (EFTEM). Os atoms moved ca. 26× faster on the B/Se surface compared to the B/S surface (233 ± 34 pm·s(-1) versus 8.9 ± 1.9 pm·s(-1)). Os atoms formed dimers with an average Os-Os distance of 0.284 ± 0.077 nm on the B/Se surface and 0.243 ± 0.059 nm on B/S, close to that in metallic Os. The Os2 molecules moved 0.83× and 0.65× more slowly than single Os atoms on B/S and B/Se surfaces, respectively, and again markedly faster (ca. 20×) on the B/Se surface (151 ± 45 pm·s(-1) versus 7.4 ± 2.8 pm·s(-1)). Os atom motion did not follow Brownian motion and appears to involve anchoring sites, probably S and Se atoms. The ability to control the atomic motion of metal atoms and molecules on surfaces has potential for exploitation in nanodevices of the future.

  11. Self-Assembled Complexes of Horseradish Peroxidase with Magnetic Nanoparticles Showing Enhanced Peroxidase Activity

    KAUST Repository

    Corgié, Stéphane C.; Kahawong, Patarawan; Duan, Xiaonan; Bowser, Daniel; Edward, Joseph B.; Walker, Larry P.; Giannelis, Emmanuel P.

    2012-01-01

    Bio-nanocatalysts (BNCs) consisting of horseradish peroxidase (HRP) self-assembled with magnetic nanoparticles (MNPs) enhance enzymatic activity due to the faster turnover and lower inhibition of the enzyme. The size and magnetization of the MNPs

  12. Extraction and Determination of Crocin in Saffron Samples by Dispersive Liquid-Liquid Microextraction

    Directory of Open Access Journals (Sweden)

    Somayeh Heydari

    2016-11-01

    Full Text Available The main component responsible for color in saffron is crocin, with the chemical formula C44H64O24. Crocin is one of several carotenoids in nature that are soluble in water. This solubility is one of the reasons for its widespread use as a colorant in food and medicine compared to other carotenoids. The coloring strength of saffron is one of the major factors that determine the quality of the saffron stigma, and it is evaluated by measuring crocin. Microextraction is the newest and easiest method that can be successfully applied for the preconcentration and separation of crocin in saffron samples. The advantages of this method are faster, cheaper and easier analysis by UV-Vis spectrophotometry in the measurement of crocin compared to chromatographic analysis methods. The studies showed that the type and volume of the disperser and extractant solvents have a significant effect on the efficiency of crocin extraction. In this work, acetone as the disperser solvent and dichloromethane as the extractant solvent were found to be a suitable combination. Under the optimal conditions, the calibration curve was linear in the range of 0.15-0.00001 μg mL-1, and the limit of detection (LOD), calculated as 3Sb/m (where Sb and m are the standard deviation of the blank and the slope of the calibration curve, respectively), was 0.000008 μg mL-1. The procedure was applied to saffron samples and good recoveries were obtained.
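    The LOD rule quoted above (LOD = 3·Sb/m) is straightforward to reproduce. The blank readings and calibration slope below are made-up numbers for illustration, not the paper's data:

    ```python
    import numpy as np

    # Hypothetical blank absorbance readings (10 replicates) and calibration
    # slope m in absorbance units per (ug/mL) -- illustrative values only.
    blank = np.array([0.0031, 0.0012, 0.0025, 0.0018, 0.0022,
                      0.0027, 0.0015, 0.0020, 0.0024, 0.0019])
    m = 300.0

    s_b = blank.std(ddof=1)   # Sb: standard deviation of the blank
    lod = 3 * s_b / m         # LOD = 3*Sb/m, as defined in the abstract
    ```

    With a steeper calibration slope or less noisy blanks, the same formula yields a proportionally lower detection limit.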

  13. High-pressure oxygenation of thin-wall YBCO single-domain samples

    International Nuclear Information System (INIS)

    Chaud, X; Savchuk, Y; Sergienko, N; Prikhna, T; Diko, P

    2008-01-01

    The oxygen annealing of ReBCO bulk material, necessary to achieve superconducting properties, usually induces micro- and macro-cracks. This leads to a crack-assisted oxygenation process that allows large bulk samples to be oxygenated faster than single crystals, but the excellent superconducting properties are cancelled out by the poor mechanical ones. A more progressive oxygenation strategy has been shown to reduce the oxygenation cracks drastically; the problem is then to keep the annealing time reasonable. The concept of bulk Y123 single-domain samples with thin-wall geometry has been introduced to bypass the inherent limitation due to the slow oxygen diffusion rate, but it is not enough. The use of a high oxygen pressure (16 MPa) speeds the process up further: it introduces a displacement in the equilibrium phase diagram towards higher temperatures, i.e., higher diffusion rates, to achieve a given oxygen content in the material. Remarkable results were obtained by applying such a high-pressure oxygen annealing process to thin-wall single-domain samples. The trapped field of 16 mm diameter Y123 thin-wall single-domain samples was doubled (0.6 T vs 0.3 T at 77 K) using half the annealing time (about 3 days). The initial development was made on thin bars. The advantage of thin-wall geometry is that such an annealing can be applied directly to a much larger sample

  14. Faster Than Light (FTL) Travel and Causality in the Context of the Gravity-Electro-Magnetism (GEM) Theory of Field Unification

    Science.gov (United States)

    Brandenburg, J. E.

    2010-01-01

    In the GEM theory (Brandenburg, 2006), direct manipulation of space-time geometry is possible, leading to the possibility of transformation of a starship into a tachyon moving Faster Than Light (FTL). The GEM theory is reviewed and Causality in terms of the time ordering of experienced events is considered, as well as the space-time curvature signature of such FTL particles. Time ordering and time flow is found to be determined by the 2nd law of thermodynamics and is used to derive a Cosmic time flow in terms of the expansion of the universe. The rate of increase of cosmic entropy is approximately dS/dt = c³/(Gm_p), the rate at which light transits a proton-mass Black Hole, reminiscent of the Dirac Large Number Hypothesis relating Cosmic and subatomic quantities. It is found that the tachyon FTL method, rather than allowing reversal of time ordering of experienced events, actually makes the cosmos age faster by contributing to an increase in ``Dark Energy'' and thus FTL travel via tachyons irreversibly changes the cosmos. Therefore, it appears that FTL travel can be accomplished without violation of Causality.
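    Plugging SI constants into the quoted rate dS/dt ≈ c³/(Gm_p) gives a concrete magnitude (dimensionally, the expression is an inverse time, i.e. a rate per second):

    ```python
    # Physical constants in SI units.
    c = 2.99792458e8      # speed of light, m/s
    G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
    m_p = 1.67262192e-27  # proton mass, kg

    dS_dt = c ** 3 / (G * m_p)   # entropy increase rate, ~2.4e62 per second
    ```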

  15. State of the art of environmentally friendly sample preparation approaches for determination of PBDEs and metabolites in environmental and biological samples: A critical review.

    Science.gov (United States)

    Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C

    2016-01-28

    Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of organic persistent pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Analysis of boron utilization in sample preparation for microorganisms detection by neutron radiography technique

    International Nuclear Information System (INIS)

    Wacha, Reinaldo; Crispim, Verginia R.

    2000-01-01

    The neutron radiography technique applied to microorganism detection is studied as a new and faster alternative for the diagnosis of infectious agents. This work presents the parameters and effects involved in the use of boron as a conversion agent, which converts neutrons into α particles capable of generating latent tracks in a solid state nuclear track detector, CR-39. The collected samples are doped with boron by the incubation method, promoting a microorganism/boron interaction that guarantees the identification of the images of those microorganisms through their morphology. (author)

  17. An adaptive Phase-Locked Loop algorithm for faster fault ride through performance of interconnected renewable energy sources

    DEFF Research Database (Denmark)

    Hadjidemetriou, Lenos; Kyriakides, Elias; Blaabjerg, Frede

    2013-01-01

    Interconnected renewable energy sources require fast and accurate fault ride through operation in order to support the power grid when faults occur. This paper proposes an adaptive Phase-Locked Loop (adaptive dαβPLL) algorithm, which can be used for a faster and more accurate response of the grid...... side converter control of a renewable energy source, especially under fault ride through operation. The adaptive dαβPLL is based on modifying the control parameters of the dαβPLL according to the type and voltage characteristic of the grid fault with the purpose of accelerating the performance...
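    The abstract's adaptive dαβPLL is not specified here, but the underlying idea of locking onto the grid voltage angle with a PLL can be illustrated with a plain synchronous-reference-frame PLL on balanced three-phase voltages. The PI gains, time step, and deliberately wrong 45 Hz feed-forward below are arbitrary illustrative choices, not the paper's tuning:

    ```python
    import numpy as np

    f_grid, dt, T = 50.0, 1e-4, 0.5       # true grid frequency, step, sim time
    kp, ki = 140.0, 1.0e4                 # illustrative PI gains
    omega_ff = 2 * np.pi * 45.0           # feed-forward guess, deliberately off

    theta_hat, integ, omega = 0.0, 0.0, omega_ff
    for k in range(int(T / dt)):
        theta = 2 * np.pi * f_grid * (k * dt)        # true grid angle
        # Balanced three-phase voltages -> alpha/beta (Clarke transform).
        va = np.cos(theta)
        vb = np.cos(theta - 2 * np.pi / 3)
        vc = np.cos(theta + 2 * np.pi / 3)
        v_alpha = (2 * va - vb - vc) / 3
        v_beta = (vb - vc) / np.sqrt(3)
        # Park transform: vq = sin(theta - theta_hat), driven to zero by the PI.
        vq = -v_alpha * np.sin(theta_hat) + v_beta * np.cos(theta_hat)
        integ += ki * vq * dt
        omega = omega_ff + kp * vq + integ
        theta_hat += omega * dt

    f_est = omega / (2 * np.pi)   # converges to the true 50 Hz despite the offset
    ```

    The adaptive scheme in the paper goes further by retuning such loop parameters according to the type of grid fault; the fixed-gain loop above only shows the basic tracking mechanism.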

  18. High-Throughput and Rapid Screening of Low-Mass Hazardous Compounds in Complex Samples.

    Science.gov (United States)

    Wang, Jing; Liu, Qian; Gao, Yan; Wang, Yawei; Guo, Liangqia; Jiang, Guibin

    2015-07-07

    Rapid screening and identification of hazardous chemicals in complex samples is of extreme importance for public safety and environmental health studies. In this work, we report a new method for high-throughput, sensitive, and rapid screening of low-mass hazardous compounds in complex media without complicated sample preparation procedures. The method is based on size-selective enrichment on ordered mesoporous carbon followed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry analysis with graphene as a matrix. The ordered mesoporous carbon CMK-8 can exclude interferences from large molecules in complex samples (e.g., human serum, urine, and environmental water samples) and efficiently enrich a wide variety of low-mass hazardous compounds. The method can work at very low concentrations down to part per trillion (ppt) levels, and it is much faster and more facile than conventional methods. It was successfully applied to rapidly screen and identify unknown toxic substances such as perfluorochemicals in human serum samples from athletes and workers. Therefore, this method not only can sensitively detect target compounds but also can identify unknown hazardous compounds in complex media.

  19. Polygenic scores predict alcohol problems in an independent sample and show moderation by the environment.

    Science.gov (United States)

    Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M

    2014-04-10

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
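    The polygene-environment interaction test described (regressing the outcome on the polygenic score, the environment, and their product) can be sketched on simulated data. All sample sizes and effect sizes below are invented for illustration, not the study's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000

    # Simulated data: polygenic risk score, an environmental moderator
    # (e.g. peer deviance), and an outcome with a true interaction effect.
    prs = rng.standard_normal(n)
    env = rng.standard_normal(n)
    y = 0.3 * prs + 0.5 * env + 0.4 * prs * env + rng.standard_normal(n)

    # OLS with an interaction term: y ~ b0 + b1*PRS + b2*ENV + b3*(PRS x ENV).
    Xd = np.column_stack([np.ones(n), prs, env, prs * env])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    ```

    A significantly nonzero b3 is what the study reports as moderation: the slope of the polygenic score on the outcome changes with the environment.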

  20. Polygenic Scores Predict Alcohol Problems in an Independent Sample and Show Moderation by the Environment

    Directory of Open Access Journals (Sweden)

    Jessica E. Salvatore

    2014-04-01

    Full Text Available Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems—derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)—predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07–0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitude of the observed effects is small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.

  1. [DAILY AND ABNORMAL EATING BEHAVIORS IN A COMMUNITY SAMPLE OF CHILEAN ADULTS].

    Science.gov (United States)

    Oda-Montecinos, Camila; Saldaña, Carmina; Andrés Valle, Ana

    2015-08-01

    This research aimed to characterize daily eating behavior in a sample of Chilean adults according to their Body Mass Index (BMI) and gender, and to analyze the possible links between these variables and abnormal eating behaviors. 657 participants (437 women and 220 men, age range 18-64 years) were evaluated with a battery of self-administered questionnaires. Mean BMI was 25.50 kg/m2 (women 24.96 kg/m2, men 26.58 kg/m2); the mean BMI was significantly higher in the men group, and the mean BMI of both the total sample and the male group fell in the overweight range. Participants with overweight (BMI ≥ 25 kg/m2), in contrast with the normal-weight group, tended to perform the following behaviors more frequently: skipping meals, following a diet, eating less homemade food, and eating faster and in greater quantities, in addition to performing a greater number of abnormal eating behaviors of various kinds and scoring significantly higher on clinical scales evaluating eating restraint and overeating. Men showed significantly more eating behaviors linked with overeating, and women performed more behaviors related to eating restraint and emotional eating. The results suggest that, besides "what" people eat, "how" people eat, in terms of specific behaviors, may contribute to the rapid increase of overweight in the Chilean population. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  2. Ionic liquids: solvents and sorbents in sample preparation.

    Science.gov (United States)

    Clark, Kevin D; Emaus, Miranda N; Varona, Marcelino; Bowers, Ashley N; Anderson, Jared L

    2018-01-01

    The applications of ionic liquids (ILs) and IL-derived sorbents are rapidly expanding. By careful selection of the cation and anion components, the physicochemical properties of ILs can be altered to meet the requirements of specific applications. Reports of IL solvents possessing high selectivity for specific analytes are numerous and continue to motivate the development of new IL-based sample preparation methods that are faster, more selective, and environmentally benign compared to conventional organic solvents. The advantages of ILs have also been exploited in solid/polymer formats in which ordinarily nonspecific sorbents are functionalized with IL moieties in order to impart selectivity for an analyte or analyte class. Furthermore, new ILs that incorporate a paramagnetic component into the IL structure, known as magnetic ionic liquids (MILs), have emerged as useful solvents for bioanalytical applications. In this rapidly changing field, this Review focuses on the applications of ILs and IL-based sorbents in sample preparation with a special emphasis on liquid phase extraction techniques using ILs and MILs, IL-based solid-phase extraction, ILs in mass spectrometry, and biological applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. The survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no existing instrumentation was capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work on items 2 through 4 above continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field

  4. A high precision mass spectrometer for hydrogen isotopic analysis of water samples

    International Nuclear Information System (INIS)

    Murthy, M.S.; Prahallada Rao, B.S.; Handu, V.K.; Satam, J.V.

    1979-01-01

    A high precision mass spectrometer with two ion collector assemblies and a direct on-line reduction facility (with uranium at 700 °C) for water samples has been designed and developed for hydrogen isotopic analysis. The ion source in particular gives high sensitivity and at the same time limits the H₃⁺ ions to a minimum. A digital ratiometer with an H₂⁺ compensator has also been developed. The overall precision obtained on the spectrometer is 0.07% (2σ₁₀). Typical results on the performance of the spectrometer, which has been in operation for a year and a half, are given. Possible methods of extending the concentration ranges the spectrometer can handle, on both the lower and higher sides, are discussed. Problems of memory between samples are briefly listed. A multiple inlet system to overcome these problems is suggested; this will also enable faster analysis when samples of highly varying concentrations are to be analyzed. A few probable areas in which the spectrometer will shortly be put to use are given. (auth.)

  5. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS) which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
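    The regeneration idea behind the RT-estimator — split the walk into independent tours at revisits to an anchor node and form a ratio estimate with 1/degree importance weights — can be illustrated on a toy graph. The graph, anchor, target function, and tour count below are assumptions for illustration, not the paper's setup:

    ```python
    import random

    # Toy "lollipop" graph: a 5-clique {0..4} plus a tail 4-5-6-7.
    adj = {i: [j for j in range(5) if j != i] for i in range(5)}
    adj[4] = adj[4] + [5]
    adj[5] = [4, 6]; adj[6] = [5, 7]; adj[7] = [6]

    f = lambda v: v      # node function whose uniform average we estimate
    anchor = 0           # regeneration node: each return starts a fresh tour
    random.seed(1)

    num, den, v, tours = 0.0, 0.0, anchor, 0
    while tours < 20000:                 # accumulate completed tours only
        v = random.choice(adj[v])        # simple random walk step
        num += f(v) / len(adj[v])        # 1/deg weight undoes pi(v) ~ deg(v)
        den += 1.0 / len(adj[v])
        if v == anchor:
            tours += 1                   # regeneration: tours are i.i.d.

    estimate = num / den                 # consistent for the uniform mean of f
    true_mean = sum(f(v) for v in adj) / len(adj)   # 3.5 on this graph
    ```

    Because tours starting and ending at the anchor are independent, no burn-in is needed; accumulating whole tours is what lets the estimator avoid the stationarity assumption criticized in the abstract.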

  6. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    Science.gov (United States)

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
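    As a minimal, numpy-only stand-in for gradient-based fiber alignment quantification (not the authors' MATLAB edge-detection code), each image can be scored by the magnitude-weighted nematic order of its gradient orientations: near 1 for aligned fibers, near 0 for isotropic texture. The synthetic stripe and noise images are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = np.arange(n)

    # Synthetic images: vertical stripes (strongly aligned) vs. pure noise.
    stripes = np.tile(np.sin(2 * np.pi * x / 8.0), (n, 1))
    noise = rng.standard_normal((n, n))

    def alignment_index(img):
        """Magnitude-weighted nematic order of gradient orientations.

        Doubling the angle makes orientations theta and theta+pi equivalent,
        since both describe the same fiber axis.
        """
        gy, gx = np.gradient(img.astype(float))
        theta = np.arctan2(gy, gx)           # local edge orientation
        w = np.hypot(gx, gy)                 # weight by edge strength
        z = np.sum(w * np.exp(2j * theta))
        return np.abs(z) / np.sum(w)

    ai_stripes = alignment_index(stripes)    # ~1: all gradients share one axis
    ai_noise = alignment_index(noise)        # ~0: orientations are isotropic
    ```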

  7. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Full Text Available Classical Respondent-Driven Sampling (RDS estimators are based on a Markov Process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.

  8. Why We Respond Faster to the Self than to Others? An Implicit Positive Association Theory of Self-Advantage during Implicit Face Recognition

    Science.gov (United States)

    Ma, Yina; Han, Shihui

    2010-01-01

    Human adults usually respond faster to their own faces rather than to those of others. We tested the hypothesis that an implicit positive association (IPA) with self mediates self-advantage in face recognition through 4 experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and led to a…

  9. Comparison of two methods for the detection of hepatitis A virus in clam samples (Tapes spp.) by reverse transcription-nested PCR.

    Science.gov (United States)

    Suñén, Ester; Casas, Nerea; Moreno, Belén; Zigorraga, Carmen

    2004-03-01

    The detection of hepatitis A virus in shellfish by reverse transcription-nested polymerase chain reaction (RT-nested PCR) is hampered mainly by low levels of virus contamination and PCR inhibitors in shellfish. In this study, we focused on developing a rapid and sensitive processing procedure for the detection of HAV by RT-nested PCR in clam samples (Tapes spp.). Two previously developed processing methods for virus concentration in shellfish have been improved upon and compared. The first method involves acid adsorption, elution, polyethylene glycol (PEG) precipitation, chloroform extraction and PEG precipitation. The second method is based on elution with a glycine buffer at pH 10, chloroform extraction and concentration by ultracentrifugation. Final clam concentrates were processed by RNA extraction or immunomagnetic capture of viruses (IMC) before the RT-nested PCR reaction. Both methods of sample processing combined with RNA extraction from the concentrates were very efficient when assayed on seeded and naturally contaminated samples. The results show that the first method was more effective at removing inhibitors and the second was simpler and faster. The IMC of HAV from clam concentrates processed by method 1 proved to be a very effective way of simultaneously removing residual PCR inhibitors and concentrating the virus.

  10. QERx- A Faster than Real-Time Emulator for Space Processors

    Science.gov (United States)

    Carvalho, B.; Pidgeon, A.; Robinson, P.

    2012-08-01

    Developing software for space systems is challenging, especially because it needs to be tested exhaustively in order to be sure it can cope with the harshness of the environment and the imperative requirements and constraints imposed by the platform where it will run. Software Validation Facilities (SVF) are known to the industry and developers, and provide the means to run the On-Board Software (OBSW) in a realistic environment, allowing the development team to debug and test the software. But the challenge is to keep up with the performance of the new processors (LEON2 and LEON3), which need to be emulated within the SVF. Such processor emulators are also used in Operational Simulators, used to support mission preparation and train mission operators. These simulators mimic the satellite and its behaviour as realistically as possible. For test/operational efficiency reasons, and because they will need to interact with external systems, both these use cases require the processor emulators to provide real-time, or faster, performance. It is known to the industry that the performance of previously available emulators is not enough to cope with the performance of the new processors available in the market. SciSys approached this problem with dynamic translation technology, trying to keep costs down by avoiding a hardware solution while keeping the integration flexibility of full software emulation. SciSys presented “QERx: A High Performance Emulator for Software Validation and Simulations” [1] at a previous DASIA event. Since then that idea has evolved and QERx has been successfully validated. SciSys is now presenting QERx as a product that can be tailored to fit different emulation needs. This paper will present QERx's latest developments and current status.

  11. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    Science.gov (United States)

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urinary 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured, and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine, together with the estimation precision demonstrated in our study, indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
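The specific-gravity correction quoted above is simple enough to apply directly. A minimal sketch follows; the function name and the example values are hypothetical, not taken from the study:

```python
def estimate_blood_cs137(urinary_cs137, specific_gravity):
    """Estimate blood 137Cs from a urine measurement using the correction
    reported in the abstract:
    [blood 137Cs] = [urinary 137Cs] / ([specific gravity] - 1) / 329."""
    if specific_gravity <= 1.0:
        raise ValueError("urine specific gravity must exceed 1.0")
    return urinary_cs137 / (specific_gravity - 1.0) / 329.0

# Illustrative numbers only: 100 Bq/kg urinary 137Cs at specific gravity 1.030
blood = estimate_blood_cs137(100.0, 1.030)  # ≈ 10.1
```

The guard against specific gravities at or below 1.0 matters because the denominator vanishes for very dilute urine, which is exactly the regime where the uncorrected estimate was least reliable.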

  12. Faster Growth of Road Transportation CO2 Emissions in Asia Pacific Economies: Exploring Differences in Trends of the Rapidly Developing and Developed Worlds

    Science.gov (United States)

    Marcotullio, Peter J.

    2006-01-01

    Researchers have identified how in some rapidly developing countries, road and aviation transportation CO2 emissions are rising faster (over time) when compared to the experiences of the USA at similar levels of economic development. While suggestive of how experiences of the rapidly developing Asia are different from those of the developed world…

  13. Processing language in face-to-face conversation: Questions with gestures get faster responses.

    Science.gov (United States)

    Holler, Judith; Kendrick, Kobin H; Levinson, Stephen C

    2017-09-08

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast: typically a mere 200 ms elapses between a current and a next speaker's contribution, meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: questions accompanied by gestures lead to shorter turn transition times (that is, to faster responses) than questions without gestures, and responses come earlier when gestures end before, rather than after, the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.

  14. Detection of vehicle parts based on Faster R-CNN and relative position information

    Science.gov (United States)

    Zhang, Mingwen; Sang, Nong; Chen, Youbin; Gao, Changxin; Wang, Yongzhong

    2018-03-01

    Detection and recognition of vehicles are two essential tasks in intelligent transportation systems (ITS). Currently, a prevalent approach is to detect the vehicle body, logo or license plate first, and then recognize it, so the detection task is the most basic, and also the most important, work. Besides the logo and license plate, some other parts, such as the vehicle face, lamps, windshield and rearview mirrors, are also key parts that reflect the characteristics of a vehicle and can be used to improve the accuracy of the recognition task. In this paper, the detection of vehicle parts is studied, a task that has received little prior attention. We choose Faster R-CNN as the basic algorithm and take as input the local area of an image where the vehicle body is located, obtaining multiple bounding boxes with their own scores. If the box with the maximum score is chosen directly as the final result, it is often not the best one, especially for small objects. This paper presents a method that corrects the original score with relative position information between two parts. We then choose the box with the maximum corrected score as the final result. Compared with the original output strategy, the proposed method performs better.
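The abstract only states that detector scores are corrected with relative position information; one plausible form of such a correction is a positional prior multiplied into the score. The sketch below assumes a Gaussian prior, and all names and numbers are hypothetical:

```python
import math

def corrected_score(det_score, rel_pos, expected_pos, sigma=0.2):
    """Down-weight a detection score when a part's position relative to a
    reference part (e.g., the vehicle face) deviates from its expected
    offset. The Gaussian form and the parameters are assumptions; the
    abstract does not give the exact correction."""
    prior = math.exp(-((rel_pos - expected_pos) ** 2) / (2.0 * sigma ** 2))
    return det_score * prior

# Two candidate boxes for a lamp: (Faster R-CNN score, normalized vertical
# offset from the vehicle-face box). The higher-scoring box sits in an
# implausible position and loses after correction.
candidates = [(0.92, 0.75), (0.88, 0.31)]
best = max(candidates,
           key=lambda c: corrected_score(c[0], c[1], expected_pos=0.30))
# best is now (0.88, 0.31)
```

This reproduces the behaviour described in the abstract: the raw-score maximum is not always the chosen box once relative position is taken into account.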

  15. Inactivation of Geobacillus stearothermophilus in canned food and coconut milk samples by addition of enterocin AS-48.

    Science.gov (United States)

    Viedma, Pilar Martínez; Abriouel, Hikmate; Ben Omar, Nabil; López, Rosario Lucas; Valdivia, Eva; Gálvez, Antonio

    2009-05-01

    The cyclic bacteriocin enterocin AS-48 was tested on a cocktail of two Geobacillus stearothermophilus strains in canned food samples (corn and peas) and in coconut milk. AS-48 (7 µg/g) reduced viable cell counts below detection levels in samples from canned corn and peas stored at 45 °C for 30 days. In coconut milk, bacterial inactivation by AS-48 (1.75 µg/ml) was even faster. In all canned food and drink samples inoculated with intact G. stearothermophilus endospores, bacteriocin addition (1.75 µg per g or ml of food sample) rapidly reduced viable cell counts below detection levels and avoided regrowth during storage. After a short-time bacteriocin treatment of endospores, trypsin addition markedly increased G. stearothermophilus survival, supporting the effect of residual bacteriocin on the observed loss of viability of endospores. Results from this study support the potential of enterocin AS-48 as a biopreservative against G. stearothermophilus.

  16. Statistical sampling plan for the TRU waste assay facility

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Wright, T.; Schultz, F.J.; Haff, K.; Monroe, R.J.

    1983-08-01

    Due to limited space, there is a need to dispose appropriately of the Oak Ridge National Laboratory transuranic waste which is presently stored below ground in 55-gal (208-l) drums within weather-resistant structures. Waste containing less than 100 nCi/g of transuranics can be removed from the present storage and buried, while waste containing greater than 100 nCi/g must continue to be retrievably stored. To make the measurements needed to determine which drums can be buried, a transuranic Neutron Interrogation Assay System (NIAS) has been developed at Los Alamos National Laboratory; it can make the needed measurements much faster than previous techniques, which involved γ-ray spectroscopy and are reliable but time consuming. Therefore, a validation study has been planned to determine the ability of the NIAS to make adequate measurements. The validation will be based on a paired comparison of a sample of measurements made by the previous techniques and by the NIAS. The purpose of this report is to describe the proposed sampling plan and the statistical analyses needed to validate the NIAS. 5 references, 4 figures, 5 tables
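The validation described above rests on a paired comparison: each drum is assayed by both the γ-ray technique and the NIAS, and the per-drum differences are analyzed. A minimal sketch of the corresponding paired t statistic, with hypothetical data and names:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(method_a, method_b):
    """t statistic for paired measurements of the same drums by two
    assay methods; a value near zero suggests no systematic bias."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical transuranic assays (nCi/g) of five drums by each method.
gamma_spec = [85.0, 102.0, 96.0, 110.0, 78.0]   # previous technique
nias       = [83.0, 104.0, 95.0, 108.0, 80.0]   # NIAS
t_stat = paired_t(gamma_spec, nias)
```

Pairing on the same drums removes the large drum-to-drum variation from the comparison, which is exactly why the sampling plan uses paired rather than independent samples.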

  17. Thinning of N-face GaN (000-1) samples by inductively coupled plasma etching and chemomechanical polishing

    International Nuclear Information System (INIS)

    Rizzi, F.; Gu, E.; Dawson, M. D.; Watson, I. M.; Martin, R. W.; Kang, X. N.; Zhang, G. Y.

    2007-01-01

    The processing of N-polar GaN (000-1) samples has been studied, motivated by applications in which extensive back side thinning of freestanding GaN (FS-GaN) substrates is required. Experiments were conducted on FS-GaN from two commercial sources, in addition to epitaxial GaN with the N-face exposed by a laser lift-off process. The different types of samples produced equivalent results. Surface morphologies were examined over relatively large areas, using scanning electron microscopy and stylus profiling. The main focus of this study was on inductively coupled plasma (ICP) etch processes, employing Cl2/Ar or Cl2/BCl3/Ar gas mixtures. Application of a standard etch recipe, optimized for feature etching of Ga-polar GaN (0001) surfaces, caused severe roughening of N-polar samples and confirmed the necessity for specific optimization of etch conditions for N-face material. A series of recipes with a reduced physical (sputter-based) contribution to etching allowed average surface roughness values to be consistently reduced to below 3 nm. Maximum N-face etch rates of 370-390 nm/min have been obtained in recipes examined to date. These are typically faster than etch rates obtained on Ga-face samples under the same conditions and adequate for the process flows of interest. Mechanistic aspects of the ICP etch process and possible factors contributing to residual surface roughness are discussed. This study also included work on chemomechanical polishing (CMP). The optimized CMP process had stock removal rates of ∼500 nm/h on the GaN N face. This was much slower than the ICP etching but showed the important capability of recovering smooth surfaces on samples roughened in previous processing. In one example, a surface roughened by nonoptimized ICP etching was smoothed to give an average surface roughness of ∼2 nm.

  18. Registered nurse supply grows faster than projected amid surge in new entrants ages 23-26.

    Science.gov (United States)

    Auerbach, David I; Buerhaus, Peter I; Staiger, Douglas O

    2011-12-01

    The vast preponderance of the nation's registered nurses are women. In the 1980s and 1990s, a decline in the number of women ages 23-26 who were choosing nursing as a career led to concerns that there would be future nurse shortages unless the trend was reversed. Between 2002 and 2009, however, the number of full-time-equivalent registered nurses ages 23-26 increased by 62 percent. If these young nurses follow the same life-cycle employment patterns as those who preceded them, as they appear to be doing thus far, then they will be the largest cohort of registered nurses ever observed. Because of this surge in the number of young people entering nursing during the past decade, the nurse workforce is projected to grow faster during the next two decades than previously anticipated. However, it is uncertain whether interest in nursing will continue to grow in the future.

  19. Charged Particles are Prevented from Going Faster than the Speed of Light by Light Itself: A Biophysical Cell Biologist's Contribution to Physics

    International Nuclear Information System (INIS)

    Wayne, R.

    2010-01-01

    Investigations of living organisms have led biologists and physicians to introduce fundamental concepts, including Brownian motion, the First Law of Thermodynamics, Poiseuille's Law of fluid flow, and Fick's Law of diffusion, into physics. Given the prominence of viscous forces within and around cells and the experience of identifying and quantifying such resistive forces, biophysical cell biologists have a unique perspective in discovering the viscous forces that cause moving particles to respond to an applied force in a nonlinear manner. Using my experience as a biophysical cell biologist, I show that in any space consisting of a photon gas with a temperature above absolute zero, Doppler-shifted photons exert a velocity-dependent viscous force on moving charged particles. This viscous force prevents charged particles from exceeding the speed of light. Consequently, light itself prevents charged particles from moving faster than the speed of light. This interpretation provides a testable alternative to the interpretation provided by the Special Theory of Relativity, which contends that particles are prevented from exceeding the speed of light as a result of the relativity of time. (author)

  20. Paired Synchronous Rhythmic Finger Tapping without an External Timing Cue Shows Greater Speed Increases Relative to Those for Solo Tapping.

    Science.gov (United States)

    Okano, Masahiro; Shinya, Masahiro; Kudo, Kazutoshi

    2017-03-09

    In solo synchronization-continuation (SC) tasks, intertap intervals (ITI) are known to drift from the initial tempo. It has been demonstrated that people in paired and group contexts modulate their action timing unconsciously in various situations, such as choice reaction tasks, rhythmic body sway, and hand clapping in concerts, which suggests the possibility that ITI drift is also affected by paired context. We conducted solo and paired SC tapping experiments at three tempos (75, 120, and 200 bpm) and examined whether tempo-keeping performance changed according to tempo and/or the number of players. Results indicated that tapping in the paired conditions was faster, relative to the solo conditions, for all tempos. For the faster participants, the degree of ITI drift in the solo conditions was strongly correlated with that in the paired conditions. Regression analyses suggested that both faster and slower participants adapted their tap timing to that of their partners. A possible explanation for these results is that the participants reset the phase of their internal clocks according to the faster beat between their own tap and their partner's tap. Our results indicate that paired context can bias the direction of ITI drift toward decreasing intervals.

  1. The Galaxy mass function up to z = 4 in the GOODS-MUSIC sample: into the epoch of formation of massive galaxies

    Science.gov (United States)

    Fontana, A.; Salimbeni, S.; Grazian, A.; Giallongo, E.; Pentericci, L.; Nonino, M.; Fontanot, F.; Menci, N.; Monaco, P.; Cristiani, S.; Vanzella, E.; de Santis, C.; Gallozzi, S.

    2006-12-01

    Aims. The goal of this work is to measure the evolution of the Galaxy Stellar Mass Function and of the resulting Stellar Mass Density up to redshift ≃4, in order to study the assembly of massive galaxies in the high redshift Universe. Methods. We have used the GOODS-MUSIC catalog, containing 3000 Ks-selected galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, of which 27% have spectroscopic redshifts and the remaining fraction have accurate photometric redshifts. On this sample we have applied a standard fitting procedure to measure stellar masses. We compute the Galaxy Stellar Mass Function and the resulting Stellar Mass Density up to redshift ≃4, taking into proper account the biases and incompleteness effects. Results. Within the well known trend of global decline of the Stellar Mass Density with redshift, we show that the decline of the more massive galaxies may be described by an exponential timescale of ≃6 Gyr up to z ≃ 1.5, and proceeds much faster thereafter, with an exponential timescale of ≃0.6 Gyr. We also show that there is some evidence for a differential evolution of the Galaxy Stellar Mass Function, with low mass galaxies evolving faster than more massive ones up to z ≃ 1-1.5, and that the Galaxy Stellar Mass Function remains remarkably flat (i.e. with a slope close to the local one) up to z ≃ 1-1.3. Conclusions. The observed behaviour of the Galaxy Stellar Mass Function is consistent with a scenario where about 50% of present-day massive galaxies formed at a vigorous rate in the epoch between redshift 4 and 1.5, followed by a milder evolution until the present-day epoch.
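The two quoted e-folding timescales imply very different decline rates. As an illustration, the surviving fraction over a 1 Gyr interval can be computed directly; the exponential form ρ ∝ exp(-t/τ) is taken from the abstract, while the 1 Gyr interval is an arbitrary choice for the example:

```python
from math import exp

def decline_factor(delta_t_gyr, tau_gyr):
    """Fraction of the stellar mass density remaining after delta_t for
    an exponential evolution rho ∝ exp(-t / tau)."""
    return exp(-delta_t_gyr / tau_gyr)

slow = decline_factor(1.0, 6.0)   # tau ≃ 6 Gyr regime (z ≲ 1.5): ~85% remains
fast = decline_factor(1.0, 0.6)   # tau ≃ 0.6 Gyr regime (z ≳ 1.5): ~19% remains
```

The order-of-magnitude gap between the two factors is what the abstract means by the decline proceeding "much faster" beyond z ≃ 1.5.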

  2. Network and adaptive sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Combining the two statistical techniques of network sampling and adaptive sampling, this book illustrates the advantages of using them in tandem to effectively capture sparsely located elements in unknown pockets. It shows how network sampling is a reliable guide in capturing inaccessible entities through linked auxiliaries. The text also explores how adaptive sampling is strengthened in information content through subsidiary sampling with devices to mitigate unmanageable expanding sample sizes. Empirical data illustrates the applicability of both methods.

  3. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly arising in mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from the geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic and geophysical branches as well as other scientific fields such as neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  4. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    Science.gov (United States)

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  5. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As a hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  6. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling, which implies that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a Hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  7. Experimental and numerical examination of eddy (Foucault) currents in rotating micro-coils: Generation of heat and its impact on sample temperature

    Science.gov (United States)

    Aguiar, Pedro M.; Jacquinot, Jacques-François; Sakellariou, Dimitris

    2009-09-01

    The application of nuclear magnetic resonance (NMR) to systems of limited quantity has stimulated the use of micro-coils; rotating such coils in the magnetic field induces Foucault (eddy) currents, which generate heat. We report the first data acquired with a 4 mm MACS system and spinning up to 10 kHz. The need to spin faster necessitates improved methods to control heating. We propose an approximate solution to calculate the power losses (heat) from the eddy currents for a solenoidal coil, in order to provide insight into the functional dependencies of Foucault currents. Experimental tests of the dependencies reveal conditions which result in reduced sample heating and negligible temperature distributions over the sample volume.

  8. Experimental and numerical examination of eddy (Foucault) currents in rotating micro-coils: Generation of heat and its impact on sample temperature.

    Science.gov (United States)

    Aguiar, Pedro M; Jacquinot, Jacques-François; Sakellariou, Dimitris

    2009-09-01

    The application of nuclear magnetic resonance (NMR) to systems of limited quantity has stimulated the use of micro-coils; rotating such coils in the magnetic field induces Foucault (eddy) currents, which generate heat. We report the first data acquired with a 4 mm MACS system and spinning up to 10 kHz. The need to spin faster necessitates improved methods to control heating. We propose an approximate solution to calculate the power losses (heat) from the eddy currents for a solenoidal coil, in order to provide insight into the functional dependencies of Foucault currents. Experimental tests of the dependencies reveal conditions which result in reduced sample heating and negligible temperature distributions over the sample volume.

  9. Online Italian fandoms of American TV shows

    Directory of Open Access Journals (Sweden)

    Eleonora Benecchi

    2015-06-01

    The Internet has changed media fandom in two main ways: it helps fans connect with each other despite physical distance, leading to the formation of international fan communities; and it helps fans connect with the creators of a TV show, deepening the relationship between TV producers and international fandoms. To assess whether Italian fan communities active online are indeed part of transnational online communities, and whether the Internet has actually altered their relationship with the creators of the original text they are devoted to, qualitative analysis and narrative interviews of 26 Italian fans of American TV shows were conducted to explore the fan-producer relationship. Results indicated that the online Italian fans surveyed preferred to stay local, rather than using geography-leveling online tools. Further, the sampled Italian fans' relationships with the showrunners were mediated or even absent.

  10. Introducing difference recurrence relations for faster semi-global alignment of long sequences.

    Science.gov (United States)

    Suzuki, Hajime; Kasahara, Masahiro

    2018-02-19

    The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when the sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We propose a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm can output exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba.
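The key observation behind difference recurrences is that adjacent DP cells differ by a small, bounded amount even though absolute scores grow with read length. A toy illustration with unit-cost edit distance (a deliberate simplification of the paper's semi-global SWG setting) shows the bound that makes 8-bit storage possible:

```python
def edit_dp(a, b):
    """Full Levenshtein DP matrix; absolute values grow with length."""
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        H[i][0] = i
    for j in range(n + 1):
        H[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i][j] = min(H[i - 1][j] + 1,
                          H[i][j - 1] + 1,
                          H[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return H

H = edit_dp("GATTACA", "GCATGCU")
# Differences between horizontally adjacent cells always lie in {-1, 0, 1},
# so they fit in a signed 8-bit SIMD lane no matter how long the reads are.
h_diffs = {H[i][j] - H[i][j - 1]
           for i in range(len(H)) for j in range(1, len(H[0]))}
assert h_diffs <= {-1, 0, 1}
```

For general SWG scoring the differences are bounded by the match/mismatch and gap parameters rather than by 1, but the same principle applies: store the bounded differences, not the unbounded absolute scores.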

  11. Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system.

    Science.gov (United States)

    Guo, Xiaoting; Sun, Changku; Wang, Peng

    2017-08-01

    This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed in order to adapt to the problem of sampling rate discrepancy. Between samples of the slow observation data, the observation noise can be regarded as infinite: the Kalman gain is unknown and approaches zero, the residual is also unknown, and therefore the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state error and residual formulas are modified relative to the moments when observation data are available. A self-propagation equation of the state error is established to propagate this quantity from the moments with observations to the moments without observations. In addition, a multiplicative adjustment factor is introduced as the Kalman gain, which acts on the residual. The filter's estimated state can then be compensated even when there are no visual observation data. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained by using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
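The multi-rate structure itself (predict at the fast inertial rate, update only when a slow vision measurement arrives) can be sketched with a single-state Kalman filter. This is an assumed simplification for illustration, not the paper's CKF with residual compensation:

```python
def multirate_kalman(slow_meas, ratio, q=1e-3, r=0.05):
    """1-D constant-state Kalman filter: `ratio` fast prediction steps
    are taken between consecutive slow measurements."""
    x, p = 0.0, 1.0              # state estimate and its variance
    history = []
    for z in slow_meas:
        for _ in range(ratio):   # fast (inertial-rate) steps: predict only
            p += q               # variance grows; no measurement to correct it
            history.append(x)
        k = p / (p + r)          # slow (vision-rate) step: standard update
        x += k * (z - x)
        p *= (1.0 - k)
    return x, history

x_hat, history = multirate_kalman([1.0, 1.0, 1.0, 1.0], ratio=10)
# x_hat converges toward the true value 1.0
```

Between updates the estimate is simply propagated, which is exactly the gap the paper's residual-compensation scheme is designed to fill.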

  12. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning, combining experimental measurements and Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, the geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)
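For the gamma-ray attenuation correction mentioned above, a textbook first-order factor for a uniformly active slab is the depth-averaged transmission. The sketch below uses that simple geometry only as an illustration; the study itself relies on Monte Carlo simulations for general sample shapes and activity distributions:

```python
from math import exp

def slab_mean_transmission(mu, t):
    """Mean of exp(-mu * x) for emission depth x uniform in [0, t]:
    (1 - exp(-mu * t)) / (mu * t).
    mu: linear attenuation coefficient (1/cm), t: slab thickness (cm)."""
    if mu * t == 0.0:
        return 1.0
    return (1.0 - exp(-mu * t)) / (mu * t)

# A thicker (or denser) sample transmits a smaller fraction of its gammas,
# so its measured count rate must be corrected upward by 1 / transmission.
thin  = slab_mean_transmission(0.2, 1.0)   # ~0.91
thick = slab_mean_transmission(0.2, 10.0)  # ~0.43
```

The reciprocal of this factor is the kind of self-attenuation correction that, together with the neutron self-shielding and geometrical factors, turns raw gamma counts into element concentrations.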

  13. Fluorescence imaging of sample zone narrowing and dispersion in a glass microchip: the effects of organic solvent (acetonitrile)-salt mixtures in the sample matrix and surfactant micelles in the running buffer.

    Science.gov (United States)

    Jia, Zhijian; Lee, Yi-kuen; Fang, Qun; Huie, Carmen W

    2006-03-01

    A mismatch in the EOF velocities between the sample zone and the running buffer region is known to generate a pressure-driven, parabolic flow profile of the sample plug in electrokinetic separation systems. In the present study, video fluorescence microscopy was employed to capture the real-time dynamics of the sample plug (containing fluorescein as the probe molecule) in a discontinuous conductivity system within a glass microchip, in which the sample matrix consisted of a mixture of ACN and salt (NaCl), and the running buffer contained sodium cholate (SC) micelles as the pseudo-stationary phase (i.e., performing "ACN stacking" in the MEKC mode). Upon application of the separation voltage, the video images revealed that zone narrowing and broadening of the probe molecules occurred as the sample plug headed toward the cathode during the initial time period, probably resulting in part from the stacking/sweeping and destacking of the SC micelles at the boundaries between the sample zone and the running buffer. Interestingly, a second sample zone narrowing event could be observed as the sample plug moved further toward the cathode, which could be attributed to the sweeping of the slower moving probe molecules by the faster moving SC micelles that originated from the anode. This phenomenon was studied as a function of pH, sample plug length, and the concentration of organic solvent and salt in the sample matrix. The data suggested that the presence of large amounts of an organic solvent (such as ACN or methanol) and salts in the sample matrix not only induces sample dispersion due to the formation of a pressure-driven (hydrodynamic) flow, but may also lead to a double sample zone narrowing phenomenon by altering the local EOF dynamics within the separation system.

  14. Determination of 129I in environmental samples by AMS and NAA using an anion exchange resin disk

    Science.gov (United States)

    Suzuki, Takashi; Banba, Shigeru; Kitamura, Toshikatsu; Kabuto, Shoji; Isogai, Keisuke; Amano, Hikaru

    2007-06-01

    We have developed a new extraction method for the measurement of 129I by accelerator mass spectrometry (AMS) utilizing an anion exchange resin disk. In comparison to traditional methods such as solvent extraction and ion exchange, this method provides simple and quick sample handling. The extraction method was tested on soil, seaweed and milk samples, but because of disk clogging it could not be applied successfully to the milk samples and some of the seaweed. Using this new extraction method to prepare samples for AMS analysis produced iodine isotope ratios in good agreement with neutron activation analysis (NAA). The disk extraction method, which takes about half an hour, is faster than previous techniques such as solvent extraction or ion exchange, which take a few hours. The combination of the disk method and AMS measurement is a powerful tool for the determination of 129I. Furthermore, these data will be available for environmental monitoring before and during the operation of a new nuclear fuel reprocessing plant in Japan.

  15. Determination of 129I in environmental samples by AMS and NAA using an anion exchange resin disk

    International Nuclear Information System (INIS)

    Suzuki, Takashi; Banba, Shigeru; Kitamura, Toshikatsu; Kabuto, Shoji; Isogai, Keisuke; Amano, Hikaru

    2007-01-01

    We have developed a new extraction method for the measurement of 129I by accelerator mass spectrometry (AMS) utilizing an anion exchange resin disk. In comparison to traditional methods such as solvent extraction and ion exchange, this method provides simple and quick sample handling. The extraction method was tested on soil, seaweed and milk samples, but because of disk clogging it could not be applied successfully to the milk samples and some of the seaweed. Using this new extraction method to prepare samples for AMS analysis produced iodine isotope ratios in good agreement with neutron activation analysis (NAA). The disk extraction method, which takes about half an hour, is faster than previous techniques such as solvent extraction or ion exchange, which take a few hours. The combination of the disk method and AMS measurement is a powerful tool for the determination of 129I. Furthermore, these data will be available for environmental monitoring before and during the operation of a new nuclear fuel reprocessing plant in Japan.

  16. Self-Assembled Complexes of Horseradish Peroxidase with Magnetic Nanoparticles Showing Enhanced Peroxidase Activity

    KAUST Repository

    Corgié, Stéphane C.

    2012-02-15

    Bio-nanocatalysts (BNCs) consisting of horseradish peroxidase (HRP) self-assembled with magnetic nanoparticles (MNPs) enhance enzymatic activity due to the faster turnover and lower inhibition of the enzyme. The size and magnetization of the MNPs affect the formation of the BNCs, and ultimately control the activity of the bound enzymes. Smaller MNPs form small clusters with a low affinity for the HRP. While the turnover for the bound fraction is drastically increased, there is no difference in the H2O2 inhibitory concentration. Larger MNPs with a higher magnetization aggregate in larger clusters and have a higher affinity for the enzyme and a lower substrate inhibition. All of the BNCs are more active than the free enzyme or the MNPs (BNCs > HRP ≫ MNPs). Since the BNCs show surprising resilience in various reaction conditions, they may pave the way towards new hybrid biocatalysts with increased activities and unique catalytic properties for magnetosensitive enzymatic reactions. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Typeability of PowerPlex Y (Promega) profiles in selected tissue samples incubated in various environments.

    Science.gov (United States)

    Niemcunowicz-Janica, Anna; Pepiński, Witold; Janica, Jacek Robert; Janica, Jerzy; Skawrońska, Małgorzata; Koc-Zórawska, Ewa

    2007-01-01

    In cases of decomposed bodies, Y-chromosomal STR markers may be useful in the identification of a male relative. The authors assessed the typeability of PowerPlex Y (Promega) loci in post mortem tissue material stored in various environments. Kidney, spleen and pancreas specimens were collected during autopsies of five persons aged 20-30 years whose time of death was established as within 14 hours. Tissue material was incubated at 21 degrees C and 4 degrees C in various environmental conditions. DNA was extracted by the organic method from tissue samples collected at 7-day intervals and subsequently typed using the PowerPlex Y kit and an ABI 310. A fast decrease in the typeability rate was seen in specimens incubated in peat soil and in sand. Kidney tissue samples were typeable in all PowerPlex Y loci within 63 days of incubation at 4 degrees C. Faster DNA degradation was recorded in spleen and pancreas specimens. In samples with negative genotyping results, no DNA was found by fluorometric quantitation. Decomposed soft tissues are a potential material for DNA typing.

  18. Characterization of Fetal Keratinocytes, Showing Enhanced Stem Cell-Like Properties: A Potential Source of Cells for Skin Reconstruction

    Directory of Open Access Journals (Sweden)

    Kenneth K.B. Tan

    2014-08-01

    Full Text Available Epidermal stem cells have been in clinical application as a source of culture-generated grafts. Although applications for such cells are increasing due to aging populations and the greater incidence of diabetes, current keratinocyte grafting technology is limited by immunological barriers and the time needed for culture amplification. We studied the feasibility of using human fetal skin cells for allogeneic transplantation and showed that fetal keratinocytes have faster expansion times, longer telomeres, lower immunogenicity indicators, and greater clonogenicity with more stem cell indicators than adult keratinocytes. The fetal cells did not induce proliferation of T cells in coculture and were able to suppress the proliferation of stimulated T cells. Nevertheless, fetal keratinocytes could stratify normally in vitro. Experimental transplantation of fetal keratinocytes in vivo seeded on an engineered plasma scaffold yielded a well-stratified epidermal architecture and showed stable skin regeneration. These results support the possibility of using fetal skin cells for cell-based therapeutic grafting.

  19. Biological feedbacks as cause and demise of the Neoproterozoic icehouse: astrobiological prospects for faster evolution and importance of cold conditions.

    Science.gov (United States)

    Janhunen, Pekka; Kaartokallio, Hermanni; Oksanen, Ilona; Lehto, Kirsi; Lehto, Harry

    2007-02-14

    Several severe glaciations occurred during the Neoproterozoic eon, and especially near its end in the Cryogenian period (630-850 Ma). While the glacial periods themselves were probably related to the continental positions being appropriate for glaciation, the general coldness of the Neoproterozoic and Cryogenian as a whole lacks a specific explanation. The Cryogenian was immediately followed by the Ediacaran biota and the Cambrian Metazoa; thus, understanding the climate-biosphere interactions around the Cryogenian period is central to understanding the development of complex multicellular life in general. Here we present a feedback mechanism between the growth of eukaryotic algal phytoplankton and climate which explains how the Earth system gradually entered the Cryogenian icehouse from the warm Mesoproterozoic greenhouse. The more abrupt termination of the Cryogenian is explained by the increase in gaseous carbon release caused by the more complex planktonic and benthic foodwebs and enhanced by a diversification of metazoan zooplankton and benthic animals. The increased ecosystem complexity caused a decrease in the organic carbon burial rate, breaking the algal-climatic feedback loop of the earlier Neoproterozoic eon. Prior to the Neoproterozoic eon, eukaryotic evolution took place on a slow timescale regulated by the interior cooling of the Earth and solar brightening. Evolution could have proceeded faster had these geophysical processes been faster. Thus, complex life could theoretically also be found around stars that are more massive than the Sun and have main-sequence lifetimes shorter than 10 Ga. We also suggest that snow and glaciers are, in a statistical sense, important markers for conditions that may possibly promote the development of complex life on extrasolar planets.

  20. Emotion, Etmnooi, or Emitoon?--Faster lexical access to emotional than to neutral words during reading.

    Science.gov (United States)

    Kissler, Johanna; Herbert, Cornelia

    2013-03-01

    Cortical processing of emotional words differs from that of neutral words. Using EEG event-related potentials (ERPs), the present study examines the functional stage(s) of this differentiation. Positive, negative, and neutral nouns were randomly mixed with pseudowords and letter strings derived from words within each valence and presented for reading while participants' EEG was recorded. Results indicated emotion effects in the N1 (110-140 ms), early posterior negativity (EPN, 216-320 ms) and late positive potential (LPP, 432-500 ms) time windows. Across valence, orthographic word-form effects occurred from about 180 ms after stimulus presentation. Crucially, in emotional words, lexicality effects (real words versus pseudowords) were identified from 216 ms, with words more negative over posterior cortex, coinciding with the EPN effects, whereas neutral words differed from pseudowords only after 320 ms. Emotional content affects word processing at pre-lexical, lexical and post-lexical levels, but remarkably, lexical access to emotional words is faster than access to neutral words. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Faster-Than-Real-Time Simulation of Lithium Ion Batteries with Full Spatial and Temporal Resolution

    Directory of Open Access Journals (Sweden)

    Sandip Mazumder

    2013-01-01

    Full Text Available A one-dimensional coupled electrochemical-thermal model of a lithium ion battery with full temporal and normal-to-electrode spatial resolution is presented. Only a single pair of electrodes is considered in the model. It is shown that simulation of a lithium ion battery with the inclusion of detailed transport phenomena and electrochemistry is possible with faster-than-real-time compute times. The governing conservation equations of mass, charge, and energy are discretized using the finite volume method and solved using an iterative procedure. The model is first successfully validated against experimental data for both charge and discharge processes in a LixC6-LiyMn2O4 battery. Finally, it is demonstrated for an arbitrary rapidly changing transient load typical of a hybrid electric vehicle drive cycle. The model is able to predict the cell voltage of a 15-minute drive cycle in less than 12 seconds of compute time on a laptop with a 2.33 GHz Intel Pentium 4 processor.
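
As a rough illustration of the numerical approach named above (finite-volume discretization of the conservation equations, solved with an iterative procedure), the sketch below solves a single 1D diffusion equation implicitly with Gauss-Seidel sweeps. It is not the paper's coupled electrochemical-thermal model; the grid size, diffusivity, and boundary values are arbitrary placeholder numbers.

```python
import numpy as np

def solve_diffusion_fv(n=50, length=1e-4, D=1e-9, dt=0.1, steps=100,
                       c_left=1.0, c_right=0.0):
    """Implicit finite-volume solve of dc/dt = D * d2c/dx2 on n cells,
    with fixed concentrations at both ends, iterated to convergence with
    Gauss-Seidel sweeps at each time step."""
    dx = length / n
    lam = D * dt / dx**2
    c = np.zeros(n)                      # initial concentration
    for _ in range(steps):
        c_old = c.copy()
        for _ in range(200):             # Gauss-Seidel iterations
            prev = c.copy()
            for i in range(n):
                cl = c[i - 1] if i > 0 else c_left
                cr = c[i + 1] if i < n - 1 else c_right
                c[i] = (c_old[i] + lam * (cl + cr)) / (1.0 + 2.0 * lam)
            if np.max(np.abs(c - prev)) < 1e-12:
                break
    return c

profile = solve_diffusion_fv()           # monotone profile between 1 and 0
```

Each implicit time step is unconditionally stable, which is one reason finite-volume battery models can take large time steps and reach faster-than-real-time throughput.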

  2. Second generation stationary digital breast tomosynthesis system with faster scan time and wider angular span.

    Science.gov (United States)

    Calliste, Jabari; Wu, Gongting; Laganis, Philip E; Spronk, Derrek; Jafari, Houman; Olson, Kyle; Gao, Bo; Lee, Yueh Z; Zhou, Otto; Lu, Jianping

    2017-09-01

    The aim of this study was to characterize a new-generation stationary digital breast tomosynthesis (s-DBT) system with higher tube flux and increased angular span over a first-generation system. The linear CNT x-ray source was designed, built, and evaluated to determine its performance parameters. The second-generation system was then constructed using the CNT x-ray source and a Hologic gantry. Upon construction, test objects and phantoms were used to characterize system resolution as measured by the modulation transfer function (MTF) and artifact spread function (ASF). The results indicated that the linear CNT x-ray source was capable of stable operation at a tube potential of 49 kVp, and measured focal spot sizes showed source-to-source consistency with a nominal focal spot size of 1.1 mm. After construction, the second-generation (Gen 2) system exhibited entrance surface air kerma rates two times greater than those of the previous s-DBT system. System in-plane resolution as measured by the MTF is 7.7 cycles/mm, compared to 6.7 cycles/mm for the Gen 1 system. As expected, an increase in z-axis depth resolution was observed, with a decrease in the ASF from 4.30 mm to 2.35 mm moving from the Gen 1 system to the Gen 2 system as a result of the increased angular span. The results indicate that the Gen 2 stationary digital breast tomosynthesis system, which has a larger angular span, increased entrance surface air kerma, and faster image acquisition time than the Gen 1 s-DBT system, produces higher-resolution images. With the detector operating at full resolution, the Gen 2 s-DBT system can achieve an in-plane resolution of 7.7 cycles per mm, which is better than current commercial DBT systems and may potentially result in better patient diagnosis. © 2017 American Association of Physicists in Medicine.

  3. Platelet-rich fibrin, "a faster healing aid" in the treatment of combined lesions: A report of two cases.

    Science.gov (United States)

    Karunakar, Parupalli; Prasanna, Jammula Surya; Jayadev, Matapathi; Shravani, Guniganti Sushma

    2014-09-01

    Anatomically the pulp and periodontium are connected through apical foramen, and the lateral, accessory, and furcal canals. Diseases of one tissue may affect the other. In the present case report with two cases, a primary periodontal lesion with secondary endodontic involvement is described. In both cases, root canal treatment was done followed by periodontal therapy with the use of platelet-rich fibrin (PRF) as the regenerative material of choice. PRF has been a breakthrough in the stimulation and acceleration of tissue healing. It is used to achieve faster healing of the intrabony defects. Absence of an intraradicular lesion, pain, and swelling, along with tooth stability and adequate radiographic bone fill at 9 months of follow-up indicated a successful outcome.

  4. Platelet-rich fibrin, "a faster healing aid" in the treatment of combined lesions: A report of two cases

    Directory of Open Access Journals (Sweden)

    Parupalli Karunakar

    2014-01-01

    Full Text Available Anatomically the pulp and periodontium are connected through apical foramen, and the lateral, accessory, and furcal canals. Diseases of one tissue may affect the other. In the present case report with two cases, a primary periodontal lesion with secondary endodontic involvement is described. In both cases, root canal treatment was done followed by periodontal therapy with the use of platelet-rich fibrin (PRF) as the regenerative material of choice. PRF has been a breakthrough in the stimulation and acceleration of tissue healing. It is used to achieve faster healing of the intrabony defects. Absence of an intraradicular lesion, pain, and swelling, along with tooth stability and adequate radiographic bone fill at 9 months of follow-up indicated a successful outcome.

  5. Which Ultrasound-Guided Sciatic Nerve Block Strategy Works Faster? Prebifurcation or Separate Tibial-Peroneal Nerve Block? A Randomized Clinical Trial.

    Science.gov (United States)

    Faiz, Seyed Hamid Reza; Imani, Farnad; Rahimzadeh, Poupak; Alebouyeh, Mahmoud Reza; Entezary, Saeed Reza; Shafeinia, Amineh

    2017-08-01

    Peripheral nerve block is an accepted method in lower limb surgeries owing to its convenience and good tolerance by patients. Quick performance and fast onset of sensory and motor block are highly desirable in this method. The aim of the present study was to compare 2 different methods of sciatic and tibial-peroneal nerve block in lower limb surgeries in terms of block onset. In this clinical trial, 52 candidates for elective lower limb surgery were randomly divided into 2 groups: sciatic nerve block before bifurcation (SG; n = 27) and separate tibial-peroneal nerve block (TPG; n = 25) under ultrasound plus nerve stimulator guidance. The mean duration of block performance, as well as the time to complete sensory and motor block, was recorded and compared between the groups. The mean time to complete sensory block in the SG and TPG groups was 35.4 ± 4.1 and 24.9 ± 4.2 minutes, respectively, significantly shorter in the TPG group (P = 0.001). The mean time to complete motor block in the SG and TPG groups was 63.3 ± 4.4 and 48.4 ± 4.6 minutes, respectively, significantly shorter in the TPG group (P = 0.001). No nerve injuries, paresthesia, or other possible side effects were reported in patients. According to the present study, it seems that TPG provides a faster sensory and motor block than SG.

  6. Human Milk Shows Immunological Advantages Over Organic Milk Samples For Infants in the Presence of Lipopolysaccharide (LPS) in 3D Energy Maps Using an Organic Nanobiomimetic Memristor/Memcapacitor

    Directory of Open Access Journals (Sweden)

    S-H. DUH

    2016-08-01

    Full Text Available Human milk is well known for its immunological advantages over cow milk for infants, protecting and supporting healthy early childhood cognitive development and preventing chronic diseases. However, little is known about how these immunological advantages are linked to reduced Pathological High Frequency Oscillation (pHFO) formation, in terms of net neural synapse energy outcomes, when lipopolysaccharide (LPS) attacks at a clinical concentration range, compared with cow milk in a 3D energy map. We developed a nanostructured biomimetic memristor/memcapacitor device with a dual chronoamperometric (CA) sensing/voltage sensing function for the direct quantitative evaluation of the immunological advantages of human milk over organic cow milk for infants in the presence of wide LPS concentration ranges; the ranges were 5.0 pg/mL to 500 ng/mL and 50 ng/mL to 1 µg/mL for the CA and the voltage method, respectively. The limit of detection (LOD) results are as follows: 3.73×10^-18 g LPS vs. 1.2×10^-16 g LPS in 40 µL milk samples using the 3.11×10^-7 cm^3 voltage sensor and the 0.031 cm^2 CA sensor, respectively, under antibody-free and reagent-free conditions. The 3D energy map results show that cow milk is ten times more prone to E. coli attack, and a positive link was revealed: Pathological High Frequency Oscillation (pHFO) formations occurred over the studied LPS concentration range from 50 ng/mL up to 1000 ng/mL, from Rapid Eye Movement (REM) sleep frequency and fast gamma frequency to Sharp Wave-Ripple Complex (SPW-R) frequency. There was no pHFO with human milk samples at Slow Wave Sleep (SWS), REM and SPW-R frequencies. The microbiota in the human milk samples successfully overcame the endotoxin attack from E. coli bacteria; the pHFO only occurred at fast gamma frequency linked with LPS levels ≥ 500 ng/mL. Organic milk samples show an order of magnitude lower synapse energy density compared with human milk at SWS

  7. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives all elements making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques, such as the implementation of overly narrow cutters traveling too fast through the stream to be sampled.
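
The definition of correctness above (every element of the batch has a uniform probability of selection) can be illustrated with a toy simulation; the segregated batch, the sampler that reaches only 80% of the stream, and all numbers are invented for the example.

```python
import random

random.seed(1)
# A segregated batch: the analyte grade trends along the stream.
batch = [i / 999 for i in range(1000)]          # grade of each increment
true_mean = sum(batch) / len(batch)             # = 0.5

def correct_sample(batch, n):
    """Correct sampler: every increment has the same selection probability."""
    return random.sample(batch, n)

def incorrect_sample(batch, n):
    """Incorrect sampler: systematically misses the last 20% of the stream,
    like a cutter that cannot traverse the full falling stream."""
    return random.sample(batch[:800], n)

trials = 50
ok = sum(sum(correct_sample(batch, 200)) / 200 for _ in range(trials)) / trials
bad = sum(sum(incorrect_sample(batch, 200)) / 200 for _ in range(trials)) / trials
```

The correct sampler's mean stays close to the true grade of 0.5, while the biased sampler converges to the grade of the portion it can reach (about 0.4) no matter how many increments it takes: the error cannot be averaged away.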

  8. On Invertible Sampling and Adaptive Security

    DEFF Research Database (Denmark)

    Ishai, Yuval; Kumarasubramanian, Abishek; Orlandi, Claudio

    2011-01-01

    functionalities was left open. We provide the first convincing evidence that the answer to this question is negative, namely that some (randomized) functionalities cannot be realized with adaptive security. We obtain this result by studying the following related invertible sampling problem: given an efficient...... sampling algorithm A, obtain another sampling algorithm B such that the output of B is computationally indistinguishable from the output of A, but B can be efficiently inverted (even if A cannot). This invertible sampling problem is independently motivated by other cryptographic applications. We show......, under strong but well studied assumptions, that there exist efficient sampling algorithms A for which invertible sampling as above is impossible. At the same time, we show that a general feasibility result for adaptively secure MPC implies that invertible sampling is possible for every A, thereby...

  9. FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    P. Hanappe

    2011-09-01

    Full Text Available We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.

    The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
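
The scheduling scheme described above, a task queue drained by a thread pool with four air columns packed per task, can be sketched as follows. The per-column computation here is a toy attenuation function standing in for the radiation code, not the FAMOUS algorithm itself.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def radiative_transfer(batch):
    """Toy stand-in for the per-column radiation computation: cumulative
    attenuation down the vertical levels of each column in the batch."""
    return np.exp(-np.cumsum(batch, axis=1))

def run_columns(columns, workers=4):
    # Pack four air columns per task, mirroring the SIMD grouping,
    # and let a thread pool drain the resulting task queue.
    batches = [columns[i:i + 4] for i in range(0, len(columns), 4)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(radiative_transfer, batches))
    return np.vstack(results)

rng = np.random.default_rng(0)
cols = rng.uniform(0.0, 0.1, size=(16, 20))   # 16 columns, 20 levels each
out = run_columns(cols)
```

Because `Executor.map` preserves input order, the per-batch results can simply be stacked back into the original column order, independent of which worker finished first.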

  10. Calculation of absolute protein-ligand binding free energy using distributed replica sampling.

    Science.gov (United States)

    Rodinger, Tomas; Howell, P Lynne; Pomès, Régis

    2008-10-21

    Distributed replica sampling [T. Rodinger et al., J. Chem. Theory Comput. 2, 725 (2006)] is a simple and general scheme for Boltzmann sampling of conformational space by computer simulation in which multiple replicas of the system undergo a random walk in reaction coordinate or temperature space. Individual replicas are linked through a generalized Hamiltonian containing an extra potential energy term or bias which depends on the distribution of all replicas, thus enforcing the desired sampling distribution along the coordinate or parameter of interest regardless of free energy barriers. In contrast to replica exchange methods, efficient implementation of the algorithm does not require synchronicity of the individual simulations. The algorithm is inherently suited for large-scale simulations using shared or heterogeneous computing platforms such as a distributed network. In this work, we build on our original algorithm by introducing Boltzmann-weighted jumping, which allows moves of a larger magnitude and thus enhances sampling efficiency along the reaction coordinate. The approach is demonstrated using a realistic and biologically relevant application; we calculate the standard binding free energy of benzene to the L99A mutant of T4 lysozyme. Distributed replica sampling is used in conjunction with thermodynamic integration to compute the potential of mean force for extracting the ligand from protein and solvent along a nonphysical spatial coordinate. Dynamic treatment of the reaction coordinate leads to faster statistical convergence of the potential of mean force than a conventional static coordinate, which suffers from slow transitions on a rugged potential energy surface.
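
The last step described above, obtaining the potential of mean force by thermodynamic integration along the reaction coordinate, can be sketched with synthetic data; the mean-force profile below is invented for illustration and is not the benzene/T4 lysozyme result.

```python
import numpy as np

# Reaction coordinate and a synthetic mean-force profile (a single well
# centred at z = 5; in practice the mean force would be averaged from the
# distributed-replica simulations at each coordinate value).
z = np.linspace(0.0, 10.0, 101)
mean_force = -2.0 * (z - 5.0) * np.exp(-(z - 5.0) ** 2)

# Thermodynamic integration: PMF(z) = -∫ <F(z')> dz', cumulative trapezoid.
increments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(z)
pmf = -np.concatenate(([0.0], np.cumsum(increments)))
pmf -= pmf.min()                        # reference the PMF to its minimum
binding_dG = pmf[-1] - pmf[0]           # end-to-end free energy difference
```

For this synthetic well, the PMF recovered by trapezoidal integration has its minimum at z = 5 with a depth of about 1 energy unit, matching the analytic antiderivative of the imposed force.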

  11. World oil demand's shift toward faster growing and less price-responsive products and regions

    International Nuclear Information System (INIS)

    Dargay, Joyce M.; Gately, Dermot

    2010-01-01

    Using data for 1971-2008, we estimate the effects of changes in price and income on world oil demand, disaggregated by product - transport oil, fuel oil (residual and heating oil), and other oil - for six groups of countries. Most of the demand reductions since 1973-74 were due to fuel-switching away from fuel oil, especially in the OECD; in addition, the collapse of the Former Soviet Union (FSU) reduced their oil consumption substantially. Demand for transport and other oil was much less price-responsive, and has grown almost as rapidly as income, especially outside the OECD and FSU. World oil demand has shifted toward products and regions that are faster growing and less price-responsive. In contrast to projections to 2030 of declining per-capita demand for the world as a whole - by the U.S. Department of Energy (DOE), International Energy Agency (IEA) and OPEC - we project modest growth. Our projections for total world demand in 2030 are at least 20% higher than projections by those three institutions, using similar assumptions about income growth and oil prices, because we project rest-of-world growth that is consistent with historical patterns, in contrast to the dramatic slowdowns which they project. (author)
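
The price- and income-responsiveness discussed above is conventionally captured by a constant-elasticity demand function; the sketch below uses invented elasticity values and growth ratios, not the paper's estimates.

```python
def demand(base, income_ratio, price_ratio, income_elast, price_elast):
    """Constant-elasticity demand: D = D0 * (Y/Y0)**eta * (P/P0)**eps.
    All parameter values used below are illustrative assumptions."""
    return base * income_ratio ** income_elast * price_ratio ** price_elast

# Transport oil: income-driven, weakly price-responsive (invented numbers):
# income grows 50%, price doubles, yet demand still rises substantially.
proj = demand(base=100.0, income_ratio=1.5, price_ratio=2.0,
              income_elast=0.9, price_elast=-0.1)
```

With an income elasticity near 1 and a small (in magnitude) price elasticity, demand grows almost in step with income even when prices double, the pattern the authors describe for transport and other oil outside the OECD and FSU.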

  12. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as for self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as unknown, the results showed reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for the analysis of inhomogeneous samples. (author)
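
A hedged sketch of how Monte Carlo-derived correction factors typically enter a relative activation-analysis mass calculation; the function, factor names, and all numbers here are hypothetical and are not taken from the benchmark.

```python
def corrected_mass(counts_sample, counts_standard, mass_standard,
                   f_neutron_shielding, f_gamma_attenuation, f_geometry):
    """Relative NAA: divide the sample counts by the simulated correction
    factors, then scale by a co-irradiated standard of known mass.
    All names and values are hypothetical, for illustration only."""
    corrected = counts_sample / (f_neutron_shielding *
                                 f_gamma_attenuation * f_geometry)
    return mass_standard * corrected / counts_standard

m = corrected_mass(counts_sample=9.0e4, counts_standard=1.0e5,
                   mass_standard=1.0,           # grams (hypothetical)
                   f_neutron_shielding=0.95,    # flux depression in sample
                   f_gamma_attenuation=0.90,    # gamma self-attenuation
                   f_geometry=1.05)             # counting-geometry factor
```

The point of the correction chain is visible in the numbers: the raw count ratio alone (0.9) would understate the mass, while the shielding and attenuation corrections bring the estimate back up.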

  13. Effects of the number of people on efficient capture and sample collection: A lion case study

    Directory of Open Access Journals (Sweden)

    Sam M. Ferreira

    2013-05-01

    Full Text Available Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  14. Effects of the number of people on efficient capture and sample collection: a lion case study.

    Science.gov (United States)

    Ferreira, Sam M; Maruping, Nkabeng T; Schoultz, Darius; Smit, Travis R

    2013-05-24

    Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  15. HPV-FASTER

    DEFF Research Database (Denmark)

    Bosch, F Xavier; Robles, Claudia; Díaz, Mireia

    2016-01-01

    Human papillomavirus (HPV)-related screening technologies and HPV vaccination offer enormous potential for cancer prevention, notably prevention of cervical cancer. The effectiveness of these approaches is, however, suboptimal owing to limited implementation of screening programmes and restricted indications for HPV vaccination. Trials of HPV vaccination in women aged up to 55 years have shown almost 90% protection from cervical precancer caused by HPV16/18 among HPV16/18-DNA-negative women. We propose extending routine vaccination programmes to women of up to 30 years of age (and to the 45-50-year...... protocol would represent an attractive approach for many health-care systems, in particular, countries in Central and Eastern Europe, Latin America, Asia, and some more-developed parts of Africa. The role of vaccination in women aged >30 years and the optimal number of HPV-screening tests required......

  16. Reading faster

    Directory of Open Access Journals (Sweden)

    Paul Nation

    2009-12-01

    Full Text Available This article describes the visual nature of the reading process as it relates to reading speed. It points out that there is a physical limit on normal reading speed and beyond this limit the reading process will be different from normal reading where almost every word is attended to. The article describes a range of activities for developing reading fluency, and suggests how the development of fluency can become part of a reading programme.

  17. Biological feedbacks as cause and demise of the Neoproterozoic icehouse: astrobiological prospects for faster evolution and importance of cold conditions.

    Directory of Open Access Journals (Sweden)

    Pekka Janhunen

    Full Text Available Several severe glaciations occurred during the Neoproterozoic eon, and especially near its end in the Cryogenian period (630-850 Ma). While the glacial periods themselves were probably related to the continental positions being appropriate for glaciation, the general coldness of the Neoproterozoic and Cryogenian as a whole lacks a specific explanation. The Cryogenian was immediately followed by the Ediacaran biota and the Cambrian Metazoa; thus, understanding the climate-biosphere interactions around the Cryogenian period is central to understanding the development of complex multicellular life in general. Here we present a feedback mechanism between the growth of eukaryotic algal phytoplankton and climate which explains how the Earth system gradually entered the Cryogenian icehouse from the warm Mesoproterozoic greenhouse. The more abrupt termination of the Cryogenian is explained by the increase in gaseous carbon release caused by the more complex planktonic and benthic foodwebs and enhanced by a diversification of metazoan zooplankton and benthic animals. The increased ecosystem complexity caused a decrease in the organic carbon burial rate, breaking the algal-climatic feedback loop of the earlier Neoproterozoic eon. Prior to the Neoproterozoic eon, eukaryotic evolution took place on a slow timescale regulated by the interior cooling of the Earth and solar brightening. Evolution could have proceeded faster had these geophysical processes been faster. Thus, complex life could theoretically also be found around stars that are more massive than the Sun and have main-sequence lifetimes shorter than 10 Ga. We also suggest that snow and glaciers are, in a statistical sense, important markers for conditions that may possibly promote the development of complex life on extrasolar planets.

  18. Faster diffraction-based overlay measurements with smaller targets using 3D gratings

    Science.gov (United States)

    Li, Jie; Kritsun, Oleg; Liu, Yongdong; Dasari, Prasad; Volkman, Catherine; Hu, Jiangtao

    2012-03-01

Diffraction-based overlay (DBO) technologies have been developed to address the overlay metrology challenges for the 22 nm technology node and beyond. Most DBO technologies require specially designed targets that consist of multiple measurement pads, which consume too much space and increase measurement time. The traditional empirical approach (eDBO) using normal-incidence spectroscopic reflectometry (NISR) relies on the linear response of reflectance with respect to overlay displacement within a small range. It offers the convenience of quick recipe setup, since there is no need to establish a model. However, it requires three or four pads per direction (x or y), which adds burden to throughput and target size. Recent advances in modeling capability and computation power have enabled mDBO, which allows overlay measurement with a reduced number of pads, thus reducing measurement time and DBO target space. In this paper we evaluate the performance of single-pad mDBO measurements using two 3D targets that have different grating shapes: squares in boxes and L-shapes in boxes. Good overlay sensitivities are observed for both targets. The correlation to programmed shifts and image-based overlay (IBO) is excellent. Despite the difference in shapes, the mDBO results are comparable for square and L-shape targets. The impact of process variations on overlay measurements is studied using a focus and exposure matrix (FEM) wafer. Although the FEM wafer has larger process variations, the correlation of mDBO results with IBO measurements is as good as for the normal process wafer. We demonstrate the feasibility of single-pad DBO measurements with faster throughput and smaller target size, which is particularly important in a high-volume manufacturing environment.

  19. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
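The core idea the abstract connects these methods to can be illustrated with a minimal sketch (a generic importance-sampling estimator, not the Hartmann-Schütte or Valsson-Parrinello method): to estimate the probability of a rare event under a target distribution, sample from a biased proposal that visits the rare region often, then reweight each draw by the likelihood ratio between target and proposal.

```python
import math
import random

def p_over_q(x, shift):
    # Likelihood ratio between the N(0,1) target and the N(shift,1)
    # proposal: exp(-x^2/2) / exp(-(x-shift)^2/2).
    return math.exp(-shift * x + shift * shift / 2.0)

def rare_event_prob(n=100_000, threshold=4.0, seed=1):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1).
    Draws come from the shifted proposal N(threshold, 1), so the rare
    region is sampled often; each draw is reweighted back to the target."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)          # biased proposal draw
        if x > threshold:                      # indicator of the rare event
            total += p_over_q(x, threshold)    # reweight to the target
    return total / n

print(rare_event_prob())  # close to the exact tail probability ~3.2e-5
```

Naive sampling from N(0,1) would see the event only a few times per 100,000 draws; the reweighted biased sampler achieves far lower variance for the same cost, which is the statistical connection the paper develops.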

  20. “Faster, better, and cheaper” at NASA: Lessons learned in managing and accepting risk

    Science.gov (United States)

    Paxton, Larry J.

    2007-11-01

Can Earth observing missions be done "better, faster and cheaper"? In this paper I explore the management and technical issues that arose from the attempt to do things "faster, better and cheaper" at NASA. The FBC mantra led to some failures and, more significantly, an increase in the cadence of missions. Mission cadence is a major enabler of innovation and the driver for the training and testing of the next generation of managers, engineers, and scientists. A high mission cadence is required to maintain and develop competence in mission design, management, and execution and, for an exploration-driven organization, to develop and train the next generation of leaders: the time between missions must be short enough that careers span the complete life of more than a few missions. This process reduces risk because the "lessons learned" are current and widely held. Increasing the cadence of missions has the added benefit of reducing the pressure to do everything on one particular mission, thus reducing mission complexity. Since failures are inevitable in such a complex endeavor, a higher mission cadence has the advantage of providing some resiliency to the scientific program the missions support. Some failures are avoidable (often only in hindsight), but most are due to some combination of interacting factors. This interaction is often only appreciated as a potential failure mode after the fact. There is always pressure to do more with less: the scope of the project may become too ambitious, the management and oversight of the project may be reduced to fit the money allocated, or the project timeline may be lengthened due to external factors (launcher availability, budgetary constraints) without a concomitant increase in total funding. This leads to increased risk. Risks are always deemed acceptable until they change from a "risk" to a "failure mode". Identifying and managing those risks is particularly difficult when the activities are dispersed

  1. Directional dependency of air sampling

    International Nuclear Information System (INIS)

    1994-01-01

A field study was performed by the Idaho State University-Environmental Monitoring Laboratory (EML) to examine the directional dependency of low-volume air samplers. A typical continuous low-volume air sampler contains a sample head that is mounted on the sampler housing either horizontally through one of four walls or vertically on an exterior wall 'looking down or up.' In 1992, a field study was undertaken to estimate sampling error and to detect the directional effect of sampler head orientation. Approximately 1/2 mile downwind from a phosphate plant (a continuous source of alpha activity), four samplers were positioned in identical orientation alongside one sampler configured with the sample head 'looking down'. At least five consecutive weekly samples were collected. The alpha activity, beta activity, and Be-7 activity collected on the particulate filter were analyzed to determine sampling error. Four sample heads were then oriented to the four different horizontal directions. Samples were collected for at least five weeks. Analysis of the alpha data shows the effect of sampler orientation relative to a known nearby source term. Analysis of the beta and Be-7 activity shows the effect of sampler orientation relative to a ubiquitous source term.

  2. Getting Innovative Therapies Faster to Patients at the Right Dose: Impact of Quantitative Pharmacology Towards First Registration and Expanding Therapeutic Use.

    Science.gov (United States)

    Nayak, Satyaprakash; Sander, Oliver; Al-Huniti, Nidal; de Alwis, Dinesh; Chain, Anne; Chenel, Marylore; Sunkaraneni, Soujanya; Agrawal, Shruti; Gupta, Neeraj; Visser, Sandra A G

    2018-03-01

Quantitative pharmacology (QP) applications in translational medicine, drug development, and therapeutic use were crowd-sourced by the ASCPT Impact and Influence initiative. The highlighted QP case studies demonstrated faster access to innovative therapies for patients through 1) rational dose selection for pivotal trials; 2) reduced trial burden for vulnerable populations; or 3) simplified posology. Critical success factors were proactive stakeholder engagement, alignment on the value of model-informed approaches, and utilizing foundational clinical pharmacology understanding of the therapy. © 2018 The Authors. Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of the American Society for Clinical Pharmacology and Therapeutics.

  3. Method validation and dissipation kinetics of four herbicides in maize and soil using QuEChERS sample preparation and liquid chromatography tandem mass spectrometry.

    Science.gov (United States)

    Pang, Nannan; Wang, Tielong; Hu, Jiye

    2016-01-01

A versatile liquid chromatography tandem mass spectrometry method with modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) sample preparation was developed for the determination of rimsulfuron, mesotrione, fluroxypyr-meptyl, and fluroxypyr. By adjusting the amount of graphitized carbon black, the herbicide analytes could be quantified with satisfactory recoveries in the range of 80-110%. A dissipation kinetics study conducted under open-field conditions at two sites during 2014 showed first-order kinetics with half-lives between 0.6 and 3.6 days, illustrating an appropriate degree of stability and safety. The dissipation kinetics differed among matrices: although the herbicides had higher initial residues in straw than in soil, they degraded faster in straw. The terminal residues for the herbicides, formulated as two water-dispersible granules, were all below maximum residue limits. These results not only give insights into the analytes but also contribute to environmental protection and food safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
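The half-life figures in such a dissipation study follow directly from first-order kinetics: fitting ln(residue) against time gives the rate constant k, and the half-life is ln 2 / k. A minimal sketch with illustrative numbers (not the paper's data):

```python
import math

def halflife_from_residues(times, residues):
    """Least-squares fit of ln(residue) = ln(C0) - k*t, assuming
    first-order dissipation; returns the half-life t1/2 = ln(2)/k."""
    n = len(times)
    y = [math.log(r) for r in residues]
    tm = sum(times) / n
    ym = sum(y) / n
    # slope of the regression line is -k
    k = -sum((t - tm) * (yi - ym) for t, yi in zip(times, y)) \
        / sum((t - tm) ** 2 for t in times)
    return math.log(2) / k

# Synthetic residues (mg/kg) decaying with a true half-life of 2 days
t = [0, 1, 2, 4, 7]
r = [10.0 * 0.5 ** (ti / 2.0) for ti in t]
print(round(halflife_from_residues(t, r), 3))  # -> 2.0
```

With real field data the points scatter around the regression line, and the same fit yields the reported half-lives in the 0.6-3.6 day range.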

  4. Platelet-rich fibrin, “a faster healing aid” in the treatment of combined lesions: A report of two cases

    Science.gov (United States)

    Karunakar, Parupalli; Prasanna, Jammula Surya; Jayadev, Matapathi; Shravani, Guniganti Sushma

    2014-01-01

Anatomically, the pulp and periodontium are connected through the apical foramen and the lateral, accessory, and furcal canals, so diseases of one tissue may affect the other. This report describes two cases of a primary periodontal lesion with secondary endodontic involvement. In both cases, root canal treatment was performed, followed by periodontal therapy with platelet-rich fibrin (PRF) as the regenerative material of choice. PRF has been a breakthrough in the stimulation and acceleration of tissue healing and is used to achieve faster healing of intrabony defects. Absence of an intraradicular lesion, pain, and swelling, along with tooth stability and adequate radiographic bone fill at 9 months of follow-up, indicated a successful outcome. PMID:25425831

  5. Preliminary study : Extremely low frequency electromagnetic field (ELF EMF) effects on the growth of plant

    International Nuclear Information System (INIS)

    Roha Tukimin; Wan Norsuhaila Wan Aziz; Rozaimah Abd Rahim; Wan Saffiey Wan Abdulah

    2010-01-01

A study was carried out on the effects of magnetic fields on the growth of plants. Two samples, maize seedlings and green beans, were studied. Helmholtz coil systems were used as the magnetic field source, operating at a frequency of 50 Hz with a field strength of 440 mGauss. Sample characteristics such as height, leaves, colour, and root length were observed. The results show that the magnetic field influenced the growth of the samples: the samples exposed to the magnetic field showed faster growth than the control samples. (author)

  6. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

The anti-reenactment 'Show-Bix &' consists of five slide projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface enables the slide projectors to show copies of original slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are...

  7. Do Zebra Mussels Grow Faster on Live Unionids than on Inanimate Substrate? A Study with Field Enclosures

    Science.gov (United States)

    Hörmann, Leonhard; Maier, Gerhard

    2006-05-01

The zebra mussel Dreissena polymorpha has invaded numerous freshwaters in Europe and North America and can foul many types of solid substrates, including unionid bivalves. In field experiments we compared growth rates of dreissenids on live specimens of the freshwater bivalve Anodonta cygnea with growth rates of dreissenids on stones. Dreissena density in the study lake was about 1000 m⁻² in most places, Anodonta density approximately 1 m⁻², and about 50% of the Anodonta were infested with 10-30 Dreissena. In summer/autumn, small dreissenids generally grew faster on live Anodonta than on stones. Similar trends were observed for spring, but differences in growth increments between dreissenids on live Anodonta and on stones were usually not significant. Dreissenids settled down or moved towards the ingestion/egestion siphons of Anodonta, and the ingestion siphons of dreissenids were directed towards the siphons of Anodonta. These results suggest that dreissenids can use the food provided by the filter current of the large Anodonta.

  8. Quality analysis of commercial samples of Ziziphi spinosae semen (suanzaoren) by means of chromatographic fingerprinting assisted by principal component analysis

    Directory of Open Access Journals (Sweden)

    Shuai Sun

    2014-06-01

Full Text Available Due to the scarcity of resources of Ziziphi spinosae semen (ZSS), many inferior goods and even adulterants are commonly found in medicine markets. To strengthen quality control, the HPLC fingerprint common pattern established in this paper shows three main bioactive compounds in one chromatogram simultaneously. Principal component analysis based on DAD signals could discriminate adulterants and inferior samples. Principal component analysis indicated that all samples could be regrouped into two main clusters according to the first principal component (PC1), redefined as Vicenin II, and the second principal component (PC2), redefined as zizyphusine; PC1 and PC2 could explain 91.42% of the variance. The content of zizyphusine fluctuated more greatly than that of spinosin, a result also confirmed by HPTLC. Samples with a low content of jujubosides and two common adulterants could not be used interchangeably with authenticated ones in the clinic, while one reference standard extract could substitute for the crude drug in pharmaceutical production. Giving special consideration to the well-known bioactive saponins, which give only a low response owing to end absorption, a fast and cheap HPTLC method for quality control of ZSS was developed, and its results agreed well with those of the HPLC analysis. Samples having fingerprints similar to the HPTLC common pattern targeting saponins could be regarded as authenticated. This work provides a faster and cheaper way to control the quality of ZSS and lays the foundation for establishing a more effective quality control method for ZSS. Keywords: Adulterant, Common pattern, Principal component analysis, Quality control, Ziziphi spinosae semen
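The chemometric step described above, projecting each sample's fingerprint onto principal components and reading group membership off the leading scores, can be sketched as follows (synthetic fingerprint data and a plain SVD-based PCA; not the authors' dataset or software):

```python
import numpy as np

# Hypothetical fingerprint matrix: rows = samples, columns = detector
# signals at retention-time points (synthetic, for illustration only).
rng = np.random.default_rng(0)
authentic = rng.normal(loc=1.0, scale=0.1, size=(6, 50))   # 6 genuine samples
adulterant = rng.normal(loc=0.3, scale=0.1, size=(4, 50))  # 4 adulterants
X = np.vstack([authentic, adulterant])

Xc = X - X.mean(axis=0)               # mean-center each signal channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)       # variance fraction per component

pc1 = scores[:, 0]
print(f"PC1 explains {explained[0]:.0%} of the variance")
# Authentic and adulterant samples fall on opposite sides of PC1
print(bool(pc1[:6].mean() * pc1[6:].mean() < 0))
```

In the paper the same projection separates authenticated ZSS from adulterants and inferior goods along PC1/PC2; here the grouping is built into the synthetic data so the separation is visible in two lines of output.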

  9. Vegetation sampling for the screening of subsurface pollution

    Science.gov (United States)

    Karlson, U. G.; Petersen, M. D.; Algreen, M.; Rein, A.; Sheehan, E.; Limmer, M. A.; Burken, J. G.; Mayer, P.; Trapp, S.

    2012-04-01

Measurement of vegetation samples has been reported as a cheap alternative to drilling for exploring subsurface pollution. The purpose of this presentation is to give an update on further developments of this field method - faster sampling and improved analysis for chlorinated solvents, and application of phytomonitoring to heavy metal contamination. Rapid analysis of trees for chlorinated solvents was facilitated by employing automated headspace SPME-GC/ECD, resulting in detection limits of 0.87 and 0.04 μg/kg fresh weight of wood for TCE and PCE, respectively, significantly lower than we have reported earlier using manual injection of 1 mL of headspace air into a GC/MS. Technical details of the new method will be presented. As an even more direct alternative, time-weighted average SPME analysis has been developed for in planta sampling of trees, using novel polydimethylsiloxane/carboxen SPME fibres designed for field application. In a different study, trees growing on a former dump site in Norway were analyzed for arsenic (As), cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni), and zinc (Zn). Mean concentrations in wood (dw) were 30 mg/kg for Zn, 2 mg/kg for Cu, and <1 mg/kg for Cd, Cr, As and Ni. In all but one case, mean concentrations from the dump site were higher than those from an unpolluted reference site, but the difference was small and not always significant. Differences between tree species were typically larger than differences between the polluted and the unpolluted site. As all these elements occur naturally, and Cu, Ni, and Zn are essential elements, all trees have a natural background of these elements, and their occurrence alone does not indicate soil pollution. For the interpretation of the results, a comparison to wood samples from an unpolluted reference site with the same tree species and similar soil conditions is required. This makes the tree core screening method less reliable for heavy metals than, e

  10. Rhabdomyosarcoma cells show an energy producing anabolic metabolic phenotype compared with primary myocytes

    Directory of Open Access Journals (Sweden)

    Higashi Richard M

    2008-10-01

Full Text Available Abstract Background The functional status of a cell is expressed in its metabolic activity. We have applied stable isotope tracing methods to determine the differences in metabolic pathways between proliferating rhabdomyosarcoma cells (Rh30) and human primary myocytes in culture. Uniformly 13C-labeled glucose was used as a source molecule to follow the incorporation of 13C into more than 40 marker metabolites using NMR and GC-MS. These include metabolites that report on the activity of glycolysis, the Krebs' cycle, the pentose phosphate pathway, and pyrimidine biosynthesis. Results The Rh30 cells proliferated faster than the myocytes. Major differences in flux through glycolysis were evident from incorporation of label into secreted lactate, which accounts for a substantial fraction of the glucose carbon utilized by the cells. Krebs' cycle activity, as determined by 13C isotopomer distributions in glutamate, aspartate, malate and pyrimidine rings, was considerably higher in the cancer cells than in the primary myocytes. Large differences were also evident in de novo biosynthesis of riboses in the free nucleotide pools, as well as in the entry of glucose carbon into the pyrimidine rings of the free nucleotide pool. Specific labeling patterns in these metabolites show the increased importance of anaplerotic reactions in the cancer cells to maintain the high demand for anabolic and energy metabolism compared with the slower-growing primary myocytes. Serum-stimulated Rh30 cells showed higher degrees of labeling than serum-starved cells, but retained their characteristic anabolic metabolism profile. The myocytes showed evidence of de novo synthesis of glycogen, which was absent in the Rh30 cells. Conclusion The specific 13C isotopomer patterns showed that the major difference between the transformed and the primary cells is the shift from energy and maintenance metabolism in the myocytes toward increased energy and anabolic metabolism for proliferation in the Rh30 cells

  11. A faster numerical scheme for a coupled system modeling soil erosion and sediment transport

    Science.gov (United States)

    Le, M.-H.; Cordier, S.; Lucas, C.; Cerdan, O.

    2015-02-01

Overland flow and soil erosion play an essential role in water quality and soil degradation. Such processes, involving the interactions between water flow and the bed sediment, are classically described by a well-established system coupling the shallow water equations and the Hairsine-Rose model. Numerical approximation of this coupled system requires advanced methods to preserve some important physical and mathematical properties; in particular, the steady states and the positivity of both water depth and sediment concentration. Recently, finite volume schemes based on Roe's solver have been proposed by Heng et al. (2009) and Kim et al. (2013) for one- and two-dimensional problems. In their approach, an additional and artificial restriction on the time step is required to guarantee the positivity of the sediment concentration. This artificial condition can make the computation costly when dealing with very shallow flows and wet/dry fronts. The main result of this paper is a new and faster scheme for which the CFL condition of the shallow water equations alone is sufficient to preserve the positivity of the sediment concentration. In addition, the numerical procedure of the erosion part can be used with any well-balanced and positivity-preserving scheme for the shallow water equations. The proposed method is tested on classical benchmarks and also on a realistic configuration.
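The positivity argument can be seen in a much simpler setting: for an explicit upwind finite-volume step, the update is a convex combination of neighbouring cell values whenever the CFL number is at most one, so the transported concentration stays non-negative with no extra time-step restriction. A minimal 1-D sketch (generic upwind advection with periodic boundaries, not the authors' scheme):

```python
def upwind_step(c, u, dt, dx):
    """One explicit upwind step for dc/dt + u dc/dx = 0 with u > 0.
    Under cfl = u*dt/dx <= 1 the update c_i - cfl*(c_i - c_{i-1})
    is a convex combination of c_i and c_{i-1}, hence non-negative
    whenever the input is non-negative."""
    cfl = u * dt / dx
    assert 0.0 < cfl <= 1.0, "CFL condition violated"
    # c[i-1] with i == 0 wraps to c[-1]: periodic domain
    return [c[i] - cfl * (c[i] - c[i - 1]) for i in range(len(c))]

c = [0.0] * 50
c[10] = 1.0                        # isolated sediment pulse
for _ in range(100):
    c = upwind_step(c, u=1.0, dt=0.008, dx=0.01)   # CFL = 0.8

print(min(c) >= 0.0)               # positivity preserved: True
print(abs(sum(c) - 1.0) < 1e-9)    # total mass conserved: True
```

The paper's contribution is showing that the same convex-combination structure can be arranged for the coupled erosion terms, so the shallow-water CFL condition alone suffices.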

  12. Differentially-Expressed Genes Associated with Faster Growth of the Pacific Abalone, Haliotis discus hannai.

    Science.gov (United States)

    Choi, Mi-Jin; Kim, Gun-Do; Kim, Jong-Myoung; Lim, Han Kyu

    2015-11-18

The Pacific abalone Haliotis discus hannai is used for commercial aquaculture in Korea. We examined the transcriptome of Pacific abalone Haliotis discus hannai siblings using NGS technology to identify genes associated with high growth rates. Pacific abalones grown for 200 days post-fertilization were divided into small-, medium-, and large-size groups with mean weights of 0.26 ± 0.09 g, 1.43 ± 0.405 g, and 5.24 ± 1.09 g, respectively. RNA isolated from the soft tissues of each group was subjected to RNA sequencing. Approximately 1%-3% of the transcripts were differentially expressed in abalones, depending on the growth rate. RT-PCR was carried out on thirty-four selected genes to confirm the relative differences in expression detected by RNA sequencing. Six differentially-expressed genes were identified as associated with faster growth of the Pacific abalone. These include five up-regulated genes (including one specific to females) encoding transcripts homologous to incilarin A, perlucin, transforming growth factor-beta-induced protein immunoglobulin-heavy chain 3 (ig-h3), vitelline envelope zona pellucida domain 4, and defensin, and one down-regulated gene encoding tomoregulin in large abalones. Most of the transcripts were expressed predominantly in the hepatopancreas. The genes identified in this study will lead to the development of markers for identification of high-growth-rate abalones and female abalones.

  13. Differentially-Expressed Genes Associated with Faster Growth of the Pacific Abalone, Haliotis discus hannai

    Directory of Open Access Journals (Sweden)

    Mi-Jin Choi

    2015-11-01

Full Text Available The Pacific abalone Haliotis discus hannai is used for commercial aquaculture in Korea. We examined the transcriptome of Pacific abalone Haliotis discus hannai siblings using NGS technology to identify genes associated with high growth rates. Pacific abalones grown for 200 days post-fertilization were divided into small-, medium-, and large-size groups with mean weights of 0.26 ± 0.09 g, 1.43 ± 0.405 g, and 5.24 ± 1.09 g, respectively. RNA isolated from the soft tissues of each group was subjected to RNA sequencing. Approximately 1%–3% of the transcripts were differentially expressed in abalones, depending on the growth rate. RT-PCR was carried out on thirty-four selected genes to confirm the relative differences in expression detected by RNA sequencing. Six differentially-expressed genes were identified as associated with faster growth of the Pacific abalone. These include five up-regulated genes (including one specific to females) encoding transcripts homologous to incilarin A, perlucin, transforming growth factor-beta-induced protein immunoglobulin-heavy chain 3 (ig-h3), vitelline envelope zona pellucida domain 4, and defensin, and one down-regulated gene encoding tomoregulin in large abalones. Most of the transcripts were expressed predominantly in the hepatopancreas. The genes identified in this study will lead to the development of markers for identification of high-growth-rate abalones and female abalones.

  14. Building an Internet of Samples: The Australian Contribution

    Science.gov (United States)

    Wyborn, Lesley; Klump, Jens; Bastrakova, Irina; Devaraju, Anusuriya; McInnes, Brent; Cox, Simon; Karssies, Linda; Martin, Julia; Ross, Shawn; Morrissey, John; Fraser, Ryan

    2017-04-01

metadata is available in more than one format. The software for IGSN web services is based on components developed for DataCite and adapted to the specific requirements of IGSN. This cooperation in open source development ensures sustainable implementation and faster turnaround times for updates. IGSN, in particular in its Australian implementation, is characterised by a federated approach to system architecture and organisational governance, giving it the necessary flexibility to adapt to particular local practices within multiple domains whilst maintaining an overarching international standard. The three current IGSN allocation agents in Australia: Geoscience Australia, CSIRO and Curtin University, represent different sectors. Through funding from the Australian Research Data Services Program they have combined to develop a common web portal that allows discovery of physical samples and sample collections at a national level. International governance then ensures we can link to an international community while at the same time acting locally to ensure the services offered are relevant to the needs of Australian researchers. This flexibility aids the integration of new disciplines into a global community of a physical samples information network.

  15. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower-expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml); and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected; however, the influence of the fraction of exhaled breath and of the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between whole and lower-airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  16. In silico sampling reveals the effect of clustering and shows that the log-normal rank abundance curve is an artefact

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

The impact of clustering on rank abundance, species-individual (S-N) and species-area curves was investigated using a computer programme for in silico sampling. In a rank abundance curve the abundances of species are plotted on a log scale against species sequence. In an S-N curve the number of species

  17. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12 500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of the spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of the soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of the 137Cs concentration data were usually larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
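The trade-off behind the optimization result can be made concrete with a standard nested-design variance model: the variance of the site mean is the spatial component divided by the number of field samples, plus the aliquoting and counting components divided down further by the numbers of aliquots and counts. A sketch with hypothetical variance components (illustrative values, not the paper's estimates):

```python
def var_of_mean(n_samples, n_aliquots, n_counts=1,
                spatial=0.50, aliquot=0.05, counting=0.02):
    """Variance of the estimated site mean for a nested sampling design.
    The component values are hypothetical, chosen so that spatial
    variability dominates, as reported for the Trinity site."""
    return (spatial / n_samples
            + aliquot / (n_samples * n_aliquots)
            + counting / (n_samples * n_aliquots * n_counts))

base = var_of_mean(n_samples=10, n_aliquots=1)
for n_a in (1, 2, 4, 30):
    v = var_of_mean(n_samples=10, n_aliquots=n_a)
    print(n_a, round(v / base, 3))
# Going from 1 to 4 aliquots captures most of the achievable reduction;
# 30 aliquots adds almost nothing because the spatial term dominates.
```

This is why the study concludes that counting each aliquot once and assaying only 2 to 4 of the 30 collected aliquots is near-optimal: extra within-sample effort cannot reduce the dominant spatial variance.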

  18. High-resolution, high-sensitivity NMR of nano-litre anisotropic samples by coil spinning

    Energy Technology Data Exchange (ETDEWEB)

    Sakellariou, D [CEA Saclay, DSM, DRECAM, SCM, Lab Struct and Dynam Resonance Magnet, CNRS URA 331, F-91191 Gif Sur Yvette, (France); Le Goff, G; Jacquinot, J F [CEA Saclay, DSM, DRECAM, SPEC: Serv Phys Etat Condense, CNRS URA 2464, F-91191 Gif Sur Yvette, (France)

    2007-07-01

    Nuclear magnetic resonance (NMR) can probe the local structure and dynamic properties of liquids and solids, making it one of the most powerful and versatile analytical methods available today. However, its intrinsically low sensitivity precludes NMR analysis of very small samples - as frequently used when studying isotopically labelled biological molecules or advanced materials, or as preferred when conducting high-throughput screening of biological samples or 'lab-on-a-chip' studies. The sensitivity of NMR has been improved by using static micro-coils, alternative detection schemes and pre-polarization approaches. But these strategies cannot be easily used in NMR experiments involving the fast sample spinning essential for obtaining well-resolved spectra from non-liquid samples. Here we demonstrate that inductive coupling allows wireless transmission of radio-frequency pulses and the reception of NMR signals under fast spinning of both detector coil and sample. This enables NMR measurements characterized by an optimal filling factor, very high radio-frequency field amplitudes and enhanced sensitivity that increases with decreasing sample volume. Signals obtained for nano-litre-sized samples of organic powders and biological tissue increase by almost one order of magnitude (or, equivalently, are acquired two orders of magnitude faster), compared to standard NMR measurements. Our approach also offers optimal sensitivity when studying samples that need to be confined inside multiple safety barriers, such as radioactive materials. In principle, the co-rotation of a micrometer-sized detector coil with the sample and the use of inductive coupling (techniques that are at the heart of our method) should enable highly sensitive NMR measurements on any mass-limited sample that requires fast mechanical rotation to obtain well-resolved spectra. The method is easy to implement on a commercial NMR set-up and exhibits improved performance with miniaturization, and we

  19. A novel fast method for aqueous derivatization of THC, OH-THC and THC-COOH in human whole blood and urine samples for routine forensic analyses.

    Science.gov (United States)

    Stefanelli, Fabio; Pesci, Federica Giorgia; Giusiani, Mario; Chericoni, Silvio

    2018-04-01

    A novel aqueous in situ derivatization procedure with propyl chloroformate (PCF) for the simultaneous, quantitative analysis of Δ9-tetrahydrocannabinol (THC), 11-hydroxy-Δ9-tetrahydrocannabinol (OH-THC) and 11-nor-Δ9-tetrahydrocannabinol-carboxylic acid (THC-COOH) in human blood and urine is proposed. Unlike current methods based on the silylating agent [N,O-bis(trimethylsilyl)trifluoroacetamide] added in an anhydrous environment, the new method allows the derivatizing agent (PCF) to be added directly to the deproteinized blood, with the derivatives recovered by liquid-liquid extraction. The method can also be used for hydrolyzed urine samples, and it is faster than the traditional method involving derivatization with trimethyloxonium tetrafluoroborate. The analytes are separated, detected and quantified by gas chromatography-mass spectrometry in selected ion monitoring (SIM) mode. The method was validated in terms of selectivity, capacity of identification, limits of detection (LOD) and quantification (LOQ), carryover, linearity, intra-assay precision, inter-assay precision and accuracy. The LOD and LOQ in hydrolyzed urine were 0.5 and 1.3 ng/mL for THC and 1.2 and 2.6 ng/mL for THC-COOH, respectively. In blood, the LOD and LOQ were 0.2 and 0.5 ng/mL for THC, 0.2 and 0.6 ng/mL for OH-THC, and 0.9 and 2.4 ng/mL for THC-COOH, respectively. The method was applied to 35 urine samples and 50 blood samples and proved equivalent to the previously used methods, with the advantages of a simpler procedure and a faster sample processing time. We believe that this method will be a more convenient option for the routine analysis of cannabinoids in toxicological and forensic laboratories. Copyright © 2017 John Wiley & Sons, Ltd.
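    For readers unfamiliar with how LOD/LOQ figures like those above are commonly derived, here is a sketch of the ICH calibration-curve estimator (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the standard deviation of the regression residuals and S the calibration slope). The abstract does not state which estimator the authors used, so treat this as an assumption; the calibration data below are made up for illustration.

```python
# Sketch of the ICH calibration-curve LOD/LOQ estimator (assumed, not
# necessarily the estimator used in the paper); data are invented.
import numpy as np

def lod_loq(conc, response):
    """Estimate LOD/LOQ from a linear calibration curve.

    sigma: standard deviation of the regression residuals.
    S:     slope of the calibration line.
    """
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration data (ng/mL vs. SIM peak-area ratio):
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
resp = np.array([0.052, 0.101, 0.198, 0.51, 0.99, 2.02])
lod, loq = lod_loq(conc, resp)
print(f"LOD ~ {lod:.2f} ng/mL, LOQ ~ {loq:.2f} ng/mL")
```

    By construction the LOQ is always 10/3.3 times the LOD under this estimator, which is why reported LOD/LOQ pairs often sit near a 1:3 ratio.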

  20. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim ® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim ® course, the participants' metrics were recorded when tying a laparoscopic knot in the 2D video box trainer Simball ® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts to reach proficiency was significantly lower. The study group had significantly faster learning of skills in three out of four individual tasks; instrument navigation, grasping and suturing. Using the Simball ® Box, no difference in laparoscopic knot tying after the LapSim ® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later video box-performance in 2D.

  1. Operational air sampling report

    International Nuclear Information System (INIS)

    Lyons, C.L.

    1994-03-01

    Nevada Test Site vertical shaft and tunnel events generate beta/gamma fission products. The REECo air sampling program is designed to measure these radionuclides at various facilities supporting these events. The current testing moratorium and closure of the Decontamination Facility has decreased the scope of the program significantly. Of the 118 air samples collected in the only active tunnel complex, only one showed any airborne fission products. Tritiated water vapor concentrations were very similar to previously reported levels. The 206 air samples collected at the Area-6 decontamination bays and laundry were again well below any Derived Air Concentration calculation standard. Laboratory analyses of these samples were negative for any airborne fission products

  2. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  3. Time flies faster under time pressure.

    Science.gov (United States)

    Rattat, Anne-Claire; Matha, Pauline; Cegarra, Julien

    2018-04-01

    We examined the effects of time pressure on duration estimation in a verbal estimation task and a production task. In both temporal tasks, participants had to solve mazes in two conditions of time pressure (with or without), and with three different target durations (30 s, 60 s, and 90 s). In each trial of the verbal estimation task, participants had to estimate in conventional time units (minutes and seconds) the amount of time that had elapsed since they started to solve the maze. In the production task, they had to press a key while solving the maze when they thought that the trial's duration had reached a target value. Results showed that in both tasks, durations were judged longer with time pressure than without it. However, this temporal overestimation under time pressure did not increase with the length of the target duration. These results are discussed within the framework of scalar expectancy theory. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Visual detection of gas shows from coal core and cuttings using liquid leak detector

    Energy Technology Data Exchange (ETDEWEB)

    Barker, C.E. [United States Geological Survey, Denver, CO (United States)

    2006-09-15

    Coal core descriptions are difficult to obtain, as they must be obtained immediately after the core is retrieved and before the core is closed in a canister. This paper described a method of marking gas shows on a core surface by coating the core with a water-based liquid leak detector and photographing the subsequent foam developed on the core surface while the core is still in the core tray. Coals from a borehole at the Yukon Flats Basin in Alaska and the Maverick Basin in Texas were used to illustrate the method. Drilling mud and debris were removed from the coal samples before the leak detector solution was applied onto the core surfaces. A white froth or dripping foam developed rapidly at gas shows on the sample surfaces. A hand-held lens and a binocular microscope were used to magnify the foaming action. It was noted that foaming was not continuous across the core surface, but was restricted to localized points along the surface. It was suggested that the localized point foaming may have resulted from the coring process. However, the same tendency toward point gas show across the sample surface was found in some hard, well-indurated samples that still had undisturbed bedding and other sedimentary structures. It was concluded that gas shows marked as separate foam centres may indicate a real condition of local permeability paths. Results suggested that the new gas show detection method could be used in core selection studies to reduce the costs of exploration programs. 6 refs., 4 figs.

  5. On ISSM and leveraging the Cloud towards faster quantification of the uncertainty in ice-sheet mass balance projections

    Science.gov (United States)

    Larour, E.; Schlegel, N.

    2016-11-01

    With the Amazon EC2 Cloud becoming available as a viable platform for parallel computing, Earth System modelers are increasingly interested in leveraging its capabilities towards improving climate projections. In particular, faced with long wait periods on high-end clusters, the elasticity of the Cloud presents a unique opportunity of potentially "infinite" availability of small-sized clusters running on high-performance instances. Among specific applications of this new paradigm, we show here how uncertainty quantification in climate projections of polar ice sheets (Antarctica and Greenland) can be significantly accelerated using the Cloud. Indeed, small-sized clusters are very efficient at delivering sensitivity and sampling analysis, core tools of uncertainty quantification. We demonstrate how this approach was used to carry out an extensive analysis of ice-flow projections on one of the largest basins in Greenland, the North-East Greenland Glacier, using the Ice Sheet System Model, the public-domain NASA-funded ice-flow modeling software. We show how errors in the projections were accurately quantified using Monte-Carlo sampling analysis on the EC2 Cloud, and how a judicious mix of high-end parallel computing and Cloud use can best leverage existing infrastructures, significantly accelerate delivery of potentially ground-breaking climate projections, and in particular enable uncertainty quantifications that were previously impossible to achieve.
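    The Monte-Carlo analysis described is embarrassingly parallel: each sample is an independent forward run, which is exactly why a pool of small, elastic Cloud instances suits it. A minimal sketch of the idea, using a hypothetical stand-in for the ice-flow model (not ISSM) and illustrative input distributions:

```python
# Toy Monte-Carlo uncertainty quantification: sample uncertain inputs,
# run an independent forward model per sample, summarize the spread.
# The model and distributions are hypothetical stand-ins.
import random

def forward_model(basal_friction, surface_melt):
    # stand-in for an expensive ice-flow projection (mass balance, Gt/yr)
    return -50.0 + 30.0 * basal_friction - 80.0 * surface_melt

def monte_carlo(n_samples, seed=0):
    rng = random.Random(seed)
    runs = []
    for _ in range(n_samples):  # each iteration could run on its own node
        friction = rng.gauss(1.0, 0.1)   # assumed input uncertainty
        melt = rng.gauss(0.5, 0.05)      # assumed input uncertainty
        runs.append(forward_model(friction, melt))
    mean = sum(runs) / n_samples
    std = (sum((r - mean) ** 2 for r in runs) / (n_samples - 1)) ** 0.5
    return mean, std

mean, spread = monte_carlo(10_000)
print(f"projected mass balance: {mean:.1f} +/- {spread:.1f} Gt/yr")
```

    Because the runs never communicate, scaling out over many small clusters only requires partitioning the seeds, not rewriting the model.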

  6. Soft robotics: a review and progress towards faster and higher torque actuators (presentation video)

    Science.gov (United States)

    Shepherd, Robert

    2014-03-01

    Last year, nearly 160,000 industrial robots were shipped worldwide, into a total market valued at $26 Bn (including hardware, software, and peripherals).[1] Service robots for professional use (e.g., defense, medical, agriculture) and personal use (e.g., household, handicap assistance, toys, and education) accounted for 16,000 units ($3.4 Bn) and 3,000,000 units ($1.2 Bn), respectively.[1] The vast majority of these robotic systems use fully actuated, rigid components that take little advantage of passive dynamics. Soft robotics is a field that is taking advantage of compliant actuators and passive dynamics to achieve several goals: reduced design, manufacturing and control complexity, improved energy efficiency, more sophisticated motions, and safe human-machine interactions, to name a few. The potential for societal impact is immense. In some instances, soft actuators have achieved commercial success; however, large scale adoption will require improved methods of controlling non-linear systems, greater reliability in their function, and increased utility from faster and more forceful actuation. In my talk, I will describe efforts from my work in the Whitesides group at Harvard to produce sophisticated motions in these machines using simple controls, as well as capabilities unique to soft machines. I will also describe the potential for combinations of different classes of soft actuators (e.g., electrically and pneumatically actuated systems) to improve the utility of soft robots. 1. World Robotics - Industrial Robots 2013, 2013, International Federation of Robotics.

  7. Vibronic Boson Sampling: Generalized Gaussian Boson Sampling for Molecular Vibronic Spectra at Finite Temperature.

    Science.gov (United States)

    Huh, Joonsuk; Yung, Man-Hong

    2017-08-07

    Molecular vibronic spectroscopy, where the transitions involve non-trivial Bosonic correlation due to the Duschinsky Rotation, is strongly believed to be in a similar complexity class as Boson Sampling. At finite temperature, the problem is represented as a Boson Sampling experiment with correlated Gaussian input states. This molecular problem with temperature effect is intimately related to the various versions of Boson Sampling sharing a similar computational complexity. Here we provide a full description of this relation in the context of Gaussian Boson Sampling. We find a hierarchical structure, which illustrates the relationship among various Boson Sampling schemes. Specifically, we show that every instance of Gaussian Boson Sampling with an initial correlation can be simulated by an instance of Gaussian Boson Sampling without initial correlation, with only a polynomial overhead. Since every Gaussian state is associated with a thermal state, our result implies that every sampling problem in molecular vibronic transitions, at any temperature, can be simulated by Gaussian Boson Sampling associated with a product of vacuum modes. We refer to such a generalized Gaussian Boson Sampling, motivated by the molecular sampling problem, as Vibronic Boson Sampling.

  8. Fast neutron (14 MeV) attenuation analysis in saturated core samples and its application in well logging

    International Nuclear Information System (INIS)

    Amin Attarzadeh; Mohammad Kamal Ghassem Al Askari; Tagy Bayat

    2009-01-01

    To introduce the application of nuclear logging, it is appropriate to provide a motivation for the use of nuclear measurement techniques in well logging. Important aspects of the geological sciences are, for instance, the grain and porosity structure and porosity volume of rocks, as well as the transport properties of a fluid in the porous medium. Nuclear measurements are, as a rule, non-intrusive: a measurement does not destroy the sample and does not interfere with the process to be measured. Non-intrusive measurements are also often much faster than intrusive methods and can be applied in field measurements. A common type of nuclear measurement employs neutron irradiation, a powerful technique for geophysical analysis. In this research we describe the details of this technique and its applications to well logging and the oil industry. Experiments were performed to investigate the possibility of using neutron attenuation measurements to determine the water and oil content of rock samples. A beam of 14 MeV neutrons produced by a 150 kV neutron generator was attenuated by different samples and subsequently detected with NE102 plastic scintillators (fast counters). Each sample was saturated with water and oil. The difference in neutron attenuation between dry and wet samples was compared with the fluid content determined by mass balance of the sample. In this experiment we were able to determine 3% humidity in a standard sample model (SiO2) and estimate porosity in geological samples saturated with different fluids. (Author)
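    The measurement rests on the exponential transmission law, I = I0·exp(-Σx): comparing counts through a dry and a fluid-saturated core isolates the extra attenuation due to the fluid, from which the fluid-filled porosity follows. A hedged sketch of that calculation; the cross-section and count values are illustrative assumptions, not figures from the paper.

```python
# Sketch of porosity estimation from fast-neutron transmission,
# assuming i_wet = i_dry * exp(-sigma_fluid * phi * x). Values below
# are invented for illustration.
import math

def porosity_from_counts(i_dry, i_wet, sigma_fluid, thickness_cm):
    """Estimate fluid-filled porosity phi from transmitted counts.

    i_dry, i_wet : detected counts through dry / saturated sample
    sigma_fluid  : assumed macroscopic removal cross-section of the
                   pore fluid (1/cm)
    thickness_cm : sample thickness along the beam (cm)
    """
    return math.log(i_dry / i_wet) / (sigma_fluid * thickness_cm)

# Illustrative numbers: 10 cm core, assumed Sigma ~ 0.10 1/cm for water
phi = porosity_from_counts(i_dry=100_000, i_wet=90_000,
                           sigma_fluid=0.10, thickness_cm=10.0)
print(f"estimated porosity: {phi:.1%}")
```

    In practice the cross-section would be calibrated against a standard of known composition (as the SiO2 model serves above) rather than assumed.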

  9. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with those obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  10. EFSA NDA Panel (EFSA Panel on Dietetic Products, Nutrition and Allergies), 2014. Scientific Opinion on the substantiation of a health claim related to citrulline-malate and faster recovery from muscle fatigue after exercise pursuant to Article 13(5) of Regulation (EC) No 1924/2006

    DEFF Research Database (Denmark)

    Tetens, Inge

    Following an application from Biocodex, submitted for authorisation of a health claim pursuant to Article 13(5) of Regulation (EC) No 1924/2006 via the Competent Authority of Belgium, the EFSA Panel on Dietetic Products, Nutrition and Allergies (NDA) was asked to deliver an opinion on the scientific substantiation of a health claim related to citrulline-malate and faster recovery from muscle fatigue after exercise. The Panel considers that citrulline-malate is sufficiently characterised. The claimed effect proposed by the applicant is “improved recovery from muscle fatigue”. Faster recovery ... function. The evidence provided by the applicant did not establish that a faster reduction of blood lactate concentrations through a dietary intervention leads to faster recovery from muscle fatigue by contributing to the restoration of muscle function after exercise. No conclusions could be drawn from ...

  11. Analysis of the research sample collections of Uppsala biobank.

    Science.gov (United States)

    Engelmark, Malin T; Beskow, Anna H

    2014-10-01

    Uppsala Biobank is the joint and only biobank organization of its two principals, Uppsala University and Uppsala University Hospital. Biobanks are required to have updated registries on sample collection composition and management in order to fulfill legal regulations. We report here the results from the first comprehensive and overall analysis of the 131 research sample collections organized in the biobank. The results show that the median number of samples in the collections was 700 and that the number of samples varied from less than 500 to over one million. Blood samples, such as whole blood, serum, and plasma, were included in the vast majority, 84.0%, of the research sample collections. Also, as much as 95.5% of the newly collected samples within healthcare included blood samples, which further supports the concept that blood samples have fundamental importance for medical research. Tissue samples were also commonly used and occurred in 39.7% of the research sample collections, often combined with other types of samples. In total, 96.9% of the 131 sample collections included samples collected for healthcare, showing the importance of healthcare as a research infrastructure. Of the collections that had accessed existing samples from healthcare, as much as 96.3% included tissue samples from the Department of Pathology, which shows the importance of pathology samples as a resource for medical research. Analysis of different research areas shows that the most common of known public health diseases are covered. Collections that had generated the most publications, up to over 300, contained a large number of samples collected systematically and repeatedly over many years. More knowledge about existing biobank materials, together with public registries on sample collections, will support research collaborations, improve transparency, and bring us closer to the goals of biobanks, which are to save and prolong human lives and to improve health and quality of life.

  12. Simplified sample treatment for the determination of total concentrations and chemical fractionation forms of Ca, Fe, Mg and Mn in soluble coffees.

    Science.gov (United States)

    Pohl, Pawel; Stelmach, Ewelina; Szymczycha-Madeja, Anna

    2014-11-15

    A sample treatment simpler and faster than wet digestion was proposed prior to the determination of total concentrations of selected macro- (Ca, Mg) and microelements (Fe, Mn) in soluble coffees by flame atomic absorption spectrometry. Samples were dissolved in water and acidified with HNO3. Precision was in the range 1-4% and accuracy was better than 2.5%. The method was used in the analysis of 18 soluble coffees available on the Polish market. Chemical fractionation of Ca, Fe, Mg and Mn in soluble coffees, as consumed, using a two-column solid-phase extraction method showed that Ca, Mg and Mn were present predominantly as cations (80-93% of the total content). This suggests these elements are likely to be highly bioaccessible. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. SYBR green-based detection of Leishmania infantum DNA using peripheral blood samples.

    Science.gov (United States)

    Ghasemian, Mehrdad; Gharavi, Mohammad Javad; Akhlaghi, Lame; Mohebali, Mehdi; Meamar, Ahmad Reza; Aryan, Ehsan; Oormazdi, Hormozd; Ghayour, Zahra

    2016-03-01

    Parasitological methods for the diagnosis of visceral leishmaniasis (VL) require invasive sampling procedures. The aim of this study was to detect Leishmania infantum (L. infantum) DNA by a real time-PCR method in the peripheral blood of symptomatic VL patients and to compare its performance with nested PCR, an established molecular method with very high diagnostic indices. 47 parasitologically confirmed VL patients, diagnosed by direct agglutination test (DAT > 3200) and bone marrow aspiration and presenting characteristic clinical features (fever, hepatosplenomegaly, and anemia), and 40 controls (non-endemic healthy control-30, Malaria-2, Toxoplasma gondii-2, Mycobacterium tuberculosis-2, HBV-1, HCV-1, HSV-1 and CMV-1) were enrolled in this study. SYBR-green based real time-PCR and nested PCR were performed to amplify the kinetoplast DNA minicircle gene using DNA extracted from the buffy coat. Of the 47 patients, 45 (95.7%) were positive by both nested PCR and real time-PCR. These results indicate that real time-PCR was not only as sensitive as the nested-PCR assay for detection of Leishmania kDNA in clinical samples, but also more rapid. The advantages of real time-PCR over nested PCR are that it is simpler to perform, faster (nested PCR requires post-PCR processing), and carries a reduced contamination risk.

  14. Radioactivity monitoring of export/import samples - an update

    International Nuclear Information System (INIS)

    Shukla, V.K.; Murthy, M.V.R.; Sartandel, S.J.; Negi, B.S.; Sadasivan, S.

    2001-01-01

    137Cs activity was measured in food samples exported from and imported into India during the period from 1993 to 2000. At present, on average about 1200 samples are measured every year. Results showed no 137Cs contamination in samples exported from India. The few samples of dairy products imported into India during 1995 and 1996 showed low levels of 137Cs activity; however, the levels were well within the permissible values of the Atomic Energy Regulatory Board (AERB). (author)

  15. Fungal diversity in oil palm leaves showing symptoms of Fatal Yellowing disease.

    Science.gov (United States)

    de Assis Costa, Ohana Yonara; Tupinambá, Daiva Domenech; Bergmann, Jessica Carvalho; Barreto, Cristine Chaves; Quirino, Betania Ferraz

    2018-01-01

    Oil palm (Elaeis guineensis Jacq.) is an excellent source of vegetable oil for biodiesel production; however, there are still some limitations for its cultivation in Brazil such as Fatal Yellowing (FY) disease. FY has been studied for many years, but its causal agent has never been determined. In Colombia and nearby countries, it was reported that the causal agent of Fatal Yellowing (Pudrición del Cogollo) is the oomycete Phytophthora palmivora, however, several authors claim that Fatal Yellowing and Pudrición del Cogollo (PC) are different diseases. The major aims of this work were to test, using molecular biology tools, Brazilian oil palm trees for the co-occurrence of the oomycete Phytophthora and FY symptoms, and to characterize the fungal diversity in FY diseased and healthy leaves by next generation sequencing. Investigation with specific primers for the genus Phytophthora showed amplification in only one of the samples. Analysis of the fungal ITS region demonstrated that, at the genus level, different groups predominated in all symptomatic samples, while Pyrenochaetopsis and unclassified fungi predominated in all asymptomatic samples. Our results show that fungal communities were not the same between samples at the same stage of the disease or among all the symptomatic samples. This is the first study that describes the evolution of the microbial community in the course of plant disease and also the first work to use high throughput next generation sequencing to evaluate the fungal community associated with leaves of oil palm trees with and without symptoms of FY.

  16. Fungal diversity in oil palm leaves showing symptoms of Fatal Yellowing disease.

    Directory of Open Access Journals (Sweden)

    Ohana Yonara de Assis Costa

    Full Text Available Oil palm (Elaeis guineensis Jacq.) is an excellent source of vegetable oil for biodiesel production; however, there are still some limitations for its cultivation in Brazil such as Fatal Yellowing (FY) disease. FY has been studied for many years, but its causal agent has never been determined. In Colombia and nearby countries, it was reported that the causal agent of Fatal Yellowing (Pudrición del Cogollo) is the oomycete Phytophthora palmivora, however, several authors claim that Fatal Yellowing and Pudrición del Cogollo (PC) are different diseases. The major aims of this work were to test, using molecular biology tools, Brazilian oil palm trees for the co-occurrence of the oomycete Phytophthora and FY symptoms, and to characterize the fungal diversity in FY diseased and healthy leaves by next generation sequencing. Investigation with specific primers for the genus Phytophthora showed amplification in only one of the samples. Analysis of the fungal ITS region demonstrated that, at the genus level, different groups predominated in all symptomatic samples, while Pyrenochaetopsis and unclassified fungi predominated in all asymptomatic samples. Our results show that fungal communities were not the same between samples at the same stage of the disease or among all the symptomatic samples. This is the first study that describes the evolution of the microbial community in the course of plant disease and also the first work to use high throughput next generation sequencing to evaluate the fungal community associated with leaves of oil palm trees with and without symptoms of FY.

  17. Deltamethrin in sediment samples of the Okavango Delta, Botswana ...

    African Journals Online (AJOL)

    Analysis of samples for organic matter content showed percentage total organic carbon (% TOC) ranging between 0.19% and 8.21%, with samples collected from the pool having the highest total organic carbon. The concentrations of deltamethrin residues and the % TOC in sediment samples showed a similar trend with ...

  18. Tank 241-C-111 headspace gas and vapor sample results - August 1993 samples

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1994-01-01

    Tank 241-C-111 is on the ferrocyanide Watch List. Gas and vapor samples were collected to assure safe conditions before planned intrusive work was performed. Sample analyses showed that hydrogen is about ten times higher in the tank headspace than in ambient air. Nitrous oxide is about sixty times higher than ambient levels. The hydrogen cyanide concentration was below 0.04 ppbv, and the average NOx concentration was 8.6 ppmv.

  19. Probability sampling in legal cases: Kansas cellphone users

    Science.gov (United States)

    Kadane, Joseph B.

    2012-10-01

    Probability sampling is a standard statistical technique. This article introduces the basic ideas of probability sampling, and shows in detail how probability sampling was used in a particular legal case.
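    As a toy illustration of the basic ideas the article introduces, the sketch below draws a simple random sample from a finite population and reports the estimated proportion with a 95% confidence interval; the population and numbers are invented, not taken from the case.

```python
# Toy probability-sampling example: simple random sample of a finite
# population, with a 95% CI using the finite population correction.
# All numbers are hypothetical.
import math
import random

def estimate_proportion(population, n, seed=0):
    sample = random.Random(seed).sample(population, n)
    p_hat = sum(sample) / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    # finite population correction, which matters when n is a large
    # fraction of the population size N
    fpc = math.sqrt((len(population) - n) / (len(population) - 1))
    return p_hat, 1.96 * se * fpc

# population of 100,000 "users", 30% of whom have the trait of interest
population = [1] * 30_000 + [0] * 70_000
p_hat, margin = estimate_proportion(population, n=1_000)
print(f"estimate: {p_hat:.3f} +/- {margin:.3f}")
```

    The point such examples make in a legal setting is that a modest, properly randomized sample supports a quantified, defensible inference about the whole population.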

  20. Permian tetrapods from the Sahara show climate-controlled endemism in Pangaea.

    Science.gov (United States)

    Sidor, Christian A; O'Keefe, F Robin; Damiani, Ross; Steyer, J Sébastien; Smith, Roger M H; Larsson, Hans C E; Sereno, Paul C; Ide, Oumarou; Maga, Abdoulaye

    2005-04-14

    New fossils from the Upper Permian Moradi Formation of northern Niger provide an insight into the faunas that inhabited low-latitude, xeric environments near the end of the Palaeozoic era (approximately 251 million years ago). We describe here two new temnospondyl amphibians, the cochleosaurid Nigerpeton ricqlesi gen. et sp. nov. and the stem edopoid Saharastega moradiensis gen. et sp. nov., as relicts of Carboniferous lineages that diverged 40-90 million years earlier. Coupled with a scarcity of therapsids, the new finds suggest that faunas from the poorly sampled xeric belt that straddled the Equator during the Permian period differed markedly from well-sampled faunas that dominated tropical-to-temperate zones to the north and south. Our results show that long-standing theories of Late Permian faunal homogeneity are probably oversimplified as the result of uneven latitudinal sampling.

  1. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  2. CHARACTERIZATION OF TANK 18F WALL AND SCALE SAMPLES

    International Nuclear Information System (INIS)

    Hay, Michael; Click, Damon; Diprete, C.; Diprete, David

    2010-01-01

    Samples from the wall of Tank 18F were obtained to determine the associated source term using a special wall sampling device. Two wall samples and a scale sample were obtained and characterized at the Savannah River National Laboratory (SRNL). All the analyses of the Tank 18F wall and scale samples met the targeted detection limits. The upper wall samples show ∼2X to 6X higher concentrations for U, Pu, and Np on an activity per surface area basis than the lower wall samples. On an activity per mass basis, the upper and lower wall samples show similar compositions for U and Pu. The Np activity is still ∼2.5X higher in the upper wall sample on a per mass basis. The scale sample contains 2-3X higher concentrations of U, Pu, and Sr-90 than the wall samples on an activity per mass basis. The plutonium isotopics differ for all three wall samples (upper, lower, and scale samples). The Pu-238 appears to increase as a proportion of total plutonium as you move up the tank wall from the lowest sample (scale sample) to the upper wall sample. The elemental composition of the scale sample appears similar to other F-Area PUREX sludge compositions. The composition of the scale sample is markedly different than the material on the floor of Tank 18F. However, the scale sample shows elevated Mg and Ca concentrations relative to typical PUREX sludge as do the floor samples.

  3. Alcohol-Preferring Rats Show Goal Oriented Behaviour to Food Incentives but Are Neither Sign-Trackers Nor Impulsive.

    Directory of Open Access Journals (Sweden)

    Yolanda Peña-Oliver

    Full Text Available Drug addiction is often associated with impulsivity and altered behavioural responses to both primary and conditioned rewards. Here we investigated whether selectively bred alcohol-preferring (P) and alcohol-nonpreferring (NP) rats show differential levels of impulsivity and conditioned behavioural responses to food incentives. P and NP rats were assessed for impulsivity in the 5-choice serial reaction time task (5-CSRTT), a widely used translational task in humans and other animals, as well as Pavlovian conditioned approach to measure sign- and goal-tracking behaviour. Drug-naïve P and NP rats showed similar levels of impulsivity on the 5-CSRTT, assessed by the number of premature, anticipatory responses, even when the waiting interval to respond was increased. However, unlike NP rats, P rats were faster to enter the food magazine and spent more time in this area. In addition, P rats showed higher levels of goal-tracking responses than NP rats, as measured by the number of magazine nose-pokes during the presentation of a food conditioned stimulus. By contrast, NP rats showed higher levels of sign-tracking behaviour than P rats. Following a 4-week exposure to intermittent alcohol we confirmed that P rats had a marked preference for, and consumed more alcohol than, NP rats, but were not more impulsive when re-tested in the 5-CSRTT. These findings indicate that high alcohol preferring and drinking P rats are neither intrinsically impulsive nor do they exhibit impulsivity after exposure to alcohol. However, P rats do show increased goal-directed behaviour to food incentives and this may be associated with their strong preference for alcohol.

  4. Alcohol-Preferring Rats Show Goal Oriented Behaviour to Food Incentives but Are Neither Sign-Trackers Nor Impulsive.

    Science.gov (United States)

    Peña-Oliver, Yolanda; Giuliano, Chiara; Economidou, Daina; Goodlett, Charles R; Robbins, Trevor W; Dalley, Jeffrey W; Everitt, Barry J

    2015-01-01

    Drug addiction is often associated with impulsivity and altered behavioural responses to both primary and conditioned rewards. Here we investigated whether selectively bred alcohol-preferring (P) and alcohol-nonpreferring (NP) rats show differential levels of impulsivity and conditioned behavioural responses to food incentives. P and NP rats were assessed for impulsivity in the 5-choice serial reaction time task (5-CSRTT), a widely used translational task in humans and other animals, as well as Pavlovian conditioned approach to measure sign- and goal-tracking behaviour. Drug-naïve P and NP rats showed similar levels of impulsivity on the 5-CSRTT, assessed by the number of premature, anticipatory responses, even when the waiting interval to respond was increased. However, unlike NP rats, P rats were faster to enter the food magazine and spent more time in this area. In addition, P rats showed higher levels of goal-tracking responses than NP rats, as measured by the number of magazine nose-pokes during the presentation of a food conditioned stimulus. By contrast, NP showed higher levels of sign-tracking behaviour than P rats. Following a 4-week exposure to intermittent alcohol we confirmed that P rats had a marked preference for, and consumed more alcohol than, NP rats, but were not more impulsive when re-tested in the 5-CSRTT. These findings indicate that high alcohol preferring and drinking P rats are neither intrinsically impulsive nor do they exhibit impulsivity after exposure to alcohol. However, P rats do show increased goal-directed behaviour to food incentives and this may be associated with their strong preference for alcohol.

  5. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling site in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make the comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.

  6. Determination of Bisphenol A and Bisphenol AF in Vinegar samples by two-component mixed ionic liquid dispersive liquid-phase microextraction coupled with high performance liquid chromatography

    International Nuclear Information System (INIS)

    Tai, Z.; Liu, M.; Hu, X.; Yang, Y.

    2014-01-01

    This paper describes a sensitive and simple method for the determination of bisphenol A (BPA) and bisphenol AF (BPAF) in vinegar samples using two-component mixed ionic liquid dispersive liquid-phase microextraction coupled with high performance liquid chromatography. In this work, BPA and BPAF were selected as the model analytes, and a two-component mixed ionic liquid comprising 1-butyl-3-methylimidazolium hexafluorophosphate ((C4Mim)PF6) and 1-hexyl-3-methyl-imidazolium hexafluorophosphate ((C6Mim)PF6) was used as the extraction solvent for the first time here. Parameters that affect the extraction efficiency were investigated. Under the optimum conditions, good linear relationships were observed in the ranges of 1.0-100 µg/L for BPA and 2.0-150 µg/L for BPAF, respectively. Detection limits of the proposed method, based on the signal-to-noise ratio (S/N=3), were in the range of 0.15-0.38 µg/L. The efficiency of the proposed method was also demonstrated with spiked real vinegar samples. The results show this method to be an efficient approach for the determination of BPA and BPAF in real vinegar, with average recoveries of 89.3-112% and precision values of 0.9-13.5% (RSDs, n = 6). In comparison with traditional solid phase extraction procedures, this method offers lower solvent consumption, less pollution, and faster sample preparation. (author)
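
    Figures such as the quoted average recovery (89.3-112%) and precision (RSD, n = 6) are computed from replicate determinations of spiked samples. A minimal sketch of that arithmetic; the spike level and measured values below are purely hypothetical, not data from the paper:

```python
# Hypothetical spiked-sample replicates (µg/L); values are illustrative only.
spiked_conc = 10.0                          # known BPA spike level
measured = [9.1, 9.5, 8.9, 9.3, 9.0, 9.4]  # n = 6 replicate determinations

mean = sum(measured) / len(measured)
recovery_pct = 100.0 * mean / spiked_conc   # average recovery (%)

# Relative standard deviation: sample standard deviation over mean, in %.
var = sum((x - mean) ** 2 for x in measured) / (len(measured) - 1)
rsd_pct = 100.0 * var ** 0.5 / mean
```

    With these illustrative numbers the recovery is 92.0% and the RSD about 2.6%, i.e. both inside the ranges the paper reports for acceptable performance.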

  7. Ancient bacteria show evidence of DNA repair

    DEFF Research Database (Denmark)

    Johnson, Sarah Stewart; Hebsgaard, Martin B; Christensen, Torben R

    2007-01-01

    … geological timescales. There has been no direct evidence in ancient microbes for the most likely mechanism, active DNA repair, or for the metabolic activity necessary to sustain it. In this paper, we couple PCR and enzymatic treatment of DNA with direct respiration measurements to investigate long-term survival of bacteria sealed in frozen conditions for up to one million years. Our results show evidence of bacterial survival in samples up to half a million years in age, making this the oldest independently authenticated DNA to date obtained from viable cells. Additionally, we find strong evidence that this long-term survival is closely tied to cellular metabolic activity and DNA repair that over time proves to be superior to dormancy as a mechanism in sustaining bacteria viability.

  8. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
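
    The co-prime restriction can be checked with modular arithmetic. In this sketch (the function name is ours, for illustration), pulse k begins N desired-rate sampling periods after the previous pulse, while the ADC runs P times slower; pulse k's samples therefore land on phase (k·N) mod P of the desired grid, and the P pulses jointly cover every phase exactly when N and P are co-prime:

```python
from math import gcd

def sample_phases(P, N):
    """Phase (mod P) of the slow-rate samples contributed by each of P pulses.

    Pulse k starts at time k*N (in desired-rate sampling periods), and the
    ADC, running P times slower, samples that pulse on the grid k*N + m*P.
    Each pulse therefore contributes samples at phase (k*N) mod P.
    """
    return sorted((k * N) % P for k in range(P))

# gcd(7, 4) = 1: the 4 pulses hit all 4 phases, so interleaving the
# observations reconstructs the full desired-rate grid.
# gcd(6, 4) = 2: phases repeat and half the grid is never sampled.
```

    For example, `sample_phases(4, 7)` gives `[0, 1, 2, 3]`, while `sample_phases(4, 6)` gives `[0, 0, 2, 2]`, which is why clock drift shifting the pulse locations can break the condition.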

  9. World oil demand's shift toward faster growing and less price-responsive products and regions

    Energy Technology Data Exchange (ETDEWEB)

    Dargay, Joyce M. [Institute for Transport Studies, University of Leeds, Leeds LS2 9JT (United Kingdom); Gately, Dermot [Dept. of Economics, New York University, 19W. 4 St., New York, NY 10012 (United States)

    2010-10-15

    Using data for 1971-2008, we estimate the effects of changes in price and income on world oil demand, disaggregated by product - transport oil, fuel oil (residual and heating oil), and other oil - for six groups of countries. Most of the demand reductions since 1973-74 were due to fuel-switching away from fuel oil, especially in the OECD; in addition, the collapse of the Former Soviet Union (FSU) reduced their oil consumption substantially. Demand for transport and other oil was much less price-responsive, and has grown almost as rapidly as income, especially outside the OECD and FSU. World oil demand has shifted toward products and regions that are faster growing and less price-responsive. In contrast to projections to 2030 of declining per-capita demand for the world as a whole - by the U.S. Department of Energy (DOE), International Energy Agency (IEA) and OPEC - we project modest growth. Our projections for total world demand in 2030 are at least 20% higher than projections by those three institutions, using similar assumptions about income growth and oil prices, because we project rest-of-world growth that is consistent with historical patterns, in contrast to the dramatic slowdowns which they project. (author)

  10. Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation. We show that our approach achieves faster convergence per iteration and timestep (wall-clock) than Gibbs sampling and non-probabilistic approaches, and does not require additional samples to estimate the posterior. We show that in particular for matrix tri-factorisation convergence is difficult, but our variational Bayesian approach offers a fast solution, allowing the tri-factorisation approach to be used more effectively.
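
    For orientation, the non-probabilistic baseline that Bayesian factorisation methods are compared against can be as simple as Lee-Seung multiplicative updates. The sketch below is that baseline only, not the authors' variational Bayesian algorithm; it produces a point estimate of the factors rather than a posterior:

```python
import numpy as np

def nmf(V, rank, iters=300, eps=1e-9):
    """Non-probabilistic NMF via Lee-Seung multiplicative updates.

    Each update multiplies the factors by a non-negative ratio, so
    W and H stay non-negative and the reconstruction error is
    non-increasing.  No uncertainty estimate is produced.
    """
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly rank-3 non-negative data: the factorisation should fit well.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    Replacing this point estimate with a distribution over W and H is what the Gibbs and variational approaches in the paper provide.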

  11. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
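
    As a concrete instance of the proportion case described above, the standard formula is n = z^2 * p * (1 - p) / d^2 for confidence multiplier z (1.96 for 95% confidence), expected proportion p, and margin of error d, with a finite-population correction when the study population is small. A minimal sketch (the function names are ours):

```python
from math import ceil

def sample_size_proportion(p, margin, z=1.96):
    """Required n to estimate a proportion p within +/- margin,
    at ~95% confidence by default (z = 1.96), assuming an
    effectively infinite study population."""
    return ceil(z**2 * p * (1 - p) / margin**2)

def fpc_adjust(n, N):
    """Finite-population correction for a study population of size N."""
    return ceil(n / (1 + (n - 1) / N))

# Worst case p = 0.5, 5% margin: n = 385.
# The same study drawn from a population of only 1000: n = 279.
```

    Note how halving the margin roughly quadruples n, which is the "greater precision, larger sample" trade-off in the text.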

  12. Fast and slow readers of the Hebrew language show divergence in brain response ∼200 ms post stimulus: an ERP study.

    Directory of Open Access Journals (Sweden)

    Sebastian Peter Korinth

    Full Text Available Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might have their origin in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. Therefore, the current study examined whether also in the deep Hebrew orthography visual processing skills as reflected by N170 amplitudes explain reading speed differences. Forty university students, native speakers of Hebrew without reading impairments, accomplished a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real or a pseudo word) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks stronger event related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography independent marker of reading proficiency, which advances our understanding of regular and impaired reading development.

  13. Measurement of 90Sr in fresh water samples

    International Nuclear Information System (INIS)

    Belanova, A.; Meresova, J.; Svetlik, I.; Tomaskova, L.

    2008-01-01

    This preliminary study shows a new experimental approach to the determination of the radionuclide 90Sr in water samples. The new method of dynamic windows utilizing liquid scintillation counting was applied to model and surface water samples. Our results show the need for a separation technique with significantly higher yields. (authors)

  14. Verifying mixing in dilution tunnels How to ensure cookstove emissions samples are unbiased

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Daniel L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Caubel, Julien J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Sharon S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gadgil, Ashok J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-12-15

    A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e. impaction). The greater the surface area available for diffusive and advection-driven deposition to occur, the greater the particle loss will be at the sampling port. As a layer of larger particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit due to impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.

  15. Faster Black-Box Algorithms Through Higher Arity Operators

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Johannsen, Daniel; Kötzing, Timo

    2011-01-01

    We extend the work of Lehre and Witt (GECCO 2010) on the unbiased black-box model by considering higher arity variation operators. In particular, we show that already for binary operators the black-box complexity of LeadingOnes drops from Θ(n²) for unary operators to O(n log n). For OneMax, the Θ(n...

  16. Analysis of wastewater samples by direct combination of thin-film microextraction and desorption electrospray ionization mass spectrometry.

    Science.gov (United States)

    Strittmatter, Nicole; Düring, Rolf-Alexander; Takáts, Zoltán

    2012-09-07

    An analysis method for aqueous samples by the direct combination of C18/SCX mixed mode thin-film microextraction (TFME) and desorption electrospray ionization mass spectrometry (DESI-MS) was developed. Both techniques make the analytical workflow simpler and faster; hence the combination of the two enables considerably shorter analysis times than the traditional liquid chromatography mass spectrometry (LC-MS) approach. The method was characterized using carbamazepine and triclosan as typical examples of pharmaceutical and personal care product (PPCP) components, which draw increasing attention as wastewater-derived environmental contaminants. Both model compounds were successfully detected in real wastewater samples and their concentrations determined using external calibration with isotope labeled standards. Effects of temperature, agitation, sample volume, and exposure time were investigated in the case of spiked aqueous samples. Results were compared to those of parallel HPLC-MS determinations and good agreement was found through a concentration range spanning three orders of magnitude. Serious matrix effects were observed in treated wastewater, but lower limits of detection were still found to be in the low ng L⁻¹ range. Using an Orbitrap mass spectrometer, the technique was found to be ideal for screening purposes and led to the detection of various PPCP components in wastewater treatment plant effluents, including beta-blockers, nonsteroidal anti-inflammatory drugs, and UV filters.

  17. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-21

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
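
    The core loop described above (many short simulations, a macrostate partition of collective-variable space, resampling toward under-explored states) can be caricatured on a one-dimensional random walk. This is a toy illustration of the idea only, not the authors' CAS implementation, and all names are ours:

```python
import random

random.seed(0)

def short_sim(x, steps=20):
    """Toy stand-in for a short MD run: an unbiased 1-D random walk."""
    for _ in range(steps):
        x += random.choice((-1, 1))
    return x

def macrostate(x, width=10):
    """Partition the collective variable into fixed-width bins."""
    return x // width

walkers = [0] * 50           # all trajectories start in one macrostate
visited = set()
for _ in range(30):          # adaptive rounds
    walkers = [short_sim(x) for x in walkers]
    bins = {}
    for x in walkers:
        bins.setdefault(macrostate(x), []).append(x)
    visited |= set(bins)
    # Resample: give every occupied macrostate the same walker budget,
    # so rarely visited states keep being explored instead of dying out.
    per_bin = max(1, len(walkers) // len(bins))
    walkers = [x for xs in bins.values() for x in (xs * per_bin)[:per_bin]]
```

    Equalizing walkers per occupied bin is what lets exploration spread far beyond where unguided short runs would concentrate; the real algorithm additionally clusters within macrostates and carries statistical weights.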

  18. Sample Results From Tank 48H Samples HTF-48-14-158, -159, -169, and -170

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-28

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 48H in support of determining the cause for the unusually high dose rates at the sampling points for this tank. A set of two samples was taken from the quiescent tank, and two additional samples were taken after the contents of the tank were mixed. The results of the analyses of all the samples show that the contents of the tank have changed very little since the analysis of the previous sample in 2012. The solids are almost exclusively composed of tetraphenylborate (TPB) salts, and there is no indication of acceleration in the TPB decomposition. The filtrate composition shows a moderate increase in salt concentration and density, which is attributable to the addition of NaOH for the purposes of corrosion control. An older modeling simulation of the TPB degradation was updated, and the supernate results from a 2012 sample were run in the model. This result was compared to the recent 2014 sample results reported in this document. The model indicates there is no change in the TPB degradation from 2012 to 2014. SRNL measured the buoyancy of the TPB solids in Tank 48H simulant solutions. It was determined that a solution of density 1.279 g/mL (~6.5M sodium) was capable of indefinitely suspending the TPB solids evenly throughout the solution. A solution of density 1.296 g/mL (~7M sodium) caused a significant fraction of the solids to float on the solution surface. As the experiments could not include the effect of additional buoyancy elements such as benzene or hydrogen generation, the buoyancy measurements provide an upper bound estimate of the density in Tank 48H required to float the solids.

  19. Diagnostic herd sensitivity using environmental samples

    DEFF Research Database (Denmark)

    Vigre, Håkan; Josefsen, Mathilde Hartmann; Seyfarth, Anne Mette

    … either at farm or slaughter. Three sample matrices were collected: dust samples (5 environmental swabs), nasal swabs (10 pools with 5 animals per pool), and air samples (1 filter). Based on the assumption that MRSA occurred in all 48 herds, the overall herd sensitivity was 58% for nasal swabs and 33% for dust samples… In our example, the prevalence of infected pigs in each herd was estimated from the pooled nasal swab samples. Logistic regression was used to estimate the effect of animal prevalence on the probability of detecting MRSA in the dust and air samples at herd level. The results show a significant increase…

  20. Spatial Heterodyne Observation of Water (SHOW) from a high altitude aircraft

    Science.gov (United States)

    Bourassa, A. E.; Langille, J.; Solheim, B.; Degenstein, D. A.; Letros, D.; Lloyd, N. D.; Loewen, P.

    2017-12-01

    The Spatial Heterodyne Observations of Water (SHOW) instrument is a limb-sounding satellite prototype being developed in collaboration between the University of Saskatchewan, York University, the Canadian Space Agency, and ABB. The SHOW instrument combines a field-widened SHS with an imaging system to observe limb-scattered sunlight in a vibrational band of water (1363 nm - 1366 nm). Currently, the instrument has been optimized for deployment on NASA's ER-2 aircraft. Flying at an altitude of 70,000 ft, the ER-2 configuration and SHOW viewing geometry provide high-spatial-resolution limb measurements of water vapor in the upper troposphere and lower stratosphere region. During an observation campaign from July 15 to July 22, the SHOW instrument performed 10 hours of observations from the ER-2. This paper describes the SHOW measurement technique and presents the preliminary analysis and results from these flights. These observations are used to validate the SHOW measurement technique and demonstrate the sampling capabilities of the instrument.

  1. Fast, faster, poorest decisions?: A practical theological exploration of the role of a speedy mobinomic world in decision-making

    Directory of Open Access Journals (Sweden)

    Jan Albert van den Berg

    2014-07-01

    Full Text Available In a digital world, it seems as if the boundaries between rich and poor are becoming increasingly blurred. A mobinomic world is created through the use of cellular telephones, which plays an important role on multiple levels of socioeconomic understanding. Various advantages are created through the interplay between the power of mobility and the convergence of various forms of media. Considering the immediate accessibility of an overflow of data in various forms as well as time pressure, decision-making is increasingly becoming associated with living in the fast lane of the digital world. Unfortunately, the cost of faster decision-making is that it could potentially result in individuals making poor decisions on various levels. A practical-theological exploration, as embedded in a transversal rational engagement, entails a preliminary investigation and description of this digital reality, especially as portrayed in the dynamics of decision-making associated with the social media platform Twitter.

  2. Selective information sampling

    Directory of Open Access Journals (Sweden)

    Peter A. F. Fraser-Mackenzie

    2009-06-01

    Full Text Available This study investigates the amount and valence of information selected during single item evaluation. One hundred and thirty-five participants evaluated a cell phone by reading hypothetical customer reports. Some participants were first asked to provide a preliminary rating based on a picture of the phone and some technical specifications. The participants who were given the customer reports only after they made a preliminary rating exhibited valence bias in their selection of customer reports. In contrast, the participants who did not make an initial rating sought subsequent information in a more balanced, albeit still selective, manner. The preliminary raters used the least amount of information in their final decision, resulting in faster decision times. The study appears to support the notion that selective exposure is utilized in order to develop cognitive coherence.

  3. Probabilistic Equilibrium Sampling of Protein Structures from SAXS Data and a Coarse Grained Debye Formula

    DEFF Research Database (Denmark)

    Andreetta, Christian

    The present work describes the design and the implementation of a protocol for arbitrary precision computation of Small Angle X-ray Scattering (SAXS) profiles, and its inclusion in a probabilistic framework for protein structure determination. This protocol identifies a set of maximum-likelihood estimators for the form factors employed in the Debye formula, a theoretical forward model for SAXS profiles. The resulting computation compares favorably with the state of the art tool in the field, the program CRYSOL in the suite ATSAS. A faster, parallel implementation on Graphical Processor Units (GPUs)… of protein structures all fitting the experimental data. For the first time, we describe in full atomic detail a set of different conformations attainable by flexible polypeptides in solution. This method is not limited by assumptions in shape or size of the samples. It allows therefore to investigate…

  4. Are OPERA neutrinos faster than light because of non-inertial reference frames?

    Science.gov (United States)

    Germanà, C.

    2012-02-01

    Context. Recent results from the OPERA experiment reported a neutrino beam traveling faster than light. The challenging experiment measured the neutrino time of flight (TOF) over a baseline from the CERN to the Gran Sasso site, concluding that the neutrino beam arrives ~60 ns earlier than a light ray would do. Because the result, if confirmed, has an enormous impact on science, it might be worth double-checking the time definitions with respect to the non-inertial system in which the neutrino travel time was measured. An observer with a clock measuring the proper time τ free of non-inertial effects is the one located at the solar system barycenter (SSB). Aims: Potential problems in the OPERA data analysis connected with the definition of the reference frame and time synchronization are emphasized. We aim to investigate the synchronization of non-inertial clocks on Earth by relating this time to the proper time of an inertial observer at SSB. Methods: The Tempo2 software was used to time-stamp events observed on the geoid with respect to the SSB inertial observer time. Results: Neutrino results from OPERA might carry the fingerprint of non-inertial effects because they are timed by terrestrial clocks. The CERN-Gran Sasso clock synchronization is accomplished by applying corrections that depend on special and general relativistic time dilation effects at the clocks, depending on the position of the clocks in the solar system gravitational well. As a consequence, TOF distributions are centered on values shorter by tens of nanoseconds than expected, integrating over a period from April to December, longer if otherwise. It is worth remarking that the OPERA runs have always been carried out from April/May to November. Conclusions: If the analysis by Tempo2 holds for the OPERA experiment, the excellent measurement by the OPERA collaboration will turn into a proof of the general relativity theory in a weak field approximation. The analysis presented here is falsifiable.

  5. Controlling chaos faster

    International Nuclear Information System (INIS)

    Bick, Christian; Kolodziejski, Christoph; Timme, Marc

    2014-01-01

    Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. Predictive feedback control is severely limited because asymptotic convergence speed decreases with stronger instabilities which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
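
    The baseline method being improved here, plain predictive feedback control, fits in a few lines. A minimal sketch for the simplest case, stabilizing the period-1 orbit of the chaotic logistic map; the gain K = -0.6 is hand-picked for r = 3.8, and the stalled and adaptive extensions studied in the paper are not shown:

```python
def f(x, r=3.8):
    """Logistic map, chaotic at r = 3.8."""
    return r * x * (1 - x)

def controlled_step(x, K=-0.6):
    """Predictive feedback control for a period-1 orbit: feed back the
    predicted displacement f(x) - x with gain K.  The fixed point
    x* = 1 - 1/r of f is unchanged, but the controlled map's slope
    there is (1 + K) f'(x*) - K, which K = -0.6 pulls inside the
    unit circle for r = 3.8, making the orbit stable."""
    return f(x) + K * (f(x) - x)

x = 0.3
for _ in range(200):
    x = controlled_step(x)
# x has converged to the (otherwise unstable) fixed point 1 - 1/3.8
```

    For higher-period orbits the same feedback uses the p-th iterate of the map, where the instability (and hence the slow convergence the paper addresses) grows with p.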

  6. Controlling chaos faster.

    Science.gov (United States)

    Bick, Christian; Kolodziejski, Christoph; Timme, Marc

    2014-09-01

    Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. Predictive feedback control is severely limited because asymptotic convergence speed decreases with stronger instabilities which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.

  7. Controlling chaos faster

    Energy Technology Data Exchange (ETDEWEB)

    Bick, Christian [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Bernstein Center for Computational Neuroscience (BCCN), 37077 Göttingen (Germany); Institute for Mathematics, Georg–August–Universität Göttingen, 37073 Göttingen (Germany); Kolodziejski, Christoph [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); III. Physical Institute—Biophysics, Georg–August–Universität Göttingen, 37077 Göttingen (Germany); Timme, Marc [Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany); Institute for Nonlinear Dynamics, Georg–August–Universität Göttingen, 37077 Göttingen (Germany)

    2014-09-01

    Predictive feedback control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. Predictive feedback control is severely limited because asymptotic convergence speed decreases with stronger instabilities which in turn are typical for larger target periods, rendering it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme is not only capable of stabilizing more periodic orbits than the original predictive feedback control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.

  8. Faster methods for estimating arc centre position during VAR and results from Ti-6Al-4V and INCONEL 718 alloys

    Science.gov (United States)

    Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.

    2016-07-01

    Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of “magnetic source tomography” was presented; this was shown to be effective, but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods, requiring less numerical optimisation, to determine the centre position of a single distributed arc both numerically and experimentally. Numerical validation of the algorithms was performed on models, and experimental validation on measurements from titanium and nickel alloys (Ti-6Al-4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.

  9. Architectures for Quantum Simulation Showing a Quantum Speedup

    Science.gov (United States)

    Bermejo-Vega, Juan; Hangleiter, Dominik; Schwarz, Martin; Raussendorf, Robert; Eisert, Jens

    2018-04-01

    One of the main aims in the field of quantum simulation is to achieve a quantum speedup, often referred to as "quantum computational supremacy," referring to the experimental realization of a quantum device that computationally outperforms classical computers. In this work, we show that one can devise versatile and feasible schemes of two-dimensional, dynamical, quantum simulators showing such a quantum speedup, building on intermediate problems involving nonadaptive, measurement-based, quantum computation. In each of the schemes, an initial product state is prepared, potentially involving an element of randomness as in disordered models, followed by a short-time evolution under a basic translationally invariant Hamiltonian with simple nearest-neighbor interactions and a mere sampling measurement in a fixed basis. The correctness of the final-state preparation in each scheme is fully efficiently certifiable. We discuss experimental necessities and possible physical architectures, inspired by platforms of cold atoms in optical lattices and a number of others, as well as specific assumptions that enter the complexity-theoretic arguments. This work shows that benchmark settings exhibiting a quantum speedup may require little control, in contrast to universal quantum computing. Thus, our proposal puts a convincing experimental demonstration of a quantum speedup within reach in the near term.

  10. Thermogravimetric and x-ray diffraction analyses of Luna-24 regolith samples

    International Nuclear Information System (INIS)

    Deshpande, V.V.; Dharwadkar, S.R.; Jakkal, V.S.

    1979-01-01

    Two samples of Luna-24 regolith were analysed by X-ray diffraction and thermogravimetric (TG) techniques. Sample 24123.12 showed a weight loss of nearly 0.85 percent between 230 and 440 °C, followed by a 1.16 percent weight gain from 500 to 800 °C. Sample 23190.13 showed only a weight gain, of about 1.5 percent, from 500 to 900 °C. X-ray diffraction analyses show the presence of olivine, plagioclase, pigeonite, enstatite and native iron in both virgin samples. In the heated samples, however, only the native iron was oxidized to iron oxide; the other constituents remained unaltered. (auth.)

  11. Respondent-driven sampling as Markov chain Monte Carlo.

    Science.gov (United States)

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
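
    The Markov chain view above can be made concrete in a few lines of code. The sketch below runs a random walk on a hypothetical six-node contact network (all numbers invented): the walk visits nodes in proportion to their degree, and inverse-degree importance weights, in the style of the common Volz-Heckathorn estimator rather than any specific estimator from the paper, recover an unbiased prevalence estimate.

```python
import random

# Hypothetical contact network: node -> neighbours (undirected).
graph = {
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4],
    3: [1, 2, 4, 5], 4: [2, 3, 5], 5: [3, 4],
}
infected = {2, 3, 4}  # true prevalence is 3/6 = 0.5

def rds_estimate(graph, infected, seed, n_steps, rng):
    """Chain-referral walk with inverse-degree importance weights.

    The walk's stationary distribution is proportional to degree, so
    weighting each visit by 1/degree recovers the uniform population mean.
    """
    node, num, den = seed, 0.0, 0.0
    for _ in range(n_steps):
        node = rng.choice(graph[node])   # current member recruits one contact
        w = 1.0 / len(graph[node])       # importance weight = 1/degree
        num += w * (node in infected)
        den += w
    return num / den

est = rds_estimate(graph, infected, seed=0, n_steps=20000,
                   rng=random.Random(42))
print(est)  # close to the true prevalence 0.5
```

    A bottleneck in the network would show up here as slow mixing of the walk: the estimate would then need far more steps to settle, which is exactly the variance inflation the paper describes.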

  12. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  13. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  14. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As a widely used audit technique, in both its statistical and non-statistical forms, the method is very important for auditors. It should be applied correctly to give a fair view of the financial statements and to satisfy the needs of all financial users. To be applied correctly, the method must be understood by all its users, and mainly by auditors; otherwise, applying it incorrectly risks loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows advantages, disadvantages, threats and opportunities. We applied SWOT analysis to the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying it and of understanding its insights. Being widely used as an audit method, and being a factor in a correct audit opinion, the sampling method’s advantages, disadvantages, threats and opportunities must be understood by auditors.

  15. Sampling, feasibility, and priors in Bayesian estimation

    OpenAIRE

    Chorin, Alexandre J.; Lu, Fei; Miller, Robert N.; Morzfeld, Matthias; Tu, Xuemin

    2015-01-01

    Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilation is presented, showing in detail why feasibility depends on the Frobenius norm of the covariance matrix of the noise and not on the number of va...

  16. UMAPRM: Uniformly sampling the medial axis

    KAUST Repository

    Yeh, Hsin-Yi Cindy

    2014-05-01

    © 2014 IEEE. Maintaining clearance, or distance from obstacles, is a vital component of successful motion planning algorithms. Maintaining high clearance often creates safer paths for robots. Contemporary sampling-based planning algorithms that utilize the medial axis, or the set of all points equidistant to two or more obstacles, produce higher clearance paths. However, they are biased heavily toward certain portions of the medial axis, sometimes ignoring parts critical to planning, e.g., specific types of narrow passages. We introduce Uniform Medial Axis Probabilistic RoadMap (UMAPRM), a novel planning variant that generates samples uniformly on the medial axis of the free portion of C-space. We theoretically analyze the distribution generated by UMAPRM and show its uniformity. Our results show that UMAPRM's distribution of samples along the medial axis is not only uniform but also preferable to other medial axis samplers in certain planning problems. We demonstrate that UMAPRM has negligible computational overhead over other sampling techniques and can solve problems the others could not, e.g., a bug trap. Finally, we demonstrate UMAPRM successfully generates higher clearance paths in the examples.

  17. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
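
    The optimization step described above can be sketched with the standard nested (hierarchical) variance model, in which field samples, aliquots per sample, and counts per aliquot each contribute a component. The variance components below are invented for illustration, with the spatial term dominating as reported for the Trinity soils.

```python
def var_of_mean(n_samples, n_aliquots, n_counts,
                s2_spatial, s2_aliquot, s2_count):
    """Variance of the mean for a nested sampling design:
    n_samples field samples, n_aliquots aliquots per sample,
    n_counts radiation counts per aliquot."""
    return (s2_spatial / n_samples
            + s2_aliquot / (n_samples * n_aliquots)
            + s2_count / (n_samples * n_aliquots * n_counts))

# Hypothetical variance components (spatial >> aliquoting, counting).
s2 = dict(s2_spatial=9.0, s2_aliquot=1.0, s2_count=0.5)

exhaustive = var_of_mean(10, 30, 3, **s2)   # assay all 30 aliquots, 3 counts each
optimized = var_of_mean(10, 3, 1, **s2)     # assay only 3 aliquots, 1 count each
print(exhaustive, optimized)
```

    Because the spatial component dominates here, assaying only a few aliquots once each raises the variance of the mean by only about 5%, which is the kind of trade-off against labour cost that the optimization in the paper quantifies.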

  18. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure, and the correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
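
    The principle of interpolation-based skew correction can be illustrated with a toy two-channel model. The sketch below is not the paper's FIR filter design: it uses a Catmull-Rom cubic (one common cubic interpolation kernel) to re-evaluate a timing-skewed channel at its ideal sample instants, and every signal parameter is invented.

```python
import math

def catmull_rom(p0, p1, p2, p3, u):
    """Cubic interpolation between p1 and p2 at fraction u in [0, 1]."""
    return 0.5 * (2 * p1 + (p2 - p0) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u ** 2
                  + (3 * p1 - p0 - 3 * p2 + p3) * u ** 3)

f, fs = 1.0, 64.0          # 1 Hz sine, 64 S/s aggregate rate (invented)
ts = 1.0 / fs
skew = 0.2 * ts            # channel B fires 0.2 sample periods late

# Channel B should sample at (2k+1)*ts but actually samples skew late.
ideal = [math.sin(2 * math.pi * f * (2 * k + 1) * ts) for k in range(64)]
skewed = [math.sin(2 * math.pi * f * ((2 * k + 1) * ts + skew)) for k in range(64)]

# The ideal instant of sample k lies between B's samples k-1 and k,
# at fraction u of B's own uniform spacing of 2*ts.
u = 1.0 - skew / (2 * ts)
corrected = [catmull_rom(skewed[k - 2], skewed[k - 1], skewed[k], skewed[k + 1], u)
             for k in range(2, 63)]

err_before = max(abs(s - i) for s, i in zip(skewed, ideal))
err_after = max(abs(c - i) for c, i in zip(corrected, ideal[2:63]))
print(err_before, err_after)
```

    The corrected stream tracks the ideal instants far more closely than the raw skewed one; the paper's cubic-spline FIR filter achieves the same effect with a fixed filter suitable for real-time hardware.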

  19. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke.

    Science.gov (United States)

    Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C

    2012-06-01

    To enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Cohort study. Rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left hemisphere lesion, and 269 [43%] right hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left hemisphere lesion, and 93 [39%] right hemisphere lesion) undergoing rehabilitation after stroke. Not applicable. Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. The RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated. The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. Faster Detection of Poliomyelitis Outbreaks to Support Polio Eradication.

    Science.gov (United States)

    Blake, Isobel M; Chenoweth, Paul; Okayasu, Hiro; Donnelly, Christl A; Aylward, R Bruce; Grassly, Nicholas C

    2016-03-01

    As the global eradication of poliomyelitis approaches the final stages, prompt detection of new outbreaks is critical to enable a fast and effective outbreak response. Surveillance relies on reporting of acute flaccid paralysis (AFP) cases and laboratory confirmation through isolation of poliovirus from stool. However, delayed sample collection and testing can delay outbreak detection. We investigated whether weekly testing for clusters of AFP by location and time, using the Kulldorff scan statistic, could provide an early warning for outbreaks in 20 countries. A mixed-effects regression model was used to predict background rates of nonpolio AFP at the district level. In Tajikistan and Congo, testing for AFP clusters would have resulted in an outbreak warning 39 and 11 days, respectively, before official confirmation of large outbreaks. This method has relatively high specificity and could be integrated into the current polio information system to support rapid outbreak response activities.

  1. Interactive Editing of GigaSample Terrain Fields

    KAUST Repository

    Treib, Marc

    2012-05-01

    Previous terrain rendering approaches have addressed the aspect of data compression and fast decoding for rendering, but applications where the terrain is repeatedly modified and needs to be buffered on disk have not been considered so far. Such applications require both decoding and encoding to be faster than disk transfer. We present a novel approach for editing gigasample terrain fields at interactive rates and high quality. To achieve high decoding and encoding throughput, we employ a compression scheme for height and pixel maps based on a sparse wavelet representation. On recent GPUs it can encode and decode up to 270 and 730 MPix/s of color data, respectively, at compression rates and quality superior to JPEG, and it achieves more than twice these rates for lossless height field compression. The construction and rendering of a height field triangulation is avoided by using GPU ray-casting directly on the regular grid underlying the compression scheme. We show the efficiency of our method for interactive editing and continuous level-of-detail rendering of terrain fields comprised of several hundreds of gigasamples. © 2012 The Author(s).
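
    The sparse wavelet representation at the core of such a codec can be illustrated with a single-level Haar transform, a deliberately minimal stand-in for the scheme in the paper (the height values are invented): keeping all detail coefficients reconstructs the data exactly, while zeroing small details yields a sparse, compressible approximation with bounded error.

```python
def haar_forward(x):
    """One level of the Haar wavelet transform: pairwise averages + details."""
    avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Invert one Haar level."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

heights = [10.0, 10.5, 11.0, 11.25, 30.0, 30.5, 12.0, 11.5]  # toy height row
avg, det = haar_forward(heights)

exact = haar_inverse(avg, det)                      # lossless round trip
sparse_det = [d if abs(d) >= 0.2 else 0.0 for d in det]
approx = haar_inverse(avg, sparse_det)              # lossy, sparser encoding
print(exact == heights, max(abs(a - h) for a, h in zip(approx, heights)))
```

    A real codec recurses over the averages, quantizes, and entropy-codes the surviving coefficients; the GPU implementation in the paper is engineered so that both encoding and decoding run faster than disk transfer.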

  2. Sample contamination with NMP-oxidation products and byproduct-free NMP removal from sample solutions

    Energy Technology Data Exchange (ETDEWEB)

    Cesar Berrueco; Patricia Alvarez; Silvia Venditti; Trevor J. Morgan; Alan A. Herod; Marcos Millan; Rafael Kandiyoti [Imperial College London, London (United Kingdom). Department of Chemical Engineering

    2009-05-15

    1-Methyl-2-pyrrolidinone (NMP) is widely used as a solvent for coal-derived products and as eluent in size exclusion chromatography. It was observed that sample contamination may take place, through reactions of NMP, during extraction under refluxing conditions and during the process of NMP evaporation to concentrate or isolate samples. In this work, product distributions from experiments carried out in contact with air and under a blanket of oxygen-free nitrogen have been compared. Gas chromatography/mass spectrometry (GC-MS) clearly shows that oxidation products form when NMP is heated in the presence of air. Upon further heating, these oxidation products appear to polymerize, forming material with large molecular masses. Potentially severe levels of interference have been encountered in the size exclusion chromatography (SEC) of actual samples. Laser desorption mass spectrometry and SEC agree in showing an upper mass limit of nearly 7000 u for a residue left after distilling 'pure' NMP in contact with air. Furthermore, experiments have shown that these effects could be completely avoided by a strict exclusion of air during the refluxing and evaporation of NMP to dryness. 45 refs., 13 figs.

  3. Handling missing data in ranked set sampling

    CERN Document Server

    Bouza-Herrera, Carlos N

    2013-01-01

    The existence of missing observations is a very important aspect to be considered in the application of survey sampling, for example. In human populations they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples. Most statistical models are supported by the use of samples selected by means of this design. In recent decades, an alternative design has started being used, which, in many cases, shows an improvement in terms of accuracy compared with traditional sampling. It is called R

  4. Molecular diagnosis of lyssaviruses and sequence comparison of Australian bat lyssavirus samples.

    Science.gov (United States)

    Foord, A J; Heine, H G; Pritchard, L I; Lunt, R A; Newberry, K M; Rootes, C L; Boyle, D B

    2006-07-01

    To evaluate and implement molecular diagnostic tests for the detection of lyssaviruses in Australia. A published hemi-nested reverse transcriptase polymerase chain reaction (RT-PCR) for the detection of all lyssavirus genotypes was modified to a fully nested RT-PCR format and compared with the original assay. TaqMan assays for the detection of Australian bat lyssavirus (ABLV) were compared with both the nested and hemi-nested RT-PCR assays. The sequences of RT-PCR products were determined to assess sequence variations of the target region (nucleocapsid gene) in samples of ABLV originating from different regions. The nested RT-PCR assay was highly analytically specific, and at least as analytically sensitive as the hemi-nested assay. The TaqMan assays were highly analytically specific and more analytically sensitive than either RT-PCR assay, with a detection level of approximately 10 genome equivalents per µL. Sequence of the first 544 nucleotides of the nucleocapsid protein coding sequence was obtained from all samples of ABLV received at Australian Animal Health Laboratory during the study period. The nested RT-PCR provided a means for molecular diagnosis of all tested genotypes of lyssavirus including classical rabies virus and Australian bat lyssavirus. The published TaqMan assay proved to be superior to the RT-PCR assays for the detection of ABLV in terms of analytical sensitivity. The TaqMan assay would also be faster and cross contamination is less likely. Nucleotide sequence analyses of samples of ABLV from a wide geographical range in Australia demonstrated the conserved nature of this region of the genome and therefore the suitability of this region for molecular diagnosis.

  5. The Distribution of the Sample Minimum-Variance Frontier

    OpenAIRE

    Raymond Kan; Daniel R. Smith

    2008-01-01

    In this paper, we present a finite sample analysis of the sample minimum-variance frontier under the assumption that the returns are independent and multivariate normally distributed. We show that the sample minimum-variance frontier is a highly biased estimator of the population frontier, and we propose an improved estimator of the population frontier. In addition, we provide the exact distribution of the out-of-sample mean and variance of sample minimum-variance portfolios. This allows us t...

  6. Sampling method of environmental radioactivity monitoring

    International Nuclear Information System (INIS)

    1984-01-01

    This manual provides sampling methods for environmental samples of airborne dust, precipitated dust, precipitated water (rain or snow), fresh water, soil, river or lake sediment, water discharged from a nuclear facility, grains, tea, milk, pasture grass, limnetic organisms, daily diet, index organisms, sea water, marine sediment and marine organisms, as well as sampling for tritium and radioiodine determination, for radiation monitoring of radioactive fallout or of radioactivity released by nuclear facilities. The manual aims to present standard sampling procedures for environmental radioactivity monitoring regardless of the monitoring objective, and describes how environmental samples (other than human-body samples) acquired at the sampling point are to be preserved for radiation counting. The sampling techniques adopted in this manual were chosen on the criteria that they are suitable for routine monitoring and require no special skill. On this principle, the manual presents the outline and aims of sampling, the sampling position or object, sampling quantity, apparatus, equipment or vessels for sampling, sampling location, sampling procedures, pretreatment and preparation of a sample for radiation counting, the items to be recorded during sampling, and sample transportation procedures. Tritium and radioiodine receive special attention in their own chapter, because these radionuclides can be lost under the preservation methods described above, which are intended for radiation counting of less volatile radionuclides. (Takagi, S.)

  7. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group fears that making its membership public would bring social stigma, the population is said to be hidden. Such populations are difficult to approach with standard survey methodology because, under probability sampling, response rates are low and members are not entirely honest in their responses. The only known alternative that addresses the problems of earlier methods such as snowball sampling is respondent-driven sampling (RDS), developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of each respondent, which allows probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the seed dependence of RDS chain-referral sampling diminishes as the sample gets bigger, and the sample stabilizes as the waves progress. The final sample can therefore be completely independent of the initial seeds once a sufficient sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it deserves to be utilized for various cases domestically as well.
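
    The claimed seed independence is simply convergence of a Markov chain to its stationary distribution, which can be checked by propagating recruitment probabilities from two different seeds over a hypothetical six-person referral network (invented for illustration; each wave, a member refers one uniformly random contact).

```python
# Hypothetical referral network: person -> contacts (undirected).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4],
         3: [1, 2, 4, 5], 4: [2, 3, 5], 5: [3, 4]}

def wave(dist, graph):
    """One recruitment wave: probability mass moves to a random contact."""
    out = {v: 0.0 for v in graph}
    for v, p in dist.items():
        for nbr in graph[v]:
            out[nbr] += p / len(graph[v])
    return out

seed_a = {v: float(v == 0) for v in graph}   # chain started from person 0
seed_b = {v: float(v == 5) for v in graph}   # chain started from person 5
for _ in range(100):                         # 100 recruitment waves
    seed_a, seed_b = wave(seed_a, graph), wave(seed_b, graph)

gap = max(abs(seed_a[v] - seed_b[v]) for v in graph)
print(gap)  # both chains have essentially forgotten their seeds
```

    Both distributions also converge to the degree-proportional stationary distribution (person 0, holding 2 of the 18 link-ends, ends up with probability 2/18), which is why RDS estimators reweight by inverse degree.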

  8. Making Brains run Faster: are they Becoming Smarter?

    Science.gov (United States)

    Pahor, Anja; Jaušovec, Norbert

    2016-12-05

    A brief overview of structural and functional brain characteristics related to g is presented in the light of major neurobiological theories of intelligence: Neural Efficiency, P-FIT and Multiple-Demand system. These theories provide a framework to discuss the main objective of the paper: what is the relationship between individual alpha frequency (IAF) and g? Three studies were conducted in order to investigate this relationship: two correlational studies and a third study in which we experimentally induced changes in IAF by means of transcranial alternating current stimulation (tACS). (1) In a large scale study (n = 417), no significant correlations between IAF and IQ were observed. However, in males IAF positively correlated with mental rotation and shape manipulation and with an attentional focus on detail. (2) The second study showed sex-specific correlations between IAF (obtained during task performance) and scope of attention in males and between IAF and reaction time in females. (3) In the third study, individuals' IAF was increased with tACS. The induced changes in IAF had a disrupting effect on male performance on Raven's matrices, whereas a mild positive effect was observed for females. Neuro-electric activity after verum tACS showed increased desynchronization in the upper alpha band and dissociation between fronto-parietal and right temporal brain areas during performance on Raven's matrices. The results are discussed in the light of gender differences in brain structure and activity.

  9. CNTs-Modified Nb3O7F Hybrid Nanocrystal towards Faster Carrier Migration, Lower Bandgap and Higher Photocatalytic Activity.

    Science.gov (United States)

    Huang, Fei; Li, Zhen; Yan, Aihua; Zhao, Hui; Liang, Huagen; Gao, Qingyu; Qiang, Yinghuai

    2017-01-06

    Novel semiconductor photocatalysts have been a research focus and received much attention in recent years. The key issues for novel semiconductor photocatalysts are to effectively harvest solar energy and to enhance the separation efficiency of electron-hole pairs. In this work, novel Nb3O7F/CNTs hybrid nanocomposites with enhanced photocatalytic activity have been successfully synthesized by a facile hydrothermal-plus-etching technique. An important finding is that appropriate pH values lead directly to the formation of Nb3O7F nanocrystals. Introducing interactions between Nb3O7F and CNTs markedly enhances the photocatalytic activity of Nb3O7F. Comparatively, Nb3O7F/CNTs nanocomposites exhibit higher photodegradation efficiency and a faster photodegradation rate in methylene blue (MB) solution under visible-light irradiation. The higher photocatalytic activity may be attributed to more exposed active sites, faster carrier migration and a narrower bandgap arising from a good synergistic effect. These results may inspire further engineering, new designs and facile fabrication of novel photocatalysts with high photocatalytic activity.

  10. Including Below Detection Limit Samples in Post Decommissioning Soil Sample Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Hwan; Yim, Man Sung [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    To meet the required standards, the site owner has to show that the soil at the facility has been sufficiently cleaned up. To do this, one must know the contamination of the soil at the site prior to clean-up. This involves sampling the soil to identify the degree of contamination. However, there is a technical difficulty in determining how much decontamination should be done. The problem arises when measured samples are below the detection limit. Regulatory guidelines for site reuse after decommissioning are commonly challenged because the majority of the activity in the soil is at or below the limit of detection. Using additional statistical analyses of contaminated soil after decommissioning is expected to have the following advantages: a better and more reliable probabilistic exposure assessment, better economics (lower project costs) and improved communication with the public. This research will develop an approach that defines an acceptable method for demonstrating compliance of decommissioned NPP sites and validates that compliance. Soil samples from NPPs often contain censored data. Conventional methods for dealing with censored data sets are statistically biased and limited in their usefulness.

  11. Pre-analytical sample quality: metabolite ratios as an intrinsic marker for prolonged room temperature exposure of serum samples.

    Directory of Open Access Journals (Sweden)

    Gabriele Anton

    Full Text Available Advances in the "omics" field bring about the need for a high number of good-quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most laboratory errors occur in the pre-analytical phase. Therefore, evidence-based standard operating procedures for the pre-analytical phase, as well as markers to distinguish between 'good' and 'bad' quality samples with respect to the desired downstream analysis, are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT. In 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and that serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation.

  12. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging, the linear sampling method works remarkably well despite the fact that its rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.

  13. Greater carbon stocks and faster turnover rates with increasing agricultural productivity

    Science.gov (United States)

    Sanderman, J.; Fallon, S.; Baisden, T. W.

    2013-12-01

    H.H. Janzen (2006) eloquently argued that from an agricultural perspective there is a tradeoff between storing carbon as soil organic matter (SOM) and the soil nutrient and energy benefit provided during SOM mineralization. Here we report on results from the Permanent Rotation Trial at the Waite Agricultural Institute, South Australia, indicating that shifting to an agricultural management strategy which returns more carbon to the soil not only leads to greater carbon stocks but also increases the rate of carbon cycling through the soil. The Permanent Rotation Trial was established on a red Chromosol in 1925, with upgrades made to several treatments in 1948. Decadal soil samples were collected starting in 1963 at two depths, 0-10 and 10-22.5 cm, by compositing 20 soil cores taken along the length of each plot. We have chosen to analyze five trials representing a gradient in productivity: permanent pasture (Pa), wheat-pasture rotation (2W4Pa), continuous wheat (WW), wheat-oats-fallow rotation (WOF) and wheat-fallow (WF). For each of the soil samples (40 in total), the radiocarbon activity in the bulk soil as well as in size-fractionated samples was measured by accelerator mass spectrometry at ANU's Radiocarbon Dating Laboratory (Fallon et al. 2010). After nearly 70 years under each rotation, SOC stocks increased linearly with productivity across the trials, from 24 to 58 t C ha⁻¹. Importantly, these differences were due to greater losses over time in the low-productivity trials rather than gains in SOC in any of the trials. Uptake of the bomb spike in atmospheric ¹⁴C into the soil was greatest in the trials with the greatest productivity. The coarse size fraction always had greater Δ¹⁴C values than the bulk soil samples. Several different multi-pool steady-state and non-steady-state models were used to interpret the Δ¹⁴C data in terms of SOC turnover rates.
Regardless of model choice, either the decay rates of all pools needed to increase or the allocation of C to

  14. Synthesis of nano-structured tin oxide thin films with faster response to LPG and ammonia by spray pyrolysis

    Science.gov (United States)

    PrasannaKumari, K.; Thomas, Boben

    2018-01-01

    Nanostructured SnO2 thin films have been efficiently fabricated by spray pyrolysis using atomizers of different types. The structure and morphology of the as-prepared samples are investigated by techniques such as X-ray diffraction and field-emission scanning electron microscopy. Significant morphological changes are observed in the films when the precursor atomization is modified by changing the spray device. The optical characterization indicates that the change in atomization affects the absorbance and the band gap, following the varied crystallite size. Gas-sensing investigations on ultrasonically prepared tin oxide films show NH3 response at operating temperatures as low as 50 °C. For 1000 ppm of LPG, the response at 350 °C for the air-blast-atomizer film is about 99%, with short response and recovery times. The photoluminescence emission spectra reveal the correlation between the atomization process and the quantity of oxygen vacancies present in the samples. The favorable size reduction in the microstructure, with good crystallinity and only slight changes in lattice properties, suggests their scope in gas-sensing applications. On the basis of these characterizations, a mechanism for the LPG and NH3 gas sensing of nanostructured SnO2 thin films is proposed.

  15. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  16. Ants show a leftward turning bias when exploring unknown nest sites.

    Science.gov (United States)

    Hunt, Edmund R; O'Shea-Wheller, Thomas; Albery, Gregory F; Bridger, Tamsyn H; Gumn, Mike; Franks, Nigel R

    2014-12-01

    Behavioural lateralization in invertebrates is an important field of study because it may provide insights into the early origins of lateralization seen in a diversity of organisms. Here, we present evidence for a leftward turning bias in Temnothorax albipennis ants exploring nest cavities and in branching mazes, where the bias is initially obscured by thigmotaxis (wall-following) behaviour. Forward travel with a consistent turning bias in either direction is an effective nest exploration method, and a simple decision-making heuristic to employ when faced with multiple directional choices. Replication of the same bias at the colony level would also reduce individual predation risk through aggregation effects, and may lead to a faster attainment of a quorum threshold for nest migration. We suggest the turning bias may be the result of an evolutionary interplay between vision, exploration and migration factors, promoted by the ants' eusociality.

  17. A comparison of continuous pneumatic nebulization and flow injection-direct injection nebulization for sample introduction in inductively coupled plasma-mass spectrometry

    International Nuclear Information System (INIS)

    Crain, J.S.; Kiely, J.T.

    1995-08-01

    Dilute nitric acid blanks and solutions containing Ni, Cd, Pb, and U (including two laboratory waste samples) were analyzed eighteen times over a two-month period using inductively coupled plasma-mass spectrometry (ICP-MS). Two different sample introduction techniques were employed: flow injection-direct injection nebulization (FI-DIN) and continuous pneumatic nebulization (CPN). Using comparable instrumental measurement procedures, FI-DIN analyses were 33% faster and generated 52% less waste than CPN analyses. Instrumental limits of detection obtained with FI-DIN and CPN were comparable but not equivalent (except in the case of Pb) because of nebulizer-related differences in sensitivity (i.e., signal per unit analyte concentration) and background. Substantial and statistically significant differences were found between FI-DIN and CPN Ni determinations, and in the case of the laboratory waste samples, there were also small but statistically significant differences between Cd determinations. These small (2 to 3%) differences were not related to polyatomic ion interference (e.g., ⁹⁵Mo¹⁶O⁺), but in light of the time savings and waste reduction to be realized, they should not preclude the use of FI-DIN in place of CPN for determination of Cd, Pb, U and chemically

  18. Spatial-dependence recurrence sample entropy

    Science.gov (United States)

    Pham, Tuan D.; Yan, Hong

    2018-03-01

    Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
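The method above builds on the standard sample entropy measure. As a point of reference, a minimal NumPy sketch of that classic quantity (not the recurrence-plot extension proposed in the paper) might look like the following; the template length m, tolerance r, and the test signals are all chosen for illustration:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Classic sample entropy SampEn(m, r) of a 1-D series: the negative log
    of the ratio of template matches of length m + 1 to matches of length m,
    with self-matches excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def matches(mm):
        # Use n - m templates for both lengths so the two counts are comparable.
        temps = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(temps) - 1):
            # Chebyshev distance to all later templates (no self-matches).
            d = np.max(np.abs(temps[i + 1:] - temps[i]), axis=1)
            count += int(np.sum(d < tol))
        return count

    return -np.log(matches(m + 1) / matches(m))

# A regular signal should score lower than white noise.
t = np.linspace(0, 20 * np.pi, 500)
se_regular = sample_entropy(np.sin(t))
se_noisy = sample_entropy(np.random.default_rng(0).standard_normal(500))
```

As the abstract notes, this computation compares unordered sets of template vectors; the paper's contribution is to add sequential information via the co-occurrence matrix of a recurrence plot, which this sketch does not attempt.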

  19. Analytical results from Tank 38H criticality Sample HTF-093

    International Nuclear Information System (INIS)

    Wilmarth, W.R.

    2000-01-01

    Resumption of processing in the 242-16H Evaporator could cause salt dissolution in the Waste Concentration Receipt Tank (Tank 38H). Therefore, High Level Waste personnel sampled the tank at the salt surface. Results of elemental analysis of the dried sludge solids from this sample (HTF-093) show significant quantities of neutron poisons (i.e., sodium, iron, and manganese) present to mitigate the potential for nuclear criticality. Comparison of this sample with previous chemical and radiometric analyses of H-Area Evaporator samples shows high poison-to-actinide ratios.

  20. Adaptive sampling of AEM transients

    Science.gov (United States)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). Log-gating has two main benefits: improving the signal-to-noise (S/N) ratio at late times, when the electromagnetic signal has an amplitude equal to or lower than the natural background noise, and ensuring good resolution at early times. However, because of its fixed time gates, conventional log-gating does not account for geological variations in the surveyed area, nor for the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to obtain more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
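The conventional log-gating that the proposed adaptive scheme improves on is easy to sketch: gate edges are spaced evenly in log time, so widths grow geometrically, and the transient is averaged within each gate. The function names and the power-law decay below are hypothetical, not from the paper:

```python
import numpy as np

def log_gates(t_min, t_max, n_gates):
    """Gate edges spaced evenly in log time, so gate widths grow geometrically."""
    return np.logspace(np.log10(t_min), np.log10(t_max), n_gates + 1)

def gate_average(t, signal, edges):
    """Average the sampled transient within each gate (boosts late-time S/N)."""
    idx = np.digitize(t, edges) - 1
    return np.array([signal[idx == g].mean() for g in range(len(edges) - 1)])

# Synthetic decaying transient sampled on a dense uniform grid.
t = np.linspace(1e-5, 1e-2, 100000)
v = t ** -1.5  # late-time TDEM decay is roughly a power law
edges = log_gates(1e-5, 1e-2, 20)
gated = gate_average(t, v, edges)
```

The adaptive method proposed in the abstract would instead place gate edges according to slope variations in the measured transient, rather than using the fixed geometric progression shown here.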

  1. Multielement analysis of aerosol samples by X-ray fluorescence analysis with totally reflecting sample holders

    International Nuclear Information System (INIS)

    Ketelsen, P.; Knoechel, A.

    1984-01-01

    Aerosol samples on filter supports were analyzed by X-ray fluorescence analysis (Mo excitation) with totally reflecting sample carriers (TXFA). Wet decomposition of the sample material with HNO3 in a closed system and subsequent sample preparation by evaporating an aliquot of the solution on the sample carrier yields detection limits down to 0.3 ng/cm². The reproducibilities of the measurements of the elements K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Rb, Sr, Ba and Pb lie between 5 and 25%. Similar detection limits and reproducibilities are obtained when a low-temperature oxygen plasma is employed for the direct ashing of the homogeneously covered filter on the sample carrier. The systematic loss of elements in both methods was investigated with radiotracers as well as with inactive techniques. A comparison of the results with those obtained by NAA, AAS and PIXE shows good agreement in most cases. For bromine determination and fast coverage of the main elements, the filter membrane can be measured directly, omitting the ashing step; the corresponding detection limits are down to 3 ng/cm². (orig.)

  2. Rational learning and information sampling: on the "naivety" assumption in sampling explanations of judgment biases.

    Science.gov (United States)

    Le Mens, Gaël; Denrell, Jerker

    2011-04-01

    Recent research has argued that several well-known judgment biases may be due to biases in the available information sample rather than to biased information processing. Most of these sample-based explanations assume that decision makers are "naive": They are not aware of the biases in the available information sample and do not correct for them. Here, we show that this "naivety" assumption is not necessary. Systematically biased judgments can emerge even when decision makers process available information perfectly and are also aware of how the information sample has been generated. Specifically, we develop a rational analysis of Denrell's (2005) experience sampling model, and we prove that when information search is interested rather than disinterested, even rational information sampling and processing can give rise to systematic patterns of errors in judgments. Our results illustrate that a tendency to favor alternatives for which outcome information is more accessible can be consistent with rational behavior. The model offers a rational explanation for behaviors that had previously been attributed to cognitive and motivational biases, such as the in-group bias or the tendency to prefer popular alternatives.

  3. Direct visualization of solute locations in laboratory ice samples

    Directory of Open Access Journals (Sweden)

    T. Hullar

    2016-09-01

    Full Text Available Many important chemical reactions occur in polar snow, where solutes may be present in several reservoirs, including at the air–ice interface and in liquid-like regions within the ice matrix. Some recent laboratory studies suggest chemical reaction rates may differ in these two reservoirs. While investigations have examined where solutes are found in natural snow and ice, few studies have examined either solute locations in laboratory samples or the possible factors controlling solute segregation. To address this, we used micro-computed tomography (microCT) to examine solute locations in ice samples prepared from either aqueous cesium chloride (CsCl) or rose bengal solutions that were frozen using several different methods. Samples frozen in a laboratory freezer had the largest liquid-like inclusions and air bubbles, while samples frozen in a custom freeze chamber had somewhat smaller air bubbles and inclusions; in contrast, samples frozen in liquid nitrogen showed much smaller concentrated inclusions and air bubbles, only slightly larger than the resolution limit of our images (∼ 2 µm). Freezing solutions in plastic vs. glass vials had significant impacts on the sample structure, perhaps because the poor heat conductivity of plastic vials changes how heat is removed from the sample as it cools. Similarly, the choice of solute had a significant impact on sample structure, with rose bengal solutions yielding smaller inclusions and air bubbles compared to CsCl solutions frozen using the same method. Additional experiments using higher-resolution imaging of an ice sample show that CsCl moves in a thermal gradient, supporting the idea that the solutes in ice are present in mobile liquid-like regions. Our work shows that the structure of laboratory ice samples, including the location of solutes, is sensitive to the freezing method, sample container, and solute characteristics, requiring careful experimental design and interpretation of results.

  4. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, the CIs continued to narrow, but each additional GP yielded a smaller gain in precision. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
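The study's central point, that precision depends jointly on the number of participants and the number of measurements per participant, can be illustrated with a small Monte Carlo sketch. All numerical values below (mean weekly hours, between-GP spread, within-GP measurement noise) are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_half_width(n_gps, n_meas, sims=2000):
    """Monte Carlo 95% CI half-width for the mean weekly hours, with both
    between-GP variation (sd 8 h) and within-GP measurement noise (sd 15 h).
    All parameter values are hypothetical."""
    estimates = np.empty(sims)
    for s in range(sims):
        true = rng.normal(45.0, 8.0, size=n_gps)
        measured = true[:, None] + rng.normal(0.0, 15.0, size=(n_gps, n_meas))
        estimates[s] = measured.mean()
    return 1.96 * estimates.std()

# Precision improves both with more GPs and with more measurements per GP.
hw_few = ci_half_width(25, 8)
hw_many = ci_half_width(100, 8)
hw_sparse = ci_half_width(50, 4)
hw_dense = ci_half_width(50, 32)
```

Because the estimator's variance here is roughly (between-GP variance + within-GP variance / n_meas) / n_gps, adding measurements per participant can substitute for adding participants, which is the trade-off the abstract's 300-GPs-vs-100-GPs example describes.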

  5. Fungal communities in wheat grain show significant co-existence patterns among species

    DEFF Research Database (Denmark)

    Nicolaisen, M.; Justesen, A. F.; Knorr, K.

    2014-01-01

    identified as ‘core’ OTUs as they were found in all or almost all samples and accounted for almost 99 % of all sequences. The remaining OTUs were only sporadically found and only in small amounts. Cluster and factor analyses showed patterns of co-existence among the core species. Cluster analysis grouped...... the 21 core OTUs into three clusters: cluster 1 consisting of saprotrophs, cluster 2 consisting mainly of yeasts and saprotrophs and cluster 3 consisting of wheat pathogens. Principal component extraction showed that the Fusarium graminearum group was inversely related to OTUs of clusters 1 and 2....

  6. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage for two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data on 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size.

  7. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
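For reference, Yuen's two-sample test on trimmed means, around which the sample-size formulas above are built, can be sketched in a few lines of NumPy. This is a minimal illustration (the paper's cost and power allocation formulas are not reproduced), and the trimming proportion and test data are arbitrary:

```python
import numpy as np

def yuen_test(x, y, trim=0.2):
    """Yuen's two-sample test on trimmed means (a minimal NumPy sketch).

    Returns the test statistic and Welch-type degrees of freedom, using
    winsorized variances to account for the trimming."""
    def pieces(a):
        a = np.sort(np.asarray(a, float))
        n = len(a)
        g = int(np.floor(trim * n))
        h = n - 2 * g                          # effective sample size
        tmean = a[g:n - g].mean()              # trimmed mean
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]  # winsorize the tails
        d = w.var(ddof=1) * (n - 1) / (h * (h - 1))
        return tmean, d, h

    t1, d1, h1 = pieces(x)
    t2, d2, h2 = pieces(y)
    t_stat = (t1 - t2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t_stat, df

rng = np.random.default_rng(7)
# Heavy-tailed samples with unequal sizes, the setting the abstract targets.
x = rng.standard_t(df=3, size=40) + 1.0
y = rng.standard_t(df=3, size=60)
t_xy, df_xy = yuen_test(x, y)
t_yx, _ = yuen_test(y, x)
t_null, _ = yuen_test(x, x)
```

Note that SciPy's `scipy.stats.ttest_ind` also exposes this test via its `trim` parameter in recent versions; the sketch above is self-contained so the trimming and winsorizing steps are visible.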

  8. Fat stigmatization in television shows and movies: a content analysis.

    Science.gov (United States)

    Himes, Susan M; Thompson, J Kevin

    2007-03-01

    To examine the phenomenon of fat stigmatization messages presented in television shows and movies, a content analysis was used to quantify and categorize fat-specific commentary and humor. Fat stigmatization vignettes were identified using a targeted sampling procedure, and 135 scenes were excised from movies and television shows. The material was coded by trained raters. Reliability indices were uniformly high for the seven categories (percentage agreement ranged from 0.90 to 0.98; kappas ranged from 0.66 to 0.94). Results indicated that fat stigmatization commentary and fat humor were often verbal, directed toward another person, and often presented directly in the presence of the overweight target. Results also indicated that male characters were three times more likely to engage in fat stigmatization commentary or fat humor than female characters. To our knowledge, these findings provide the first information regarding the specific gender, age, and types of fat stigmatization that occur frequently in movies and television shows. The stimuli should prove useful in future research examining the role of individual difference factors (e.g., BMI) in the reaction to viewing such vignettes.

  9. Comparison of the Effectiveness of Oral Sucrose and Emla Cream in Reduction of Acute Pain Due to Heel Sticks for Blood Sampling in Neonates

    Directory of Open Access Journals (Sweden)

    S Abyari

    2007-04-01

    Full Text Available Introduction: Certain painful, invasive procedures are necessary for care and are commonly performed in both healthy and sick neonates. Current evidence shows that the newborn infant has both the physiologic and the anatomic capacity to experience pain. Recent research suggests that pain experienced in the neonatal period might have long-term effects later in life. Previous research has shown that orally administered sweet-tasting solutions reduce signs of pain during painful procedures. This effect is considered to be mediated both by the release of endorphins and by a preabsorptive mechanism related to the sweet taste. Methods: This was a controlled, randomized, double-blind study of 210 neonates, randomly divided into three groups: A, B and C. Group A received 2 ml of 25% sucrose orally and base cream applied at the heel-stick site, group B received 2 ml of distilled water and application of EMLA cream, while group C received 2 ml of distilled water and base cream. The heart rate of each newborn was recorded by cardiac monitor before and after heel-stick blood sampling, and the duration of crying was determined as well. Pain was scored by the DAN scale. There were no differences in the demographic characteristics of the neonates. Results: The results showed that the DAN score was significantly lower in the sucrose group (mean: 3.840) and the EMLA group (mean: 3.366) than in the placebo group (mean: 5.557), but the duration of crying was not significantly different between the sucrose group (mean: 10.5 seconds) and the EMLA group (mean: 8.76 seconds). Conclusion: Both sucrose and EMLA are effective in reducing the stress associated with heel lancing in newborns, but as sucrose acts faster and is healthier, its use is proposed in neonates requiring heel sticks for blood sampling.

  10. Nonuniform sampling by quantiles

    Science.gov (United States)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, whereas higher-dimensional schedules are similar within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular-backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the general public license.
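The core idea, dividing a weighting function into regions of equal probability and taking one grid point per region, can be sketched briefly. This is an illustrative reading of the quantile principle, not the QSched implementation; the function name and the exponential weighting are assumptions:

```python
import numpy as np

def quantile_schedule(weights, n_samples):
    """Pick n_samples grid points from a 1-D weighting function by quantiles.

    The cumulative distribution of the weights is split into n_samples
    regions of equal probability; each region contributes the grid point at
    its probability midpoint, so the schedule tracks the weighting closely
    and gaps are kept small."""
    p = np.asarray(weights, float)
    cdf = np.cumsum(p) / p.sum()
    # Midpoints of n_samples equal-probability regions: (k + 0.5) / n.
    targets = (np.arange(n_samples) + 0.5) / n_samples
    idx = np.searchsorted(cdf, targets)
    return np.unique(idx)  # collisions can occur at very heavy subsampling

# Exponentially decaying weights over a 128-point Nyquist grid: the schedule
# samples the early evolution times densely and the tail sparsely.
grid = np.arange(128)
w = np.exp(-grid / 40.0)
sched = quantile_schedule(w, 32)
```

As the abstract notes, a 1-D weighted schedule built this way is fully deterministic: the same weighting function always yields the same sample set, with no dependence on a random seed.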

  11. The Lyα Reference Sample

    DEFF Research Database (Denmark)

    Ostlin, Goran; Hayes, Matthew; Duval, Florent

    2014-01-01

    The Lyα Reference Sample (LARS) is a substantial program with the Hubble Space Telescope (HST) that provides a sample of local universe laboratory galaxies in which to study the detailed astrophysics of the visibility and strength of the Lyα line of neutral hydrogen. Lyα is the dominant spectral...... are produced (whether or not they escape), we demanded an Hα equivalent width W(Hα) ≥100 Å. The final sample of 14 galaxies covers far-UV (FUV, λ ~ 1500 Å) luminosities that overlap with those of high-z Lyα emitters (LAEs) and Lyman break galaxies (LBGs), making LARS a valid comparison sample. We present......) but strongly asymmetric Lyα emission. Spectroscopy from the Cosmic Origins Spectrograph on board HST centered on the brightest UV knot shows a moderate outflow in the neutral interstellar medium (probed by low ionization stage absorption features) and Lyα emission with an asymmetric profile. Radiative transfer...

  12. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using the different components of the far field patterns. Only inner products are involved in the computation, so the proposed sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For the sampling points outside the scatterers, we show that the indicator functionals decay like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain those data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

  13. Kinematics and stellar populations of 17 dwarf early-type galaxies

    OpenAIRE

    Thomas, D.; Bender, R.; Hopp, U.; Maraston, C.; Greggio, L.

    2002-01-01

    We present kinematics and stellar population properties of 17 dwarf early-type galaxies in the luminosity range -14> M_B> -19. Our sample fills the gap between the intensively studied giant elliptical and Local Group dwarf spheroidal galaxies. The dwarf ellipticals of the present sample have constant velocity dispersion profiles within their effective radii and do not show significant rotation, hence are clearly anisotropic. The dwarf lenticulars, instead, rotate faster and are, at least part...

  14. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung......Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid...... is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance...

  15. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  16. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
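
    One simple normalization strategy (chosen here purely for illustration; the review surveys several MS-oriented methods) is to rescale each sample so that all samples share a common total signal, removing total-amount variation while preserving the relative metabolite levels within each sample:

```python
def total_signal_normalize(samples):
    """Rescale each sample's metabolite intensities so that every sample
    ends up with the same total signal (the mean total across samples)."""
    totals = {name: sum(intens.values()) for name, intens in samples.items()}
    ref = sum(totals.values()) / len(totals)  # reference: mean total signal
    return {
        name: {m: v * ref / totals[name] for m, v in intens.items()}
        for name, intens in samples.items()
    }

# Toy data: sample_B had roughly twice as much material loaded
raw = {
    "sample_A": {"glucose": 100.0, "lactate": 50.0},
    "sample_B": {"glucose": 220.0, "lactate": 80.0},
}
norm = total_signal_normalize(raw)
```

    After normalization the two samples have identical totals, so any remaining per-metabolite differences reflect composition rather than the amount of material analyzed.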

  17. On sampling and modeling complex systems

    International Nuclear Information System (INIS)

    Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

    2013-01-01

    The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
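
    The entropy criterion for sample informativeness can be illustrated directly. A toy sketch (a deliberate simplification of the paper's framework, using the Shannon entropy of the empirical state frequencies; the sample data are invented):

```python
import math
from collections import Counter

def sample_entropy(observations):
    """Shannon entropy (in nats) of the frequencies with which the
    distinct states occur in a sample."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# A sample spread evenly over 8 states versus one dominated by a single state
spread = ["s%d" % (i % 8) for i in range(800)]
peaked = ["s0"] * 720 + ["s1"] * 80
print(sample_entropy(spread), sample_entropy(peaked))
```

    The evenly spread sample attains the maximum entropy log 8 for 8 states, while the peaked sample carries far less information about the state space it was drawn from.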

  18. Development of analytical methods for the separation of plutonium, americium, curium and neptunium from environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Salminen, S.

    2009-07-01

    In this work, separation methods have been developed for the analysis of the anthropogenic transuranium elements plutonium, americium, curium and neptunium from environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found at both Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and Sodankylae (Finland). The origin of plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylae, plutonium in the surface air originated from nuclear weapons testing, conducted mostly by the USSR and USA before the sampling year 1963. Americium, curium and neptunium concentrations also varied greatly in peat samples collected in southern and central Finland in 1986 immediately after the Chernobyl accident. The main source of transuranium contamination in the peats was global nuclear test fallout, although there are wide regional differences in the fraction of Chernobyl-originated activity (of the total activity) for americium, curium and neptunium. The separation methods developed in this study yielded good chemical recovery for the elements investigated and adequately pure fractions for radiometric activity determination. The extraction chromatographic methods were faster compared to older methods based on ion exchange chromatography. In addition, extraction chromatography is a more environmentally friendly separation method than ion exchange, because less acidic waste solution is produced during the analytical procedures. (orig.)

  19. Sauerbraten, Rotkappchen und Goethe: The Quiz Show as an Introduction to German Studies.

    Science.gov (United States)

    White, Diane

    1980-01-01

    Proposes an adaptation of the quiz-show format for classroom use, discussing a set of rules and sample questions designed for beginning and intermediate German students. Presents questions based on German life and culture which are especially selected to encourage participation from students majoring in subjects other than German. (MES)

  20. Geographical affinities of the HapMap samples.

    Directory of Open Access Journals (Sweden)

    Miao He

    Full Text Available The HapMap samples were collected for medical-genetic studies, but are also widely used in population-genetic and evolutionary investigations. Yet the ascertainment of the samples differs from most population-genetic studies, which collect individuals who live in the same local region as their ancestors. What effects could this non-standard ascertainment have on the interpretation of HapMap results? We compared the HapMap samples with more conventionally ascertained samples used in population- and forensic-genetic studies, including the HGDP-CEPH panel, making use of published genome-wide autosomal SNP data and Y-STR haplotypes, as well as producing new Y-STR data. We found that the HapMap samples were representative of their broad geographical regions of ancestry according to all tests applied. The YRI and JPT were indistinguishable from independent samples of Yoruba and Japanese in all ways investigated. However, both the CHB and the CEU were distinguishable from all other HGDP-CEPH populations with autosomal markers, and both showed Y-STR similarities to unusually large numbers of populations, perhaps reflecting their admixed origins. The CHB and JPT are readily distinguished from one another with both autosomal and Y-chromosomal markers, and results obtained after combining them into a single sample should be interpreted with caution. The CEU are better described as being of Western European ancestry than of Northern European ancestry as often reported. Both the CHB and CEU show subtle but detectable signs of admixture. Thus the YRI and JPT samples are well suited to standard population-genetic studies, but the CHB and CEU less so.

  1. Evaluation of cysticercus-specific IgG (total and subclasses) and IgE antibody responses in cerebrospinal fluid samples from patients with neurocysticercosis showing intrathecal production of specific IgG antibodies

    Directory of Open Access Journals (Sweden)

    Lisandra Akemi Suzuki

    Full Text Available In the present study, an enzyme-linked immunosorbent assay (ELISA) standardized with vesicular fluid of Taenia solium cysticerci was used to screen for IgG (total and subclasses) and IgE antibodies in cerebrospinal fluid (CSF) samples from patients with neurocysticercosis showing intrathecal production of specific IgG antibodies and patients with other neurological disorders. The following results were obtained: IgG-ELISA: 100% sensitivity (median of the ELISA absorbances, MEA = 1.17) and 100% specificity; IgG1-ELISA: 72.7% sensitivity (MEA = 0.49) and 100% specificity; IgG2-ELISA: 81.8% sensitivity (MEA = 0.46) and 100% specificity; IgG3-ELISA: 63.6% sensitivity (MEA = 0.12) and 100% specificity; IgG4-ELISA: 90.9% sensitivity (MEA = 0.85) and 100% specificity; IgE-ELISA: 93.8% sensitivity (MEA = 0.60) and 100% specificity. There were no significant differences between the sensitivities and specificities in the detection of IgG-ELISA and IgE-ELISA, although in CSF samples from patients with neurocysticercosis the MEA of the IgG-ELISA was significantly higher than that of the IgE-ELISA. The sensitivity and MEA values of the IgG4-ELISA were higher than the corresponding values for the other IgG subclasses. Future studies should address the contribution of IgG4 and IgE antibodies to the physiopathology of neurocysticercosis.

  2. A Piece of Paper Falling Faster than Free Fall

    Science.gov (United States)

    Vera, F.; Rivera, R.

    2011-01-01

    We report a simple experiment that clearly demonstrates a common error in the explanation of the classic experiment where a small piece of paper is put over a book and the system is let fall. This classic demonstration is used in introductory physics courses to show that after eliminating the friction force with the air, the piece of paper falls…

  3. Hybrid Cubature Kalman filtering for identifying nonlinear models from sampled recording: Estimation of neuronal dynamics.

    Science.gov (United States)

    Madi, Mahmoud K; Karameh, Fadi N

    2017-01-01

    Kalman filtering methods have long been regarded as efficient adaptive Bayesian techniques for estimating hidden states in models of linear dynamical systems under Gaussian uncertainty. The recent advent of the Cubature Kalman filter (CKF) has extended this efficient estimation property to nonlinear systems, and also to hybrid nonlinear problems whereby the processes are continuous and the observations are discrete (continuous-discrete CD-CKF). Employing CKF techniques, therefore, carries high promise for modeling many biological phenomena where the underlying processes exhibit inherently nonlinear, continuous, and noisy dynamics and the associated measurements are uncertain and time-sampled. This paper investigates the performance of cubature filtering (CKF and CD-CKF) in two flagship problems arising in the field of neuroscience upon relating brain functionality to aggregate neurophysiological recordings: (i) estimation of the firing dynamics and the neural circuit model parameters from electric potentials (EP) observations, and (ii) estimation of the hemodynamic model parameters and the underlying neural drive from BOLD (fMRI) signals. First, in simulated neural circuit models, estimation accuracy was investigated under varying levels of observation noise (SNR), process noise structures, and observation sampling intervals (dt). When compared to the CKF, the CD-CKF consistently exhibited better accuracy for a given SNR, sharp accuracy increase with higher SNR, and persistent error reduction with smaller dt. Remarkably, CD-CKF accuracy shows only a mild deterioration for non-Gaussian process noise, specifically with Poisson noise, a commonly assumed form of background fluctuations in neuronal systems. Second, in simulated hemodynamic models, parametric estimates were consistently improved under CD-CKF. Critically, time-localization of the underlying neural drive, a determinant factor in fMRI-based functional connectivity studies, was significantly more accurate
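
    The cubature rule underlying the CKF is compact enough to show directly. The standard third-degree spherical-radial rule approximates an n-dimensional Gaussian expectation with 2n equally weighted points placed at plus or minus sqrt(n) along the columns of a covariance square root (the sketch below illustrates the rule itself, not the authors' estimator):

```python
import math

def cubature_points(mean, cov_sqrt):
    """Third-degree spherical-radial cubature rule: 2n equally weighted
    points at mean +/- sqrt(n) * (i-th column of the covariance square
    root); each point carries weight 1 / (2n)."""
    n = len(mean)
    pts = []
    for i in range(n):
        col = [row[i] for row in cov_sqrt]
        for sign in (+1.0, -1.0):
            pts.append([m + sign * math.sqrt(n) * c for m, c in zip(mean, col)])
    return pts

# Standard 2-D Gaussian: identity covariance square root gives 4 points
pts = cubature_points([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

    Averaging the points reproduces the mean, and averaging their squared coordinates reproduces the unit covariance, which is exactly the property the CKF exploits to propagate moments through nonlinear dynamics.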

  5. Probabilistic simulation of mesoscopic “Schrödinger cat” states

    Energy Technology Data Exchange (ETDEWEB)

    Opanchuk, B.; Rosales-Zárate, L.; Reid, M.D.; Drummond, P.D., E-mail: pdrummond@swin.edu.au

    2014-02-01

    We carry out probabilistic phase-space sampling of mesoscopic Schrödinger cat quantum states, demonstrating multipartite Bell violations for up to 60 qubits. We use states similar to those generated in photonic and ion-trap experiments. These results show that mesoscopic quantum superpositions are directly accessible to probabilistic sampling, and we analyze the properties of sampling errors. We also demonstrate dynamical simulation of super-decoherence in ion traps. Our computer simulations can be either exponentially faster or slower than experiment, depending on the correlations measured.

  6. Sampling from complex networks with high community structures.

    Science.gov (United States)

    Salehi, Mostafa; Rabiee, Hamid R; Rajabi, Arezo

    2012-06-01

    In this paper, we propose a novel link-tracing sampling algorithm, based on the concepts from PageRank vectors, to sample from networks with high community structures. Our method has two phases: (1) sampling the closest nodes to the initial nodes by approximating personalized PageRank vectors and (2) jumping to a new community by using PageRank vectors and unknown neighbors. Empirical studies on several synthetic and real-world networks show that the proposed method improves the performance of network sampling compared to the popular link-based sampling methods in terms of accuracy and visited communities.
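
    The first phase rests on personalized PageRank, which can be illustrated with a toy power-iteration estimate on a small two-community graph (a generic sketch, not the authors' approximation algorithm; the graph and the function name are invented for illustration):

```python
def personalized_pagerank(adj, seed, alpha=0.85, iters=100):
    """Power-iteration estimate of a personalized PageRank vector:
    a random walk that restarts at `seed` with probability 1 - alpha."""
    nodes = list(adj)
    rank = {v: 0.0 for v in nodes}
    rank[seed] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in nodes}
        nxt[seed] += 1.0 - alpha  # restart mass goes back to the seed
        for v in nodes:
            share = alpha * rank[v] / len(adj[v])
            for u in adj[v]:
                nxt[u] += share
        rank = nxt
    return rank

# Two triangles joined by a single bridge edge: a crude two-community graph
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
ppr = personalized_pagerank(adj, seed=0)
# Mass concentrates in the seed's community, which is what makes
# personalized PageRank useful for sampling nodes close to the seed.
```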

  7. Sampling for Beryllium Surface Contamination using Wet, Dry and Alcohol Wipe Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, Kent [Central Missouri State Univ., Warrensburg, MO (United States)

    2004-12-01

    This research project was conducted at the National Nuclear Security Administration's Kansas City Plant, operated by Honeywell Federal Manufacturing and Technologies, in conjunction with the Safety Sciences Department of Central Missouri State University, to compare relative removal efficiencies of three wipe sampling techniques currently used at Department of Energy facilities. Efficiencies of removal of beryllium contamination from typical painted surfaces were tested by wipe sampling with dry Whatman 42 filter paper, with water-moistened (Ghost Wipe) materials, and by methanol-moistened wipes. Test plates were prepared using 100 mm X 15 mm Pyrex Petri dishes with interior surfaces spray painted with a bond coat primer. To achieve uniform deposition over the test plate surface, 10 ml aliquots of solution containing 1 beryllium and 0.1 ml of metal working fluid were transferred to the test plates and subsequently evaporated. Metal working fluid was added to simulate the slight oiliness common on surfaces in metal working shops where fugitive oil mist accumulates over time. Sixteen test plates for each wipe method (dry, water, and methanol) were processed and sampled using a modification of wiping patterns recommended by OSHA Method 125G. Laboratory and statistical analysis showed that methanol-moistened wipe sampling removed significantly more (about twice as much) beryllium/oil-film surface contamination as water-moistened wipes (p < 0.001), which removed significantly more (about twice as much) residue as dry wipes (p < 0.001). Evidence for beryllium sensitization via skin exposure argues in favor of wipe sampling with wetting agents that provide enhanced residue removal efficiency.

  8. DOE-2 sample run book: Version 2.1E

    Energy Technology Data Exchange (ETDEWEB)

    Winkelmann, F.C.; Birdsall, B.E.; Buhl, W.F.; Ellington, K.L.; Erdem, A.E. [Lawrence Berkeley Lab., CA (United States); Hirsch, J.J.; Gates, S. [Hirsch (James J.) and Associates, Camarillo, CA (United States)

    1993-11-01

    The DOE-2 Sample Run Book shows inputs and outputs for a variety of building and system types. The samples start with a simple structure and continue to a high-rise office building, a medical building, three small office buildings, a bar/lounge, a single-family residence, a small office building with daylighting, a single-family residence with an attached sunspace, a "parameterized" building using input macros, and a metric input/output example. All of the samples use Chicago TRY weather. The main purpose of the Sample Run Book is instructional. It shows the relationship of LOADS-SYSTEMS-PLANT-ECONOMICS inputs, displays various input styles, and illustrates many of the basic and advanced features of the program. Many of the sample runs are preceded by a sketch of the building showing its general appearance and the zoning used in the input. In some cases we also show a 3-D rendering of the building as produced by the program DrawBDL. Descriptive material has been added as comments in the input itself. We find that a number of users have loaded these samples onto their editing systems and use them as "templates" for creating new inputs. Another way of using them would be to store various portions as files that can be read into the input using the ##include command, which is part of the Input Macro feature introduced in version DOE-2.1D. Note that the energy rate structures here are the same as in the DOE-2.1D samples, but have been rewritten using the new DOE-2.1E commands and keywords for ECONOMICS. The samples contained in this report are the same as those found on the DOE-2 release files. However, the output numbers that appear here may differ slightly from those obtained from the release files. The output on the release files can be used as a check set to compare results on your computer.

  9. Environmental sample accounting at the Savannah River Plant

    International Nuclear Information System (INIS)

    Zeigler, C.C.; Wood, M.B.

    1978-01-01

    At the Savannah River Plant Environmental Monitoring Laboratories, a computer-based systematic accounting method was developed to ensure that all scheduled samples are collected, processed through the laboratory, and counted without delay. The system employs an IBM 360/195 computer with a magnetic tape master file, an online disk file, and cathode ray tube (CRT) terminals. Scheduling and accounting are accomplished using computer-generated schedules, bottle labels, and output/input cards. A printed card is issued for the collecting, analyzing, and counting of each scheduled sample. The card also contains information for the personnel who are to perform the work, e.g., sample location, aliquot to be processed, and procedure to be used. Manual entries are made on the card when each step in the process is completed. Additional pertinent data such as the reason a sample is not collected, the need for a nonstandard aliquot, and field measurement results are keypunched and then read into the computer files as required. The computer files are audited daily and summaries showing samples not processed within pre-established normal schedules are provided. The progress of sample analyses is readily determined at any time using the CRT terminal. Historic data are maintained on magnetic tape, and workload summaries showing the number of samples and number of determinations per month are issued. (author)

  10. Determinants of social media usage among a sample of rural South African youth

    Directory of Open Access Journals (Sweden)

    Herring Shava

    2018-03-01

    Full Text Available Background: Youths have been found to utilise and adopt information communication technology (ICT) faster than any other population cohort. This has been aided by the advent of social media, especially Facebook and Instagram, as platforms of choice. Calls have been made for more research (especially in rural communities) on the usage of ICT platforms such as social media among the youth as a basis for interventions that not only allow for better communication but also for learning.   Objectives: The research investigated the relationship between knowledge sharing, habit and obligation in relation to social media usage among a sample of rural South African youth.   Method: This study is descriptive by design. Primary data were collected from 447 youths domiciled within a rural community in the Eastern Cape Province of South Africa using a self-administered questionnaire. The respondents to the study were all social media users. A combination of descriptive statistics and Pearson’s correlation analysis was used to make meaning of the data.   Results: The study found a significant positive correlation in all three independent variables (knowledge sharing, habit and obligation) with the dependent variable (social media usage) concerning Facebook usage among the sample of South African rural youth.   Conclusion: Based on the findings of the research, recommendations and implications with regard to theory and practice are made.

  11. Sample preparation strategies for food and biological samples prior to nanoparticle detection and imaging

    DEFF Research Database (Denmark)

    Larsen, Erik Huusfeldt; Löschner, Katrin

    2014-01-01

    microscopy (TEM) proved to be necessary for troubleshooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation aiming at degrading the sample matrix and to liberate the AgNPs from chicken meat into liquid suspension. The resulting...... AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80 % recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed...... for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...

  12. On-line coupling of sample preconcentration by LVSEP with gel electrophoretic separation on T-channel chips.

    Science.gov (United States)

    Kitagawa, Fumihiko; Kinami, Saeko; Takegawa, Yuuki; Nukatsuka, Isoshi; Sueyoshi, Kenji; Kawai, Takayuki; Otsuka, Koji

    2017-01-01

    To achieve an on-line coupling of sample preconcentration by large-volume sample stacking with an electroosmotic flow pump (LVSEP) with microchip gel electrophoresis (MCGE), a sample solution, a background solution for LVSEP and a sieving solution for MCGE were loaded in a T-form channel and three reservoirs on PDMS microchips. By utilizing the difference in the flow resistance of the two channels, a low-viscosity sample and a viscous polymer solution were easily introduced into the LVSEP and MCGE channels, respectively. Fluorescence imaging of the sequential LVSEP-MCGE processes clearly demonstrated that a faster stacking of anionic fluorescein and successive introduction into the MCGE channel can be carried out on the T-channel chip. To evaluate the preconcentration performance, a conventional MCZE analysis of fluorescein on the cross-channel chip was compared with LVSEP-MCGE on the short T-channel chip; as a result, the sensitivity enhancement factor (SEF) was estimated to be 370. The repeatability of the peak height was good, with an RSD value of 3.2%, indicating the robustness of the enrichment performance. In the successive LVSEP-MCGE analysis of φX174/HaeIII digest, the DNA fragments were well enriched to a sharp peak in the LVSEP channel, and they were separated in the MCGE channel, whose electropherogram closely resembled that of the conventional MCGE. The SEF values for the DNA fragments were calculated to range from 74 to 108. Thus, the successive LVSEP-MCGE analysis was effective for both preconcentrating and separating DNA fragments. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
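The expected-gain calculation described above can be sketched numerically. In this hedged illustration (all parameter values are invented, not taken from the study), the mean of n cues drawn from a circular Gaussian misses the target center by a Rayleigh-distributed distance, so the probability of touching a target of radius r is 1 - exp(-r^2 * n / (2 * sigma^2)), and the optimal sample size maximizes (reward - n * cost) * P(hit):

```python
import math

def p_hit(n, sigma, radius):
    """P(mean of n cues lands within `radius` of the target center).
    The mean of n draws from a circular Gaussian has spread sigma/sqrt(n),
    so the miss distance is Rayleigh-distributed."""
    return 1.0 - math.exp(-(radius ** 2) * n / (2.0 * sigma ** 2))

def expected_gain(n, reward, cost, sigma, radius):
    # Each cue costs `cost`; the reward is earned only on a successful touch.
    return (reward - n * cost) * p_hit(n, sigma, radius)

def optimal_samples(reward, cost, sigma, radius, n_max=100):
    # Exhaustively search the (small) discrete space of sample sizes.
    return max(range(1, n_max + 1),
               key=lambda n: expected_gain(n, reward, cost, sigma, radius))

# Hypothetical condition: 100-point reward, 2 points per cue,
# cue spread of 40 px, target radius of 20 px.
n_star = optimal_samples(reward=100, cost=2, sigma=40, radius=20)
```

Because the ideal observer here ignores the current sample dispersion, this sketch fixes sigma in advance; participants in the study instead appeared to use the observed dispersion when deciding whether to stop.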

  14. Tokyo Motor Show 2003; Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The text that follows presents the different technologies exhibited at the 37th Tokyo Motor Show. The report points out the main development trends in the Japanese automobile industry. Hybrid electric-powered vehicles and those equipped with fuel cells were highlighted by the Japanese manufacturers, which devote considerable budgets to research on less polluting vehicles. The exhibited models, although differing from one manufacturer to another, all use a hybrid fuel cell/battery system. The manufacturers also stressed intelligent systems for navigation and safety, as well as design and comfort. (O.M.)

  15. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples and/or the negative set must represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
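The stage-to-stage negative selection can be sketched as straightforward hard-negative mining; the scoring interface and the toy scores below are assumptions for illustration, not the paper's implementation:

```python
def select_informative_negatives(negatives, stage_score, budget):
    """Rank candidate negatives by the current stage's score and keep the
    hardest ones: the false positives the stage is most confident about
    are the most informative training samples for the next stage.

    negatives   : candidate negative samples
    stage_score : callable mapping a sample to the current stage's score
                  (higher = more object-like); an assumed interface
    budget      : number of negatives to pass to the next stage
    """
    # Hardest negatives first: the highest-scoring samples are closest to
    # being accepted by the current stage.
    ranked = sorted(negatives, key=stage_score, reverse=True)
    return ranked[:budget]

# Toy usage: a dict of scores stands in for a real stage classifier.
scores = {"a": 0.9, "b": 0.1, "c": 0.7, "d": 0.4}
hard = select_informative_negatives(list(scores), scores.get, budget=2)
# hard == ["a", "c"]
```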

  16. The effects of sampling on the efficiency and accuracy of k-mer indexes: Theoretical and empirical comparisons using the human genome.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2017-01-01

    One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index such as BLAST. A big problem with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing the space needed, and also query time, is sampling where only some k-mer occurrences are stored. Most previous work uses hard sampling, in which enough k-mer occurrences are retained so that all similar sequences are guaranteed to be found. In contrast, we study soft sampling, which further reduces the number of stored k-mer occurrences at a cost of decreasing query accuracy. We focus on finding highly similar local alignments (HSLA) over nucleotide sequences, an operation that is fundamental to biological applications such as cDNA sequence mapping. For our comparison, we use the NCBI BLAST tool with the human genome and human ESTs. When identifying HSLAs, we find that soft sampling significantly reduces both index size and query time with relatively small losses in query accuracy. For the human genome and HSLAs of length at least 100 bp, soft sampling reduces index size 4-10 times more than hard sampling and processes queries 2.3-6.8 times faster, while still achieving retention rates of at least 96.6%. When we apply soft sampling to the problem of mapping ESTs against the genome, we map more than 98% of ESTs perfectly while reducing the index size by a factor of 4 and query time by 23.3%. These results demonstrate that soft sampling is a simple but effective strategy for performing efficient searches for HSLAs. We also provide a new model for sampling with BLAST that predicts empirical retention rates with reasonable accuracy by modeling two key problem factors.
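The difference between storing all k-mer occurrences and sampling them can be sketched with a toy index. This step-sampling scheme is a simplified stand-in for the soft sampling studied in the paper, and the sequence is invented:

```python
from collections import defaultdict

def build_kmer_index(seq, k, step=1):
    """Map each k-mer to its list of occurrence positions.
    step > 1 gives a sampled index that stores only every `step`-th
    position, trading query accuracy (recall) for index size."""
    index = defaultdict(list)
    for pos in range(0, len(seq) - k + 1, step):
        index[seq[pos:pos + k]].append(pos)
    return index

def query(index, pattern, k):
    """Candidate alignment positions for `pattern` via its first k-mer."""
    return index.get(pattern[:k], [])

genome = "ACGTACGTACGTTTACGT"
full = build_kmer_index(genome, k=4)            # every occurrence stored
soft = build_kmer_index(genome, k=4, step=4)    # ~1/4 of positions stored
```

Note that true hard sampling keeps enough occurrences to guarantee that all similar sequences are still found; the sketch above simply subsamples, which is the soft regime where some hits may be missed.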

  17. Hanford Site Environmental Surveillance Master Sampling Schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1999-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, ''General Environmental Protection Program,'' and DOE Order 5400.5, ''Radiation Protection of the Public and the Environment.'' The sampling methods are described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL-91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the CY1999 schedules for the routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section includes the sampling location, sample type, and analyses to be performed on the sample. In some cases, samples are scheduled on a rotating basis and may not be collected in 1999, in which case the anticipated year for collection is provided. In addition, a map is included for each medium showing approximate sampling locations.

  18. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  19. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  20. Samples and Sampling Protocols for Scientific Investigations | Joel ...

    African Journals Online (AJOL)

    ... from sampling, through sample preparation, calibration to final measurement and reporting. This paper, therefore, offers useful information on practical guidance on sampling protocols in line with best practice and international standards. Keywords: Sampling, sampling protocols, chain of custody, analysis, documentation ...

  1. Forensic Comparison of Soil Samples Using Nondestructive Elemental Analysis.

    Science.gov (United States)

    Uitdehaag, Stefan; Wiarda, Wim; Donders, Timme; Kuiper, Irene

    2017-07-01

    Soil can play an important role in forensic cases in linking suspects or objects to a crime scene by comparing samples from the crime scene with samples derived from items. This study uses an adapted ED-XRF analysis (sieving instead of grinding to prevent destruction of microfossils) to produce elemental composition data of 20 elements. Different data processing techniques and statistical distances were evaluated using data from 50 samples and the log-likelihood-ratio cost (Cllr). The best performing combination, Canberra distance, relative data, and square root values, is used to construct a discriminative model. Examples of the spatial resolution of the method in crime scenes are shown for three locations, and sampling strategy is discussed. Twelve test cases were analyzed, and results showed that the method is applicable. The study shows how the combination of an analysis technique, a database, and a discriminative model can be used to compare multiple soil samples quickly. © 2016 American Academy of Forensic Sciences.
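The best-performing combination reported above (Canberra distance on relative, square-root-transformed data) can be sketched as follows; the exact preprocessing order is an assumption, and the toy profiles stand in for real 20-element ED-XRF measurements:

```python
import math

def preprocess(profile):
    """Relative data (normalize to sum 1), then square-root transform."""
    total = sum(profile)
    return [math.sqrt(x / total) for x in profile]

def canberra(u, v):
    """Canberra distance: sum of |u_i - v_i| / (|u_i| + |v_i|),
    skipping elements where both values are zero."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(u, v) if a or b)

def soil_distance(sample_a, sample_b):
    return canberra(preprocess(sample_a), preprocess(sample_b))

# Toy 3-element profiles standing in for 20-element elemental data.
d_same = soil_distance([10.0, 5.0, 1.0], [10.0, 5.0, 1.0])
d_diff = soil_distance([10.0, 5.0, 1.0], [1.0, 5.0, 10.0])
```

Normalizing to relative concentrations removes overall signal-strength differences between measurements, and the square-root transform damps the influence of the most abundant elements, which is presumably why this combination minimized the Cllr.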

  2. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, PG(b, z), is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the PG(1, z) distribution. We build two new samplers that offer improved performance when sampling from the PG(b, z) distribution and b is not unity.
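PG(b, z) has a well-known infinite sum-of-gammas representation; a naive truncated sampler based on it (not one of the improved samplers the paper builds) can be sketched as:

```python
import math
import random

def sample_pg(b, z, trunc=200, rng=random):
    """Approximate draw from PG(b, z) by truncating the representation
    PG(b, z) = (1 / (2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2 / (4*pi^2)),
    where g_k ~ Gamma(b, 1). Truncation introduces a small negative bias."""
    zz = (z ** 2) / (4.0 * math.pi ** 2)
    total = 0.0
    for k in range(1, trunc + 1):
        g_k = rng.gammavariate(b, 1.0)
        total += g_k / ((k - 0.5) ** 2 + zz)
    return total / (2.0 * math.pi ** 2)

# Sanity check against the known mean E[PG(b, z)] = (b / (2z)) * tanh(z / 2).
rng = random.Random(0)
draws = [sample_pg(1.0, 1.0, rng=rng) for _ in range(4000)]
mean = sum(draws) / len(draws)
# (1/2) * tanh(1/2) is about 0.2311; the sample mean should land nearby.
```

This term-by-term approach is slow and biased, which is precisely the kind of inefficiency the alternate and approximate techniques in the paper are designed to avoid.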

  3. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  4. Lightweight link dimensioning using sFlow sampling

    DEFF Research Database (Denmark)

    de Oliviera Schmidt, Ricardo; Sadre, Ramin; Sperotto, Anna

    2013-01-01

    not be trivial in high-speed links. Aiming at scalability, operators often deploy packet sampling for monitoring, but little is known about how it affects link dimensioning. In this paper we assess the feasibility of lightweight link dimensioning using sFlow, a widely deployed traffic monitoring tool. We...... implement the sFlow sampling algorithm and use a previously proposed and validated dimensioning formula that needs traffic variance. We validate our approach using packet captures from real networks. Results show that the proposed procedure is successful for a range of sampling rates and that, due to the randomness...... of the sampling algorithm, the error introduced by scaling the traffic variance yields more conservative results that cope with short-term traffic fluctuations....
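The variance-based dimensioning formula referred to above is, in this line of work, typically of the form C(T, eps) = rho + (1/T) * sqrt(2 ln(1/eps) * v(T)); the sketch below assumes that form, and the trace statistics are invented for illustration:

```python
import math

def required_capacity(mean_rate, variance, timescale, eps):
    """Minimum link capacity C(T, eps) = rho + (1/T) * sqrt(2 ln(1/eps) * v(T)).

    mean_rate : rho, average traffic rate (bytes/s)
    variance  : v(T), variance of traffic arriving in intervals of length T
    timescale : T, interval length in seconds
    eps       : allowed probability that traffic in a T-interval exceeds C*T
    """
    margin = math.sqrt(2.0 * math.log(1.0 / eps) * variance)
    return mean_rate + margin / timescale

# Hypothetical trace: 10 MB/s mean rate, variance of 4e12 bytes^2 at T = 1 s,
# and at most 1% of intervals allowed to exceed the provisioned capacity.
cap = required_capacity(mean_rate=10e6, variance=4e12, timescale=1.0, eps=0.01)
```

The burstiness of the traffic enters only through v(T), which is exactly why sampling matters: scaling a variance estimated from sFlow samples back to the full traffic introduces the (conservative) error the abstract describes.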

  5. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  6. Statistical literacy and sample survey results

    Science.gov (United States)

    McAlevey, Lynn; Sullivan, Charles

    2010-10-01

    Sample surveys are widely used in the social sciences and business. The news media almost daily quote from them, yet they are widely misused. Using students with prior managerial experience embarking on an MBA course, we show that common sample survey results are misunderstood even by those managers who have previously done a statistics course. In general, they fare no better than managers who have never studied statistics. There are implications for teaching, especially in business schools, as well as for consulting.

  7. Validation of the Cognition Test Battery for Spaceflight in a Sample of Highly Educated Adults.

    Science.gov (United States)

    Moore, Tyler M; Basner, Mathias; Nasrini, Jad; Hermosillo, Emanuel; Kabadi, Sushila; Roalf, David R; McGuire, Sarah; Ecker, Adrian J; Ruparel, Kosha; Port, Allison M; Jackson, Chad T; Dinges, David F; Gur, Ruben C

    2017-10-01

    Neuropsychological changes that may occur due to the environmental and psychological stressors of prolonged spaceflight motivated the development of the Cognition Test Battery. The battery was designed to assess multiple domains of neurocognitive functions linked to specific brain systems. Tests included in Cognition have been validated, but not in high-performing samples comparable to astronauts, which is an essential step toward ensuring their usefulness in long-duration space missions. We administered Cognition (on laptop and iPad) and the WinSCAT, counterbalanced for order and version, in a sample of 96 subjects (50% women; ages 25-56 yr) with at least a Master's degree in science, technology, engineering, or mathematics (STEM). We assessed the associations of age, sex, and administration device with neurocognitive performance, and compared the scores on the Cognition battery with those of WinSCAT. Confirmatory factor analysis compared the structure of the iPad and laptop administration methods using Wald tests. Age was associated with longer response times (mean β = 0.12) and less accurate (mean β = -0.12) performance, women had longer response times on psychomotor (β = 0.62), emotion recognition (β = 0.30), and visuo-spatial (β = 0.48) tasks, men outperformed women on matrix reasoning (β = -0.34), and performance on an iPad was generally faster (mean β = -0.55). The WinSCAT appeared heavily loaded with tasks requiring executive control, whereas Cognition assessed a larger variety of neurocognitive domains. Overall results supported the interpretation of Cognition scores as measuring their intended constructs in high-performing astronaut analog samples. Moore TM, Basner M, Nasrini J, Hermosillo E, Kabadi S, Roalf DR, McGuire S, Ecker AJ, Ruparel K, Port AM, Jackson CT, Dinges DF, Gur RC. Validation of the Cognition Test Battery for spaceflight in a sample of highly educated adults. Aerosp Med Hum Perform. 2017; 88(10):937-946.

  8. An integrated sample pretreatment platform for quantitative N-glycoproteome analysis with combination of on-line glycopeptide enrichment, deglycosylation and dimethyl labeling

    Energy Technology Data Exchange (ETDEWEB)

    Weng, Yejing; Qu, Yanyan; Jiang, Hao; Wu, Qi [National Chromatographic Research and Analysis Center, Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Zhang, Lihua, E-mail: lihuazhang@dicp.ac.cn [National Chromatographic Research and Analysis Center, Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); Yuan, Huiming [National Chromatographic Research and Analysis Center, Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); Zhou, Yuan [National Chromatographic Research and Analysis Center, Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Zhang, Xiaodan; Zhang, Yukui [National Chromatographic Research and Analysis Center, Key Laboratory of Separation Science for Analytical Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China)

    2014-06-23

    Highlights: • An integrated platform for quantitative N-glycoproteome analysis was established. • On-line enrichment, deglycosylation and labeling could be achieved within 160 min. • A N2-assisted interface was applied to improve the compatibility of the platform. • The platform exhibited improved quantification accuracy, precision and throughput. - Abstract: Relative quantification of N-glycoproteomes shows great promise for the discovery of candidate biomarkers and therapeutic targets. The traditional protocol for quantitative analysis of glycoproteomes is usually performed off-line, and suffers from long sample preparation time and the risk of sample loss or contamination due to manual manipulation. In this study, a novel integrated sample preparation platform for quantitative N-glycoproteome analysis was established, with a combination of on-line N-glycopeptide capture by a HILIC column, sample buffer exchange by a N2-assisted HILIC–RPLC interface, deglycosylation by a hydrophilic PNGase F immobilized enzymatic reactor (hIMER) and solid dimethyl labeling on a C18 precolumn. To evaluate the performance of such a platform, two equal aliquots of immunoglobulin G (IgG) digests were sequentially pretreated, followed by MALDI-TOF MS analysis. The signal intensity ratio of heavy/light (H/L) labeled deglycosylated peptides with the equal aliquots was 1.00 (RSD = 6.2%, n = 3), much better than those obtained by the off-line protocol, with H/L ratio as 0.76 (RSD = 11.6%, n = 3). Additionally, the total on-line sample preparation time was greatly shortened to 160 min, much faster than that of the off-line approach (24 h). Furthermore, such an integrated pretreatment platform was successfully applied to analyze the two kinds of hepatocarcinoma ascites syngeneic cell lines with high (Hca-F) and low (Hca-P) lymph node metastasis rates. For H/L labeled Hca-P lysates with the equal aliquots, 99.6% of log2 ratios (H/L) of quantified glycopeptides ranged from −1

  9. Permeability of gypsum samples dehydrated in air

    Science.gov (United States)

    Milsch, Harald; Priegnitz, Mike; Blöcher, Guido

    2011-09-01

    We report on changes in rock permeability induced by devolatilization reactions using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air (dry) for up to 800 h at ambient pressure and temperatures between 378 and 423 K. Subsequently, the reaction kinetics, the induced changes in porosity, and the concurrent evolution of sample permeability were constrained. Weighing the heated samples at predefined time intervals yielded the reaction progress, where the stoichiometric mass balance indicated an ultimate and complete dehydration to anhydrite regardless of temperature. Porosity was shown to increase continuously with reaction progress from approximately 2% to 30%, whilst the initial bulk volume remained unchanged. Within these limits permeability significantly increased with porosity by almost three orders of magnitude from approximately 7 × 10⁻¹⁹ m² to 3 × 10⁻¹⁶ m². We show that - when mechanical and hydraulic feedbacks can be excluded - permeability, reaction progress, and porosity are related unequivocally.

  10. Saccadic reaction times to audiovisual stimuli show effects of oscillatory phase reset.

    Directory of Open Access Journals (Sweden)

    Adele Diederich

    Full Text Available Initiating an eye movement towards a suddenly appearing visual target is faster when an accessory auditory stimulus occurs in close spatiotemporal vicinity. Such facilitation of saccadic reaction time (SRT is well-documented, but the exact neural mechanisms underlying the crossmodal effect remain to be elucidated. From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing. Specifically, it is assumed that the phase of an ongoing neural oscillation is shifted due to the occurrence of a sensory stimulus so that, across trials, phase values become highly consistent (phase reset. If one can identify the phase an oscillation is reset to, it is possible to predict when temporal windows of high and low excitability will occur. However, in behavioral experiments the pre-stimulus phase will be different on successive repetitions of the experimental trial, and average performance over many trials will show no signs of the modulation. Here we circumvent this problem by repeatedly presenting an auditory accessory stimulus followed by a visual target stimulus with a temporal delay varied in steps of 2 ms. Performing a discrete time series analysis on SRT as a function of the delay, we provide statistical evidence for the existence of distinct peak spectral components in the power spectrum. These frequencies, although varying across participants, fall within the beta and gamma range (20 to 40 Hz of neural oscillatory activity observed in neurophysiological studies of multisensory integration. Some evidence for high-theta/alpha activity was found as well. Our results are consistent with the phase reset hypothesis and demonstrate that it is amenable to testing by purely psychophysical methods. 
Thus, any theory of multisensory processes that connects specific brain states with patterns of saccadic responses should be able to account for traces of oscillatory activity in observable
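The discrete time series analysis described above amounts to a spectral analysis of SRT as a function of audio-visual delay. A minimal sketch with synthetic data follows; the 30 Hz component, amplitudes, and baseline are invented for illustration, while the 2 ms step matches the design described in the abstract:

```python
import cmath
import math

def power_spectrum(series, dt):
    """Discrete Fourier power spectrum of an evenly sampled series.
    Returns (frequency_hz, power) pairs up to the Nyquist frequency."""
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]  # remove the DC component
    spec = []
    for k in range(1, n // 2):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(centered))
        spec.append((k / (n * dt), abs(coeff) ** 2))
    return spec

# Synthetic SRT-vs-delay series: 2 ms delay steps, a 30 Hz oscillation
# (within the reported beta/gamma range) riding on a constant baseline.
dt = 0.002
srts = [200.0 + 5.0 * math.sin(2 * math.pi * 30.0 * i * dt) for i in range(250)]
peak_freq = max(power_spectrum(srts, dt), key=lambda fp: fp[1])[0]
```

Varying the audio-visual delay in 2 ms steps gives a 250 Hz effective sampling rate, so oscillations up to 125 Hz, comfortably covering the beta and gamma bands, are in principle recoverable from the SRT series.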

  11. FDA Food Code recommendations: how do popular US baking shows measure up?

    Directory of Open Access Journals (Sweden)

    Valerie Cadorett

    2018-05-01

    Full Text Available The purpose of this study was to determine if popular US baking shows follow the FDA Food Code recommendations and critical food safety principles. This cross-sectional study examined a convenience sample of 75 episodes from three popular baking shows. The three shows were about competitively baking cupcakes, competitively baking cakes, and baking in a popular local bakery. Twenty-five episodes from each show were viewed. Coding involved tallying how many times 17 FDA Food Code recommendations were or were not followed. On each show, bare hands frequently came in contact with ready-to-eat food. On a per-hour basis, this occurred 80, 155, and 176 times on shows 1-3, respectively. Hands were washed before cooking three times on the three shows and never for the recommended 20 seconds. On each show, many people touched food while wearing jewelry other than a plain wedding band, for an average of at least 7 people per hour on each show. Shows 1-3 had high rates of long-haired bakers not wearing hair restraints (11.14, 6.57, and 14.06 per hour, respectively). Shows 1 and 2 had high rates of running among the bakers (22.29 and 10.57 instances per hour, respectively). These popular baking shows do not demonstrate proper food safety techniques put forth by the FDA and do not contribute to the reduction of foodborne illnesses through proper food handling.

  12. Simulated tempering distributed replica sampling: A practical guide to enhanced conformational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Rauscher, Sarah; Pomes, Regis, E-mail: pomes@sickkids.ca

    2010-11-01

    Simulated tempering distributed replica sampling (STDR) is a generalized-ensemble method designed specifically for simulations of large molecular systems on shared and heterogeneous computing platforms [Rauscher, Neale and Pomes (2009) J. Chem. Theor. Comput. 5, 2640]. The STDR algorithm consists of an alternation of two steps: (1) a short molecular dynamics (MD) simulation; and (2) a stochastic temperature jump. Repeating these steps thousands of times results in a random walk in temperature, which allows the system to overcome energetic barriers, thereby enhancing conformational sampling. The aim of the present paper is to provide a practical guide to applying STDR to complex biomolecular systems. We discuss the details of our STDR implementation, which is a highly-parallel algorithm designed to maximize computational efficiency while simultaneously minimizing network communication and data storage requirements. Using a 35-residue disordered peptide in explicit water as a test system, we characterize the efficiency of the STDR algorithm with respect to both diffusion in temperature space and statistical convergence of structural properties. Importantly, we show that STDR provides a dramatic enhancement of conformational sampling compared to a canonical MD simulation.
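The stochastic temperature-jump step (2) can be sketched as a Metropolis move in temperature space. The ladder, weights, and energy below are toy values, and the actual STDR implementation (distributed replicas, shared temperature pool) involves considerably more machinery:

```python
import math
import random

def tempering_jump(energy, i, betas, weights, rng=random):
    """One stochastic temperature-jump step of simulated tempering.

    Proposes a neighboring inverse-temperature index j and accepts with the
    Metropolis probability min(1, exp((beta_i - beta_j)*E + (w_j - w_i))),
    which preserves the joint distribution p(x, i) ~ exp(-beta_i*E(x) + w_i).
    The weights w_i must be pre-tuned (e.g., from free-energy estimates)
    for the resulting walk in temperature to be roughly uniform.
    """
    j = i + rng.choice([-1, 1])
    if not 0 <= j < len(betas):
        return i  # proposal falls off the temperature ladder: reject
    log_acc = (betas[i] - betas[j]) * energy + (weights[j] - weights[i])
    if rng.random() < math.exp(min(0.0, log_acc)):
        return j
    return i

# Toy ladder: 4 temperatures (k_B folded into the energy units),
# zero weights, and a fixed configuration energy between MD segments.
betas = [1.0 / t for t in (300.0, 330.0, 363.0, 400.0)]
rng = random.Random(1)
idx = 0
for _ in range(1000):
    idx = tempering_jump(energy=-50.0, i=idx, betas=betas,
                         weights=[0.0] * len(betas), rng=rng)
```

Alternating this jump with short MD segments at temperature 1/betas[idx] produces the random walk in temperature that lets the system cross energetic barriers at high temperature and sample conformations at low temperature.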

  13. Rational Arithmetic Mathematica Functions to Evaluate the Two-Sided One Sample K-S Cumulative Sampling Distribution

    Directory of Open Access Journals (Sweden)

    J. Randall Brown

    2007-06-01

    Full Text Available One of the most widely used goodness-of-fit tests is the two-sided one sample Kolmogorov-Smirnov (K-S) test which has been implemented by many computer statistical software packages. To calculate a two-sided p value (evaluate the cumulative sampling distribution), these packages use various methods including recursion formulae, limiting distributions, and approximations of unknown accuracy developed over thirty years ago. Based on an extensive literature search for the two-sided one sample K-S test, this paper identifies an exact formula for sample sizes up to 31, six recursion formulae, and one matrix formula that can be used to calculate a p value. To ensure accurate calculation by avoiding catastrophic cancellation and eliminating rounding error, each of these formulae is implemented in rational arithmetic. For the six recursion formulae and the matrix formula, computational experience for sample sizes up to 500 shows that computational times are increasing functions of both the sample size and the number of digits in the numerator and denominator integers of the rational number test statistic. The computational times of the seven formulae vary immensely but the Durbin recursion formula is almost always the fastest. Linear search is used to calculate the inverse of the cumulative sampling distribution (find the confidence interval half-width) and tables of calculated half-widths are presented for sample sizes up to 500. Using calculated half-widths as input, computational times for the fastest formula, the Durbin recursion formula, are given for sample sizes up to two thousand.
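A matrix formula of this kind can be evaluated exactly in rational arithmetic with Python's Fraction type. The sketch below follows the Marsaglia-Tsang-Wang matrix formulation (a close relative of the Durbin approach discussed in the paper); it is illustrative, not the paper's Mathematica code:

```python
from fractions import Fraction
from math import factorial, ceil

def ks_cdf_exact(n, d):
    """Exact P(D_n <= d) for the two-sided one-sample K-S statistic,
    via the Marsaglia-Tsang-Wang matrix formula evaluated entirely in
    rational arithmetic, so no rounding error or cancellation occurs."""
    d = Fraction(d)
    if d <= Fraction(1, 2 * n):
        return Fraction(0)  # D_n >= 1/(2n) almost surely
    if d >= 1:
        return Fraction(1)
    k = ceil(n * d)                      # matrix size parameter
    h = k - n * d                        # exact rational, 0 <= h < 1
    m = 2 * k - 1
    # Build the m x m matrix H of the matrix formulation.
    H = [[Fraction(1) if i - j + 1 >= 0 else Fraction(0)
          for j in range(m)] for i in range(m)]
    for i in range(m):
        H[i][0] -= h ** (i + 1)          # first-column correction
        H[m - 1][i] -= h ** (m - i)      # last-row correction
    H[m - 1][0] += (2 * h - 1) ** m if 2 * h > 1 else Fraction(0)
    for i in range(m):
        for j in range(m):
            if i - j + 1 > 0:
                H[i][j] /= factorial(i - j + 1)
    # P(D_n <= d) = n!/n^n * (H^n)[k-1][k-1]
    P = H
    for _ in range(n - 1):
        P = [[sum(P[i][l] * H[l][j] for l in range(m))
              for j in range(m)] for i in range(m)]
    return Fraction(factorial(n), n ** n) * P[k - 1][k - 1]

# For n = 1, P(D_1 <= d) = 2d - 1 on [1/2, 1], so both values are exact:
p1 = ks_cdf_exact(1, Fraction(3, 4))   # exactly 1/2
p2 = ks_cdf_exact(2, Fraction(1, 2))   # exactly 1/2
```

Because every intermediate value is a Fraction, the result is an exact rational p value; the cost is that the integers in the numerators and denominators grow with n, which is exactly the computational-time behavior the abstract reports.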

  14. Comparing Respondent-Driven Sampling and Targeted Sampling Methods of Recruiting Injection Drug Users in San Francisco

    Science.gov (United States)

    Malekinejad, Mohsen; Vaudrey, Jason; Martinez, Alexis N.; Lorvick, Jennifer; McFarland, Willi; Raymond, H. Fisher

    2010-01-01

The objective of this article is to compare demographic characteristics, risk behaviors, and service utilization among injection drug users (IDUs) recruited from two separate studies in San Francisco in 2005, one which used targeted sampling (TS) and the other which used respondent-driven sampling (RDS). IDUs were recruited using TS (n = 651) and RDS (n = 534) and participated in quantitative interviews that included demographic characteristics, risk behaviors, and service utilization. Prevalence estimates and 95% confidence intervals (CIs) were calculated to assess whether there were differences in these variables by sampling method. There was overlap in 95% CIs for all demographic variables except African American race (TS: 45%, 53%; RDS: 29%, 44%). Maps showed that the proportions of IDUs across zip codes were similar for the TS and RDS samples, with the exception of a single zip code that was more represented in the TS sample. This zip code includes an isolated, predominantly African American neighborhood where only the TS study had a field site. Risk behavior estimates were similar for both TS and RDS samples, although self-reported hepatitis C infection was lower in the RDS sample. In terms of service utilization, more IDUs in the RDS sample reported no recent use of drug treatment and syringe exchange program services. Our study suggests that perhaps a hybrid sampling plan is best suited for recruiting IDUs in San Francisco, whereby the more intensive ethnographic and secondary analysis components of TS would aid in the planning of seed placement and field locations for RDS. PMID:20582573
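The overlapping-CI screening described above can be sketched with a simple Wald interval (the studies' actual interval methods, especially for the RDS estimates, are not stated in the abstract, so this is purely illustrative):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% Wald confidence interval for a sample proportion."""
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - half, p_hat + half)

def intervals_overlap(a, b):
    """True if two (lo, hi) intervals share at least one point."""
    return a[0] <= b[1] and b[0] <= a[1]
```

Applied to the reported intervals for African American race, `intervals_overlap((0.45, 0.53), (0.29, 0.44))` is False, matching the one non-overlap the study flags.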

  15. 23 CFR Appendix A to Subpart D of... - Sample Show Cause Notice

    Science.gov (United States)

    2010-04-01

    ... female representation at each level of each trade and a list of minority employees. You are specifically... unacceptable level of minority and female employment in your operations, particularly in the semiskilled and skilled categories of employees. The Department of Labor regulations (41 CFR 60) implementing Executive...

  16. Perceived social environment and adolescents' well-being and adjustment: Comparing a foster care sample with a matched sample

    OpenAIRE

    Farruggia, SP; Greenberger, E; Chen, C; Heckhausen, J

    2006-01-01

    Previous research has demonstrated that former foster care youth are at risk for poor outcomes (e.g., more problem behaviors, more depression, lower self-esteem, and poor social relationships). It is not clear, however, whether these findings reflect preemancipation developmental deficits. This study used 163 preemancipation foster care youth and a matched sample of 163 comparison youth. Results showed that foster-care youth did not differ from the comparison sample on measures of well-being,...

  17. Direct detection of the AR-E211 G > A gene polymorphism from blood and tissue samples without DNA isolation.

    Science.gov (United States)

    Reptova, Silvie; Trtkova, Katerina Smesny; Kolar, Zdenek

    2014-04-01

The pathogenesis of prostate cancer (CaP) involves alterations in the gene structure of the androgen receptor (AR). The single nucleotide polymorphism AR-E211 G > A, localized in exon 1 of the AR gene (G1733A), was detected using a direct polymerase chain reaction and restriction digestion (PCR-RFLP) method on blood and tissue samples without prior DNA isolation. We used blood samples of patients with a diagnosis of benign prostatic hyperplasia (BPH) or CaP. From the monitored group of CaP patients, specimens were selected from formalin-fixed, paraffin-embedded tissue blocks with the morphology of BPH and CaP. The main objective of our study was to develop a method based on direct PCR-RFLP analysis of blood and tissue without prior DNA isolation, for faster genotyping analysis of a large number of samples. We found no statistically significant differences in the allelic percentages of the AR-E211 G > A polymorphism between BPH and CaP patients (p ≤ 0.8462). Genotyping of the AR-E211 G > A variant in blood was not identical with the tumor tissue genotyping analysis: significant agreement between the blood and tissue AR-E211 G > A polymorphism was confirmed only in non-tumor tissue foci. Although we analyzed a limited number of tissue samples, we suppose that the presence of the minor allele A may be associated with cancer transformation-induced changes of the modified AR gene.

  18. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study samples were taken from 681 Danish pig farms, during five weeks from February to March 2015. 
The evaluation showed that the sampling
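Step 3, assessing reproducibility through simulated sampling, can be sketched as repeated draws from a frame built from meat inspection data; `herd_sizes` below is a hypothetical stand-in for such a farm-level characteristic:

```python
import random
import statistics

def simulated_sampling(population, sample_size, n_reps, seed=1):
    """Draw many hypothetical samples from a sampling frame and collect
    the sample means of a farm-level characteristic, to gauge how
    reproducible a single realized convenience sample is."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_reps):
        sample = rng.sample(population, sample_size)
        means.append(statistics.fmean(sample))
    return means
```

If the mean observed in the realized sample falls within the central spread of the simulated means, the realized sample is not unusual relative to what the sampling procedure could have produced.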

  19. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples and/or the negative set must represent anything except the object of interest; as a result, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
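The stage-to-stage selection of informative false positives follows the usual hard-negative mining pattern; the sketch below assumes a generic `score_fn` and a 0.5 decision threshold, since the paper's exact criterion is not given in the abstract:

```python
def select_hard_negatives(negatives, score_fn, k, threshold=0.5):
    """Keep the k false positives the current stage scores highest --
    the 'most informative' negatives to train the next cascade stage."""
    false_positives = [x for x in negatives if score_fn(x) >= threshold]
    false_positives.sort(key=score_fn, reverse=True)
    return false_positives[:k]
```

Feeding only these near-miss negatives forward keeps each stage's training set balanced while concentrating effort on the errors the previous stage still makes.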

  20. Sampling and chemical analysis in environmental samples around Nuclear Power Plants and some environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yong Woo; Han, Man Jung; Cho, Seong Won; Cho, Hong Jun; Oh, Hyeon Kyun; Lee, Jeong Min; Chang, Jae Sook [KORTIC, Taejon (Korea, Republic of)

    2002-12-15

Twelve kinds of environmental samples, such as soil, seawater, underground water, etc., were collected around Nuclear Power Plants (NPPs). Tritium chemical analysis was performed on samples of rain water, pine needles, air, seawater, underground water, chinese cabbage, rice grains and milk sampled around the NPPs, and on surface seawater and rain water sampled over the country. Strontium was analyzed in soils sampled at 60 districts in Korea. Tritium was also analyzed in 21 samples of surface seawater around the Korean peninsula that were supplied by KFRDI (National Fisheries Research and Development Institute). Sampling and chemical analysis of environmental samples around the Kori, Woolsung, Youngkwang and Wooljin NPPs and the Taeduk science town for tritium and strontium analysis were managed according to plan. After analysis, all samples were handed over to KINS.

  1. Product forms in Gabor analysis for a quincunx-type sampling geometry

    NARCIS (Netherlands)

    Bastiaans, M.J.; Leest, van A.J.; Veen, J.P.

    1998-01-01

    Recently a new sampling lattice - the quincunx lattice - has been introduced [1] as a sampling geometry in the Gabor scheme, which geometry is different from the traditional rectangular sampling geometry. In this paper we will show how results that hold for rectangular sampling (see, for instance,

  2. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

The magnitude of 14 C/ 12 C variations in the period A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14 C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  3. Identifying the potential of changes to blood sample logistics using simulation

    DEFF Research Database (Denmark)

    Jørgensen, Pelle Morten Thomas; Jacobsen, Peter; Poulsen, Jørgen Hjelm

    2013-01-01

    of the simulation was to evaluate changes made to the transportation of blood samples between wards and the laboratory. The average- (AWT) and maximum waiting time (MWT) from a blood sample was drawn at the ward until it was received at the laboratory, and the distribution of arrivals of blood samples......, each of the scenarios was tested in terms of what amount of resources would give the optimal result. The simulations showed a big improvement potential in implementing a new technology/mean for transporting the blood samples. The pneumatic tube system showed the biggest potential lowering the AWT...

  4. Rapid and sensitive analysis of polychlorinated biphenyls and acrylamide in food samples using ionic liquid-based in situ dispersive liquid-liquid microextraction coupled to headspace gas chromatography.

    Science.gov (United States)

    Zhang, Cheng; Cagliero, Cecilia; Pierson, Stephen A; Anderson, Jared L

    2017-01-20

    A simple and rapid ionic liquid (IL)-based in situ dispersive liquid-liquid microextraction (DLLME) method was developed and coupled to headspace gas chromatography (HS-GC) employing electron capture (ECD) and mass spectrometry (MS) detection for the analysis of polychlorinated biphenyls (PCBs) and acrylamide at trace levels from milk and coffee samples. The chemical structures of the halide-based ILs were tailored by introducing various functional groups to the cations to evaluate the effect of different structural features on the extraction efficiency of the target analytes. Extraction parameters including the molar ratio of IL to metathesis reagent and IL mass were optimized. The effects of HS oven temperature and the HS sample vial volume on the analyte response were also evaluated. The optimized in situ DLLME method exhibited good analytical precision, good linearity, and provided detection limits down to the low ppt level for PCBs and the low ppb level for acrylamide in aqueous samples. The matrix-compatibility of the developed method was also established by quantifying acrylamide in brewed coffee samples. This method is much simpler and faster compared to previously reported GC-MS methods using solid-phase microextraction (SPME) for the extraction/preconcentration of PCBs and acrylamide from complex food samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  6. Automated sample preparation using membrane microtiter extraction for bioanalytical mass spectrometry.

    Science.gov (United States)

    Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H

    1997-01-01

    The development and application of membrane solid phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance the analytical precision a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min., about 30 times faster than manual solvent extraction or single cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high performance liquid chromatography mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (Ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.

  7. Hanford Site Environmental Surveillance Master Sampling Schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    2000-01-01

Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, General Environmental Protection Program, and DOE Order 5400.5, Radiation Protection of the Public and the Environment. The sampling design is described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL-91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the CY 2000 schedules for the routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section includes sampling locations, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis and may not be collected in 2000, in which case the anticipated year for collection is provided. In addition, a map showing approximate sampling locations is included for each media scheduled for collection.

  8. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

Based on experiences of a traffic flow performance meter this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates for example time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling are evaluated with respect to accuracy and precision. Packet delay traces have been ...
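The two sampling schemes can be sketched directly on a trace of packet delays (here with a fixed rather than random starting offset for the cluster trains, which is a simplification of the scheme described above):

```python
def systematic_sample(delays, period, start=0):
    """Every period-th packet delay, beginning at a fixed offset."""
    return delays[start::period]

def cluster_sample(delays, train_len, period, start=0):
    """Trains of train_len consecutive delays, one train every
    period packets."""
    out = []
    for base in range(start, len(delays), period):
        out.extend(delays[base:base + train_len])
    return out
```

Systematic samples preserve uniform spacing (useful for time and frequency analysis), while cluster trains capture short-range correlation between consecutive delays, which matters for jitter estimates.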

  9. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  10. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  11. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample size, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.

  12. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
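Treating the design as a simple random sample of lines, a standard design-based encounter rate variance estimator has the following shape (a sketch in the style of the estimators compared in the paper, with per-line counts n_k and line lengths l_k):

```python
def encounter_rate_var(counts, lengths):
    """Estimate var(n/L) treating the K transect lines as a simple
    random sample of lines (an 'R2'-style design-based estimator)."""
    K = len(counts)
    L = float(sum(lengths))        # total transect length
    n = float(sum(counts))         # total number of detections
    rate = n / L                   # overall encounter rate
    s = sum(l * l * (c / l - rate) ** 2
            for c, l in zip(counts, lengths))
    return K * s / (L * L * (K - 1))
```

Under a systematic design this simple-random-sample treatment can be positively biased when the population has strong spatial trends, which is the bias the paper shows poststratification reduces.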

  13. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r , for any large real number r . Then, for a sequence of processes each labeled by an integer size N , we compare how the classical and quantum required memories scale with N . In this setting, since both memories can diverge as N →∞ , the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N →∞ , but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  14. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  15. Faster-higher-stronger -- greener

    International Nuclear Information System (INIS)

    Burgess, A.

    2000-01-01

The Toronto Olympic Bid Committee is reported to have adopted a strong environmental orientation in its bid to bring the 2008 Olympic Games to Toronto. In a recent address, the President of the Committee outlined details of the bid's environmental component, which emphasizes the role of sustainable development within the Olympics and the consequences of this orientation on the design, construction and operation of facilities. The Toronto Bid Committee has gained inspiration and momentum for its 'green bid' from the host city of the 2000 Olympic Games, Sydney, Australia, which has won widespread praise for its efforts to clean up Homebush Bay, a brownfield site long seen as a liability for the city. The Toronto Bid Committee is making itself accountable for: creating the healthiest possible conditions for the athletes, visitors and residents; designing for sustainability; protecting, restoring and enhancing human and natural habitats; conserving resources and minimizing the ecological impact of the Games; promoting innovative, technically proven Canadian environmental technology; and fostering environmental awareness and education. The Committee intends to make the environment a priority and not just an afterthought in the bidding process. It hopes to develop specific goals and, where possible, quantifiable targets in non-polluting designs for all Olympic housing and sports facilities. Wherever possible, renewable power such as wind, solar and fuel cells will be used, and cleaner fuels such as natural gas where green power is not a viable option

  16. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    Economopoulos, G.R.; Klint, P.; Vinju, J.J.; Moor, de O.; Schwartzbach, M.I.

    2009-01-01

    Analysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of tokenization based

  17. Faster Scannerless GLR parsing

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); G.R. Economopoulos (Giorgos Robert); P. Klint (Paul)

    2008-01-01

    textabstractAnalysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of

  18. Faster, Practical GLL Parsing

    NARCIS (Netherlands)

    A. Afroozeh (Ali); A. Izmaylova (Anastasia)

    2015-01-01

    htmlabstractGeneralized LL (GLL) parsing is an extension of recursive-descent (RD) parsing that supports all context-free grammars in cubic time and space. GLL parsers have the direct relationship with the grammar that RD parsers have, and therefore, compared to GLR, are easier to understand, debug,

  19. Faster scannerless GLR parsing

    NARCIS (Netherlands)

    G.R. Economopoulos (Giorgos Robert); P. Klint (Paul); J.J. Vinju (Jurgen); O. de Moor; M.I. Schwartzbach

    2009-01-01

    textabstractAnalysis and renovation of large software portfolios requires syntax analysis of multiple, usually embedded, languages and this is beyond the capabilities of many standard parsing techniques. The traditional separation between lexer and parser falls short due to the limitations of

  20. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  1. Autonomous sample switcher for Mössbauer spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    López, J. H., E-mail: jolobotero@gmail.com; Restrepo, J., E-mail: jrestre@gmail.com [University of Antioquia, Group of Magnetism and Simulation, Institute of Physics (Colombia); Barrero, C. A., E-mail: cesar.barrero.meneses@gmail.com [University of Antioquia, Group of Solid State Physics, Institute of Physics (Colombia); Tobón, J. E., E-mail: nobotj@gmail.com; Ramírez, L. F., E-mail: luisf.ramirez@udea.edu.co; Jaramillo, J., E-mail: jdex87@gmail.com [University of Antioquia, Group of Scientific Instrumentation and Microelectronics, Institute of Physics (Colombia)

    2017-11-15

In this work we show the design and implementation of an autonomous sample switcher device to be used as part of the experimental set-up in transmission Mössbauer spectroscopy, which can be extended to other spectroscopic techniques employing radioactive sources. The changer is intended to minimize radiation exposure times for users or technical staff and to optimize the use of radioactive sources without compromising the resolution of measurements or spectra. This proposal is motivated firstly by the potential hazards arising from the use of radioactive sources, and secondly by the high costs involved and, in other cases, the short lifetimes, which make a suitable and optimum use of the sources crucial. The switcher system includes a PIC microcontroller for simple tasks involving sample displacement and positioning, in addition to a virtual instrument developed using LabView. Samples are shuffled sequentially, using the number of counts and the signal-to-noise ratio as selection criteria, while the virtual instrument allows remote monitoring of the status of the spectra from a PC via the Internet and control decisions to be taken. As an example, we show a case study involving a series of akaganeite samples. An efficiency and economic analysis is finally presented and discussed.

  2. Fully three-dimensional reconstruction from data collected on concentric cubes in Fourier space: implementation and a sample application to MRI [magnetic resonance imaging

    International Nuclear Information System (INIS)

    Herman, G.T.; Roberts, D.; Axel, L.

    1992-01-01

    An algorithm is proposed for rapid and accurate reconstruction from data collected in Fourier space at points arranged on a grid of concentric cubes. The whole process has computational complexity of the same order as required for the 3D fast Fourier transform and so (for medically relevant sizes of the data set) it is faster than backprojection into the same size rectangular grid. The design of the algorithm ensures that no interpolations are needed, in contrast to methods involving backprojection with their unavoidable interpolations. As an application, a 3D data collection method for MRI has been designed which directly samples the Fourier transform of the object to be reconstructed on concentric cubes as needed for the algorithm. (author)

  3. Water-borne pollutant sampling using porous suction samplers

    International Nuclear Information System (INIS)

    Baig, M.A.

    1997-01-01

    The common standard method of sampling water-borne pollutants in the vadose zone is core sampling, followed by extraction of the pore fluid. This method does not allow repeated sampling at the same location at later times. An alternative approach for sampling fluids (water-borne pollutants) from both the saturated and unsaturated regions of the vadose zone is to use porous suction samplers. There are three types of porous suction samplers: vacuum-operated samplers, pressure-vacuum lysimeters, and high-pressure vacuum samplers. The suction samplers are operated in the range of 0-70 centibars and usually consist of ceramic or polytetrafluoroethylene (PTFE) cups; the operating range of PTFE is higher than that of ceramic cups. These samplers are well suited for in situ and repeated sampling from the same location. This paper discusses the physical properties and operating conditions of such samplers to be utilized in environmental sampling. (author)

  4. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
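
    The core of descriptive sampling is easy to sketch: instead of drawing each input value at random, evaluate the inverse CDF at the midpoint of n equal-probability strata and then randomly permute the resulting values. A minimal illustration in Python (the standard-normal input distribution and the sample size are arbitrary choices for this sketch, not taken from the paper):

```python
import random
from statistics import NormalDist

def descriptive_sample(inv_cdf, n, rng=random):
    """Descriptive sampling: deterministic stratified values in random order.

    Evaluates the inverse CDF at the midpoint of each of n
    equal-probability strata, then shuffles the values (the random
    permutation matters when several inputs are combined in one run).
    """
    values = [inv_cdf((i + 0.5) / n) for i in range(n)]
    rng.shuffle(values)
    return values

# Compare against crude Monte Carlo sampling for a standard normal input.
nd = NormalDist()
ds = descriptive_sample(nd.inv_cdf, 1000)
cmcs = [nd.inv_cdf(random.random()) for _ in range(1000)]

mean_ds = sum(ds) / len(ds)      # essentially 0 by construction
mean_mc = sum(cmcs) / len(cmcs)  # fluctuates around 0 from run to run
```

    Because the stratum midpoints are symmetric, the DS sample reproduces the input distribution's moments far more tightly than CMCS at the same sample size, which is the property the paper's sample-size selection builds on.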

  5. The effect of sample preparation methods on glass performance

    International Nuclear Information System (INIS)

    Oh, M.S.; Oversby, V.M.

    1990-01-01

    A series of experiments was conducted using SRL 165 synthetic waste glass to investigate the effects of surface preparation and leaching solution composition on the alteration of the glass. Samples of glass with as-cast surfaces produced smooth reaction layers and some evidence for precipitation of secondary phases from solution. Secondary phases were more abundant in samples reacted in deionized water (DIW) than in those reacted in a silicate solution. Samples with saw-cut surfaces showed a large reduction in surface roughness after 7 days of reaction in either solution. Reaction in silicate solution for up to 91 days produced no further change in surface morphology, while reaction in DIW produced a spongy surface that formed the substrate for further surface layer development. The differences in the surface morphology of the samples may create microclimates that control the details of development of alteration layers on the glass; however, the concentrations of elements in leaching solutions show differences of 50% or less between samples prepared with different surface conditions for tests of a few months' duration. 6 refs., 7 figs., 1 tab

  6. CuInS2/ZnS QD-ferroelectric liquid crystal mixtures for faster electro-optical devices and their energy storage aspects

    Science.gov (United States)

    Singh, Dharmendra Pratap; Vimal, Tripti; Mange, Yatin J.; Varia, Mahesh C.; Nann, Thomas; Pandey, K. K.; Manohar, Rajiv; Douali, Redouane

    2018-01-01

    CuInS2/ZnS core/shell quantum dots (CIS/ZnS QDs) dispersed ferroelectric liquid crystal (FLC) mixtures have been characterized for their application in electro-optical devices, energy storage, and solar cells. Physical properties of the CIS/ZnS QD-FLC (ferroelectric liquid crystal) mixtures have also been investigated with varying QD concentrations in order to optimize the critical concentration of QDs in mixtures. The presence of QDs breaks the geometrical symmetry in the FLC matrix, which results in a change in the physical properties of the mixtures. We observed the reduced values of primary and secondary order parameters (tilt angle and spontaneous polarization, respectively) for mixtures, which also depend on the concentration of QDs. The reduction of spontaneous polarization in QDs-FLC mixtures is attributed to the adverse role of flexoelectric contribution in the mixtures. The 92% faster electro-optic response and enhanced capacitance indicate the possible application of these mixtures in electro-optical devices and solar cells. Photoluminescence emission of pure FLC and QDs-FLC mixtures has been thermally tailored, which is explained by suitable models.

  7. The problem of sampling families rather than populations: Relatedness among individuals in samples of juvenile brown trout Salmo trutta L

    DEFF Research Database (Denmark)

    Hansen, Michael Møller; Eg Nielsen, Einar; Mensberg, Karen-Lise Dons

    1997-01-01

    In species exhibiting a nonrandom distribution of closely related individuals, sampling of a few families may lead to biased estimates of allele frequencies in populations. This problem was studied in two brown trout populations, based on analysis of mtDNA and microsatellites. In both samples mtDNA haplotype frequencies differed significantly between age classes, and in one sample 17 out of 18 individuals less than 1 year of age shared one particular mtDNA haplotype. Estimates of relatedness showed that these individuals most likely represented only three full-sib families. Older trout exhibiting...

  8. A 67-Item Stress Resilience item bank showing high content validity was developed in a psychosomatic sample.

    Science.gov (United States)

    Obbarius, Nina; Fischer, Felix; Obbarius, Alexander; Nolte, Sandra; Liegl, Gregor; Rose, Matthias

    2018-04-10

    To develop the first item bank to measure Stress Resilience (SR) in clinical populations. Qualitative item development resulted in an initial pool of 131 items covering a broad theoretical SR concept. These items were tested in n=521 patients at a psychosomatic outpatient clinic. Exploratory and Confirmatory Factor Analysis (CFA), as well as other state-of-the-art item analyses and IRT, were used for item evaluation and calibration of the final item bank. Out of the initial item pool of 131 items, we excluded 64 items (54 with factor loadings below .3, 2 with non-discriminative Item Response Curves, 4 with Differential Item Functioning). The final set of 67 items indicated sufficient model fit in CFA and IRT analyses. Additionally, a 10-item short form with high measurement precision (SE≤.32 in a theta range between -1.8 and +1.5) was derived. Both the SR item bank and the SR short form were highly correlated with an existing static legacy tool (Connor-Davidson Resilience Scale). The final SR item bank and 10-item short form showed good psychometric properties. When further validated, they will be ready to be used within a framework of Computer-Adaptive Tests for a comprehensive assessment of the Stress-Construct. Copyright © 2018. Published by Elsevier Inc.

  9. Hanford site environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1998-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE). Sampling is conducted to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, "General Environmental Protection Program," and DOE Order 5400.5, "Radiation Protection of the Public and the Environment." The sampling methods are described in the Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office, DOE/RL91-50, Rev. 2, U.S. Department of Energy, Richland, Washington. This document contains the 1998 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. Each section of this document describes the planned sampling schedule for a specific medium (air, surface water, biota, soil and vegetation, sediment, and external radiation). Each section includes the sample location, sample type, and analyses to be performed on the sample. In some cases, samples are scheduled on a rotating basis and may not be planned for 1998, in which case the anticipated year for collection is provided. In addition, a map is included for each medium showing sample locations.

  10. Impact of blood sampling in very preterm infants

    DEFF Research Database (Denmark)

    Madsen, L P; Rasmussen, M K; Bjerregaard, L L

    2000-01-01

    ; the groups were then subdivided into critically ill or not. Diagnostic blood sampling and blood transfusion events were recorded. In total, 1905 blood samples (5,253 analyses) were performed, corresponding to 0.7 samples (1.9 analyses) per day per infant. The highest frequencies were found during the first ... /kg. For the extremely preterm infants a significant correlation between sampled and transfused blood volume was found (mean 37.1 and 33.3 ml/kg, respectively, r = +0.71, p = 0.0003). The most frequently requested analyses were glucose, sodium and potassium. Few blood gas analyses were requested (1.9/infant). No blood ... losses attributable to excessively generous sampling were detected. The results show an acceptably low frequency of sampling and transfusion events for infants of GA 28-32 weeks. The study emphasizes the necessity of thorough reflection and monitoring of blood losses when ordering blood sampling...

  11. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
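
    The key modelling step described above, replacing the binomial acceptance probability with a beta-binomial to absorb the extra variance from cluster sampling, can be sketched in a few lines. The accept-if-successes-at-least-d decision rule and the rho-parameterized overdispersion model below are illustrative assumptions, not the paper's exact C-LQAS design:

```python
import math

def log_beta(a, b):
    """log of the Beta function, via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b)."""
    return math.exp(math.log(math.comb(n, k))
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

def misclassification_risks(n, d, p_hi, p_lo, rho):
    """Error risks of the LQAS rule 'accept the lot if successes >= d'.

    Clustering is modelled by a beta-binomial with intra-cluster
    correlation rho: a = p(1-rho)/rho, b = (1-p)(1-rho)/rho, so the
    success count is overdispersed relative to a plain binomial.
    """
    def accept_prob(p):
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        return sum(betabinom_pmf(k, n, a, b) for k in range(d, n + 1))

    alpha = 1 - accept_prob(p_hi)  # risk of rejecting a high-quality lot
    beta = accept_prob(p_lo)       # risk of accepting a low-quality lot
    return alpha, beta

# Screening (n, d) pairs until both risks fall below user-specified
# limits is then a simple loop over this function.
alpha, beta = misclassification_risks(n=60, d=50, p_hi=0.90, p_lo=0.70, rho=0.05)
```

    Raising rho flattens the acceptance curve, which is exactly the "inflated risk of errors" the abstract refers to: a design that meets its risk targets under simple random sampling can miss them once clustering is accounted for.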

  12. Sampling and examination methods used for TMI-2 samples

    International Nuclear Information System (INIS)

    Marley, A.W.; Akers, D.W.; McIsaac, C.V.

    1988-01-01

    The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples and many of the smaller samples had to be done remotely and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems with sampling and the laboratory analysis of prior molten fuel debris. 14 refs., 8 figs

  13. Desflurane Allows for a Faster Emergence when Compared to Sevoflurane Without Affecting the Baseline Cognitive Recovery Time.

    Directory of Open Access Journals (Sweden)

    Joseph G. Werner

    2015-10-01

    Full Text Available Aims: We compared the effect of desflurane and sevoflurane on anesthesia recovery time in patients undergoing urological cystoscopic surgery. The Short Orientation Memory Concentration Test (SOMCT) measured and compared cognitive impairment between groups, and coughing was assessed throughout the anesthetic. Methods and Materials: This investigation included 75 ambulatory patients. Patients were randomized to receive either desflurane or sevoflurane. Inhalational anesthetics were discontinued after removal of the cystoscope and once repositioning of the patient was final. Coughing assessment and awakening time from anesthesia were assessed by a blinded observer. Statistical analysis used: Statistical analysis was performed using the t-test for parametric variables and the Mann-Whitney U test for nonparametric variables. Results: The primary endpoint, mean time to eye-opening, was 5.0±2.5 minutes for desflurane and 7.9±4.1 minutes for sevoflurane (p<0.001). There were no significant differences in time to SOMCT recovery (p=0.109), overall time spent in the post-anesthesia care unit (p=0.924) or time to discharge (p=0.363). Median time until readiness for discharge was nine minutes in the desflurane group, while the sevoflurane group had a median time of 20 minutes (p=0.020). The overall incidence of coughing during the perioperative period was significantly higher in the desflurane group (p=0.030). Conclusions: We re-confirmed that patients receiving desflurane had a faster emergence and met the criteria to be discharged from the post-anesthesia care unit earlier. No difference was found in time to return to baseline cognition between desflurane and sevoflurane.

  14. Importance sampling of rare events in chaotic systems

    DEFF Research Database (Denmark)

    Leitão, Jorge C.; Parente Lopes, João M.Viana; Altmann, Eduardo G.

    2017-01-01

    Finding and sampling rare trajectories in dynamical systems is a difficult computational task underlying numerous problems and applications. In this paper we show how to construct Metropolis-Hastings Monte-Carlo methods that can efficiently sample rare trajectories in the (extremely rough) phase space of chaotic systems. As examples of our general framework we compute the distribution of finite-time Lyapunov exponents (in different chaotic maps) and the distribution of escape times (in transient-chaos problems). Our methods sample exponentially rare states in a polynomial number of samples (in both low- and high-dimensional systems). An open-source software that implements our algorithms and reproduces our results can be found in reference [J. Leitao, A library to sample chaotic systems, 2017, https://github.com/jorgecarleitao/chaospp].
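
    The Metropolis-Hastings idea can be illustrated on the simplest case the abstract mentions, finite-time Lyapunov exponents of a chaotic map. The sketch below biases a random walk over initial conditions of the logistic map toward atypically large exponents; the choice of map, the exp(beta*lambda) stationary weight and all parameter values are illustrative assumptions, not the authors' algorithm:

```python
import math
import random

def ftle(x0, t, r=4.0):
    """Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x, s = x0, 0.0
    for _ in range(t):
        s += math.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # guard log(0)
        x = r * x * (1.0 - x)
    return s / t

def mh_rare_ftle(t=50, steps=5000, beta=20.0, sigma=1e-3, seed=1):
    """Metropolis-Hastings walk over initial conditions x0 with stationary
    weight exp(beta * lambda_t(x0)), concentrating the chain on rare,
    strongly chaotic trajectories (illustrative sketch only)."""
    rng = random.Random(seed)
    x = rng.random()
    lam = ftle(x, t)
    history = []
    for _ in range(steps):
        x_new = (x + rng.gauss(0.0, sigma)) % 1.0
        lam_new = ftle(x_new, t)
        # Accept with probability min(1, exp(beta * (lam_new - lam))).
        if rng.random() < math.exp(min(0.0, beta * (lam_new - lam))):
            x, lam = x_new, lam_new
        history.append(lam)
    return history

lam_chain = mh_rare_ftle()
```

    With beta > 0 the acceptance rule favors proposals that increase lambda_t, so the tail of the exponent distribution is visited with polynomially many proposals instead of waiting for such states to appear by chance, which is the qualitative point of the paper.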

  15. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. 
Alternatively, we discuss how...

  17. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open-pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity with the use of Reverse Circulation (RC) drill sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open-pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs for RC drill sampling significantly exceed current costs for manual BH sampling, which needs to be compensated for by other benefits to motivate the introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open-pit mine. However, with so many mineral commodities and mining methods in use globally...

  18. Neutron activation analysis of wheat samples

    International Nuclear Information System (INIS)

    Galinha, C.; Anawar, H.M.; Freitas, M.C.; Pacheco, A.M.G.; Almeida-Silva, M.; Coutinho, J.; Macas, B.; Almeida, A.S.

    2011-01-01

    The deficiency of essential micronutrients and excess of toxic metals in cereals, important food items for human nutrition, can pose a public health risk. Therefore, before their consumption and before adoption of soil supplementation, concentrations of essential micronutrients and metals in cereals should be monitored. This study collected soil and two varieties of wheat samples, Triticum aestivum L. (Jordao/bread wheat) and Triticum durum L. (Marialva/durum wheat), from the Elvas area, Portugal, and analyzed concentrations of As, Cr, Co, Fe, K, Na, Rb and Zn using Instrumental Neutron Activation Analysis (INAA) to focus on the risk of adverse public health issues. The low variability and moderate concentrations of metals in soils indicated a lower significant effect of environmental input on metal concentrations in agricultural soils. The Cr and Fe concentrations in soils, which ranged from 93-117 and 26,400-31,300 mg/kg, respectively, were relatively high, but the Zn concentration was very low (below the detection limit). Metal concentrations in soil followed the order K>Fe>Na>Zn>Cr>Rb>As>Co. Concentrations of As, Co and Cr in root, straw and spike of both varieties were higher than the permissible limits, with the exception of a few samples. The concentrations of Zn in root, straw and spike were relatively low (4-30 mg/kg), indicating a deficiency of the essential micronutrient Zn in wheat cultivated in Portugal. The elemental transfer from soil to plant decreases with increasing growth of the plant. The concentrations of various metals in different parts of wheat followed the order Root>Straw>Spike. A few root, straw and spike samples showed enrichment of metals, but the majority of the samples showed no enrichment. Potassium is enriched in all samples of root, straw and spike for both varieties of wheat. Relative to the seed used for cultivation, Jordao presented higher transfer coefficients than Marialva, in particular for Co, Fe, and Na. The Jordao and Marialva cultivars accumulated not statistically significant different...

  19. Generalized atmospheric sampling of self-avoiding walks

    International Nuclear Information System (INIS)

    Van Rensburg, E J Janse; Rechnitzer, A

    2009-01-01

    In this paper, we introduce a new Monte Carlo method for sampling lattice self-avoiding walks. The method, which we call 'GAS' (generalized atmospheric sampling), samples walks along weighted sequences by implementing elementary moves generated by the positive, negative and neutral atmospheric statistics of the walks. A realized sequence is weighted such that the average weight of states of length n is proportional to the number of self-avoiding walks from the origin, c_n. In addition, the method also self-tunes to sample from uniform distributions over walks of lengths in an interval [0, n_max]. We show how to implement GAS using both generalized and endpoint atmospheres of walks and analyse our data to obtain estimates of the growth constant and entropic exponent of self-avoiding walks in the square and cubic lattices.
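
    The "atmospheres" that drive GAS are simple combinatorial quantities. For an endpoint-atmosphere implementation on the square lattice, the positive atmosphere counts the free one-step extensions of the walk's endpoint and the negative atmosphere counts removable end steps. A minimal sketch (the function names are ours, not the paper's):

```python
def positive_atmosphere(walk):
    """Number of one-step extensions of a self-avoiding walk (a+):
    lattice neighbours of the endpoint not already visited."""
    occupied = set(walk)
    x, y = walk[-1]
    return sum((x + dx, y + dy) not in occupied
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def negative_atmosphere(walk):
    """Number of end steps that can be deleted (a-): 1 for any walk
    longer than the single-vertex walk, else 0."""
    return 0 if len(walk) == 1 else 1

# A 2-step L-shaped walk on the square lattice.
walk = [(0, 0), (1, 0), (1, 1)]
a_plus = positive_atmosphere(walk)   # free neighbours of endpoint (1, 1)
a_minus = negative_atmosphere(walk)
```

    GAS then draws positive, negative or neutral moves according to these atmospheric statistics and updates the sequence weight accordingly, so that states of length n are visited with average weight proportional to c_n.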

  20. Show Horse Welfare: Horse Show Competitors' Understanding, Awareness, and Perceptions of Equine Welfare.

    Science.gov (United States)

    Voigt, Melissa A; Hiney, Kristina; Richardson, Jennifer C; Waite, Karen; Borron, Abigail; Brady, Colleen M

    2016-01-01

    The purpose of this study was to gain a better understanding of stock-type horse show competitors' understanding of welfare and level of concern for stock-type show horses' welfare. Data were collected through an online questionnaire that included questions relating to (a) interest and general understanding of horse welfare, (b) welfare concerns of the horse show industry and specifically the stock-type horse show industry, (c) decision-making influences, and (d) level of empathic characteristics. The majority of respondents indicated they agree or strongly agree that physical metrics should be a factor when assessing horse welfare, while fewer agreed that behavioral and mental metrics should be a factor. Respondent empathy levels were moderate to high and were positively correlated with the belief that mental and behavioral metrics should be a factor in assessing horse welfare. Respondents indicated the inhumane practices that most often occur at stock-type shows include excessive jerking on reins, excessive spurring, and induced excessive unnatural movement. Additionally, respondents indicated association rules, hired trainers, and hired riding instructors are the most influential regarding the decisions they make related to their horses' care and treatment.

  1. Faster DNA Repair of Ultraviolet-Induced Cyclobutane Pyrimidine Dimers and Lower Sensitivity to Apoptosis in Human Corneal Epithelial Cells than in Epidermal Keratinocytes.

    Directory of Open Access Journals (Sweden)

    Justin D Mallet

    Full Text Available Absorption of UV rays by DNA generates the formation of mutagenic cyclobutane pyrimidine dimers (CPD) and pyrimidine (6-4) pyrimidone photoproducts (6-4PP). These damages are the major cause of skin cancer because, in turn, they can lead to signature UV mutations. The eye is exposed to UV light, but the cornea is orders of magnitude less prone to UV-induced cancer. In an attempt to shed light on this paradox, we compared cells of the corneal epithelium and the epidermis for UVB-induced DNA damage frequency, repair and cell death sensitivity. We found similar CPD levels but 4-times faster repair of UVB-induced CPD, but not 6-4PP, and lower sensitivity to UV-induced apoptosis in corneal epithelial cells than in epidermal cells. We then investigated levels of DDB2, a UV-induced DNA damage recognition protein mostly impacting CPD repair; XPC, essential for the repair of both CPD and 6-4PP; and p53, a protein upstream of the genotoxic stress response. We found more DDB2, XPC and p53 in corneal epithelial cells than in epidermal cells. According to our results analyzing the protein stability of DDB2 and XPC, the higher level of DDB2 and XPC in corneal epithelial cells is most likely due to increased stability of the proteins. Taken together, our results show that corneal epithelial cells repair UV-induced mutagenic CPD more efficiently. On the other hand, they are less prone to UV-induced apoptosis, which could be related to the fact that, since repair is more efficient in the HCEC, the need to eliminate highly damaged cells by apoptosis is reduced.

  2. Faster but Less Careful Prehension in Presence of High, Rather than Low, Social Status Attendees.

    Directory of Open Access Journals (Sweden)

    Carlo Fantoni

    Full Text Available Ample evidence attests that social intention, elicited through gestures explicitly signaling a request of communicative intention, affects the patterning of hand movement kinematics. The current study goes beyond the effect of social intention and addresses whether the same action of reaching to grasp an object to place it in an end target position, within or outside a monitoring attendee's peripersonal space, can be moulded by purely social factors in general, and by social facilitation in particular. A motion tracking system (Optotrak Certus) was used to record motor acts. We carefully avoided the use of communicative intention by keeping constant both the visual information and the positional uncertainty of the end target position, while we systematically varied the social status of the attendee (high or low social status) in separate blocks. Only thirty acts, performed in the presence of a different social status attendee, revealed a significant change in the kinematic parameterization of hand movement, independently of the attendee's distance. The amplitude of peak velocity reached by the hand during the reach-to-grasp and the lift-to-place phases of the movement was larger in the high than in the low social status condition. By contrast, the deceleration time of the reach-to-grasp phase and the maximum grasp aperture were smaller in the high than in the low social status condition. These results indicate that the hand movement was faster but less carefully shaped in the presence of a high, but not of a low, social status attendee. This kinematic patterning suggests that being monitored by a high rather than a low social status attendee might lead participants to experience evaluation apprehension that informs the control of motor execution. Motor execution would rely more on feedforward motor control in the presence of a high social status human attendee, versus feedback motor control in the presence of a low social status attendee.

  3. Analysis of the 2H-evaporator scale samples (HTF-17-56, -57)

    Energy Technology Data Exchange (ETDEWEB)

    Hay, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Diprete, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-13

    Savannah River National Laboratory analyzed scale samples from both the wall and cone sections of the 242-16H Evaporator prior to chemical cleaning. The samples were analyzed for the uranium and plutonium isotopes required for a Nuclear Criticality Safety Assessment of the scale removal process. The analysis of the scale samples found the material to contain crystalline nitrated cancrinite and clarkeite. Samples from both the wall and cone contain depleted uranium. Uranium concentrations of 16.8 wt% and 4.76 wt% were measured in the wall and cone samples, respectively. The ratio of plutonium isotopes in both samples is ~85% Pu-239 and ~15% Pu-238 by mass, and shows approximately the same factor-of-3.5 higher concentration in the wall sample than in the cone sample as observed for the uranium concentrations. The mercury concentrations measured in the scale samples were higher than previously reported values: the wall sample contains 19.4 wt% mercury and the cone scale sample 11.4 wt%. The results from the current scale samples show reasonable agreement with previous 242-16H Evaporator scale sample analyses; however, the uranium concentration in the current wall sample is substantially higher than previous measurements.

  4. Hanford Site Environmental Surveillance Master Sampling Schedule for Calendar Year 2011

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, Lynn E.

    2011-01-21

    This document contains the calendar year 2011 schedule for the routine collection of samples for the Surface Environmental Surveillance Project and the Drinking Water Monitoring Project. Each section includes sampling locations, sampling frequencies, sample types, and analyses to be performed. In some cases, samples are scheduled on a rotating basis. If a sample will not be collected in 2011, the anticipated year for collection is provided. Maps showing approximate sampling locations are included for media scheduled for collection in 2011.

  5. Women awaken faster than men after electroencephalogram-monitored propofol sedation for colonoscopy: A prospective observational study.

    Science.gov (United States)

    Riphaus, Andrea; Slottje, Mark; Bulla, Jan; Keil, Carolin; Mentzel, Christian; Limbach, Vera; Schultz, Barbara; Unzicker, Christian

    2017-10-01

    Sedation for colonoscopy using intravenous propofol has become standard in many Western countries. Gender-specific differences have been shown for general anaesthesia in dentistry, but no such data existed for gastrointestinal endoscopy. A prospective observational study. An academic teaching hospital of Hannover Medical School. A total of 219 patients (108 women and 111 men) scheduled for colonoscopy. Propofol sedation using electroencephalogram monitoring during a constant level of sedation depth (D0 to D2) performed by trained nurses or physicians after a body-weight-adjusted loading dose. The primary end-point was the presence of gender-specific differences in awakening time (time from end of sedation to eye-opening and complete orientation); secondary outcome parameters analysed were total dose of propofol, sedation-associated complications (bradycardia, hypotension, hypoxaemia and apnoea), patient cooperation and patient satisfaction. Multivariate analysis was performed to correct for confounding factors such as age and BMI. Women awakened significantly faster than men, with a time to eye-opening of 7.3 ± 3.7 versus 8.4 ± 3.4 min (P = 0.005) and a time until complete orientation of 9.1 ± 3.9 versus 10.4 ± 13.7 min (P = 0.008). The propofol dosage was not significantly different, with some trend towards more propofol per kg body weight in women (3.98 ± 1.81 mg versus 3.72 ± 1.75 mg, P = 0.232). Gender effects should be considered when propofol is used as sedation for gastrointestinal endoscopy. That includes adequate dosing for women as well as caution regarding potential overdosing of male patients. ClinicalTrials.gov (Identifier: NCT02687568).

  6. Determination of copper in powdered chocolate samples by slurry-sampling flame atomic-absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Walter N.L. dos; Silva, Erik G.P. da; Fernandes, Marcelo S.; Araujo, Rennan G.O.; Costa, Antônio C.S.; Ferreira, Sergio L.C. [Nucleo de Excelencia em Quimica Analitica da Bahia, Universidade Federal da Bahia, Instituto de Quimica, Salvador, Bahia (Brazil)]; Vale, M.G.R. [Instituto de Quimica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul (Brazil)]

    2005-06-01

    Chocolate is a complex sample with a high content of organic compounds, and its analysis generally involves digestion procedures that might include the risk of losses and/or contamination. The determination of copper in chocolate is important because copper compounds are extensively used as fungicides in the farming of cocoa. In this paper, a slurry-sampling flame atomic-absorption spectrometric method is proposed for the determination of copper in powdered chocolate samples. Optimization was carried out using univariate methodology involving the variables of nature and concentration of the acid solution for slurry preparation, sonication time, and sample mass. The recommended conditions include a sample mass of 0.2 g, 2.0 mol L{sup -1} hydrochloric acid solution, and a sonication time of 15 min. The calibration curve was prepared using aqueous copper standards in 2.0 mol L{sup -1} hydrochloric acid. This method allowed determination of copper in chocolate with a detection limit of 0.4 {mu}g g{sup -1} and precision, expressed as relative standard deviation (RSD), of 2.5% (n=10) for a copper content of approximately 30 {mu}g g{sup -1}, using a chocolate mass of 0.2 g. The accuracy was confirmed by analyzing the certified reference materials NIST SRM 1568a rice flour and NIES CRM 10-b rice flour. The proposed method was used for the determination of copper in three powdered chocolate samples, the copper content of which varied between 26.6 and 31.5 {mu}g g{sup -1}. The results showed no significant differences from those obtained after complete digestion, using a t-test for comparison. (orig.)
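The precision and detection-limit figures quoted in records like this follow standard conventions that are easy to reproduce. The sketch below uses entirely hypothetical replicate readings and calibration values (not data from the study) to show how an RSD and a 3-sigma detection limit are computed:

```python
import statistics

# Hypothetical replicate copper results (ug/g) for one chocolate slurry, n = 10.
replicates = [29.8, 30.2, 29.5, 30.6, 30.1, 29.9, 30.4, 29.7, 30.3, 30.0]

mean = statistics.mean(replicates)
rsd = 100 * statistics.stdev(replicates) / mean  # precision as relative std. dev., %

# Detection limit by the common 3-sigma convention: three times the standard
# deviation of blank readings divided by the calibration slope (values assumed).
blank_sd = 0.02   # absorbance units
slope = 0.15      # absorbance per ug/g, from an aqueous-standard curve
lod = 3 * blank_sd / slope

print(round(rsd, 2), round(lod, 1))  # -> 1.12 0.4
```

With these assumed blank and slope values the 3-sigma limit happens to equal the 0.4 {mu}g g{sup -1} quoted above; the actual study values are not given in the record.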

  7. Faster, Stronger, Healthier: Adolescent-Stated Reasons for Dietary Supplementation.

    Science.gov (United States)

    Zdešar Kotnik, Katja; Jurak, Gregor; Starc, Gregor; Golja, Petra

    To examine the underlying reasons and sources of recommendation for dietary supplement (DS) use among adolescents. Cross-sectional analysis of children's development in Slovenia in September to October, 2014. Nationally recruited sample. Adolescents aged 14-19 years enrolled in 15 high schools (n = 1,463). Reasons for and sources of recommendation for DS use, sports club membership, sports discipline, and extent of physical activity (PA) were self-reported data. A chi-square test of independence was performed to compare the prevalence of DS use between groups with different extents of PA and between nonathletes/athletes, referring to 11 different reasons and 9 different sources of recommendation for DS use. Use of DS was widespread among adolescents (69%), athletes (76%), and nonathletes (66%). A higher prevalence of supplementation was observed in males, who justified its use by sports performance enhancement and better development and function of muscles. In contrast, females emphasized immune system improvement. A higher extent of PA was associated with a higher prevalence of DS use. This was especially evident in males who participated in team sports. A high percentage of adolescents decided to use DS on their own (41%) or on the advice of parents or relatives (30%). Several reasons for the widespread use of DS in adolescents were associated with sports participation. Therefore, educational programs regarding DS use should be targeted primarily to adolescents and their parents who are involved in sports, and especially team sports. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  8. Comparison of sampling techniques for use in SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.

    1984-01-01

    The Stephen Howe review (reference TR-STH-1) recommended the use of a deterministic generator (DG) sampling technique for sampling the input values to the SYVAC (SYstems Variability Analysis Code) program. This technique was compared with Monte Carlo simple random sampling (MC) by taking a 1000-run case of SYVAC using MC as the reference case. The results show that DG appears relatively inaccurate for most values of consequence when used with 11 sample intervals. If 22 sample intervals are used, then DG generates cumulative distribution functions that are statistically similar to the reference distribution. 400 runs of DG or MC are adequate to generate a representative cumulative distribution function. The MC technique appears to perform better than DG for the same number of runs. However, DG predicts higher doses, and in view of the importance of generating data in the high dose region, this sampling technique with 22 sample intervals is recommended for use in SYVAC. (author)
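The DG-versus-MC comparison described above can be illustrated with a toy experiment: draw a Monte Carlo reference sample, draw a stratified ("deterministic generator"-style) sample over a given number of equal-probability intervals, and compare empirical cumulative distribution functions. This is a minimal sketch of the general idea, not the actual SYVAC code, and the stratification scheme is an assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_sample(n):
    # Monte Carlo simple random sampling: independent draws from the input CDF
    # (a uniform variable stands in for any input via its probability transform).
    return rng.uniform(0.0, 1.0, n)

def dg_sample(n, intervals):
    # Stratified draws in the spirit of a deterministic generator: split [0, 1]
    # into equal-probability intervals and place each run inside one stratum.
    strata = rng.integers(0, intervals, n)
    return (strata + rng.uniform(0.0, 1.0, n)) / intervals

def max_cdf_gap(a, b):
    # Kolmogorov-Smirnov-style distance between two empirical CDFs.
    grid = np.sort(np.concatenate([a, b]))
    ecdf = lambda x: np.searchsorted(np.sort(x), grid, side="right") / len(x)
    return float(np.max(np.abs(ecdf(a) - ecdf(b))))

reference = mc_sample(1000)                 # the 1000-run MC reference case
gap_11 = max_cdf_gap(dg_sample(400, 11), reference)
gap_22 = max_cdf_gap(dg_sample(400, 22), reference)
print(gap_11, gap_22)  # small gaps indicate statistically similar distributions
```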

  9. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by the agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and do not provide meaningful results, then all of the costs, time, and effort involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest, and it encourages further studies on sampling and sample mass reduction to produce a test portion.

  10. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable influence ... is minimized. Next, the method is applied on different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.
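Asymptotic sampling, as usually formulated (e.g. by Bucher), inflates the standard deviations of the random inputs by 1/f so that failures become frequent enough to observe with plain Monte Carlo, then fits the scaled reliability index as beta(f) = A·f + B/f and extrapolates to f = 1. A minimal sketch on a toy linear limit state, with all problem values hypothetical:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def limit_state(x):
    # Toy linear limit state in 10 dimensions: failure when g(x) < 0.
    return 8.0 - x.sum(axis=1)

def scaled_beta(f, n=100_000, dim=10):
    # Sample with all standard deviations inflated by 1/f (f < 1), so the
    # failure probability is large enough to estimate by Monte Carlo.
    x = rng.normal(0.0, 1.0 / f, size=(n, dim))
    pf = max(np.mean(limit_state(x) < 0.0), 1.0 / n)
    return NormalDist().inv_cdf(1.0 - pf)  # generalized reliability index

# Estimate beta at several scale factors, fit beta(f) = A*f + B/f, extrapolate.
fs = np.array([0.3, 0.4, 0.5, 0.6])
betas = np.array([scaled_beta(f) for f in fs])
design = np.column_stack([fs, 1.0 / fs])
(A, B), *_ = np.linalg.lstsq(design, betas, rcond=None)
beta_1 = A + B            # extrapolated reliability index of the unscaled problem
print(round(beta_1, 2))   # close to the exact value 8 / sqrt(10) ~ 2.53
```

For this linear Gaussian case the exact index is known, which makes the extrapolation easy to check; the value of the method lies in the nonlinear, high-dimensional cases the record describes.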

  11. EFSA Panel on Dietetic Products, Nutrition and Allergies (NDA); Scientific Opinion on the substantiation of a health claim related to citrulline-malate and faster recovery from muscle fatigue after exercise pursuant to Article 13(5) of Regulation (EC) No 1924/2006

    DEFF Research Database (Denmark)

    Tetens, Inge

    Following an application from Biocodex, submitted pursuant to Article 13(5) of Regulation (EC) No 1924/2006 via the Competent Authority of Belgium, the Panel on Dietetic Products, Nutrition and Allergies was asked to deliver an opinion on the scientific substantiation of a health claim related to citrulline-malate and faster recovery from muscle fatigue after exercise. Citrulline-malate is sufficiently characterised. The claimed effect is "maintenance of ATP levels through reduction of lactates in excess for an improved recovery from muscle fatigue". The target population proposed by the applicant is healthy children above six years of age and adults. The Panel considers that faster recovery from muscle fatigue after exercise contributing to the restoration of muscle function is a beneficial physiological effect. A total of 33 references were considered as pertinent to the claim by the applicant...

  12. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure, and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
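Uniform sampling of nodes, the strategy the study finds most robust, simply keeps a random subset of nodes and retains only the contact events whose endpoints were both kept. A minimal sketch on a hypothetical temporal edge list:

```python
import random

# Hypothetical temporal network: a list of time-stamped contact events (t, u, v).
events = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d"), (5, "d", "a")]

def sample_nodes_uniform(events, fraction, seed=0):
    """Uniform node sampling: keep a random fraction of nodes, then keep
    only the events whose endpoints were both retained."""
    nodes = sorted({n for _, u, v in events for n in (u, v)})
    rng = random.Random(seed)
    k = max(1, round(fraction * len(nodes)))
    kept = set(rng.sample(nodes, k))
    return [e for e in events if e[1] in kept and e[2] in kept]

sub = sample_nodes_uniform(events, 0.5)
# Statistics such as link activity can then be compared between the subsample
# and the full network to quantify the sampling bias.
print(len(sub), len(events))
```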

  13. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  14. Heater-Integrated Cantilevers for Nano-Samples Thermogravimetric Analysis

    Science.gov (United States)

    Toffoli, Valeria; Carrato, Sergio; Lee, Dongkyu; Jeon, Sangmin; Lazzarino, Marco

    2013-01-01

    The design and characteristics of a micro-system for thermogravimetric analysis (TGA), in which the heater, temperature sensor and mass sensor are integrated into a single device, are presented. The system consists of a suspended cantilever that incorporates a microfabricated resistor, used as both heater and thermometer. A three-dimensional finite element analysis was used to define the structure parameters. The TGA sensors were fabricated by standard microlithographic techniques and tested using milli-Q water and polyurethane microcapsules. The results demonstrated that our approach provides a faster and more sensitive TGA than commercial systems.

  15. Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.

    Directory of Open Access Journals (Sweden)

    Brian France

    Full Text Available Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decon to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation.
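The test-count saving from compositing can be illustrated with a simple two-stage (Dorfman-style) scheme: analyze pooled composites first, then retest only the members of positive pools. A sketch with hypothetical numbers, assuming pooling does not dilute positives below the detection limit, which is exactly the question the study addresses for soil:

```python
import random

rng = random.Random(1)

# Hypothetical site: 200 soil samples, 3 of which carry surrogate spores.
samples = [False] * 200
for i in rng.sample(range(200), 3):
    samples[i] = True

def composite_positive(pool):
    # A pooled analysis reads positive if any member sample is contaminated
    # (assumes compositing keeps spore counts above the detection limit).
    return any(pool)

def two_stage(samples, pool_size):
    """Dorfman-style scheme: analyze composites, then retest members of
    positive pools individually. Returns (analyses used, positives found)."""
    tests, positives = 0, []
    for start in range(0, len(samples), pool_size):
        pool = samples[start:start + pool_size]
        tests += 1
        if composite_positive(pool):
            for j, contaminated in enumerate(pool, start):
                tests += 1
                if contaminated:
                    positives.append(j)
    return tests, positives

tests, found = two_stage(samples, pool_size=10)
print(tests, len(found))  # far fewer analyses than testing all 200 individually
```

With few contaminated samples, the scheme finds every positive while using a fraction of the individual analyses, which is the bottleneck the study aims to relieve.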

  16. Evaluation of BBL™ Sensi-Discs™ and FTA® cards as sampling devices for detection of rotavirus in stool samples.

    Science.gov (United States)

    Tam, Ka Ian; Esona, Mathew D; Williams, Alice; Ndze, Valantine N; Boula, Angeline; Bowen, Michael D

    2015-09-15

    Rotavirus is the most important cause of severe childhood gastroenteritis worldwide. Rotavirus vaccines are available and rotavirus surveillance is carried out to assess vaccination impact. In surveillance studies, stool samples are typically stored at 4°C or frozen to maintain sample quality. Uninterrupted cold storage is a problem in developing countries because of power interruptions. Cold-chain transportation of samples from collection sites to testing laboratories is costly. In this study, we evaluated the use of BBL™ Sensi-Discs™ and FTA(®) cards for storage and transportation of samples for virus isolation, EIA, and RT-PCR testing. Infectious rotavirus was recovered after 30 days of storage on Sensi-Discs™ at room temperature. We were able to genotype 98-99% of samples stored on Sensi-Discs™ and FTA(®) cards at temperatures ranging from -80°C to 37°C for up to 180 days. A field sampling test using samples prepared and shipped from Cameroon showed that both matrices yielded 100% genotyping success compared with whole stool, and Sensi-Discs™ demonstrated 95% concordance with whole stool in EIA testing. The utilization of BBL™ Sensi-Discs™ and FTA(®) cards for stool sample storage and shipment has the potential to have a great impact on global public health by facilitating surveillance and epidemiological investigations of rotavirus strains worldwide at a reduced cost. Published by Elsevier B.V.

  17. Evaluation of BBL™ Sensi-Discs™ and FTA® cards as sampling devices for detection of rotavirus in stool samples

    Science.gov (United States)

    Tam, Ka Ian; Esona, Mathew D.; Williams, Alice; Ndze, Valentine N.; Boula, Angeline; Bowen, Michael D.

    2015-01-01

    Rotavirus is the most important cause of severe childhood gastroenteritis worldwide. Rotavirus vaccines are available and rotavirus surveillance is carried out to assess vaccination impact. In surveillance studies, stool samples are typically stored at 4°C or frozen to maintain sample quality. Uninterrupted cold storage is a problem in developing countries because of power interruptions. Cold-chain transportation of samples from collection sites to testing laboratories is costly. In this study, we evaluated the use of BBL™ Sensi-Discs™ and FTA® cards for storage and transportation of samples for virus isolation, EIA, and RT-PCR testing. Infectious rotavirus was recovered after 30 days of storage on Sensi-Discs™ at room temperature. We were able to genotype 98–99% of samples stored on Sensi-Discs™ and FTA® cards at temperatures ranging from −80°C to 37°C for up to 180 days. A field sampling test using samples prepared and shipped from Cameroon showed that both matrices yielded 100% genotyping success compared with whole stool, and Sensi-Discs™ demonstrated 95% concordance with whole stool in EIA testing. The utilization of BBL™ Sensi-Discs™ and FTA® cards for stool sample storage and shipment has the potential to have a great impact on global public health by facilitating surveillance and epidemiological investigations of rotavirus strains worldwide at a reduced cost. PMID:26022083

  18. Gamma spectroscopy analysis of archived Marshall Island soil samples

    International Nuclear Information System (INIS)

    Herman, S.; Hoffman, K.; Lavelle, K.; Trauth, A.; Glover, S.E.; Connick, W.; Spitz, H.; LaMont, S.P.; Hamilton, T.

    2016-01-01

    Four samples of archival Marshall Islands soil were subjected to non-destructive, broad energy (17 keV-2.61 MeV) gamma-ray spectrometry analysis using a series of different high-resolution germanium detectors. These archival samples were collected in 1967 from different locations on Bikini Atoll and were contaminated with a range of fission and activation products, and other nuclear material, from multiple weapons tests. Unlike samples collected recently, these samples have been stored in sealed containers and have been unaffected by approximately 50 years of weathering. Initial results show that the samples contained measurable but proportionally different concentrations of plutonium, 241 Am, 137 Cs, and 60 Co. (author)

  19. A strategy of faster movements used by elderly humans to lift objects of increasing weight in ecological context.

    Science.gov (United States)

    Hoellinger, Thomas; McIntyre, Joseph; Jami, Lena; Hanneton, Sylvain; Cheron, Guy; Roby-Brami, Agnes

    2017-08-15

    It is not known whether, during the course of aging, changes occur in the motor strategies used by the CNS for lifting objects of different weights. Here, we analyzed the kinematics of object lifting in two different healthy groups (young and elderly people) plus one well-known deafferented patient (GL). The task was to reach for an opaque cylindrical object of changing weight and lift it onto a shelf. The movements of the hand and object were recorded with electromagnetic sensors. In an ecological context (i.e. no instruction was given about movement speed), we found that younger participants, elderly people and GL did not all move at the same speed and that, surprisingly, elderly people were faster. We also observed that the lifting trajectories were constant for both the elderly and the deafferented patient, while younger participants raised their hand higher when the object weighed more. It appears that, depending on age and on available proprioceptive information, the CNS uses different strategies for lifting. We suggest that elderly people tend to optimize their feedforward control in order to compensate for less functional afferent feedback, perhaps to optimize movement time and energy expenditure at the expense of high precision. In the case of complete loss of proprioceptive input, however, compensation follows a different strategy, as suggested by the behavior of GL, who moved more slowly than both our younger and older participants. Copyright © 2017. Published by Elsevier Ltd.

  20. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
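The Horvitz–Thompson approach mentioned above weights each sampled unit by the inverse of its inclusion probability, giving an unbiased estimate of a population total. A minimal sketch for Bernoulli node sampling with hypothetical node values (the record's T-stage snowball setting requires the corresponding, more involved, inclusion probabilities):

```python
import random

rng = random.Random(7)

# Hypothetical finite graph: node -> value of some graph parameter (e.g. degree).
values = {i: rng.randint(1, 10) for i in range(100)}
total = sum(values.values())

# Bernoulli node sampling with a known, equal inclusion probability pi.
pi = 0.3
sampled = [i for i in values if rng.random() < pi]

# Horvitz-Thompson estimator of the total: weight each sampled unit by 1 / pi.
ht_estimate = sum(values[i] / pi for i in sampled)
print(total, round(ht_estimate))  # the estimator is unbiased for the total
```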