WorldWideScience

Sample records for sampling points required

  1. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  2. Pacific Northwest National Laboratory Facility Radionuclide Emission Points and Sampling Systems

    International Nuclear Information System (INIS)

    Barfuss, Brad C.; Barnett, J. M.; Ballinger, Marcel Y.

    2009-01-01

    Battelle-Pacific Northwest Division operates numerous research and development laboratories in Richland, Washington, including those associated with the Pacific Northwest National Laboratory (PNNL) on the Department of Energy's Hanford Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all effluent release points that have the potential for radionuclide emissions. Potential emissions are assessed annually. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission point system performance, operation, and design information. A description of the buildings, exhaust points, control technologies, and sample extraction details is provided for each registered or deregistered facility emission point. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided.

  3. Pacific Northwest National Laboratory Facility Radionuclide Emission Points and Sampling Systems

    Energy Technology Data Exchange (ETDEWEB)

    Barfuss, Brad C.; Barnett, J. Matthew; Ballinger, Marcel Y.

    2009-04-08

    Battelle—Pacific Northwest Division operates numerous research and development laboratories in Richland, Washington, including those associated with the Pacific Northwest National Laboratory (PNNL) on the Department of Energy’s Hanford Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all effluent release points that have the potential for radionuclide emissions. Potential emissions are assessed annually. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission point system performance, operation, and design information. A description of the buildings, exhaust points, control technologies, and sample extraction details is provided for each registered or deregistered facility emission point. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided.

  4. Quantifying regional cerebral blood flow with N-isopropyl-p-[123I]iodoamphetamine and SPECT by one-point sampling method

    International Nuclear Information System (INIS)

    Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.

    1994-01-01

    We developed a new non-invasive technique, the one-point sampling method, for quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[123I]iodoamphetamine and SPECT. Although the conventional microsphere method requires continuous withdrawal of arterial blood and octanol treatment of the blood, the new technique does not require these two procedures. The total activity of 123I-IMP obtained by continuous withdrawal of arterial blood is inferred from the activity of 123I-IMP in a single arterial sample using a regression line. To determine the optimal time for the one-point sample for inferring the integrated input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after the injection and the activity of a one-point sample obtained at time t, and calculated a regression line. The minimum percentage error for the inference using the regression line was obtained at 6 min after the 123I-IMP injection; moreover, octanol treatment was not required. Examining the effect on rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after the injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)
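
    A minimal sketch of the regression idea described in this abstract, using hypothetical calibration data (all numbers, variable names, and the partition factor are illustrative assumptions, not the authors' values): the integrated 0-5 min arterial input is regressed on a single 6 min arterial sample, and the fitted line is then used to infer the input integral, and hence an rCBF index via the microsphere relation, for a new subject.

```python
import numpy as np

# Hypothetical calibration data from subjects with full arterial sampling:
# one-point whole-blood activity at 6 min (kBq/mL) vs. integrated 0-5 min
# octanol-extracted arterial activity (kBq*min/mL). Values are illustrative.
one_point_6min = np.array([41.0, 55.0, 38.0, 62.0, 47.0, 51.0])
integrated_input = np.array([210.0, 285.0, 195.0, 320.0, 240.0, 262.0])

# Fit the regression line relating the single sample to the input integral.
slope, intercept = np.polyfit(one_point_6min, integrated_input, 1)

def estimate_rcbf(brain_count, one_point_sample, partition_coeff=1.0):
    """Microsphere-model estimate: rCBF is proportional to the brain
    activity divided by the inferred integral of the arterial input.
    The proportionality/partition factor is left as a placeholder."""
    inferred_integral = slope * one_point_sample + intercept
    return partition_coeff * brain_count / inferred_integral

# New subject: SPECT brain count and a single 6 min arterial sample.
print(f"estimated rCBF index: {estimate_rcbf(9500.0, 49.0):.2f}")
```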

  5. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, a high-cost, time-consuming and equipment-dependent sample pretreatment technique is generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial use of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  6. Point of Injury Sampling Technology for Battlefield Molecular Diagnostics

    Science.gov (United States)

    2011-11-14

    Injury" Sampling Technology for Battlefield Molecular Diagnostics November 14, 2011 Sponsored by Defense Advanced Research Projects Agency (DOD...Date of Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract...PHASE I FINAL REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-11-C-0222 (UNCLASSIFIED) P.I: Bernardo

  7. Health information needs of professional nurses required at the point of care.

    Science.gov (United States)

    Ricks, Esmeralda; ten Ham, Wilma

    2015-06-11

    Professional nurses work in dynamic environments and need to keep up to date with relevant information for practice in nursing to render quality patient care. Keeping up to date with current information is often challenging because of heavy workload, diverse information needs and the accessibility of the required information at the point of care. The aim of the study was to explore and describe the information needs of professional nurses at the point of care in order to make recommendations to stakeholders to develop a mobile library accessible by means of smart phones when needed. The researcher utilised a quantitative, descriptive survey design to conduct this study. The target population comprised 757 professional nurses employed at a state hospital. Simple random sampling was used to select a sample of the wards, units and departments for inclusion in the study. A convenience sample of 250 participants was selected. Two hundred and fifty structured self-administered questionnaires were distributed amongst the participants. Descriptive statistics were used to analyse the data. A total of 136 completed questionnaires were returned. The findings highlighted the types and accessible sources of information. The information needs identified by professional nurses included extensively drug-resistant tuberculosis, multidrug-resistant tuberculosis, HIV, antiretrovirals and all chronic lifestyle diseases. This study has enabled the researcher to identify the information needs required by professional nurses at the point of care to enhance the delivery of patient care. The research results were used to develop a mobile library that could be accessed by professional nurses.

  8. Water Sample Points, Navajo Nation, 2000, USACE

    Data.gov (United States)

    U.S. Environmental Protection Agency — This point shapefile presents the locations and results for water samples collected on the Navajo Nation by the US Army Corps of Engineers (USACE) for the US...

  9. Comprehensive Interpretation of a Three-Point Gauss Quadrature with Variable Sampling Points and Its Application to Integration for Discrete Data

    Directory of Open Access Journals (Sweden)

    Young-Doo Kwon

    2013-01-01

    This study examined the characteristics of a variable three-point Gauss quadrature using a variable set of weighting factors and corresponding optimal sampling points. The major findings were as follows. The one-point, two-point, and three-point Gauss quadratures that adopt the Legendre sampling points and the well-known Simpson's 1/3 rule were found to be special cases of the variable three-point Gauss quadrature. In addition, the three-point Gauss quadrature may have out-of-domain sampling points beyond the domain end points. By applying the quadratically extrapolated integrals and nonlinearity index, the accuracy of the integration could be increased significantly for evenly acquired data, which is popular with modern sophisticated digital data acquisition systems, without using higher-order extrapolation polynomials.
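
    As a quick illustration of the quadratures named in this abstract (standard three-point Gauss-Legendre and Simpson's 1/3 rule, both of which the paper treats as special cases of its variable three-point rule), the sketch below compares the two on a smooth test integrand; the integrand and interval are arbitrary choices for demonstration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

def gauss3(f, a, b):
    """Three-point Gauss-Legendre quadrature on [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(3)
    x = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

def simpson13(f, a, b):
    """Simpson's 1/3 rule: evenly spaced samples at a, the midpoint, and b."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))

f = lambda x: np.exp(-x) * np.sin(3.0 * x)      # arbitrary smooth test integrand
a, b = 0.0, 2.0
reference, _ = quad(f, a, b)                    # adaptive quadrature as reference

g = gauss3(f, a, b)
s = simpson13(f, a, b)
print(f"3-point Gauss : {g:.6f}  (abs error {abs(g - reference):.2e})")
print(f"Simpson's 1/3 : {s:.6f}  (abs error {abs(s - reference):.2e})")
```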

  10. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  11. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < …). Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
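
    A minimal sketch of the weighted curve-fitting idea, assuming the common inverse power law learning-curve form acc(n) = a - b*n^(-c); the paper's exact parametrization, weighting scheme, and data are not reproduced here, so the numbers below are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(n, a, b, c):
    """Learning curve: performance approaches the plateau a as sample size n grows."""
    return a - b * np.power(n, -c)

# Hypothetical learning-curve points from a small annotated training set:
# (sample size, classifier accuracy, standard deviation over resamples).
n_obs   = np.array([ 50, 100, 150, 200, 300, 400])
acc_obs = np.array([0.71, 0.76, 0.79, 0.81, 0.83, 0.845])
acc_sd  = np.array([0.05, 0.04, 0.03, 0.03, 0.02, 0.02])

# Weighted nonlinear least squares: points with smaller variance count more.
params, cov = curve_fit(inverse_power_law, n_obs, acc_obs,
                        p0=(0.9, 1.0, 0.5), sigma=acc_sd, absolute_sigma=True)
a, b, c = params
print(f"fitted plateau a = {a:.3f}")
print(f"predicted accuracy at n = 2000: {inverse_power_law(2000, a, b, c):.3f}")
```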

  12. 30 CFR 71.201 - Sampling; general requirements.

    Science.gov (United States)

    2010-07-01

    ... MINES Sampling Procedures § 71.201 Sampling; general requirements. (a) Each operator shall take... required by this part with a sampling device approved by the Secretary and the Secretary of Health and Human Services under part 74 (Coal Mine Dust Personal Sampler Units) of this title. (b) Sampling devices...

  13. ‘Point of Injury’ Sampling Technology for Battlefield Molecular Diagnostics

    Science.gov (United States)

    2012-03-17

    Injury" Sampling Technology for Battlefield Molecular Diagnostics March 17,2012 Sponsored by Defense Advanced Research Projects Agency (DOD) Defense...Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract Expiration Date...SBIR PHASE I OPTION REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-1 l-C-0222 (UNCLASSIFIED) P.I

  14. A comparison of point counts with a new acoustic sampling method ...

    African Journals Online (AJOL)

    We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limited up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...

  15. 40 CFR 141.23 - Inorganic chemical sampling and analytical requirements.

    Science.gov (United States)

    2010-07-01

    ... may allow a groundwater system to reduce the sampling frequency to annually after four consecutive... this section. (a) Monitoring shall be conducted as follows: (1) Groundwater systems shall take a... system shall take each sample at the same sampling point unless conditions make another sampling point...

  16. Groundwater sampling with well-points

    International Nuclear Information System (INIS)

    Laubacher, R.C.; Bailey, W.M.

    1992-01-01

    This paper reports that BP Oil Company and Engineering-Science (ES) conducted a groundwater investigation at a BP Oil Distribution facility in the coastal plain of south central Alabama. The predominant lithologies include unconsolidated Quaternary-aged gravels, sands, silts and clay. Wellpoints were used to determine the vertical and horizontal extent of volatile hydrocarbons in the water table aquifer. To determine the vertical extent of contaminant migration, the hollow-stem augers were advanced approximately 10 feet into the aquifer near a suspected source. The drill stem and bit were removed very slowly to prevent sand heaving. The well-point was again driven ahead of the augers and four volumes (18 liters) of groundwater were purged. A sample was collected and the headspace vapor was analyzed as before. Groundwater from a total of seven borings was analyzed using these techniques. Permanent monitoring wells were installed at four boring locations which had volatile concentrations less than 1 part per million. Later groundwater sampling and laboratory analysis confirmed the wells had been installed near or beyond both the horizontal and vertical plume boundaries

  17. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing a correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling that no longer has a time limitation and does not require any octanol extraction step. (author)

  18. 30 CFR 70.201 - Sampling; general requirements.

    Science.gov (United States)

    2010-07-01

    ...; general requirements. (a) Each operator shall take respirable dust samples of the concentration of respirable dust in the active workings of the mine as required by this part with a sampling device approved... Personal Sampler Units) of this title. (b) Sampling devices shall be worn or carried directly to and from...

  19. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  20. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  1. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.
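
    The abstract gives no closed-form expressions, but the underlying question (how many antennas are needed to meet an outage constraint) can be illustrated with a simple Monte Carlo sketch for an open-loop i.i.d. Rayleigh MIMO channel without HARQ; the rate target, SNR, outage threshold, and trial count below are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_probability(nt, nr, snr_db=10.0, rate=8.0, trials=20000):
    """Estimate P(capacity < rate) for an i.i.d. Rayleigh MIMO channel
    with equal power allocation across nt transmit antennas."""
    snr = 10.0 ** (snr_db / 10.0)
    count = 0
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        cap = np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * h @ h.conj().T).real)
        count += cap < rate
    return count / trials

# Smallest symmetric antenna count meeting a 1% outage constraint (illustrative).
for n in range(1, 9):
    p = outage_probability(n, n)
    print(f"{n}x{n}: outage ~ {p:.4f}")
    if p < 0.01:
        print(f"-> about {n} transmit/receive antennas suffice for this target")
        break
```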

  2. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
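
    The paper's grid-and-clustering algorithm is not reproduced here; the sketch below only illustrates the task itself, generating a Poisson-disk sample with naive dart throwing and triangulating it with a standard Delaunay triangulation. Treat it as a baseline for comparison, not the authors' method; the radius and point count are arbitrary.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

def poisson_disk_dart_throwing(radius, n_target, domain=1.0, max_tries=20000):
    """Naive Poisson-disk sampling in a square: accept a random point only if
    it is at least `radius` away from every previously accepted point."""
    points = []
    for _ in range(max_tries):
        p = rng.uniform(0.0, domain, size=2)
        if all(np.hypot(*(p - q)) >= radius for q in points):
            points.append(p)
            if len(points) == n_target:
                break
    return np.array(points)

pts = poisson_disk_dart_throwing(radius=0.05, n_target=200)
tri = Delaunay(pts)                       # standard Delaunay triangulation
print(f"{len(pts)} points, {len(tri.simplices)} triangles")
```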

  3. Evaluation of the point-centred-quarter method of sampling ...

    African Journals Online (AJOL)

    ... point-centred-quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...

  4. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring sampling sites. The monitoring of forest health in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with database construction using an Oracle database program. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on the evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In the case of Korea, in the first evaluation report on forest health vitality, 1,000 sample points were placed in the forests using a systematic method of arranging sample points at regular 4 km × 4 km intervals, and 29 items in four categories (tree health, vegetation, soil, and atmosphere) were surveyed. As mentioned above, existing research has been carried out through monitoring of these survey sample points, and it is difficult to collect information to support customized policies for regional survey sites. In the case of special forests such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed, so it is necessary to expand the survey sites for diagnosis and evaluation of customized forest health. For this reason, we constructed a method of spatial scaling through spatial interpolation according to the characteristics of each of the 29 indices in the diagnosis and evaluation sections of the first forest health vitality report. PCA and correlation analyses are conducted to construct the indicators with significance, weights are then selected for each index, and evaluation of forest health is conducted through statistical grading.

  5. 30 CFR 75.336 - Sampling and monitoring requirements.

    Science.gov (United States)

    2010-07-01

    ... concentration of other sampling locations in the sealed area and other required information. Before miners... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Sampling and monitoring requirements. 75.336... SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Ventilation § 75.336 Sampling and...

  6. Testing to fulfill HACCP (Hazard Analysis Critical Control Points) requirements: principles and examples.

    Science.gov (United States)

    Gardner, I A

    1997-12-01

    On-farm HACCP (hazard analysis critical control points) monitoring requires cost-effective, yet accurate and reproducible tests that can determine the status of cows, milk, and the dairy environment. Tests need to be field-validated, and their limitations need to be established so that appropriate screening strategies can be initiated and test results can be rationally interpreted. For infections and residues of low prevalence, tests or testing strategies that are highly specific help to minimize false-positive results and excessive costs to the dairy industry. The determination of the numbers of samples to be tested in HACCP monitoring programs depends on the specific purpose of the test and the likely prevalence of the agent or residue at the critical control point. The absence of positive samples from a herd test should not be interpreted as freedom from a particular agent or residue unless the entire herd has been tested with a test that is 100% sensitive. The current lack of field-validated tests for most of the chemical and infectious agents of concern makes it difficult to ensure that the stated goals of HACCP programs are consistently achieved.
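
    One concrete way to read the sentence about sample numbers depending on prevalence: the number of samples needed to detect at least one positive with a given confidence, assuming a perfect test and simple random sampling from a large herd. This is a textbook approximation, not a formula taken from this article.

```python
import math

def samples_for_detection(prevalence, confidence=0.95):
    """Minimum n such that P(at least one positive among n samples) >= confidence,
    assuming a perfect test and an effectively infinite population."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

for prev in (0.20, 0.05, 0.01):
    print(f"prevalence {prev:4.0%}: test at least {samples_for_detection(prev)} samples")
```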

  7. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP (222 MBq) was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. Then the whole blood count of the one-point arterial sampling was compared with the octanol-extracted count of the continuous arterial sampling. A positive correlation was found between the two values. The ratio of the continuous sampling octanol-extracted count (OC) to the one-point sampling whole blood count (TC5) was compared with the whole brain count ratio (5:29 ratio, Cn) using 1-minute planar SPECT images, centering on 5 and 29 minutes after 123I-IMP administration. Correlation was found between the two values. The following relationship was shown from the correlation equation: OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value. The coefficient improved to r = 0.94 from the r = 0.87 obtained before using the 5:29 ratio for correction. For 23 of these 189 cases, another one-point arterial sampling was carried out at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient was also improved for these other point samplings when this correction method using the 5:29 ratio was applied. It was concluded that it is possible to obtain highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, using one-point arterial sampling whole blood counts by performing correction using the 5:29 ratio. (K.H.)
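
    A worked example of the correction equation quoted above; the coefficients are the ones reported in the abstract, while the count values are invented purely to show the arithmetic.

```python
# Correction of a one-point 5 min whole-blood count (TC5) using the
# whole-brain 5:29 count ratio (Cn), per the regression reported above.
def calibrated_octanol_count(tc5, cn):
    return tc5 * (0.390969 * cn - 0.08924)

tc5 = 5200.0   # hypothetical one-point whole-blood count at 5 min
cn = 1.15      # hypothetical whole-brain 5:29 count ratio
print(f"COC = {calibrated_octanol_count(tc5, cn):.1f}")
```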

  8. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs
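
    A small simulation sketch of the comparison described above: estimating annual impingement from a 30% subsample of days, either by simple random sampling or by seasonal strata. The synthetic daily counts are invented solely to illustrate why stratification reduces variance when impingement is seasonal; they are not the Indian Point data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic year of daily impingement counts with a strong winter peak.
days = np.arange(365)
expected = 200 + 180 * np.cos(2 * np.pi * (days - 15) / 365)
daily = rng.poisson(expected)
true_total = daily.sum()

def simple_random_estimate(n=110):
    idx = rng.choice(365, size=n, replace=False)
    return daily[idx].mean() * 365

def seasonally_stratified_estimate(n=110):
    total = 0.0
    for season in np.array_split(days, 4):          # four ~91-day strata
        k = round(n * len(season) / 365)            # proportional allocation
        idx = rng.choice(season, size=k, replace=False)
        total += daily[idx].mean() * len(season)
    return total

srs = [simple_random_estimate() for _ in range(2000)]
strat = [seasonally_stratified_estimate() for _ in range(2000)]
print(f"true annual total       : {true_total}")
print(f"simple random sampling  : sd = {np.std(srs):8.1f}")
print(f"seasonally stratified   : sd = {np.std(strat):8.1f}")
```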

  9. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  10. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    International Nuclear Information System (INIS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-01-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems

  11. Development of septum-free injector for gas chromatography and its application to the samples with a high boiling point.

    Science.gov (United States)

    Ito, Hiroshi; Hayakawa, Kazuichi; Yamamoto, Atsushi; Murase, Atsushi; Hayakawa, Kazumi; Kuno, Minoru; Inoue, Yoshinori

    2006-11-03

    A novel apparatus with a simple structure has been developed for introducing samples into the vaporizing chamber of a gas chromatograph. It requires no septum due to the gas sealing structure over the carrier gas supply line. The septum-free injector made it possible to use injection port temperatures as high as 450 degrees C. Repetitive injection of samples with boiling points below 300 degrees C resulted in peak areas with relative standard deviations between 1.25 and 3.28% (n=5) and good linearity (r(2)>0.9942) for the calibration curve. In the analysis of polycyclic aromatic hydrocarbons and a base oil, the peak areas of components with high boiling points increased as the injection port temperature was increased to 450 degrees C.

  12. 40 CFR 141.174 - Filtration sampling requirements.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Enhanced Filtration and Disinfection... water system subject to the requirements of this subpart that provides conventional filtration treatment... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Filtration sampling requirements. 141...

  13. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. The error-bounded sampling points reconstruction can be achieved by the knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of initial knot vector has been associated with the ultimate necessary number of knots. This paper provides a novel initial knots selection method to condense the knot vector required for the error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with intensively uniform knot vector to substitute the description of the feature of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to the constant increment of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and the measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of ultimate knots available. (paper)
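
    A rough sketch of the knot-selection idea described above (initial knots placed at equal increments of a feature integral combining arc length and curvature), using SciPy's least-squares spline fitting. The feature weighting, the approximate curve Gs, and the iterative knot-insertion refinement step are simplified away, so treat this as an illustration of the selection principle only; all data and constants are invented.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Hypothetical sampling points from a measured profile y(x).
x = np.linspace(0.0, 10.0, 400)
y = np.sin(x) + 0.2 * np.sin(5.0 * x) + 0.01 * np.random.default_rng(3).standard_normal(x.size)

# Feature density: per-sample chord length plus a curvature term (finite differences).
dydx = np.gradient(y, x)
d2ydx2 = np.gradient(dydx, x)
arc = np.hypot(np.gradient(x), np.gradient(y))
curvature = np.abs(d2ydx2) / (1.0 + dydx**2) ** 1.5
feature = arc * (1.0 + 5.0 * curvature)          # weighting factor is arbitrary
cumulative = np.cumsum(feature)
cumulative /= cumulative[-1]

# Place interior knots at equal increments of the cumulative feature integral,
# so knots concentrate where the profile bends most.
n_knots = 12
targets = np.arange(1, n_knots + 1) / (n_knots + 1)
knots = np.interp(targets, cumulative, x)

spline = LSQUnivariateSpline(x, y, knots, k=3)
print(f"max fitting error: {np.max(np.abs(spline(x) - y)):.4f}")
```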

  14. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values and getting stuck in local optima, as usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
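
    The quantum-inspired evolutionary algorithm itself is not reproduced here; the sketch below only illustrates the underlying objective for a toy one-parameter decay model, scoring candidate sampling-time sets by the Fisher information of the rate parameter (larger information means smaller estimator variance). The model, noise level, and candidate grid are assumptions for demonstration.

```python
import numpy as np
from itertools import combinations

def fisher_information(times, k=0.5, sigma=0.05):
    """Fisher information for k in the model y(t) = exp(-k t), assuming i.i.d.
    Gaussian measurement noise with standard deviation sigma."""
    sensitivity = -times * np.exp(-k * times)     # dy/dk at each sampling time
    return np.sum(sensitivity**2) / sigma**2

# Exhaustively score all 4-point subsets of a coarse candidate grid.
candidate_times = np.linspace(0.5, 10.0, 20)
best = max(combinations(candidate_times, 4),
           key=lambda ts: fisher_information(np.array(ts)))
print("most informative 4 sampling times:", np.round(best, 2))
```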

  15. Uncertainty analysis of point-by-point sampling complex surfaces using touch probe CMMs DOE for complex surfaces verification with CMM

    DEFF Research Database (Denmark)

    Barini, Emanuele Modesto; Tosello, Guido; De Chiffre, Leonardo

    2010-01-01

    The paper describes a study concerning point-by-point sampling of complex surfaces using tactile CMMs. A four factor, two level completely randomized factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, co...

  16. 40 CFR 141.22 - Turbidity sampling and analytical requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Turbidity sampling and analytical... § 141.22 Turbidity sampling and analytical requirements. The requirements in this section apply to... the water distribution system at least once per day, for the purposes of making turbidity measurements...

  17. Critique of Hanford Waste Vitrification Plant off-gas sampling requirements

    International Nuclear Information System (INIS)

    Goles, R.W.

    1996-03-01

    Off-gas sampling and monitoring activities needed to support operations safety, process control, waste form qualification, and environmental protection requirements of the Hanford Waste Vitrification Plant (HWVP) have been evaluated. The locations of necessary sampling sites have been identified on the basis of plant requirements, and the applicability of Defense Waste Processing Facility (DWPF) reference sampling equipment to these HWVP requirements has been assessed for all sampling sites. Equipment deficiencies, if present, have been described and the bases for modifications and/or alternative approaches have been developed

  18. Scanning electron microscope autoradiography of critical point dried biological samples

    International Nuclear Information System (INIS)

    Weiss, R.L.

    1980-01-01

    A technique has been developed for the localization of isotopes in the scanning electron microscope. Autoradiographic studies have been performed using a model system and a unicellular biflagellate alga. One requirement of this technique is that all manipulations be carried out on samples that are maintained in a liquid state. Observations of a source of radiation ( 125 I-ferritin) show that the nuclear emulsion used to detect radiation is active under these conditions. Efficiency measurement performed using 125 I-ferritin indicate that 125 I-SEM autoradiography is an efficient process that exhibits a 'dose dependent' response. Two types of labeling methods were used with cells, surface labeling with 125 I and internal labeling with 3 H. Silver grains appeared on labeled cells after autoradiography, removal of residual gelatin and critical point drying. The location of grains was examined on a flagellated green alga (Chlamydomonas reinhardi) capable of undergoing cell fusion. Fusion experiments using labeled and unlabeled cells indicate that 1. Labeling is specific for incorporated radioactivity; 2. Cell surface structure is preserved in SEM autoradiographs and 3. The technique appears to produce reliable autoradiographs. Thus scanning electron microscope autoradiography should provide a new and useful experimental approach

  19. 40 CFR 761.130 - Sampling requirements.

    Science.gov (United States)

    2010-07-01

    ... 761.130 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL... sampling scheme is that it is designed to characterize the degree of contamination within the entire.... For this purpose, the numerical level of cleanup required for spills cleaned in accordance with § 761...

  20. A simple method for measurement of cerebral blood flow using 123I-IMP SPECT with calibrated standard input function by one point blood sampling. Validation of calibration by one point venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Akaizawa, Takashi; Goto, Ryoui

    1994-01-01

    In a simplified method for measurement of cerebral blood flow using one 123 I-IMP SPECT scan and one point arterial blood sampling (Autoradiography method), input function is obtained by calibrating a standard input function by one point arterial blood sampling. A purpose of this study is validation of calibration by one point venous blood sampling as a substitute for one point arterial blood sampling. After intravenous infusion of 123 I-IMP, frequent arterial and venous blood sampling were simultaneously performed on 12 patients of CNS disease without any heart and lung disease and 5 normal volunteers. The radioactivity ratio of venous whole blood which obtained from cutaneous cubital vein to arterial whole blood were 0.76±0.08, 0.80±0.05, 0.81±0.06, 0.83±0.11 at 10, 20, 30, 50 min after 123 I-IMP infusion, respectively. The venous blood radioactivities were always 20% lower than those of arterial blood radioactivity during 50 min. However, the ratio which obtained from cutaneous dorsal hand vein to artery were 0.93±0.02, 0.94±0.05, 0.98±0.04, 0.98±0.03, at 10, 20, 30, 50 min after 123 I-IMP infusion, respectively. The venous blood radioactivity was consistent with artery. These indicate that arterio-venous difference of radioactivity in a peripheral cutaneous vein like a dorsal hand vein is minimal due to arteriovenous shunt in palm. Therefore, a substitution by blood sampling from cutaneous dorsal hand vein for artery will be possible. Optimized time for venous blood sampling evaluated by error analysis was 20 min after 123 I-IMP infusion, which is 10 min later than that of arterial blood sampling. (author)

  1. Analysis of spatial patterns informs community assembly and sampling requirements for Collembola in forest soils

    Science.gov (United States)

    Dirilgen, Tara; Juceviča, Edite; Melecis, Viesturs; Querner, Pascal; Bolger, Thomas

    2018-01-01

    The relative importance of niche separation, non-equilibrial and neutral models of community assembly has been a theme in community ecology for many decades with none appearing to be applicable under all circumstances. In this study, Collembola species abundances were recorded over eleven consecutive years in a spatially explicit grid and used to examine (i) whether observed beta diversity differed from that expected under conditions of neutrality, (ii) whether sampling points differed in their relative contributions to overall beta diversity, and (iii) the number of samples required to provide comparable estimates of species richness across three forest sites. Neutrality could not be rejected for 26 of the forest by year combinations. However, there is a trend toward greater structure in the oldest forest, where beta diversity was greater than predicted by neutrality on five of the eleven sampling dates. The lack of difference in individual- and sample-based rarefaction curves also suggests randomness in the system at this particular scale of investigation. It seems that Collembola communities are not spatially aggregated and assembly is driven primarily by neutral processes particularly in the younger two sites. Whether this finding is due to small sample size or unaccounted for environmental variables cannot be determined. Variability between dates and sites illustrates the potential of drawing incorrect conclusions if data are collected at a single site and a single point in time.

  2. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    Science.gov (United States)

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-days) in 9 northern Utah homes, from March-June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.

  3. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    Deep penetration problem has been one of the difficult problems in shielding calculation with Monte Carlo method for several decades. In this paper, an adaptive technique under the emission point as a sampling station is presented. The main advantage is to choose the most suitable sampling number from the emission point station to get the minimum value of the total cost in the process of the random walk. Further, the related importance sampling method is also derived. The main principle is to define the importance function of the response due to the particle state and ensure the sampling number of the emission particle is proportional to the importance function. The numerical results show that the adaptive method under the emission point as a station could overcome the difficulty of underestimation to the result in some degree, and the related importance sampling method gets satisfied results as well. (author)
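
    A toy version of the importance-sampling idea for a deep-penetration tally: estimating the very small probability that an exponentially attenuated particle path exceeds a thick shield, by sampling from a stretched (biased) distribution and correcting with statistical weights. This is a generic illustration with made-up parameters, not the adaptive emission-point scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0          # macroscopic cross-section (mean free paths per unit length)
depth = 20.0         # shield thickness: exact answer is exp(-20), about 2e-9
n = 100_000

# Analog Monte Carlo: sample free paths from the physical distribution.
analog_estimate = (rng.exponential(1.0 / sigma, n) > depth).mean()

# Importance sampling: sample from a stretched exponential (rate sigma_b < sigma)
# and multiply each tally by the likelihood ratio (the statistical weight).
sigma_b = 1.0 / depth
paths = rng.exponential(1.0 / sigma_b, n)
weights = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * paths)
is_estimate = np.mean(weights * (paths > depth))

print(f"exact               : {np.exp(-sigma * depth):.3e}")
print(f"analog Monte Carlo  : {analog_estimate:.3e}")
print(f"importance sampling : {is_estimate:.3e}")
```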

  4. Interference and k-point sampling in the supercell approach to phase-coherent transport - art. no. 0333401

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel

    2005-01-01

    We present a systematic study of interference and k-point sampling effects in the supercell approach to phase-coherent electron transport. We use a representative tight-binding model to show that interference between the repeated images is a small effect compared to the error introduced by using only the Gamma-point for a supercell containing (3,3) sites in the transverse plane. An insufficient k-point sampling can introduce strong but unphysical features in the transmission function which can be traced to the presence of van Hove singularities in the lead. We present a first-principles calculation of the transmission through a Pt contact which shows that the k-point sampling is also important for realistic systems.

  5. AMCO Scribe Sampling Data Points, Oakland CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This feature class contains points depicting archived sampling data for Vinyl Chloride, Trichloroethene (TCE), and Tetrachloroethene (PCE) for the R09 AMCO-OTIE...

  6. Evaluation of mixing downstream of tees in duct systems with respect to single point representative air sampling.

    Science.gov (United States)

    Kim, Taehong; O'Neal, Dennis L; Ortiz, Carlos

    2006-09-01

    Air duct systems in nuclear facilities must be monitored with continuous sampling in case of an accidental release of airborne radionuclides. The purpose of this work is to identify the air sampling locations where the velocity and contaminant concentration profiles meet the 20% coefficient of variation criterion required by the American National Standards Institute/Health Physics Society N13.1-1999. Experiments of velocity and tracer gas concentration were conducted on a generic "T" mixing system which included combinations of three sub ducts, one main duct, and air velocities from 0.5 to 2 m s (100 to 400 fpm). The experimental results suggest that turbulent mixing provides acceptable velocity coefficients of variation after 6 hydraulic diameters downstream of the T-junction. About 95% of the cases achieved coefficients of variation below 10% by 6 hydraulic diameters. However, above a velocity ratio (velocity in the sub duct/velocity in the main duct) of 2, velocity profiles became uniform over a shorter distance downstream of the T-junction as the velocity ratio increased. For the tracer gas concentration, the distance needed for the coefficients of variation to drop below 20% decreased with increasing velocity ratio due to the sub duct airflow momentum. The results may apply to other duct systems with similar geometries and, ultimately, be a basis for selecting a proper sampling location under the requirements of single point representative sampling.
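
    A trivial helper showing the acceptance check implied by the 20% coefficient-of-variation criterion, applied to hypothetical velocity (or tracer concentration) readings from a multi-point traverse at a candidate sampling plane; the readings are made up for illustration.

```python
import numpy as np

def coefficient_of_variation(readings):
    """Sample coefficient of variation (%) across a traverse of point measurements."""
    readings = np.asarray(readings, dtype=float)
    return 100.0 * readings.std(ddof=1) / readings.mean()

# Hypothetical velocities (m/s) measured across a duct cross-section
# six hydraulic diameters downstream of the tee.
traverse = [1.02, 0.97, 1.05, 0.99, 1.01, 0.95, 1.04, 0.98]
cov = coefficient_of_variation(traverse)
verdict = "acceptable" if cov <= 20.0 else "not acceptable"
print(f"COV = {cov:.1f}% -> {verdict} under the 20% criterion")
```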

  7. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  8. 21 CFR 111.83 - What are the requirements for reserve samples?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What are the requirements for reserve samples? 111..., PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Requirement to Establish a Production and Process Control System § 111.83 What are the requirements for reserve samples? (a) You must collect and...

  9. Comparison of T-Square, Point Centered Quarter, and N-Tree Sampling Methods in Pittosporum undulatum Invaded Woodlands

    Directory of Open Access Journals (Sweden)

    Lurdes Borges Silva

    2017-01-01

    Tree density is an important parameter affecting ecosystem functions and management decisions, while tree distribution patterns affect sampling design. Pittosporum undulatum stands in the Azores are being targeted with a biomass valorization program, for which efficient tree density estimators are required. We compared T-Square sampling, the Point Centered Quarter Method (PCQM), and N-tree sampling with benchmark quadrat (QD) sampling in six 900 m² plots established at P. undulatum stands on São Miguel Island. A total of 15 estimators were tested using a data resampling approach. The estimated density range (344–5056 trees/ha) was found to agree with previous studies using PCQM only. Although with a tendency to underestimate tree density (in comparison with QD), overall, T-Square sampling appeared to be the most accurate and precise method, followed by PCQM. The tree distribution pattern was found to be slightly aggregated in 4 of the 6 stands. Considering (1) the low level of bias and high precision, (2) the consistency among three estimators, (3) the possibility of use with aggregated patterns, and (4) the possibility of obtaining a larger number of independent tree parameter estimates, we recommend the use of T-Square sampling in P. undulatum stands within the framework of a biomass valorization program.
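
    For reference, one commonly cited form of the T-square density estimator (Byth's compound estimator) is sketched below; the constant and functional form should be verified against a sampling-methods text before use, and the distances are invented, since the article's data are not reproduced here.

```python
import math

def t_square_density(point_to_tree, tree_to_neighbour):
    """Byth-type T-square density estimate (trees per unit area) from n random
    points: x_i = distance from point to nearest tree, z_i = T-square distance
    from that tree to its nearest neighbour in the half-plane away from the point."""
    n = len(point_to_tree)
    return n * n / (2.0 * math.sqrt(2.0) * sum(point_to_tree) * sum(tree_to_neighbour))

# Hypothetical distances in metres from 8 sample points.
x = [1.4, 2.1, 0.9, 1.7, 2.5, 1.1, 1.8, 1.3]
z = [2.0, 1.6, 2.4, 1.9, 1.2, 2.8, 1.5, 2.2]
print(f"estimated density: {10000 * t_square_density(x, z):.0f} trees/ha")
```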

  10. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  11. Ultra-High-Throughput Sample Preparation System for Lymphocyte Immunophenotyping Point-of-Care Diagnostics.

    Science.gov (United States)

    Walsh, David I; Murthy, Shashi K; Russom, Aman

    2016-10-01

    Point-of-care (POC) microfluidic devices often lack the integration of common sample preparation steps, such as preconcentration, which can limit their utility in the field. In this technology brief, we describe a system that combines the necessary sample preparation methods to perform sample-to-result analysis of large-volume (20 mL) biopsy model samples with staining of captured cells. Our platform combines centrifugal-paper microfluidic filtration and an analysis system to process large, dilute biological samples. Utilizing commercialization-friendly manufacturing methods and materials, yielding a sample throughput of 20 mL/min, and allowing for on-chip staining and imaging bring together a practical, yet powerful approach to microfluidic diagnostics of large, dilute samples. © 2016 Society for Laboratory Automation and Screening.

  12. Combining Electrochemical Sensors with Miniaturized Sample Preparation for Rapid Detection in Clinical Samples

    Science.gov (United States)

    Bunyakul, Natinan; Baeumner, Antje J.

    2015-01-01

    Clinical analyses benefit world-wide from rapid and reliable diagnostics tests. New tests are sought with greatest demand not only for new analytes, but also to reduce costs, complexity and lengthy analysis times of current techniques. Among the myriad of possibilities available today to develop new test systems, amperometric biosensors are prominent players—best represented by the ubiquitous amperometric-based glucose sensors. Electrochemical approaches in general require little and often enough only simple hardware components, are rugged and yet provide low limits of detection. They thus offer many of the desirable attributes for point-of-care/point-of-need tests. This review focuses on investigating the important integration of sample preparation with (primarily electrochemical) biosensors. Sample clean up requirements, miniaturized sample preparation strategies, and their potential integration with sensors will be discussed, focusing on clinical sample analyses. PMID:25558994

  13. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.

  14. Pointing stability of Hinode and requirements for the next Solar mission Solar-C

    Science.gov (United States)

    Katsukawa, Y.; Masada, Y.; Shimizu, T.; Sakai, S.; Ichimoto, K.

    2017-11-01

    It is essential to achieve fine pointing stability in a space mission aiming for high-resolution observations. For the future Japanese solar mission SOLAR-C, the successor of the HINODE (SOLAR-B) mission, we set targets of angular resolution better than 0.1 arcsec in visible light and better than 0.2 - 0.5 arcsec in EUV and X-rays. These resolutions are two to five times better than those of the corresponding instruments onboard HINODE. To identify critical items for achieving the pointing stability requirements of SOLAR-C, we assessed the in-flight pointing stability of HINODE, which achieved the highest pointing stability among Japanese space missions. We found that one of the critical items that has to be improved for SOLAR-C is the attitude stability near the upper limit of the frequency range of the attitude control system. A stability of 0.1 arcsec (3σ) is required for the EUV and X-ray telescopes of SOLAR-C, while the HINODE performance is slightly worse than this requirement. The visible light telescope of HINODE is equipped with an image stabilization system inside the telescope, which achieved a stability of 0.03 arcsec (3σ) by suppressing the attitude jitter in the frequency range below 10 Hz. Further improvement will require suppressing disturbances induced by resonance of the telescope structures and by momentum wheels and mechanical gyros in the frequency range above 100 Hz.

  15. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  16. Health information needs of professional nurses required at the point of care

    Directory of Open Access Journals (Sweden)

    Esmeralda Ricks

    2015-06-01

    Conclusion: This study has enabled the researcher to identify the information needs required by professional nurses at the point of care to enhance the delivery of patient care. The research results were used to develop a mobile library that could be accessed by professional nurses.

  17. Accuracy of micro four-point probe measurements on inhomogeneous samples: A probe spacing dependence study

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Østerberg, Frederik Westergaard

    2009-01-01

    In this paper, we discuss a probe spacing dependence study in order to estimate the accuracy of micro four-point probe measurements on inhomogeneous samples. Based on sensitivity calculations, both sheet resistance and Hall effect measurements are studied for samples (e.g. laser annealed samples) with periodic variations of sheet resistance, sheet carrier density, and carrier mobility. With a variation wavelength of λ, probe spacings from 0.001λ to 100λ have been applied to characterize the local variations. The calculations show that the measurement error is highly dependent on the probe spacing. When the probe spacing is smaller than 1/40 of the variation wavelength, micro four-point probes can provide an accurate record of local properties with less than 1% measurement error. All the calculations agree well with previous experimental results.

  18. An integrated paper-based sample-to-answer biosensor for nucleic acid testing at the point of care.

    Science.gov (United States)

    Choi, Jane Ru; Hu, Jie; Tang, Ruihua; Gong, Yan; Feng, Shangsheng; Ren, Hui; Wen, Ting; Li, XiuJun; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng

    2016-02-07

    With advances in point-of-care testing (POCT), lateral flow assays (LFAs) have been explored for nucleic acid detection. However, biological samples generally contain complex compositions and low amounts of target nucleic acids, and currently require laborious off-chip nucleic acid extraction and amplification processes (e.g., tube-based extraction and polymerase chain reaction (PCR)) prior to detection. To the best of our knowledge, even though the integration of DNA extraction and amplification into a paper-based biosensor has been reported, a combination of LFA with the aforementioned steps for simple colorimetric readout has not yet been demonstrated. Here, we demonstrate for the first time an integrated paper-based biosensor incorporating nucleic acid extraction, amplification and visual detection or quantification using a smartphone. A handheld battery-powered heating device was specially developed for nucleic acid amplification in POC settings, which is coupled with this simple assay for rapid target detection. The biosensor can successfully detect Escherichia coli (as a model analyte) in spiked drinking water, milk, blood, and spinach with a detection limit of as low as 10-1000 CFU mL(-1), and Streptococcus pneumoniae in clinical blood samples, highlighting its potential use in medical diagnostics, food safety analysis and environmental monitoring. As compared to the lengthy conventional assay, which requires more than 5 hours for the entire sample-to-answer process, it takes about 1 hour for our integrated biosensor. The integrated biosensor holds great potential for detection of various target analytes for wide applications in the near future.

  19. 21 CFR 203.32 - Drug sample storage and handling requirements.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drug sample storage and handling requirements. 203.32 Section 203.32 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN... contamination, deterioration, and adulteration. (b) Compliance with compendial and labeling requirements...

  20. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the conformity index COIN and COIN integrals are derived from the DVHs. This is achieved by using partially uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in these regions. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
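
    The core idea can be illustrated with a short, self-contained sketch: sampling points are allotted to coarse sub-regions of the dose grid in proportion to the local dose variance, and the cumulative DVH is estimated from the pooled samples. This is a simplified illustration only; it samples voxel values per region rather than continuous positions, and the region shapes, weights and synthetic dose grid below are made up, not the published implementation.

      # Variance-weighted stratified sampling for a cumulative DVH (sketch).
      import numpy as np

      rng = np.random.default_rng(0)
      dose_grid = rng.gamma(2.0, 20.0, size=(40, 40, 40))   # stand-in dose values (Gy)

      def region_weights(grid, blocks=4):
          # Split the volume into coarse regions, weighted by their dose variance.
          regions = []
          step = grid.shape[0] // blocks
          for i in range(blocks):
              for j in range(blocks):
                  for k in range(blocks):
                      sl = (slice(i * step, (i + 1) * step),
                            slice(j * step, (j + 1) * step),
                            slice(k * step, (k + 1) * step))
                      regions.append((sl, grid[sl].var() + 1e-6))
          return regions

      def stratified_dvh(grid, n_points, bins):
          regions = region_weights(grid)
          total = sum(w for _, w in regions)
          samples = []
          for sl, w in regions:
              n = max(1, int(round(n_points * w / total)))  # sampling density follows variance
              samples.append(rng.choice(grid[sl].ravel(), size=n))
          doses = np.concatenate(samples)
          # Cumulative DVH: fraction of sampled volume receiving at least each dose level.
          return np.array([(doses >= b).mean() for b in bins])

      bins = np.linspace(0, dose_grid.max(), 50)
      print(stratified_dvh(dose_grid, n_points=5000, bins=bins)[:5])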

  1. Analysis of parallel optical sampling rate and ADC requirements in digital coherent receivers

    DEFF Research Database (Denmark)

    Lorences Riesgo, Abel; Galili, Michael; Peucheret, Christophe

    2012-01-01

    We comprehensively assess analog-to-digital converter requirements in coherent digital receiver schemes with parallel optical sampling. We determine the electronic requirements in accordance with the properties of the free-running local oscillator.

  2. Cloud point extraction, preconcentration and spectrophotometric determination of nickel in water samples using dimethylglyoxime

    Directory of Open Access Journals (Sweden)

    Morteza Bahram

    2013-01-01

    Full Text Available A new and simple method for the preconcentration and spectrophotometric determination of trace amounts of nickel was developed by cloud point extraction (CPE). In the proposed work, dimethylglyoxime (DMG) was used as the chelating agent and Triton X-114 was selected as a non-ionic surfactant for CPE. The parameters affecting the cloud point extraction including the pH of sample solution, concentration of the chelating agent and surfactant, equilibration temperature and time were optimized. Under the optimum conditions, the calibration graph was linear in the range of 10-150 ng mL-1 with a detection limit of 4 ng mL-1. The relative standard deviation for 9 replicates of 100 ng mL-1 Ni(II) was 1.04%. The interference effect of some anions and cations was studied. The method was applied to the determination of Ni(II) in water samples with satisfactory results.

  3. Sample to answer visualization pipeline for low-cost point-of-care blood cell counting

    Science.gov (United States)

    Smith, Suzanne; Naidoo, Thegaran; Davies, Emlyn; Fourie, Louis; Nxumalo, Zandile; Swart, Hein; Marais, Philip; Land, Kevin; Roux, Pieter

    2015-03-01

    We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare solutions [1], and can be particularly beneficial for blood cell count tests, which are often the starting point in the process of diagnosing a patient [2]. The initial focus of this work is on total white and red blood cell counts, using a microfluidic cartridge [3] for sample processing. Analysis of the processed samples has been implemented by means of two main optical visualization systems developed in-house: 1) a fluidic operation analysis system using high speed video data to determine volumes, mixing efficiency and flow rates, and 2) a microscopy analysis system to investigate homogeneity and concentration of blood cells. Fluidic parameters were derived from the optical flow [4] as well as color-based segmentation of the different fluids using a hue-saturation-value (HSV) color space. Cell count estimates were obtained using automated microscopy analysis and were compared to a widely accepted manual method for cell counting using a hemocytometer [5]. The results using the first iteration microfluidic device [3] showed that the most simple - and thus low-cost - approach for microfluidic component implementation was not adequate as compared to techniques based on manual cell counting principles. An improved microfluidic design has been developed to incorporate enhanced mixing and metering components, which together with this work provides the foundation on which to successfully implement automated, rapid and low-cost blood cell counting tests.
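
    A minimal sketch of the colour-based fluid segmentation step described above is given below: a frame is converted to HSV and pixels inside a hue/saturation window are counted as the dyed fluid. The synthetic frame and the hue window are illustrative assumptions, not the parameters used in the reported pipeline.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      rng = np.random.default_rng(4)
      frame = rng.uniform(0.7, 0.9, size=(120, 160, 3))      # bright background
      frame[40:80, 30:130] = [0.7, 0.1, 0.1]                  # reddish "fluid" slug

      hsv = rgb_to_hsv(frame)
      hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

      # Red hues wrap around 0, so accept both ends of the hue axis.
      mask = ((hue < 0.05) | (hue > 0.95)) & (sat > 0.4) & (val > 0.2)
      print(f"segmented fluid covers {mask.mean():.1%} of the frame")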

  4. Fiscal Year 2001 Tank Characterization Technical Sampling Basis and Waste Information Requirements Document

    International Nuclear Information System (INIS)

    ADAMS, M.R.

    2000-01-01

    The Fiscal Year 2001 Tank Characterization Technical Sampling Basis and Waste Information Requirements Document (TSB-WIRD) has the following purposes: (1) To identify and integrate sampling and analysis needs for fiscal year (FY) 2001 and beyond. (2) To describe the overall drivers that require characterization information and to document their source. (3) To describe the process for identifying, prioritizing, and weighting issues that require characterization information to resolve. (4) To define the method for determining sampling priorities and to present the sampling priorities on a tank-by-tank basis. (5) To define how the characterization program is going to satisfy the drivers, close issues, and report progress. (6) To describe deliverables and acceptance criteria for characterization deliverables.

  5. Naïve Point Estimation

    Science.gov (United States)

    Lindskog, Marcus; Winman, Anders; Juslin, Peter

    2013-01-01

    The capacity of short-term memory is a key constraint when people make online judgments requiring them to rely on samples retrieved from memory (e.g., Dougherty & Hunter, 2003). In this article, the authors compare 2 accounts of how people use knowledge of statistical distributions to make point estimates: either by retrieving precomputed…

  6. 40 CFR 80.1348 - What gasoline sample retention requirements apply to refiners and importers?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Benzene Sampling, Testing and Retention Requirements § 80.1348 What gasoline sample retention requirements...

  7. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    Science.gov (United States)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
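
    The voxel dimensionality step can be sketched compactly: points are binned into voxels and each voxel is labelled linear, planar or volumetric from the eigenvalues of its local covariance. The voxel size, thresholds and the synthetic planar patch below are illustrative assumptions, not the parameters of the published method.

      import numpy as np

      def voxel_dimensionality(points, voxel_size=0.5):
          voxels = {}
          for key, p in zip(map(tuple, np.floor(points / voxel_size).astype(int)), points):
              voxels.setdefault(key, []).append(p)
          labels = {}
          for key, pts in voxels.items():
              pts = np.asarray(pts)
              if len(pts) < 4:
                  continue
              evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
              l1, l2, l3 = evals / evals.sum()          # normalised, l1 >= l2 >= l3
              if l1 > 0.8:
                  labels[key] = "linear"                # one dominant direction
              elif l3 < 0.05:
                  labels[key] = "planar"                # thin in one direction
              else:
                  labels[key] = "volumetric"
          return labels

      rng = np.random.default_rng(1)
      patch = np.column_stack([rng.uniform(0, 5, 2000), rng.uniform(0, 5, 2000),
                               rng.normal(0, 0.01, 2000)])       # noisy planar patch
      print(set(voxel_dimensionality(patch).values()))           # expected: {'planar'}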

  8. Intraosseous blood samples for point-of-care analysis: agreement between intraosseous and arterial analyses.

    Science.gov (United States)

    Jousi, Milla; Saikko, Simo; Nurmi, Jouni

    2017-09-11

    Point-of-care (POC) testing is highly useful when treating critically ill patients. In case of difficult vascular access, the intraosseous (IO) route is commonly used, and blood is aspirated to confirm the correct position of the IO-needle. Thus, IO blood samples could be easily accessed for POC analyses in emergency situations. The aim of this study was to determine whether IO values agree sufficiently with arterial values to be used for clinical decision making. Two samples of IO blood were drawn from 31 healthy volunteers and compared with arterial samples. The samples were analysed for sodium, potassium, ionized calcium, glucose, haemoglobin, haematocrit, pH, blood gases, base excess, bicarbonate, and lactate using the i-STAT® POC device. Agreement and reliability were estimated by using the Bland-Altman method and intraclass correlation coefficient calculations. Good agreement was evident between the IO and arterial samples for pH, glucose, and lactate. Potassium levels were clearly higher in the IO samples than those from arterial blood. Base excess and bicarbonate were slightly higher, and sodium and ionised calcium values were slightly lower, in the IO samples compared with the arterial values. The blood gases in the IO samples were between arterial and venous values. Haemoglobin and haematocrit showed remarkable variation in agreement. POC diagnostics of IO blood can be a useful tool to guide treatment in critical emergency care. Seeking out the reversible causes of cardiac arrest or assessing the severity of shock are examples of situations in which obtaining vascular access and blood samples can be difficult, though information about the electrolytes, acid-base balance, and lactate could guide clinical decision making. The analysis of IO samples should though be limited to situations in which no other option is available, and the results should be interpreted with caution, because there is not yet enough scientific evidence regarding the agreement of IO

  9. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
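
    The grid-based idea can be sketched as follows: calibration poses are placed on a uniform grid, and the positional error at an arbitrary target pose is predicted by inverse-distance weighting of the measured grid errors, a simple stand-in for the error-similarity model discussed above. The workspace size, grid step and synthetic error field are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      axes = [np.arange(0.0, 601.0, 100.0)] * 3                    # grid step 100 mm
      grid = np.array(np.meshgrid(*axes, indexing="ij")).reshape(3, -1).T

      true_error = lambda p: 0.3 * np.sin(p[:, 0] / 400) + 0.2 * np.cos(p[:, 1] / 300)
      grid_err = true_error(grid) + rng.normal(0, 0.02, len(grid))  # "measured" errors

      def predict_error(targets, grid, grid_err, power=2):
          # Inverse-distance weighting: nearby grid errors dominate the prediction.
          pred = []
          for t in targets:
              d = np.linalg.norm(grid - t, axis=1)
              w = 1.0 / np.maximum(d, 1e-9) ** power
              pred.append(np.sum(w * grid_err) / np.sum(w))
          return np.array(pred)

      targets = rng.uniform(0, 600, size=(5, 3))
      print(np.round(predict_error(targets, grid, grid_err), 3))
      print(np.round(true_error(targets), 3))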

  10. Uncertainty analysis of point by point sampling complex surfaces using touch probe CMMs

    DEFF Research Database (Denmark)

    Barini, Emanuele; Tosello, Guido; De Chiffre, Leonardo

    2007-01-01

    The paper describes a study concerning point by point scanning of complex surfaces using tactile CMMs. A four factors-two level full factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, combined in a singl...

  11. 21 CFR 111.465 - What requirements apply to holding reserve samples of dietary supplements?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to holding reserve samples... Distributing § 111.465 What requirements apply to holding reserve samples of dietary supplements? (a) You must hold reserve samples of dietary supplements in a manner that protects against contamination and...

  12. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
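
    A minimal MUSIC sketch for point scatterers is shown below. A free-space 2-D Green function is used as a stand-in for the substrate (half-space) Green function of the paper, and the geometry, wavenumber and scatterer positions are illustrative assumptions only.

      import numpy as np
      from scipy.special import hankel1

      k = 2 * np.pi                                              # wavenumber
      receivers = np.column_stack([np.linspace(-2, 2, 16), np.full(16, 3.0)])
      scatterers = np.array([[0.5, 0.0], [-0.8, 0.2]])

      def green(x, y):
          return 0.25j * hankel1(0, k * np.linalg.norm(x - y))

      # Born-approximation multistatic response matrix (receivers double as sources).
      A = np.zeros((len(receivers), len(receivers)), dtype=complex)
      for s in scatterers:
          g = np.array([green(rx, s) for rx in receivers])
          A += np.outer(g, g)

      # The noise subspace of the response matrix is orthogonal to the scatterer vectors.
      U, _, _ = np.linalg.svd(A)
      noise = U[:, len(scatterers):]

      def pseudospectrum(x):
          g = np.array([green(rx, np.array([x, 0.0])) for rx in receivers])
          return 1.0 / np.linalg.norm(noise.conj().T @ g)

      xs = np.linspace(-2, 2, 81)
      peak = max(xs, key=pseudospectrum)                         # sharp peak at a scatterer
      print("strongest pseudospectrum peak near x =", round(float(peak), 2))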

  13. Implications of Microwave Holography Using Minimum Required Frequency Samples for Weakly- and Strongly-Scattering Indications

    Science.gov (United States)

    Fallahpour, M.; Case, J. T.; Kharkovsky, S.; Zoughi, R.

    2010-01-01

    Microwave imaging techniques, an integral component of nondestructive testing and evaluation (NDTE), have received significant attention in the past decade. These techniques have included the implementation of synthetic aperture focusing (SAF) algorithms for obtaining high spatial resolution images. The next important step in these developments is the implementation of 3-D holographic imaging algorithms. These are well-known wideband imaging techniques that require swept-frequency (i.e., wideband) measurements and, unlike SAF, which is a single-frequency technique, are not easily performed on a real-time basis. This is due to the fact that a significant number of data points (in the frequency domain) must be obtained within the frequency band of interest. This not only makes for a complex imaging system design, it also significantly increases the image-production time. Consequently, in an attempt to reduce the measurement time and system complexity, an investigation was conducted to determine the minimum required number of frequency samples needed to image a specific object while preserving a desired maximum measurement range and range resolution. To this end the 3-D holographic algorithm was modified to use properly interpolated frequency data. Measurements of the complex reflection coefficient for several samples were conducted using a swept-frequency approach. Subsequently, holographic images were generated using data containing a relatively large number of frequency samples and were compared with images generated from the reduced data sets. Quantitative metrics such as average, contrast, and signal-to-noise ratio were used to evaluate the quality of images generated using reduced data sets. Furthermore, this approach was applied to both weakly- and strongly-scattering indications. This paper presents the methods used and the results of this investigation.

  14. Communication: Newton homotopies for sampling stationary points of potential energy landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Mehta, Dhagash, E-mail: dmehta@nd.edu [Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); University Chemical Laboratory, The University of Cambridge, Cambridge CB2 1EW (United Kingdom); Chen, Tianran, E-mail: chentia1@msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48823 (United States); Hauenstein, Jonathan D., E-mail: hauenstein@nd.edu [Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Wales, David J., E-mail: dw34@cam.ac.uk [University Chemical Laboratory, The University of Cambridge, Cambridge CB2 1EW (United Kingdom)

    2014-09-28

    One of the most challenging and frequently arising problems in many areas of science is to find solutions of a system of multivariate nonlinear equations. There are several numerical methods that can find many (or all if the system is small enough) solutions but they all exhibit characteristic problems. Moreover, traditional methods can break down if the system contains singular solutions. Here, we propose an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials. We demonstrate how the procedure works by applying it to the nearest-neighbor ϕ^4 model and atomic clusters.

  15. Communication: Newton homotopies for sampling stationary points of potential energy landscapes

    International Nuclear Information System (INIS)

    Mehta, Dhagash; Chen, Tianran; Hauenstein, Jonathan D.; Wales, David J.

    2014-01-01

    One of the most challenging and frequently arising problems in many areas of science is to find solutions of a system of multivariate nonlinear equations. There are several numerical methods that can find many (or all if the system is small enough) solutions but they all exhibit characteristic problems. Moreover, traditional methods can break down if the system contains singular solutions. Here, we propose an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials. We demonstrate how the procedure works by applying it to the nearest-neighbor ϕ^4 model and atomic clusters

  16. Instrument air dew point requirements -- 108-P, L, K

    International Nuclear Information System (INIS)

    Fairchild, P.N.

    1994-01-01

    The 108 Building dew point analyzers measure dew point at atmospheric pressure. Existing 108 Roundsheets state the maximum dew point temperature shall be less than -50 F. After repeatedly failing to maintain a -50 F dew point temperature, Reactor Engineering researched the basis for the existing limit. This report documents the results of the study and provides technical justification for a new maximum dew point temperature of -35 F at atmospheric pressure as read by the 108 Building dew point analyzers.

  17. Coarse point cloud registration by EGI matching of voxel clusters

    NARCIS (Netherlands)

    Wang, J.; Lindenbergh, R.C.; Shen, Y.; Menenti, M.

    2016-01-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The

  18. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Enviromental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A handy method of quantifying regional cerebral blood flow (rCBF) measurement by 123I-IMP SPECT was designed. A standard input function was made, and the sampling time used to calibrate this standard input function by one-point sampling was optimized. An average standard input function was obtained from continuous arterial samplings of 12 healthy adults. The best sampling time was the one that minimized the difference between the integral of the standard input function calibrated by one-point sampling and the input function obtained from continuous arterial sampling. This time was 8 minutes after an intravenous injection of 123I-IMP, and the error was estimated to be ±4.1%. The rCBF values by this method were evaluated by comparing them with the rCBF values of the input function with continuous arterial samplings in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between both. (author).

  19. 40 CFR 63.1583 - What are the emission points and control requirements for an industrial POTW treatment plant?

    Science.gov (United States)

    2010-07-01

    ... control requirements for an industrial POTW treatment plant? 63.1583 Section 63.1583 Protection of... Pollutants: Publicly Owned Treatment Works Industrial Potw Treatment Plant Description and Requirements § 63.1583 What are the emission points and control requirements for an industrial POTW treatment plant? (a...

  20. Mars Sample Return: Mars Ascent Vehicle Mission and Technology Requirements

    Science.gov (United States)

    Bowles, Jeffrey V.; Huynh, Loc C.; Hawke, Veronica M.; Jiang, Xun J.

    2013-01-01

    A Mars Sample Return mission is the highest priority science mission for the next decade recommended by the recent Decadal Survey of Planetary Science, the key community input process that guides NASA's science missions. A feasibility study was conducted of a potentially simple and low-cost approach to a Mars Sample Return mission enabled by the use of developing commercial capabilities. Previous studies of MSR have shown that landing an all-up sample return mission with a high mass capacity lander is a cost-effective approach. The approach proposed is the use of an emerging commercially available capsule to land the launch vehicle system that would return samples to Earth. This paper describes the impact of the mission and technology requirements on the launch vehicle system design, referred to as the Mars Ascent Vehicle (MAV).

  1. Single point aerosol sampling: Evaluation of mixing and probe performance in a nuclear stack

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, J.C.; Fairchild, C.I.; Wood, G.O. [Los Alamos National Laboratory, NM (United States)] [and others

    1995-02-01

    Alternative Reference Methodologies (ARMs) have been developed for sampling of radionuclides from stacks and ducts that differ from the methods required by the U.S. EPA. The EPA methods are prescriptive in the selection of sampling locations and in the design of sampling probes, whereas the alternative methods are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of the ARMs. Coefficients of variation of the velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that the numerical criteria placed upon the coefficients of variation by the ARMs were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10 μm aerodynamic equivalent diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L/min (4-cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the EPA requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the ARM criteria; however, the isokinetic probes would not.

  2. Evaluation of factor for one-point venous blood sampling method based on the causality model

    International Nuclear Information System (INIS)

    Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki

    2009-01-01

    The one-point venous blood sampling method (Mimura, et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the method involves a complex technique because it requires a venous blood octanol value, and its accuracy is affected by the factors used in the input function. Therefore, we evaluated the factors used for the input function in order to determine an accurate input function and to simplify the technique. Input functions were created that use the time-dependent brain counts at 5 minutes, 15 minutes, and 25 minutes after administration, together with an input function in which the arterial octanol value is used as the objective variable so that the venous blood octanol value can be excluded. The correlation between these functions and the rCBF value obtained by the microsphere (MS) method was then evaluated. Creation of a high-accuracy input function and simplification of the technique are possible. The rCBF value obtained with the input function whose factor is the time-dependent brain count at 5 minutes after administration and whose objective variable is the arterial octanol value had a high correlation with the MS method (y=0.899x+4.653, r=0.842). (author)

  3. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  4. Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction

    Science.gov (United States)

    Khalafi, Lida; Doolittle, Pamela; Wright, John

    2018-01-01

    A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…

  5. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    Science.gov (United States)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping. Sampsa Hamalainen, Xiaoyuan Geng, and Juanxia He. AAFC - Agriculture and Agri-Food Canada, Ottawa, Canada. The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been developed for some time now; however, the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced as a result of a digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, Topographic Wetness Index (TWI), Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates used in this work ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the Latin Hypercube Sampling approach. Initial results of the work include using 1000 iterations within the LHS model; 1000 iterations was consistently a reasonable value used to produce sampling points that provided a good spatial representation of the environmental
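
    A much-simplified conditioned-LHS sketch of this idea is shown below: a set of cells is chosen from covariate rasters so that the sample spreads across the quantile strata of every covariate, with a crude random search standing in for the annealing used in full conditioned LHS. The covariates and sizes are synthetic, illustrative assumptions only.

      import numpy as np

      rng = np.random.default_rng(42)
      n_cells, n_samples = 5000, 50
      covariates = np.column_stack([rng.normal(size=n_cells),      # e.g. TWI
                                    rng.gamma(2.0, size=n_cells),  # e.g. slope
                                    rng.uniform(size=n_cells)])    # e.g. LS factor

      # Quantile stratum index of every cell for every covariate.
      edges = [np.quantile(c, np.linspace(0, 1, n_samples + 1)) for c in covariates.T]
      strata = np.stack([np.clip(np.searchsorted(e, c) - 1, 0, n_samples - 1)
                         for e, c in zip(edges, covariates.T)], axis=1)

      def score(idx):
          # Ideal Latin hypercube: each stratum of each covariate used exactly once.
          return sum(np.abs(np.bincount(strata[idx, j], minlength=n_samples) - 1).sum()
                     for j in range(covariates.shape[1]))

      best = rng.choice(n_cells, n_samples, replace=False)
      for _ in range(2000):                              # crude random search
          cand = best.copy()
          cand[rng.integers(n_samples)] = rng.integers(n_cells)
          if len(set(cand)) == n_samples and score(cand) < score(best):
              best = cand
      print("stratum mismatch score:", score(best))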

  6. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
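
    The variance inflation caused by placement errors can be illustrated with a toy Monte Carlo: the integral of a 1-D function is estimated by systematic sampling with a random start, with and without random errors in the placement of the sample points. The test function and error sizes are made-up assumptions, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      f = lambda x: 1.0 + 0.5 * np.sin(12 * np.pi * x)   # target function on [0, 1]
      n, spacing, reps = 20, 1.0 / 20, 20000

      def estimate(jitter_sd):
          u = rng.uniform(0, spacing)                    # random start of the grid
          x = (u + spacing * np.arange(n) + rng.normal(0, jitter_sd, n)) % 1.0
          return spacing * f(x).sum()                    # systematic sampling estimator

      for sd in (0.0, 0.1 * spacing, 0.5 * spacing):
          est = np.array([estimate(sd) for _ in range(reps)])
          print(f"placement error sd = {sd:.4f}: variance = {est.var():.2e}")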

  7. Technical assessment of compliance with work place air sampling requirements at T Plant. Revision No. 1

    International Nuclear Information System (INIS)

    Hackworth, M.F.

    1995-01-01

    The US DOE requires its contractors to conduct air sampling to detect and evaluate airborne radioactive material in the workplace. Hanford Reservation T Plant compliance with workplace air sampling requirements has been assessed. Requirements, basis for determining compliance and recommendations are included

  8. Technical assessment of compliance with workplace air sampling requirements in the 300 Area

    International Nuclear Information System (INIS)

    Olsen, P.A.

    1995-01-01

    The purpose of this Technical Work Document is to satisfy HSRCM-1, the ''Hanford Site Radiological Control Manual.'' Article 551.4 of that manual states a requirement for a documented study of facility workplace air sampling programs (WPAS). This first revision of the original Supporting Document covers the period from January 1, 1995 to December 31, 1995. HSRCM-1 is the primary guidance for radiological control at Westinghouse Hanford Company (WHC). It was written to implement DOE/EH-0256T ''US Department of Energy Radiological Control Manual'' as it applies to programs at Hanford. As such, it complies with Title 10, Part 835 of the Code of Federal Regulations. There are also several Department of Energy (DOE) Orders, national consensus standards, and reports that provide criteria, standards, and requirements for workplace air sampling programs. This document provides a summary of these, as they apply to WHC facility workplace air sampling programs. This document also provides an evaluation of the compliance of 300 Areas' workplace air sampling program to the criteria, standards, and requirements and documents compliance with the requirements where appropriate. Where necessary, it also indicates changes needed to bring specific locations into compliance. The areas evaluated were the 340 Facility, the Advanced Reactor Operations Division Facilities, the N Reactor Fuels Supply Facility, and The Geotechnical Engineering Laboratory

  9. Blister pouches for effective reagent storage and release for low-cost point-of-care diagnostic applications

    CSIR Research Space (South Africa)

    Smith, S

    2016-02-01

    Full Text Available Lab-on-a-chip devices are often applied to point-of-care diagnostic solutions as they are low-cost, compact, disposable, and require only small sample volumes. For such devices, various reagents are required for sample preparation and analysis and...

  10. Benthic faunal sampling adjacent to the Barbers Point ocean outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size...

  11. Technical assessment of workplace air sampling requirements at tank farm facilities. Revision 1

    International Nuclear Information System (INIS)

    Olsen, P.A.

    1994-01-01

    WHC-CM-1-6 is the primary guidance for radiological control at Westinghouse Hanford Company (WHC). It was written to implement DOE N 5480.6 ''US Department of Energy Radiological Control Manual'' as it applies to programs at Hanford which are now overseen by WHC. As such, it complies with Title 10, Part 835 of the Code of Federal Regulations. In addition to WHC-CM-1-6, there is HSRCM-1, the ''Hanford Site Radiological Control Manual'' and several Department of Energy (DOE) Orders, national consensus standards, and reports that provide criteria, standards, and requirements for workplace air sampling programs. This document provides a summary of these, as they apply to WHC facility workplace air sampling programs. This document also provides an evaluation of the compliance of Tank Farms' workplace air sampling program to the criteria, standards, and requirements and documents compliance with the requirements where appropriate. Where necessary, it also indicates changes needed to bring specific locations into compliance

  12. The requirement for proper storage of nuclear and related decommissioning samples to safeguard accuracy of tritium data.

    Science.gov (United States)

    Kim, Daeji; Croudace, Ian W; Warwick, Phillip E

    2012-04-30

    Large volumes of potentially tritium-contaminated waste materials are generated during nuclear decommissioning that require accurate characterisation prior to final waste sentencing. The practice of initially determining a radionuclide waste fingerprint for materials from an operational area is often used to save time and money but tritium cannot be included because of its tendency to be chemically mobile. This mobility demands a specific measurement for tritium and also poses a challenge in terms of sampling, storage and reliable analysis. This study shows that the extent of any tritium redistribution during storage will depend on its form or speciation and the physical conditions of storage. Any weakly or moderately bound tritium (e.g. adsorbed water, waters of hydration or crystallisation) may be variably lost at temperatures over the range 100-300 °C whereas for more strongly bound tritium (e.g. chemically bound or held in mineral lattices) the liberation temperature can be delayed up to 800 °C. For tritium that is weakly held the emanation behaviour at different temperatures becomes particularly important. The degree of (3)H loss and cross-contamination that can arise after sampling and before analysis can be reduced by appropriate storage. Storing samples in vapour tight containers at the point of sampling, the use of triple enclosures, segregating high activity samples and using a freezer all lead to good analytical practice. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Sulfonate-terminated carbosilane dendron-coated nanotubes: a greener point of view in protein sample preparation.

    Science.gov (United States)

    González-García, Estefanía; Gutiérrez Ulloa, Carlos E; de la Mata, Francisco Javier; Marina, María Luisa; García, María Concepción

    2017-09-01

    Reduction or removal of solvents and reagents in protein sample preparation is a requirement. Dendrimers can strongly interact with proteins and have great potential as a greener alternative to conventional methods used in protein sample preparation. This work proposes the use of single-walled carbon nanotubes (SWCNTs) functionalized with carbosilane dendrons with sulfonate groups for protein sample preparation and shows the successful application of the proposed methodology to extract proteins from a complex matrix. SEM images of nanotubes and mixtures of nanotubes and proteins were taken. Moreover, intrinsic fluorescence intensity of proteins was monitored to observe the most significant interactions at increasing dendron generations under neutral and basic pHs. Different conditions for the disruption of interactions between proteins and nanotubes after protein extraction and different concentrations of the disrupting reagent and the nanotube were also tried. Compatibility of extraction and disrupting conditions with the enzymatic digestion of proteins for obtaining bioactive peptides was also studied. Finally, sulfonate-terminated carbosilane dendron-coated SWCNTs enabled the extraction of proteins from a complex sample without the environmentally unfriendly solvents that have been required until now. Graphical Abstract: Green protein extraction from a complex sample employing carbosilane dendron-coated nanotubes.

  14. 40 CFR 63.1586 - What are the emission points and control requirements for a non-industrial POTW treatment plant?

    Science.gov (United States)

    2010-07-01

    ... control requirements for a non-industrial POTW treatment plant? 63.1586 Section 63.1586 Protection of... Pollutants: Publicly Owned Treatment Works Non-Industrial Potw Treatment Plant Requirements § 63.1586 What are the emission points and control requirements for a non-industrial POTW treatment plant? There are...

  15. Gran method for end point anticipation in monosegmented flow titration

    Directory of Open Access Journals (Sweden)

    Aquino Emerson V

    2004-01-01

    Full Text Available An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point, and three points before and after the stoichiometric point are used for end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1 - 5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time to perform a titration, ensuring low sample and reagent consumption, and full automatic sampling and titrant addition in a calibration-free titration protocol.
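
    The Gran anticipation step can be illustrated with a short simulated example for an argentometric chloride titration: a few early points are linearised as G = (V0 + V) * 10^(-E/S), which falls to zero at the equivalence volume, so a straight-line fit to the first aliquots predicts the end point. The concentrations, Nernst slope and noise level below are illustrative assumptions, not values from the paper.

      import numpy as np

      V0, C_cl, C_ag = 5.0, 0.010, 0.020           # sample volume (mL) and molarities
      S, E0 = 59.2, 250.0                          # Nernst slope (mV/decade), offset (mV)
      rng = np.random.default_rng(3)

      def potential(V):
          # Before equivalence the free chloride fixes the electrode potential.
          cl = (C_cl * V0 - C_ag * V) / (V0 + V)
          return E0 - S * np.log10(cl) + rng.normal(0, 0.2)

      V = np.array([0.5, 1.0, 1.5, 2.0])           # first four titrant aliquots (mL)
      E = np.array([potential(v) for v in V])

      G = (V0 + V) * 10 ** (-E / S)                # Gran function (arbitrary scale)
      slope, intercept = np.polyfit(V, G, 1)
      V_eq = -intercept / slope                    # extrapolate to G = 0
      print(f"anticipated end point: {V_eq:.3f} mL (true {C_cl * V0 / C_ag:.3f} mL)")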

  16. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false What gasoline sample retention... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers § 80.335 What gasoline sample...

  17. Development of a cloud-point extraction method for copper and nickel determination in food samples

    International Nuclear Information System (INIS)

    Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de

    2008-01-01

    A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP) and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the following certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection for solid sample analysis were 0.1 μg g-1 (Cu) and 0.4 μg g-1 (Ni). The precision for 10 replicate measurements of 75 μg L-1 Cu or Ni was 6.4% and 1.0%, respectively. The method has been successfully applied to the analysis of food samples

  18. Distance of Sample Measurement Points to Prototype Catalog Curve

    DEFF Research Database (Denmark)

    Hjorth, Poul G.; Karamehmedovic, Mirza; Perram, John

    2006-01-01

    We discuss strategies for comparing discrete data points to a catalog (reference) curve by means of the Euclidean distance from each point to the curve in a pump's head H vs. flow Q diagram. In particular we find that a method currently in use is inaccurate. We propose several alternatives...
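
    A minimal sketch of the point-to-curve distance follows: the catalog H(Q) curve is densely sampled as a polyline and each measured (Q, H) point is assigned the distance to its nearest segment. The catalog polynomial below is a made-up stand-in, and in practice Q and H would first be scaled to comparable units.

      import numpy as np

      def point_segment_distance(p, a, b):
          ab, ap = b - a, p - a
          t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
          return np.linalg.norm(p - (a + t * ab))

      def distance_to_curve(points, curve):
          return np.array([min(point_segment_distance(p, curve[i], curve[i + 1])
                               for i in range(len(curve) - 1)) for p in points])

      Q = np.linspace(0, 10, 400)
      catalog = np.column_stack([Q, 60 - 0.4 * Q**2])       # hypothetical head-flow curve
      measured = np.array([[2.0, 59.0], [5.0, 50.5], [8.0, 35.0]])
      print(distance_to_curve(measured, catalog))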

  19. Improved technical success and radiation safety of adrenal vein sampling using rapid, semi-quantitative point-of-care cortisol measurement.

    Science.gov (United States)

    Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D

    2018-01-01

    Objective: Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding with a high failure rate in many centres. The use of intraprocedural cortisol measurement could improve the success rates of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design: Retrospective review of the results of adrenal vein sampling procedures since commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit. Success rate and complications of adrenal vein sampling procedures before and after use of the adrenal vein sampling Accuracy Kit. Routine use of the adrenal vein sampling Accuracy Kit device for intraprocedural measurement of cortisol commenced in 2016. Results: Technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. Radiation dose decreased from 34.2 Gy.cm2 (interquartile range, 15.8-85.9) to 15.7 Gy.cm2 (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions: Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. The use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.

  20. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  1. At the Tipping Point

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, H. S.

    2011-02-28

    There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism’s genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes and the time necessary to analyze data can now require years. Obviously, we can’t wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.

  2. 40 CFR 90.421 - Dilute gaseous exhaust sampling and analytical system description.

    Science.gov (United States)

    2010-07-01

    ... gas mixture temperature, measured at a point immediately ahead of the critical flow venturi, must be... analytical system description. (a) General. The exhaust gas sampling system described in this section is... requirements are as follows: (1) This sampling system requires the use of a Positive Displacement Pump—Constant...

  3. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
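
    In the spirit of the seed-free scheduling described above, the sketch below uses a golden-ratio additive recurrence (a subrandom sequence) in place of pseudorandom draws when picking rows from an exponentially weighted Nyquist grid. The grid size, weighting and selection rule are illustrative assumptions, not a published schedule generator.

      import numpy as np

      def subrandom_schedule(grid_size=128, n_points=32, decay=0.02):
          weights = np.exp(-decay * np.arange(grid_size))   # weighted Nyquist grid
          cdf = np.cumsum(weights) / weights.sum()
          phi = (np.sqrt(5) - 1) / 2                        # golden ratio conjugate
          schedule, k = [], 0
          while len(schedule) < n_points:
              u = (0.5 + phi * k) % 1.0                     # subrandom point in [0, 1)
              idx = int(np.searchsorted(cdf, u))            # map through the inverse CDF
              if idx not in schedule:
                  schedule.append(idx)
              k += 1
          return sorted(schedule)

      print(subrandom_schedule())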

  4. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal or better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
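
    The contrast between linear and inverse depth sampling of plane hypotheses can be shown in a few lines: equal steps in inverse depth concentrate planes near the camera and thin them out far away, which is the sampling behaviour favoured above. This sketch only illustrates the spacing contrast; the cross-ratio derivation in image space is not reproduced, and the depth range and plane count are illustrative assumptions.

      import numpy as np

      def linear_depths(d_min, d_max, n):
          return np.linspace(d_min, d_max, n)

      def inverse_depths(d_min, d_max, n):
          return 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)  # equal disparity steps

      d_min, d_max, n = 2.0, 50.0, 8
      print("linear :", np.round(linear_depths(d_min, d_max, n), 1))
      print("inverse:", np.round(inverse_depths(d_min, d_max, n), 1))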

  5. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

    Full Text Available With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  6. Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.

    Science.gov (United States)

    Piastra, Marco

    2013-05-01

    Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
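
    A bare-bones sketch of the underlying Competitive Hebbian Learning rule (connect the two reference units closest to each input sample); the fixed random codebook and the circular test manifold are illustrative assumptions, and the SOAM growth and phase-transition machinery is not reproduced:

    ```python
    # Minimal sketch of Competitive Hebbian Learning (Martinetz, 1993): for every
    # input sample, an edge is created between the two closest reference units.
    import numpy as np

    rng = np.random.default_rng(0)
    units = rng.uniform(-1, 1, size=(30, 2))           # fixed reference units (codebook)

    theta = rng.uniform(0, 2 * np.pi, size=2000)        # point samples from a circle
    samples = np.c_[np.cos(theta), np.sin(theta)]

    edges = set()
    for x in samples:
        d = np.linalg.norm(units - x, axis=1)
        i, j = np.argsort(d)[:2]                         # two nearest units
        edges.add((min(i, j), max(i, j)))                # Hebbian edge between them

    print(f"{len(edges)} edges induced by the samples")
    ```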

  7. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability depends on the accuracy of the surrogate model of constraint boundaries in surrogate-model-based RBDO. In earlier research, constraint boundary sampling (CBS) was proposed to approximate the boundaries of constraints accurately by locating sample points on the boundaries of constraints. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.

  8. Procedures for sampling and sample reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The objective of this experimental study on sampling was to determine the size and number of samples of biofuels required (taken at two sampling points in each case) and to compare two methods of sampling. The first objective of the sample-reduction exercise was to compare the reliability of various sampling methods, and the second objective was to measure the variations introduced as a result of reducing the sample size to form suitable test portions. The materials studied were sawdust, wood chips, wood pellets and bales of straw, and these were analysed for moisture, ash, particle size and chloride. The sampling procedures are described. The study was conducted in Scandinavia. The results of the study were presented in Leipzig in October 2004. The work was carried out as part of the UK's DTI Technology Programme: New and Renewable Energy.

  9. Matching Ge detector element geometry to sample size and shape: One does not fit all!

    International Nuclear Information System (INIS)

    Keyser, R.M.; Twomey, T.R.; Sangsingkeow, P.

    1998-01-01

    For 25 yr, coaxial germanium detector performance has been specified using the methods and values specified in Ref. 1. These specifications are the full-width at half-maximum (FWHM), FW.1M, FW.02M, peak-to-Compton ratio, and relative efficiency. All of these measurements are made with a 60 Co source 25 cm from the cryostat endcap and centered on the axis of the detector. These measurements are easy to reproduce, both because they are simple to set up and use a common source. These standard tests have been useful in guiding the user to an appropriate detector choice for the intended measurement. Most users of germanium gamma-ray detectors do not make measurements in this simple geometry. Germanium detector manufacturers have worked over the years to make detectors with better resolution, better peak-to-Compton ratios, and higher efficiency--but all based on measurements using the IEEE standard. Advances in germanium crystal growth techniques have made it relatively easy to provide detector elements of different shapes and sizes. Many of these different shapes and sizes can give better results for a specific application than other shapes and sizes. But, the detector specifications must be changed to correspond to the actual application. Both the expected values and the actual parameters to be specified should be changed. In many cases, detection efficiency, peak shape, and minimum detectable limit for a particular detector/sample combination are valuable specifications of detector performance. For other situations, other parameters are important, such as peak shape as a function of count rate. In this work, different sample geometries were considered. The results show the variation in efficiency with energy for all of these sample and detector geometries. The point source at 25 cm from the endcap measurement allows the results to be compared with the currently given IEEE criteria. The best sample/detector configuration for a specific measurement requires more and

  10. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    Science.gov (United States)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian; hide

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
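
    A toy comparison of stratified versus simple random sampling of grid points, in the spirit of the study but with a synthetic field and value-based strata as stand-ins for the real climate strata:

    ```python
    # Illustrative sketch (not the study's code): how well do small samples of grid
    # points reproduce the regional mean under random vs. stratified selection?
    import numpy as np

    rng = np.random.default_rng(1)
    n = 180
    x = np.linspace(0, 1, n)
    field = np.add.outer(np.sin(3 * x), np.cos(2 * x)) + rng.normal(0, 0.1, (n, n))
    values = field.ravel()
    true_mean = values.mean()

    def random_sample(k):
        return values[rng.choice(values.size, k, replace=False)].mean()

    def stratified_sample(k):
        # split the ordered values into k strata and draw one point from each
        strata = np.array_split(np.argsort(values), k)
        idx = [rng.choice(s) for s in strata]
        return values[idx].mean()

    k = 10
    sd_rand = np.std([random_sample(k) - true_mean for _ in range(500)])
    sd_strat = np.std([stratified_sample(k) - true_mean for _ in range(500)])
    print(f"true mean {true_mean:.3f}; error SD random {sd_rand:.3f} vs stratified {sd_strat:.3f}")
    ```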

  11. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  12. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  13. Design of point-of-care (POC) microfluidic medical diagnostic devices

    Science.gov (United States)

    Leary, James F.

    2018-02-01

    Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.

  14. Development of cloud point extraction - UV-visible spectrophotometric method for vanadium (V) determination in hydrogeochemical samples

    International Nuclear Information System (INIS)

    Durani, Smeer; Mathur, Neerja; Chowdary, G.S.

    2007-01-01

    The cloud point extraction behavior (CPE) of vanadium (V) using 5,7 dibromo 8-hydroxyquinoline (DBHQ) and triton X 100 was investigated. Vanadium (V) was extracted with 4 ml of 0.5 mg/ml DBHQ and 6 ml of 8% (V/V) triton X 100 at the pH 3.7. A few hydrogeochemical samples were analysed for vanadium using the above method. (author)

  15. Active AirCore Sampling: Constraining Point Sources of Methane and Other Gases with Fixed Wing Unmanned Aerial Systems

    Science.gov (United States)

    Bent, J. D.; Sweeney, C.; Tans, P. P.; Newberger, T.; Higgs, J. A.; Wolter, S.

    2017-12-01

    Accurate estimates of point source gas emissions are essential for reconciling top-down and bottom-up greenhouse gas measurements, but sampling such sources is challenging. Remote sensing methods are limited by resolution and cloud cover; aircraft methods are limited by air traffic control clearances, and the need to properly determine boundary layer height. A new sampling approach leverages the ability of unmanned aerial systems (UAS) to measure all the way to the surface near the source of emissions, improving sample resolution, and reducing the need to characterize a wide downstream swath, or measure to the full height of the planetary boundary layer (PBL). The "Active-AirCore" sampler, currently under development, will fly on a fixed wing UAS in Class G airspace, spiraling from the surface to 1200 ft AGL around point sources such as leaking oil wells to measure methane, carbon dioxide and carbon monoxide. The sampler collects a 100-meter long sample "core" of air in an 1/8" passivated stainless steel tube. This "core" is run on a high-precision instrument shortly after the UAS is recovered. Sample values are mapped to a specific geographic location by cross-referencing GPS and flow/pressure metadata, and fluxes are quantified by applying Gauss's theorem to the data, mapped onto the spatial "cylinder" circumscribed by the UAS. The AirCore-Active builds off the sampling ability and analytical approach of the related AirCore sampler, which profiles the atmosphere passively using a balloon launch platform, but will add an active pumping capability needed for near-surface horizontal sampling applications. Here, we show design elements, laboratory and field test results for methane, describe the overall goals of the mission, and discuss how the platform can be adapted, with minimal effort, to measure other gas species.

  16. Reassessing Function Points

    Directory of Open Access Journals (Sweden)

    G.R. Finnie

    1997-05-01

    Full Text Available Accurate estimation of the size and development effort for software projects requires estimation models which can be used early enough in the development life cycle to be of practical value. Function Point Analysis (FPA) has become possibly the most widely used estimation technique in practice. However, the technique was developed in the data processing environment of the 1970s and, despite undergoing considerable reassessment and formalisation, still attracts criticism for the weighted scoring it employs and for the way in which the function point score is adapted for specific system characteristics. This paper reviews the validity of the weighting scheme and the value of adjusting for system characteristics by studying their effect in a sample of 299 software developments. In general the value adjustment scheme does not appear to cater for differences in productivity. The weighting scheme used to adjust system components in terms of being simple, average or complex also appears suspect and should be redesigned to provide a more realistic estimate of system functionality.

  17. Preliminary studies on DNA retardation by MutS applied to the detection of point mutations in clinical samples

    International Nuclear Information System (INIS)

    Stanislawska-Sachadyn, Anna; Paszko, Zygmunt; Kluska, Anna; Skasko, Elzibieta; Sromek, Maria; Balabas, Aneta; Janiec-Jankowska, Aneta; Wisniewska, Alicja; Kur, Jozef; Sachadyn, Pawel

    2005-01-01

    MutS ability to bind DNA mismatches was applied to the detection of point mutations in PCR products. MutS recognized mismatches from single up to five nucleotides and retarded the electrophoretic migration of mismatched DNA. The electrophoretic detection of insertions/deletions above three nucleotides is also possible without MutS, thanks to the DNA mobility shift caused by the presence of large insertion/deletion loops in the heteroduplex DNA. Thus, the method enables the search for a broad range of mutations: from single up to several nucleotides. The mobility shift assays were carried out in polyacrylamide gels stained with SYBR-Gold. One assay required 50-200 ng of PCR product and 1-3 μg of Thermus thermophilus his 6 -MutS protein. The advantages of this approach are: the small amounts of DNA required for the examination, simple and fast staining, no demand for PCR product purification, no labelling and radioisotopes required. The method was tested in the detection of cancer predisposing mutations in RET, hMSH2, hMLH1, BRCA1, BRCA2 and NBS1 genes. The approach appears to be promising in screening for unknown point mutations

  18. CMOS Cell Sensors for Point-of-Care Diagnostics

    Science.gov (United States)

    Adiguzel, Yekbun; Kulah, Haluk

    2012-01-01

    The burden of health-care related services in a global era with continuously increasing population and inefficient dissipation of the resources requires effective solutions. From this perspective, point-of-care diagnostics is a demanded field in clinics. It is also necessary both for prompt diagnosis and for providing health services evenly throughout the population, including the rural districts. The requirements can only be fulfilled by technologies whose productivity has already been proven, such as complementary metal-oxide-semiconductors (CMOS). CMOS-based products can enable clinical tests in a fast, simple, safe, and reliable manner, with improved sensitivities. Portability due to diminished sensor dimensions and compactness of the test set-ups, along with low sample and power consumption, is another vital feature. CMOS-based sensors for cell studies have the potential to become essential counterparts of point-of-care diagnostics technologies. Hence, this review attempts to inform on the sensors fabricated with CMOS technology for point-of-care diagnostic studies, with a focus on CMOS image sensors and capacitance sensors for cell studies. PMID:23112587

  19. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which...... indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose...... algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed....
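
    One simple way to generate a sampling pattern under a minimum-spacing constraint (a stand-in for an ADC hold time) is sketched below; this is an illustrative construction, not the generator proposed in the record:

    ```python
    # Sketch: random sampling instants in [0, t_total) whose consecutive spacing is
    # at least t_min. The constraint and all parameter values are assumptions made
    # for illustration only.
    import numpy as np

    def constrained_pattern(n_points, t_total, t_min, rng=None):
        """Constructive draw: distribute the slack uniformly, then add minimum gaps."""
        rng = rng or np.random.default_rng()
        slack = t_total - (n_points - 1) * t_min
        assert slack > 0, "constraint cannot be satisfied"
        free = np.sort(rng.uniform(0, slack, n_points))
        return free + t_min * np.arange(n_points)    # gaps = diff(free) + t_min >= t_min

    pattern = constrained_pattern(n_points=20, t_total=1e-3, t_min=20e-6,
                                  rng=np.random.default_rng(7))
    print(np.round(pattern * 1e6, 1), "microseconds")
    ```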

  20. Two-point vs multipoint sample collection for the analysis of energy expenditure by use of the doubly labeled water method

    International Nuclear Information System (INIS)

    Welle, S.

    1990-01-01

    Energy expenditure over a 2-wk period was determined by the doubly labeled water (2H2(18)O) method in nine adults. When daily samples were analyzed, energy expenditure was 2859 +/- 453 kcal/d (means +/- SD); when only the first and last time points were considered, the mean calculated energy expenditure was not significantly different (2947 +/- 430 kcal/d). An analysis of theoretical cases in which isotope flux is not constant indicates that the multipoint method can cause errors in the calculation of average isotope fluxes, but these are generally small. Simulations of the effect of analytical error indicate that increasing the number of replicates on two points reduces the impact of technical errors more effectively than does performing single analyses on multiple samples. It appears that generally there is no advantage to collecting frequent samples when the 2H2(18)O method is used to estimate energy expenditure in adult humans

  1. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    International Nuclear Information System (INIS)

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources and to recommend, based on an expert consensus, specific measurement technique and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs

  2. Point-of-care blood gases, electrolytes, chemistries, hemoglobin, and hematocrit measurement in venous samples from pet rabbits.

    Science.gov (United States)

    Selleri, Paolo; Di Girolamo, Nicola

    2014-01-01

    Point-of-care testing is an attractive option in rabbit medicine, because it permits rapid analysis of a panel of electrolytes, chemistries, blood gases, hemoglobin, and hematocrit, requiring only 65 μL of blood. The purpose of this study was to evaluate the performance of a portable clinical analyzer for measurement of pH, partial pressure of CO2, Na, chloride, potassium, blood urea nitrogen, glucose, hematocrit, and hemoglobin in healthy and diseased rabbits. Blood samples obtained from 30 pet rabbits were analyzed immediately after collection by the portable clinical analyzer (PCA) and immediately thereafter (time <20 sec) by a reference analyzer. Bland-Altman plots and Passing-Bablok regression analysis were used to compare the results. Limits of agreement were wide for all the variables studied, with the exception of pH. Most variables presented significant proportional and/or constant bias. The current study provides sufficient evidence that the PCA presents reliability for pH, although its low agreement with a reference analyzer for the other variables does not support their interchangeability. Limits of agreement provided for each variable allow researchers to evaluate if the PCA is reliable enough for their scope. To the authors' knowledge, the present is the first report evaluating a PCA in the rabbit.

  3. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    Science.gov (United States)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industry, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide and different methods to detect fraudulent additives have been investigated for over a century. Laboratory based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for both point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (residual standard deviation) within four minutes. Limit of detection intervals for PLS calibrations ranged between 140 - 520 ppm for the four N-rich compounds and between 0.7 - 3.6 % for sucrose. The portability of the system and reliability and reproducibility of this technique opens opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at point-of-collection.

  4. Sediment Monitoring and Benthic Faunal Sampling Adjacent to the Barbers Point Ocean Outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna and sediment in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment...

  5. Data Quality Objectives for Regulatory Requirements for Dangerous Waste Sampling and Analysis

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes sampling and analytical requirements needed to meet state and federal regulations for dangerous waste (DW). The River Protection Project (RPP) is assigned to the task of storage and interim treatment of hazardous waste. Any final treatment or disposal operations, as well as requirements under the land disposal restrictions (LDRs), fall in the jurisdiction of another Hanford organization and are not part of this scope. The requirements for this Data Quality Objective (DQO) Process were developed using the RPP Data Quality Objective Procedure (Banning 1996), which is based on the U.S. Environmental Protection Agency's (EPA) Guidance for the Data Quality Objectives Process (EPA 1994). Hereafter, this document is referred to as the DW DQO. Federal and state laws and regulations pertaining to waste contain requirements that are dependent upon the composition of the waste stream. These regulatory drivers require that pertinent information be obtained. For many requirements, documented process knowledge of a waste composition can be used instead of analytical data to characterize or designate a waste. When process knowledge alone is used to characterize a waste, it is a best management practice to validate the information with analytical measurements

  6. Power distribution system reliability evaluation using dagger-sampling Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y.; Zhao, S.; Ma, Y. [North China Electric Power Univ., Hebei (China). Dept. of Electrical Engineering

    2009-03-11

    A dagger-sampling Monte Carlo simulation method was used to evaluate power distribution system reliability. The dagger-sampling technique was used to record the failure of a component as an incident and to determine its occurrence probability by generating incident samples using random numbers. The dagger sampling technique was combined with the direct sequential Monte Carlo method to calculate average values of load point indices and system indices. Results of the 2 methods with simulation times of up to 100,000 years were then compared. The comparative evaluation showed that less computing time was required using the dagger-sampling technique due to its higher convergence speed. When simulation times were 1000 years, the dagger-sampling method required 0.05 seconds to accomplish an evaluation, while the direct method required 0.27 seconds. 12 refs., 3 tabs., 4 figs.
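
    The dagger-sampling idea can be sketched as follows: a single uniform random number is stretched into S = floor(1/p) correlated Bernoulli trials for a component with failure probability p. The parameters are illustrative and this is not the authors' implementation:

    ```python
    # Sketch of dagger sampling: one random number yields a batch of S trials,
    # at most one of which registers a component failure.
    import numpy as np

    def dagger_trials(p, rng):
        """Return a boolean array of S correlated trials from a single random number."""
        S = int(np.floor(1.0 / p))
        u = rng.uniform()
        trials = np.zeros(S, dtype=bool)
        j = int(u // p)                  # index of the subinterval that u falls into
        if j < S:                        # u beyond S*p -> no failure in this batch
            trials[j] = True
        return trials

    rng = np.random.default_rng(3)
    p = 0.03
    samples = np.concatenate([dagger_trials(p, rng) for _ in range(1000)])
    print(f"target p = {p}, estimated p = {samples.mean():.4f} from {samples.size} trials")
    ```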

  7. Quality assurance and reference material requirements and considerations for environmental sample analysis in nuclear forensics

    International Nuclear Information System (INIS)

    Swindle, D.W. Jr.; Perrin, R.E.; Goldberg, S.A.; Cappis, J.

    2002-01-01

    Full text: High-sensitivity nuclear environmental sampling and analysis techniques have been proven in their ability to verify declared nuclear activities, as well as to assist in the detection of undeclared nuclear activities and facilities. Following the Gulf War, the capability and revealing power of environmental sampling and analysis techniques to support international safeguards was demonstrated and subsequently adopted by the International Atomic Energy Agency (IAEA) as routine safeguards measures in safeguards inspections and verifications. In addition to having been proved useful in international safeguards, environmental sampling and analysis techniques have demonstrated their utility in identifying the origins of 'orphaned' nuclear material, as well as the origin of intercepted smuggled nuclear material. Today, environmental sampling and analysis techniques are now being applied in six broad areas to support nonproliferation, disarmament treaty verification, national and international nuclear security, and environmental stewardship of weapons production activities. Consequently, more and more laboratories around the world are establishing capabilities or expanding capabilities to meet these growing applications, and as such requirements for quality assurance and control are increasing. The six areas are: 1) Nuclear safeguards; 2) Nuclear forensics/illicit trafficking; 3) Ongoing monitoring and verification (OMV); 4) Comprehensive Test Ban Treaty (CTBT); 5) Weapons dismantlement/materials disposition; and 6) Research and development (R and D)/environmental stewardship/safety. Application of environmental sampling and analysis techniques and resources to illicit nuclear material trafficking, while embodying the same basic techniques and resources, does have unique requirements for sample management, handling, protocols, chain of custody, archiving, and data interpretation. These requirements are derived from needs of how data from nuclear forensics

  8. User requirements Massive Point Clouds for eSciences (WP1)

    NARCIS (Netherlands)

    Suijker, P.M.; Alkemade, I.; Kodde, M.P.; Nonhebel, A.E.

    2014-01-01

    This report is a milestone in work package 1 (WP1) of the project Massive point clouds for eSciences. In WP1 the basic functionalities needed for a new Point Cloud Spatial Database Management System are identified. This is achieved by (1) literature research, (2) discussions with the project

  9. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    Science.gov (United States)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification, and water protection functions. In recent years, the factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, along with interest in forests, and efforts are being made in various countries toward forest management. However, the existing forest ecosystem survey is a sample-point monitoring method, and because only a small part of the forest area, which occupies 63.7 % of the country, is surveyed (Ministry of Land Infrastructure and Transport Korea, 2016), it is difficult to use for forest management. Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forests Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighted), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were also used. As a result, kriging was the most accurate method.

  10. DEVELOPMENT OF SPATIAL SCALING TECHNIQUE OF FOREST HEALTH SAMPLE POINT INFORMATION

    Directory of Open Access Journals (Sweden)

    J. H. Lee

    2018-04-01

    Full Text Available Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification, and water protection functions. In recent years, the factors that threaten forest health, such as global warming due to climate change and environmental pollution, have increased, along with interest in forests, and efforts are being made in various countries toward forest management. However, the existing forest ecosystem survey is a sample-point monitoring method, and because only a small part of the forest area, which occupies 63.7 % of the country, is surveyed (Ministry of Land Infrastructure and Transport Korea, 2016), it is difficult to use for forest management. Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forests Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and IDW (Inverse Distance Weighted), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were also used. As a result, kriging was the most accurate method.
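
    A minimal sketch of the IDW interpolation step used in both records, applied to synthetic sample-point coordinates and Shannon index values (the data and parameters are invented for illustration):

    ```python
    # Sketch: Inverse Distance Weighted (IDW) interpolation of a biodiversity index
    # measured at scattered sample points onto a regular grid.
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        """IDW estimate at each query location from the known sample points."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)             # closer points get larger weights
        return (w * z_known).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(5)
    pts = rng.uniform(0, 10, size=(50, 2))        # sample-point coordinates (km)
    shannon = 1.5 + 0.1 * pts[:, 0] + rng.normal(0, 0.1, 50)   # synthetic Shannon index

    gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 25))
    grid = np.c_[gx.ravel(), gy.ravel()]
    z_grid = idw(pts, shannon, grid)
    print(z_grid.reshape(25, 25)[0, :5])
    ```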

  11. Direct protein quantification in complex sample solutions by surface-engineered nanorod probes

    KAUST Repository

    Schrittwieser, Stefan

    2017-06-30

    Detecting biomarkers from complex sample solutions is the key objective of molecular diagnostics. Being able to do so in a simple approach that does not require laborious sample preparation, sophisticated equipment and trained staff is vital for point-of-care applications. Here, we report on the specific detection of the breast cancer biomarker sHER2 directly from serum and saliva samples by a nanorod-based homogeneous biosensing approach, which is easy to operate as it only requires mixing of the samples with the nanorod probes. By careful nanorod surface engineering and homogeneous assay design, we demonstrate that the formation of a protein corona around the nanoparticles does not limit the applicability of our detection method, but on the contrary enables us to conduct in-situ reference measurements, thus further strengthening the point-of-care applicability of our method. Making use of sandwich assays on top of the nanorods, we obtain a limit of detection of 110 pM and 470 pM in 10-fold diluted spiked saliva and serum samples, respectively. In conclusion, our results open up numerous applications in direct protein biomarker quantification, specifically in point-of-care settings where resources are limited and ease-of-use is of essence.

  12. Direct protein quantification in complex sample solutions by surface-engineered nanorod probes

    KAUST Repository

    Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Schotter, Joerg

    2017-01-01

    Detecting biomarkers from complex sample solutions is the key objective of molecular diagnostics. Being able to do so in a simple approach that does not require laborious sample preparation, sophisticated equipment and trained staff is vital for point-of-care applications. Here, we report on the specific detection of the breast cancer biomarker sHER2 directly from serum and saliva samples by a nanorod-based homogeneous biosensing approach, which is easy to operate as it only requires mixing of the samples with the nanorod probes. By careful nanorod surface engineering and homogeneous assay design, we demonstrate that the formation of a protein corona around the nanoparticles does not limit the applicability of our detection method, but on the contrary enables us to conduct in-situ reference measurements, thus further strengthening the point-of-care applicability of our method. Making use of sandwich assays on top of the nanorods, we obtain a limit of detection of 110 pM and 470 pM in 10-fold diluted spiked saliva and serum samples, respectively. In conclusion, our results open up numerous applications in direct protein biomarker quantification, specifically in point-of-care settings where resources are limited and ease-of-use is of essence.

  13. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
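
    The standard normal-approximation formulas behind such sample-size estimates can be written down in a few lines; the proportions, standard deviations and margins below are illustrative examples:

    ```python
    # Sketch of the usual sample-size formulas (normal approximation, z = 1.96 for
    # 95% confidence). All input values are examples, not taken from the record.
    import math

    def n_for_proportion(p, margin, z=1.96):
        """Sample size to estimate a proportion p within +/- margin."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    def n_for_mean(sd, margin, z=1.96):
        """Sample size to estimate a mean within +/- margin, given the SD."""
        return math.ceil((z * sd / margin)**2)

    print(n_for_proportion(p=0.30, margin=0.05))   # about 323 subjects
    print(n_for_mean(sd=12.0, margin=2.0))         # about 139 subjects
    ```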

  14. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
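
    A small two-stage simulation, assuming image-to-image variation in cover, illustrates the trade-off the authors describe (more images versus more points per image at equal total effort); it is not their resampling code:

    ```python
    # Illustrative two-stage sampling sketch: percent cover estimated by scoring
    # random points within a subsample of images, with patchiness modelled as
    # between-image variation in cover. All parameter values are assumptions.
    import numpy as np

    rng = np.random.default_rng(11)

    def survey(n_images, pts_per_image, true_cover=0.20, between_image_sd=0.08):
        cover = np.clip(rng.normal(true_cover, between_image_sd, n_images), 0, 1)
        hits = rng.binomial(pts_per_image, cover)       # points landing on the biota
        return hits.sum() / (n_images * pts_per_image)

    def se(n_images, pts_per_image, reps=2000):
        est = [survey(n_images, pts_per_image) for _ in range(reps)]
        return np.std(est)

    print("30 images x 50 points :", round(se(30, 50), 4))
    print("60 images x 25 points :", round(se(60, 25), 4))   # same effort, usually more precise
    ```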

  15. Hinkley Point 'C' power station public inquiry: proof of evidence on the need for Hinkley Point 'C' to help meet capacity requirement and the non-fossil-fuel proportion economically

    International Nuclear Information System (INIS)

    Jenkin, F.P.

    1988-09-01

    A public inquiry has been set up to examine the planning application made by the Central Electricity Generating Board (CEGB) for the construction of a 1200 MW Pressurized Water Reactor power station at Hinkley Point (Hinkley Point ''C'') in the United Kingdom. The purpose of this evidence to the Inquiry is to show why there is a need now to go ahead with the construction of Hinkley Point ''C'' generating station to help meet the non-fossil-fuel proportion of generation economically and also to help meet future generating capacity requirement. The CEGB submits that it is appropriate to compare Hinkley Point ''C'' with other non-fossil-fuel alternatives under various bases. Those dealt with by this proof of evidence are as follows: i) ability to contribute to capacity need and in assisting the distribution companies to meet their duty to supply electricity; ii) ability to contribute to the non-fossil-fuel proportion; iii) relative economic merit. (author)

  16. Implementation of antimicrobial peptides for sample preparation prior to nucleic acid amplification in point-of-care settings.

    Science.gov (United States)

    Krõlov, Katrin; Uusna, Julia; Grellier, Tiia; Andresen, Liis; Jevtuševskaja, Jekaterina; Tulp, Indrek; Langel, Ülo

    2017-12-01

    A variety of sample preparation techniques are used prior to nucleic acid amplification. However, their efficiency is not always sufficient and nucleic acid purification remains the preferred method for template preparation. Purification is difficult and costly to apply in point-of-care (POC) settings and there is a strong need for more robust, rapid, and efficient biological sample preparation techniques in molecular diagnostics. Here, the authors applied antimicrobial peptides (AMPs) for urine sample preparation prior to isothermal loop-mediated amplification (LAMP). AMPs bind to many microorganisms such as bacteria, fungi, protozoa and viruses causing disruption of their membrane integrity and facilitate nucleic acid release. The authors show that incubation of E. coli with antimicrobial peptide cecropin P1 for 5 min had a significant effect on the availability of template DNA compared with untreated or even heat treated samples resulting in up to six times increase of the amplification efficiency. These results show that AMPs treatment is a very efficient sample preparation technique that is suitable for application prior to nucleic acid amplification directly within biological samples. Furthermore, the entire process of AMPs treatment was performed at room temperature for 5 min thereby making it a good candidate for use in POC applications.

  17. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
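
    The a priori calculation described here can be reproduced with a short iteration over the t distribution; the sketch below assumes the stated COV of 25%, alpha = 0.05 and 80% power, and recovers sample sizes close to the quoted 24 and 37:

    ```python
    # Sketch of an a priori sample-size calculation for a one-sample, two-tailed
    # t-test: iterate because the t quantiles depend on the (unknown) sample size.
    import math
    from scipy import stats

    def required_n(cov=0.25, allowable_error=0.15, alpha=0.05, power=0.80):
        effect = allowable_error / cov           # standardized effect size
        n = 4.0                                   # starting guess
        for _ in range(100):
            df = n - 1
            t_alpha = stats.t.ppf(1 - alpha / 2, df)
            t_beta = stats.t.ppf(power, df)
            n_new = ((t_alpha + t_beta) / effect) ** 2
            if abs(n_new - n) < 1e-6:
                break
            n = n_new
        return math.ceil(n)

    print(required_n(allowable_error=0.15))   # about 24 subjects
    print(required_n(allowable_error=0.12))   # about 37 subjects for a tighter error
    ```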

  18. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    Science.gov (United States)

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve the change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprised two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and no change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification-error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size ( n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation-errors.
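
    A sketch of the two-class broken-stick model with incomplete follow-up, using invented MMSE-like parameter values for illustration:

    ```python
    # Sketch: simulate trajectories from a broken-stick (two joined linear segments)
    # model with a "change" and a "no change" class and early dropout. All values
    # are illustrative assumptions, not the paper's simulation settings.
    import numpy as np

    rng = np.random.default_rng(21)

    def broken_stick(t, intercept, slope1, slope2, tau):
        """Mean trajectory: slope1 before tau, slope1 + slope2 after tau (continuous at tau)."""
        return intercept + slope1 * t + slope2 * np.maximum(t - tau, 0.0)

    t = np.arange(0, 10)                        # annual follow-up visits
    n, p_change, tau = 500, 0.3, 6.0
    is_change = rng.uniform(size=n) < p_change
    dropout = rng.integers(4, 11, size=n)       # some subjects observed < 10 visits

    data = []
    for i in range(n):
        slope2 = -1.5 if is_change[i] else 0.0              # accelerated decline after tau
        y = broken_stick(t, 28.0, -0.2, slope2, tau) + rng.normal(0, 1.0, t.size)
        data.append(y[: dropout[i]])                         # incomplete follow-up

    print(f"{is_change.sum()} of {n} subjects in the change class; "
          f"{sum(len(d) > tau for d in data)} observed past the change-point")
    ```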

  19. Is routine karyotyping required in prenatal samples with a molecular or metabolic referral?

    Directory of Open Access Journals (Sweden)

    Kooper Angelique JA

    2012-01-01

    Full Text Available Abstract As a routine, karyotyping of invasive prenatal samples is performed as an adjunct to referrals for DNA mutation detection and metabolic testing. We performed a retrospective study on 500 samples to assess the diagnostic value of this procedure. These samples included 454 (90.8%) chorionic villus (CV) and 46 (9.2%) amniocentesis specimens. For CV samples karyotyping was based on analyses of both short-term culture (STC) and long-term culture (LTC) cells. Overall, 19 (3.8%) abnormal karyotypes were denoted: four with a common aneuploidy (trisomy 21, 18 and 13), two with a sex chromosomal aneuploidy (Klinefelter syndrome), one with a sex chromosome mosaicism and twelve with various autosome mosaicisms. In four cases a second invasive test was performed because of an abnormal finding in the STC. Taken together, we conclude that STC and LTC karyotyping has resulted in a diagnostic yield of 19 (3.8%) abnormal cases, including 12 cases (2.4%) with an uncertain significance. From a diagnostic point of view, it is desirable to limit uncertain test results as secondary test findings. Therefore, we recommend a more targeted assay, such as QF-PCR, as a replacement of the STC and to provide parents the autonomy to choose between karyotyping and QF-PCR.

  20. Improved orientation sampling for indexing diffraction patterns of polycrystalline materials

    DEFF Research Database (Denmark)

    Larsen, Peter Mahler; Schmidt, Søren

    2017-01-01

    Orientation mapping is a widely used technique for revealing the microstructure of a polycrystalline sample. The crystalline orientation at each point in the sample is determined by analysis of the diffraction pattern, a process known as pattern indexing. A recent development in pattern indexing... in the presence of noise, it has very high computational requirements. In this article, the computational burden is reduced by developing a method for nearly optimal sampling of orientations. By using the quaternion representation of orientations, it is shown that the optimal sampling problem is equivalent to that of optimally distributing points on a four-dimensional sphere. In doing so, the number of orientation samples needed to achieve a desired indexing accuracy is significantly reduced. Orientation sets at a range of sizes are generated in this way for all Laue groups and are made available online for easy use.

  1. Sample requirements and design of an inter-laboratory trial for radiocarbon laboratories

    International Nuclear Information System (INIS)

    Bryant, Charlotte; Carmi, Israel; Cook, Gordon; Gulliksen, Steinar; Harkness, Doug; Heinemeier, Jan; McGee, Edward; Naysmith, Philip; Possnert, Goran; Scott, Marian; Plicht, Hans van der; Strydonck, Mark van

    2000-01-01

    An on-going inter-comparison programme which is focused on assessing and establishing consensus protocols to be applied in the identification, selection and sub-sampling of materials for subsequent 14 C analysis is described. The outcome of the programme will provide a detailed quantification of the uncertainties associated with 14 C measurements including the issues of accuracy and precision. Such projects have become recognised as a fundamental aspect of continuing laboratory quality assurance schemes, providing a mechanism for the harmonisation of measurements and for demonstrating the traceability of results. The design of this study and its rationale are described. In summary, a suite of core samples has been defined which will be made available to both AMS and radiometric laboratories. These core materials are representative of routinely dated material and their ages span the full range of the applied 14 C time-scale. Two of the samples are of wood from the German and Irish dendrochronologies, thus providing a direct connection to the master dendrochronological calibration curve. Further samples link this new inter-comparison to past studies. Sample size and precision have been identified as being of paramount importance in defining dating confidence, and so several core samples have been identified for more in-depth study of these practical issues. In addition to the core samples, optional samples have been identified and prepared specifically for either AMS and/or radiometric laboratories. For AMS laboratories, these include bone, textile, leather and parchment samples. Participation in the study requires a commitment to a minimum of 10 core analyses, with results to be returned within a year

  2. Sampling and chemical analysis in environmental samples around Nuclear Power Plants and some environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yong Woo; Han, Man Jung; Cho, Seong Won; Cho, Hong Jun; Oh, Hyeon Kyun; Lee, Jeong Min; Chang, Jae Sook [KORTIC, Taejon (Korea, Republic of)

    2002-12-15

    Twelve kinds of environmental samples, such as soil, seawater, and underground water, were collected around Nuclear Power Plants (NPPs). Tritium chemical analysis was carried out on samples of rain water, pine needles, air, seawater, underground water, Chinese cabbage, rice grains, and milk collected around the NPPs, and on surface seawater and rain water sampled across the country. Strontium was analyzed in soil sampled at 60 points across districts in Korea, and tritium sampled at the same 60 points was also analyzed. Tritium was further analyzed in 21 samples of surface seawater around the Korean peninsula supplied by KFRDI (National Fisheries Research and Development Institute). Sampling and chemical analysis of environmental samples around the Kori, Woolsung, Youngkwang and Wooljin NPPs and the Taeduk science town for tritium and strontium analysis were managed according to plan, and the programme was handed over to KINS after all samples had been processed.

  3. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements pursuant to Washington State regulations, that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations because the regulations are in constant change, and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. Data needs require samples for both radionuclides and nonradionuclide analytes of air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not for health and safety issues

  4. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
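
    As a purely illustrative companion to the pipeline described above, the sketch below shows a RANSAC-style coarse rigid alignment driven by keypoints. It is not the authors' semantic-keypoint 4PCS algorithm; the tolerance, iteration count and the brute-force correspondence search are assumptions chosen only to keep the example short.

      # Minimal RANSAC-style coarse rigid alignment from candidate keypoint pairs.
      # This is an illustrative sketch, not the K-4PCS/semantic-4PCS algorithm itself.
      import numpy as np

      def rigid_from_pairs(P, Q):
          """Least-squares rotation R and translation t with Q ~ R @ P + t (Kabsch)."""
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      def ransac_align(src_kp, dst_kp, n_iter=2000, tol=0.1, rng=np.random.default_rng(0)):
          """Pick 3 tentative keypoint correspondences per trial, keep the transform
          that brings the most source keypoints within `tol` of some target keypoint."""
          best, best_inliers = (np.eye(3), np.zeros(3)), -1
          for _ in range(n_iter):
              i = rng.choice(len(src_kp), 3, replace=False)
              j = rng.choice(len(dst_kp), 3, replace=False)
              R, t = rigid_from_pairs(src_kp[i], dst_kp[j])
              moved = src_kp @ R.T + t
              d = np.linalg.norm(moved[:, None, :] - dst_kp[None, :, :], axis=2).min(axis=1)
              inliers = int((d < tol).sum())
              if inliers > best_inliers:
                  best, best_inliers = (R, t), inliers
          return best, best_inliers

    A real 4PCS-type implementation instead searches for congruent coplanar 4-point bases, which is far more robust to outliers than the random triples used in this sketch.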

  5. Cloud point extraction of palladium in water samples and alloy mixtures using new synthesized reagent with flame atomic absorption spectrometry (FAAS)

    International Nuclear Information System (INIS)

    Priya, B. Krishna; Subrahmanayam, P.; Suvardhan, K.; Kumar, K. Suresh; Rekha, D.; Rao, A. Venkata; Rao, G.C.; Chiranjeevi, P.

    2007-01-01

    The present paper outlines novel, simple and sensitive method for the determination of palladium by flame atomic absorption spectrometry (FAAS) after separation and preconcentration by cloud point extraction (CPE). The cloud point methodology was successfully applied for palladium determination by using new reagent 4-(2-naphthalenyl)thiozol-2yl azo chromotropic acid (NTACA) and hydrophobic ligand Triton X-114 as chelating agent and nonionic surfactant respectively in the water samples and alloys. The following parameters such as pH, concentration of the reagent and Triton X-114, equilibrating temperature and centrifuging time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. The preconcentration factor was found to be (50-fold) for 250 ml of water sample. Under optimum condition the detection limit was found as 0.067 ng ml -1 for palladium in various environmental matrices. The present method was applied for the determination of palladium in various water samples, alloys and the result shows good agreement with reported method and the recoveries are in the range of 96.7-99.4%

  6. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    Full Text Available The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
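
    The kind of numerical simulation that gridsampler automates can be sketched in a few lines. In the sketch below, the category probabilities, the number of attributes elicited per grid and the target of at least three attributes per category are all assumed, illustrative planning values rather than outputs of the paper.

      # Illustrative Monte Carlo estimate of the number of repertory grids needed so that
      # every construct category receives at least `min_per_cat` attributes.
      # Category probabilities and attributes-per-grid are assumed values, not from the paper.
      import numpy as np

      def required_grids(cat_probs, attrs_per_grid=12, min_per_cat=3,
                         target_prob=0.95, max_grids=200, n_sim=2000, seed=1):
          rng = np.random.default_rng(seed)
          cat_probs = np.asarray(cat_probs) / np.sum(cat_probs)
          counts = np.zeros((n_sim, len(cat_probs)), dtype=int)
          for n in range(1, max_grids + 1):
              # each simulated study adds one more grid worth of attributes
              counts += rng.multinomial(attrs_per_grid, cat_probs, size=n_sim)
              success = np.mean((counts >= min_per_cat).all(axis=1))
              if success >= target_prob:
                  return n, success
          return max_grids, success

      n, p = required_grids(cat_probs=[0.30, 0.20, 0.15, 0.10, 0.10, 0.07, 0.05, 0.03])
      print(f"approximately {n} grids give P(all categories >= 3 attributes) ~ {p:.2f}")

    Varying cat_probs or min_per_cat shows directly how sensitive the required number of grids is to the assumed category distribution.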

  7. Data Quality Objectives for Regulatory Requirements for Dangerous Waste Sampling and Analysis; FINAL

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes sampling and analytical requirements needed to meet state and federal regulations for dangerous waste (DW). The River Protection Project (RPP) is assigned to the task of storage and interim treatment of hazardous waste. Any final treatment or disposal operations, as well as requirements under the land disposal restrictions (LDRs), fall in the jurisdiction of another Hanford organization and are not part of this scope. The requirements for this Data Quality Objective (DQO) Process were developed using the RPP Data Quality Objective Procedure (Banning 1996), which is based on the U.S. Environmental Protection Agency's (EPA) Guidance for the Data Quality Objectives Process (EPA 1994). Hereafter, this document is referred to as the DW DQO. Federal and state laws and regulations pertaining to waste contain requirements that are dependent upon the composition of the waste stream. These regulatory drivers require that pertinent information be obtained. For many requirements, documented process knowledge of a waste composition can be used instead of analytical data to characterize or designate a waste. When process knowledge alone is used to characterize a waste, it is a best management practice to validate the information with analytical measurements

  8. The role of point-of-care assessment of platelet function in predicting postoperative bleeding and transfusion requirements after coronary artery bypass grafting.

    Science.gov (United States)

    Mishra, Pankaj Kumar; Thekkudan, Joyce; Sahajanandan, Raj; Gravenor, Mike; Lakshmanan, Suresh; Fayaz, Khazi Mohammed; Luckraz, Heyman

    2015-01-01

    OBJECTIVE: Platelet function assessment after cardiac surgery can predict postoperative blood loss, guide transfusion requirements and discriminate the need for surgical re-exploration. We conducted this study to assess the predictive value of point-of-care testing of platelet function using the Multiplate® device. Patients undergoing isolated coronary artery bypass grafting were prospectively recruited ( n = 84). Group A ( n = 42) patients were on anti-platelet therapy until surgery; patients in Group B ( n = 42) stopped anti-platelet treatment at least 5 days preoperatively. Multiplate® and thromboelastography (TEG) tests were performed in the perioperative period. The primary end-point was excessive bleeding (>2.5 ml/kg/h) within the first 3 h postoperatively. Secondary end-points included transfusion requirements, re-exploration rates, and intensive care unit and in-hospital stays. Patients in Group A had more excessive bleeding (59% vs. 33%, P = 0.02) and higher re-exploration rates (14% vs. 0%). Platelet function testing was the most significant predictor of excessive bleeding (odds ratio [OR]: 2.3, P = 0.08) and of the need for blood transfusion (OR: 5.5). Platelet functional assessment with Multiplate® was the strongest predictor for bleeding and transfusion requirements in patients on anti-platelet therapy until the time of surgery.

  9. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Science.gov (United States)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman Microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to the unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to the evaluation of imaging capability of an optical instrument. Among a variety of measurement methods of PSF, the point source method has been widely used because it is easy to operate and the measurement results are approximate to the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
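
    The effect that the paper simulates can be illustrated with a one-dimensional toy model: an assumed Gaussian lateral PSF convolved with the profile of a finite microsphere. The 300 nm FWHM, the sphere diameters and the sampling step below are assumptions made only for illustration and do not reproduce the paper's theoretical model.

      # Sketch: how a finite "point" source broadens the measured lateral PSF.
      # The true PSF is modelled as a Gaussian (assumed FWHM 300 nm); the source is a
      # uniform disk (the microsphere). All sizes are illustrative, not the paper's values.
      import numpy as np

      def measured_fwhm(true_fwhm_nm=300.0, sphere_diam_nm=100.0, dx=1.0):
          x = np.arange(-3000.0, 3000.0, dx)
          sigma = true_fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
          psf = np.exp(-0.5 * (x / sigma) ** 2)
          # 1-D profile of a uniform disk of diameter d: proportional to sqrt((d/2)^2 - x^2)
          r = sphere_diam_nm / 2.0
          src = np.where(np.abs(x) <= r, np.sqrt(np.maximum(r ** 2 - x ** 2, 0.0)), 0.0)
          img = np.convolve(psf, src / src.sum(), mode="same")
          half = img.max() / 2.0
          above = x[img >= half]
          return above.max() - above.min()

      for d in (50, 100, 200, 500):
          print(d, "nm sphere ->", round(measured_fwhm(sphere_diam_nm=d), 1), "nm apparent FWHM")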

  10. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    Science.gov (United States)

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 microg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  11. Cloud point extraction for trace inorganic arsenic speciation analysis in water samples by hydride generation atomic fluorescence spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shan, E-mail: ls_tuzi@163.com; Wang, Mei, E-mail: wmei02@163.com; Zhong, Yizhou, E-mail: yizhz@21cn.com; Zhang, Zehua, E-mail: kazuki.0101@aliyun.com; Yang, Bingyi, E-mail: e_yby@163.com

    2015-09-01

    A new cloud point extraction technique was established and used for the determination of trace inorganic arsenic species in water samples combined with hydride generation atomic fluorescence spectrometry (HGAFS). As(III) and As(V) were complexed with ammonium pyrrolidinedithiocarbamate and molybdate, respectively. The complexes were quantitatively extracted with the non-ionic surfactant (Triton X-114) by centrifugation. After addition of antifoam, the surfactant-rich phase containing As(III) was diluted with 5% HCl for HGAFS determination. For As(V) determination, 50% HCl was added to the surfactant-rich phase, and the mixture was placed in an ultrasonic bath at 70 °C for 30 min. As(V) was reduced to As(III) with thiourea–ascorbic acid solution, followed by HGAFS. Under the optimum conditions, limits of detection of 0.009 and 0.012 μg/L were obtained for As(III) and As(V), respectively. Concentration factors of 9.3 and 7.9, respectively, were obtained for a 50 mL sample. The precisions were 2.1% for As(III) and 2.3% for As(V). The proposed method was successfully used for the determination of trace As(III) and As(V) in water samples, with satisfactory recoveries. - Highlights: • Cloud point extraction was firstly established to determine trace inorganic arsenic(As) species combining with HGAFS. • Separate As(III) and As(V) determinations improve the accuracy. • Ultrasonic release of complexed As(V) enables complete As(V) reduction to As(III). • Direct HGAFS analysis can be performed.

  12. 21 CFR 111.80 - What representative samples must you collect?

    Science.gov (United States)

    2010-04-01

    ..., PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Requirement to Establish a Production and... manufactured batch at points, steps, or stages, in the manufacturing process as specified in the master... statistical sampling plan (or otherwise every finished batch), before releasing for distribution to verify...

  13. The effects of spatial sampling choices on MR temperature measurements.

    Science.gov (United States)

    Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L

    2011-02-01

    The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle) with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
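
    The dependence of the measured peak on voxel size and grid position can be illustrated with a toy model in which a Gaussian temperature rise is averaged over voxels of different widths and offsets. The hot-spot width and voxel sizes below are assumptions; the study's results come from full experimental and simulated HIFU data, not from this sketch.

      # Sketch: effect of voxel size and grid offset on the measured peak of a Gaussian
      # temperature rise. The hot-spot width and voxel sizes are assumed for illustration.
      import numpy as np

      def sampled_peak(voxel_mm, offset_mm, sigma_mm=1.0, peak_c=20.0, fine=0.05):
          x = np.arange(-10.0, 10.0, fine)
          temp = peak_c * np.exp(-0.5 * (x / sigma_mm) ** 2)          # 1-D hot spot
          edges = np.arange(-10.0 + offset_mm, 10.0, voxel_mm)        # voxel boundaries
          measured = []
          for lo in edges[:-1]:
              sel = (x >= lo) & (x < lo + voxel_mm)
              if sel.any():
                  measured.append(temp[sel].mean())                   # voxel averages the signal
          return max(measured)

      for vox in (1.0, 2.0, 3.0):
          peaks = [sampled_peak(vox, off) for off in np.linspace(0.0, vox, 11)]
          print(f"{vox} mm voxels: measured peak {min(peaks):.1f}-{max(peaks):.1f} C (true 20.0 C)")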

  14. Determination of trace inorganic mercury species in water samples by cloud point extraction and UV-vis spectrophotometry.

    Science.gov (United States)

    Ulusoy, Halil Ibrahim

    2014-01-01

    A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and Ponpe 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and Ponpe 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 microg/L, and the RSD for five replicate measurements of 100 microg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.

  15. Hysteresis critical point of nitrogen in porous glass: occurrence of sample spanning transition in capillary condensation.

    Science.gov (United States)

    Morishige, Kunimitsu

    2009-06-02

    To examine the mechanisms for capillary condensation and for capillary evaporation in porous glass, we measured the hysteresis critical points and desorption scanning curves of nitrogen in four kinds of porous glasses with different pore sizes (Vycor, CPG75A, CPG120A, and CPG170A). The shapes of the hysteresis loop in the adsorption isotherm of nitrogen for the Vycor and the CPG75A changed with temperature, whereas those for the CPG120A and the CPG170A remained almost unchanged with temperature. The hysteresis critical points for the Vycor and the CPG75A fell on the common line observed previously for ordered mesoporous silicas. On the other hand, the hysteresis critical points for the CPG120A and the CPG170A deviated appreciably from the common line. This strongly suggests that capillary evaporation of nitrogen in the interconnected and disordered pores of both the Vycor and the CPG75A follows a cavitation process at least in the vicinity of their hysteresis critical temperatures in the same way as that in the cagelike pores of the ordered silicas, whereas the hysteresis critical points in the CPG120A and the CPG170A have origin different from that in the cagelike pores. The desorption scanning curves for the CPG75A indicated the nonindependence of the porous domains. On the other hand, for both the CPG120A and the CPG170A, we obtained the scanning curves that are expected from the independent domain theory. All these results suggest that sample spanning transitions in capillary condensation and evaporation take place inside the interconnected pores of both the CPG120A and the CPG170A.

  16. What are the essential competencies required of a midwife at the point of registration?

    Science.gov (United States)

    Butler, Michelle M; Fraser, Diane M; Murphy, Roger J L

    2008-09-01

    to identify the essential competencies required of a midwife at the point of registration. qualitative, descriptive, extended case study and depth interviews. pre-registration midwifery education in England. 39 qualifying midwives, their assessors, midwives and midwife teachers across six higher education institutions, and 20 experienced midwives at two sites. essential competencies were identified relating to (1) being a safe practitioner; (2) having the right attitude; and (3) being an effective communicator. In order to be a safe practitioner, it was proposed that a midwife must have a reasonable degree of self-sufficiency, use up-to-date knowledge in practice, and have self and professional awareness. It was suggested that having the right attitude involves being motivated, being committed to midwifery and being caring and kind. Participants highlighted the importance of effective communication so that midwives can relate to and work in partnership with women and provide truly informed choice. Essential communication skills include active listening, providing appropriate information and flexibility. the most important requirement at registration is that a midwife is safe and will practise safely. However, this capability to be safe is further mediated by attitudes and communication skills. models of midwifery competence should always include personal attributes and effective communication in addition to the competencies required to be able to practise safely, and there should be an explicit focus in curriculum content, skills training and assessment on attitudes and communication.

  17. End points for adjuvant therapy trials: has the time come to accept disease-free survival as a surrogate end point for overall survival?

    Science.gov (United States)

    Gill, Sharlene; Sargent, Daniel

    2006-06-01

    The intent of adjuvant therapy is to eradicate micro-metastatic residual disease following curative resection with the goal of preventing or delaying recurrence. The time-honored standard for demonstrating efficacy of new adjuvant therapies is an improvement in overall survival (OS). This typically requires phase III trials of large sample size with lengthy follow-up. With the intent of reducing the cost and time of completing such trials, there is considerable interest in developing alternative or surrogate end points. A surrogate end point may be employed as a substitute to directly assess the effects of an intervention on an already accepted clinical end point such as mortality. When used judiciously, surrogate end points can accelerate the evaluation of new therapies, resulting in the more timely dissemination of effective therapies to patients. The current review provides a perspective on the suitability and validity of disease-free survival (DFS) as an alternative end point for OS. Criteria for establishing surrogacy and the advantages and limitations associated with the use of DFS as a primary end point in adjuvant clinical trials and as the basis for approval of new adjuvant therapies are discussed.

  18. Mechanical Conversion for High-Throughput TEM Sample Preparation

    International Nuclear Information System (INIS)

    Kendrick, Anthony B; Moore, Thomas M; Zaykova-Feldman, Lyudmila

    2006-01-01

    This paper presents a novel method of direct mechanical conversion from lift-out sample to TEM sample holder. The lift-out sample is prepared in the FIB using the in-situ liftout Total Release™ method. The mechanical conversion is conducted using a mechanical press and one of a variety of TEM coupons, including coupons for both top-side and back-side thinning. The press joins a probe tip point with attached TEM sample to the sample coupon and separates the complete assembly as a 3 mm diameter TEM grid, compatible with commercially available TEM sample holder rods. This mechanical conversion process lends itself well to the high-throughput requirements of in-line process control and to materials characterization labs where instrument utilization and sample security are critically important

  19. Melting point of yttria

    International Nuclear Information System (INIS)

    Skaggs, S.R.

    1977-06-01

    Fourteen samples of 99.999 percent Y2O3 were melted near the focus of a 250-W CO2 laser. The average value of the observed melting point along the solid-liquid interface was 2462 ± 19 °C. Several of these same samples were then melted in ultrahigh-purity oxygen, nitrogen, helium, or argon and in water vapor. No change in the observed temperature was detected, with the exception of a 20 °C increase in temperature from air to helium gas. Post-test examination of the sample characteristics, clarity, sphericity, and density is presented, along with composition. It is suggested that yttria is superior to alumina as a secondary melting-point standard

  20. Visible-near infrared point spectrometry of drill core samples from Río Tinto, Spain: results from the 2005 Mars Astrobiology Research and Technology Experiment (MARTE) drilling exercise.

    Science.gov (United States)

    Sutter, Brad; Brown, Adrian J; Stoker, Carol R

    2008-10-01

    Sampling of subsurface rock may be required to detect evidence of past biological activity on Mars. The Mars Astrobiology Research and Technology Experiment (MARTE) utilized the Río Tinto region, Spain, as a Mars analog site to test dry drilling technologies specific to Mars that retrieve subsurface rock for biological analysis. This work examines the usefulness of visible-near infrared (VNIR) (450-1000 nm) point spectrometry to characterize ferric iron minerals in core material retrieved during a simulated Mars drilling mission. VNIR spectrometry can indicate the presence of aqueously precipitated ferric iron minerals and, thus, determine whether biological analysis of retrieved rock is warranted. Core spectra obtained during the mission with T1 (893-897 nm) and T2 (644-652 nm) features indicate goethite-dominated samples, while relatively lower wavelength T1 (832-880 nm) features indicate hematite. Hematite/goethite molar ratios varied from 0 to 1.4, and within the 880-898 nm range, T1 features were used to estimate hematite/goethite molar ratios. Post-mission X-ray analysis detected phyllosilicates, which indicates that examining beyond the VNIR (e.g., shortwave infrared, 1000-2500 nm) will enhance the detection of other minerals formed by aqueous processes. Despite the limited spectral range of VNIR point spectrometry utilized in the MARTE Mars drilling simulation project, ferric iron minerals could be identified in retrieved core material, and their distribution served to direct core subsampling for biological analysis.

  1. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
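
    For orientation, the basic acceptance-sampling-by-variables decision that such calculators implement can be written in a few lines. The sketch below shows only the classical single-sided k-method criterion under a normality assumption; the sample data, specification limit and acceptability constant k are placeholders, not values from the NESC assessment.

      # Sketch of a single-sided acceptance-sampling-by-variables check (k-method),
      # assuming normally distributed measurements. The plan parameters n and k are
      # placeholders; a real plan derives them from the producer/consumer risk points.
      import statistics

      def accept_lot(measurements, upper_spec_limit, k):
          """Accept if (USL - mean) / s >= k, the classical variables criterion."""
          xbar = statistics.fmean(measurements)
          s = statistics.stdev(measurements)          # sample standard deviation
          return (upper_spec_limit - xbar) / s >= k

      sample = [9.8, 10.1, 9.9, 10.3, 10.0, 9.7, 10.2, 9.9, 10.1, 10.0]
      print("accept lot:", accept_lot(sample, upper_spec_limit=11.0, k=1.72))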

  2. Air sampling in the workplace to meet the new part 20 requirements

    International Nuclear Information System (INIS)

    McGuire, S.; Hickey, E.E.; Knox, W.

    1991-01-01

    The US Nuclear Regulatory Commission is developing a Regulatory Guide on air sampling in the workplace to meet the requirements of the revised Part 20. The guide will be accompanied by a technical manual describing and giving examples of how to meet the recommendations in the guide. Draft versions of the guide and manual are scheduled to be published for public comment this year. A final guide and manual, revised to consider the public comments, are scheduled to be published in 1992. This talk will summarize some of the more important features of the guide and manual. In particular, the talk will discuss how to demonstrate that samples taken to estimate worker intakes are representative of the air inhaled by workers and what measurements are necessary if a licensee wants to adjust derived air concentrations to account for particle size

  3. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for accuracy and productivity of measuring of the workpieces. The use of coordinate measuring machines is currently the most effective measuring tool for solving similar problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to repeatedly reduce the number of control points while maintaining the required measurement accuracy.
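
    A minimal version of the kind of Monte Carlo experiment described above might look as follows. The simulated surface form, the noise level and the flatness evaluation by peak-to-valley of the sampled points are all assumptions made for illustration; they are not the authors' model of the coordinate measuring machine.

      # Sketch: Monte Carlo estimate of how flatness-evaluation error depends on the
      # number of control points. Surface form and noise amplitudes are assumed values.
      import numpy as np

      rng = np.random.default_rng(42)

      def true_surface(x, y):
          # assumed form deviation of the workpiece surface, in micrometres
          return 2.0 * np.sin(3 * x) * np.cos(2 * y)

      def flatness_from_points(n_points, noise_um=0.5):
          x, y = rng.uniform(0, np.pi, n_points), rng.uniform(0, np.pi, n_points)
          z = true_surface(x, y) + rng.normal(0.0, noise_um, n_points)
          return z.max() - z.min()            # peak-to-valley flatness estimate

      # reference value from a dense, noise-free evaluation
      gx, gy = np.meshgrid(np.linspace(0, np.pi, 400), np.linspace(0, np.pi, 400))
      reference = np.ptp(true_surface(gx, gy))

      for n in (5, 10, 20, 50, 100):
          errs = np.array([flatness_from_points(n) - reference for _ in range(2000)])
          print(f"n={n:3d}: mean error {errs.mean():+.2f} um, std {errs.std():.2f} um")

    Repeating the simulation for several point counts exposes the trade-off reported in the abstract: fewer control points lower the mean estimate and widen the error distribution.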

  4. Sampling and instrumentation requirements for long-range D and D activities at INEL

    International Nuclear Information System (INIS)

    Ahlquist, A.J.

    1985-01-01

    Assistance was requested to help determine sampling and instrumentation requirements for the long-range decontamination and decommissioning activities at the Idaho National Engineering Laboratory. Through a combination of literature review, visits to other DOE contractors, and a determination of the needs for the INEL program, a draft report has been prepared that is now under review. The final report should be completed in FY 84

  5. A cautionary note about the cross-national and clinical validity of cut-off points for the Maslach Burnout Inventory.

    Science.gov (United States)

    Schaufeli, W B; Van Dierendonck, D

    1995-06-01

    In the present study, burnout scores of three samples, as measured with the Maslach Burnout Inventory, were compared: (1) the normative American sample from the test-manual (N = 10,067), (2) the normative Dutch sample (N = 3,892), and (3) a Dutch outpatient sample (N = 142). Generally, the highest burnout scores were found for the outpatient sample, followed by the American and Dutch normative samples, respectively. Slightly different patterns were noted for each of the three components. Probably sampling bias, i.e., the healthy worker effect, or cultural value patterns, i.e., femininity versus masculinity, might be responsible for the results. It is concluded that extreme caution is required when cut-off points are used to classify individuals by burnout scores; only nation-specific and clinically derived cut-off points should be employed.

  6. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, which ranged from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishment of the number of samples and spatial variability for the physical properties of soils may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
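
    One common way to turn a pilot data set into a required number of sampling points is the t-based formula n = (t·CV/E)², where CV is the coefficient of variation and E the allowable error as a percentage of the mean. The sketch below uses that textbook formula with invented pilot values; it is not necessarily the exact statistic applied by the authors.

      # Sketch: required number of sampling points from a pilot sample, using the
      # classical t-based formula n = (t * CV / E)^2. CV is the coefficient of
      # variation (%) and E the allowable error around the mean (%). The pilot data
      # below are invented for illustration.
      import math
      from statistics import mean, stdev
      from scipy.stats import t

      def required_n(pilot_values, allowable_error_pct=10.0, alpha=0.05):
          cv = 100.0 * stdev(pilot_values) / mean(pilot_values)
          t_crit = t.ppf(1.0 - alpha / 2.0, df=len(pilot_values) - 1)
          n = (t_crit * cv / allowable_error_pct) ** 2
          return max(2, math.ceil(n)), cv

      bulk_density = [1.21, 1.35, 1.28, 1.40, 1.19, 1.31, 1.25, 1.37, 1.29, 1.33]  # pilot data
      n, cv = required_n(bulk_density)
      print(f"CV = {cv:.1f}%  ->  about {n} sampling points for +/-10% of the mean at 95% confidence")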

  7. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...

  8. 40 CFR 80.583 - What alternative sampling and testing requirements apply to importers who transport motor vehicle...

    Science.gov (United States)

    2010-07-01

    ... requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel... alternative sampling and testing requirements apply to importers who transport motor vehicle diesel fuel, NRLM...

  9. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    MULKEY, C.H.

    1999-07-06

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements pursuant to Washington State regulations, that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations because the regulations are in constant change, and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. Data needs require samples for both radionuclides and nonradionuclide analytes of air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not for health and safety issues. This document does not address health or safety regulations or requirements (those of the Occupational Safety and Health Administration or the National Institute of Occupational Safety and Health) or continuous emission monitoring systems. This DQO is applicable to all equipment, facilities, and operations under the jurisdiction of RPP that emit or have the potential to emit regulated air pollutants.

  10. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements pursuant to Washington State regulations, that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations because the regulations are in constant change, and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. Data needs require samples for both radionuclides and nonradionuclide analytes of air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not for health and safety issues. This document does not address health or safety regulations or requirements (those of the Occupational Safety and Health Administration or the National Institute of Occupational Safety and Health) or continuous emission monitoring systems. This DQO is applicable to all equipment, facilities, and operations under the jurisdiction of RPP that emit or have the potential to emit regulated air pollutants

  11. Lead preconcentration in synthetic samples with triton x-114 in the cloud point extraction and analysis by atomic absorption (EAAF)

    International Nuclear Information System (INIS)

    Zegarra Pisconti, Marixa; Cjuno Huanca, Jesus

    2015-01-01

    A methodology was developed for the preconcentration of lead in water samples to which dithizone, previously dissolved in the nonionic surfactant Triton X-114, was added as complexing agent until the critical micelle concentration and the cloud point temperature were reached. The centrifuged system gave a precipitate with high concentrations of Pb(II) that was measured by flame atomic absorption spectrometry (FAAS). The method proved feasible as a preconcentration and analysis method for Pb in aqueous samples with concentrations below 1 ppm. Several parameters were evaluated to obtain a percentage recovery of 89.8%. (author)

  12. NID Copper Sample Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kouzes, Richard T.; Zhu, Zihua

    2011-09-12

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology, possibly one under development at Nonlinear Ion Dynamics (NID), will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL in January 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are reported here. A second sample of isotopically separated copper was provided by NID to PNNL in August 2011 for isotopic analysis as a test of the NID technology. The results of that analysis are also reported here.

  13. INEL Sample Management Office

    International Nuclear Information System (INIS)

    Watkins, C.

    1994-01-01

    The Idaho National Engineering Laboratory (INEL) Sample Management Office (SMO) was formed as part of the EG&G Idaho Environmental Restoration Program (ERP) in June 1990. Since then, the SMO has been recognized and sought out by other prime contractors and programs at the INEL. Since December 1991, the DOE-ID Division Directors for the Environmental Restoration Division and Waste Management Division supported the expansion of the INEL ERP SMO into the INEL site-wide SMO. The INEL SMO serves as a point of contact for multiple environmental analytical chemistry and laboratory issues (e.g., capacity, capability). The SMO chemists work with project managers during planning to help develop data quality objectives, select appropriate analytical methods, identify special analytical services needs, identify a source for the services, and ensure that requirements for sampling and analysis (e.g., preservations, sample volumes) are clear and technically accurate. The SMO chemists also prepare work scope statements for the laboratories performing the analyses

  14. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  15. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
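
    For reference, the conventional calculation the authors critique can be written down in a few lines for the simple case of comparing two group means. The effect size and standard deviation used here are assumed planning values; the article's argument is precisely that such inputs are frequently not available, or not valid, before an animal experiment is run.

      # Sketch: conventional sample size per group for a two-sided two-sample comparison
      # of means (normal approximation). The effect size and SD below are assumed
      # planning values, chosen only to make the example concrete.
      import math
      from scipy.stats import norm

      def n_per_group(delta, sd, alpha=0.05, power=0.80):
          z_a = norm.ppf(1.0 - alpha / 2.0)
          z_b = norm.ppf(power)
          return math.ceil(2.0 * ((z_a + z_b) * sd / delta) ** 2)

      print(n_per_group(delta=1.0, sd=1.2))   # e.g. detect a 1.0-unit difference with SD 1.2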

  16. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extremes Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records It emphasizes the core primacy of three topics necessary for understanding extremes the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  17. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear...... inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework......) Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. (3) Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional...

  18. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values ..., while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.
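
    The sensitivity that motivates the paper can be illustrated by evaluating a simple forecast comparison over many candidate split points. In the sketch below the data are simulated from an AR(1) process and an AR(1) forecast is compared with a recursive-mean benchmark; this only demonstrates that the out-of-sample verdict moves with the split point and is not the robust test statistic proposed by the authors.

      # Sketch: how an out-of-sample comparison can vary with the choice of split point.
      # Simulated AR(1) data; forecasts from a recursively estimated AR(1) are compared
      # with a recursive-mean benchmark.
      import numpy as np

      rng = np.random.default_rng(7)
      T, phi = 400, 0.2
      y = np.zeros(T)
      for t in range(1, T):
          y[t] = phi * y[t - 1] + rng.normal()

      def oos_mse_ratio(split):
          se_ar, se_mean = [], []
          for t in range(split, T - 1):
              hist, lag = y[1:t], y[:t - 1]
              beta = (lag @ hist) / (lag @ lag)            # recursive OLS AR(1) slope
              se_ar.append((y[t + 1] - beta * y[t]) ** 2)
              se_mean.append((y[t + 1] - hist.mean()) ** 2)
          return np.mean(se_ar) / np.mean(se_mean)         # <1 means the AR(1) "wins"

      for split in (100, 150, 200, 250, 300, 350):
          print(f"split at t={split}: OOS MSE ratio = {oos_mse_ratio(split):.3f}")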

  19. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming

    2014-08-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
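
    The relaxation at the heart of FPO can be sketched in two dimensions: each point is removed in turn and re-inserted at the position farthest from all remaining points, which tends to increase the minimum inter-point distance. The sketch below approximates the farthest position with a brute-force candidate grid rather than the Delaunay-based search used in the paper, so it illustrates the idea, not the published algorithm.

      # Sketch of farthest-point optimization (FPO) in 2D: each point is removed in turn
      # and re-inserted at the position farthest from all remaining points. The farthest
      # position is approximated here with a brute-force candidate grid.
      import numpy as np

      rng = np.random.default_rng(3)
      pts = rng.random((64, 2))                                   # initial random points in [0,1]^2

      g = np.linspace(0.0, 1.0, 101)
      cand = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # candidate re-insertion sites

      def nearest_dist(targets, others):
          """Distance from each target to its nearest point in `others`."""
          d = np.linalg.norm(targets[:, None, :] - others[None, :, :], axis=2)
          return d.min(axis=1)

      for sweep in range(10):
          for i in range(len(pts)):
              others = np.delete(pts, i, axis=0)
              pts[i] = cand[np.argmax(nearest_dist(cand, others))]   # farthest-point re-insertion
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
          np.fill_diagonal(d, np.inf)
          print(f"sweep {sweep + 1}: min inter-point distance = {d.min():.4f}")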

  20. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter

    2014-01-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  1. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  2. NG09 And CTBT On-Site Inspection Noble Gas Sampling and Analysis Requirements

    Science.gov (United States)

    Carrigan, Charles R.; Tanaka, Junichi

    2010-05-01

    A provision of the Comprehensive Test Ban Treaty (CTBT) allows on-site inspections (OSIs) of suspect nuclear sites to determine if the occurrence of a detected event is nuclear in origin. For an underground nuclear explosion (UNE), the potential success of an OSI depends significantly on the containment scenario of the alleged event as well as the application of air and soil-gas radionuclide sampling techniques in a manner that takes into account both the suspect site geology and the gas transport physics. UNE scenarios may be broadly divided into categories involving the level of containment. The simplest to detect is a UNE that vents a significant portion of its radionuclide inventory and is readily detectable at distance by the International Monitoring System (IMS). The most well contained subsurface events will only be detectable during an OSI. In such cases, 37 Ar and radioactive xenon cavity gases may reach the surface through either "micro-seepage" or the barometric pumping process and only the careful siting of sampling locations, timing of sampling and application of the most site-appropriate atmospheric and soil-gas capturing methods will result in a confirmatory signal. The OSI noble gas field tests NG09 was recently held in Stupava, Slovakia to consider, in addition to other field sampling and analysis techniques, drilling and subsurface noble gas extraction methods that might be applied during an OSI. One of the experiments focused on challenges to soil-gas sampling near the soil-atmosphere interface. During withdrawal of soil gas from shallow, subsurface sample points, atmospheric dilution of the sample and the potential for introduction of unwanted atmospheric gases were considered. Tests were designed to evaluate surface infiltration and the ability of inflatable well-packers to seal out atmospheric gases during sample acquisition. We discuss these tests along with some model-based predictions regarding infiltration under different near

  3. NID Copper Sample Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kouzes, Richard T.; Zhu, Zihua

    2011-02-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76Ge. The DEMONSTRATOR will utilize 76Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  4. NID Copper Sample Analysis

    International Nuclear Information System (INIS)

    Kouzes, Richard T.; Zhu, Zihua

    2011-01-01

    The current focal point of the nuclear physics program at PNNL is the MAJORANA DEMONSTRATOR, and the follow-on Tonne-Scale experiment, a large array of ultra-low background high-purity germanium detectors, enriched in 76 Ge, designed to search for zero-neutrino double-beta decay (0νββ). This experiment requires the use of germanium isotopically enriched in 76 Ge. The DEMONSTRATOR will utilize 76 Ge from Russia, but for the Tonne-Scale experiment it is hoped that an alternate technology under development at Nonlinear Ion Dynamics (NID) will be a viable, US-based, lower-cost source of separated material. Samples of separated material from NID require analysis to determine the isotopic distribution and impurities. The MAJORANA DEMONSTRATOR is a DOE and NSF funded project with a major science impact. DOE is funding NID through an SBIR grant for development of their separation technology for application to the Tonne-Scale experiment. The Environmental Molecular Sciences facility (EMSL), a DOE user facility at PNNL, has the required mass spectroscopy instruments for making these isotopic measurements that are essential to the quality assurance for the MAJORANA DEMONSTRATOR and for the development of the future separation technology required for the Tonne-Scale experiment. A sample of isotopically separated copper was provided by NID to PNNL for isotopic analysis as a test of the NID technology. The results of that analysis are reported here.

  5. Experimental study and modelling of the well-mixing length. Application to the representativeness of sampling points in duct

    International Nuclear Information System (INIS)

    Alengry, Jonathan

    2014-01-01

    Monitoring of gaseous releases from nuclear installations to the environment and measurement of air cleaning efficiency are based on regular measurements of contaminant concentrations in outlet chimneys and ventilation systems. The concentration distribution may be heterogeneous at the measuring point if the mixing distance is not sufficient. This raises the question of where to place the measuring point in the duct and of the error, relative to the homogeneous concentration, incurred when this distance is not respected. This study defines the so-called 'well-mixing length' from laboratory experiments. The bench designed for these tests made it possible to reproduce flows in long circular and rectangular ducts, each including a bend. An optical measurement technique was developed, calibrated and used to measure the concentration distribution of a tracer injected into the flow. The experimental results in the cylindrical duct validated an analytical model based on the convection-diffusion equation of a tracer, and allowed models of the well-mixing length and of the representativeness of sampling points to be proposed. In the rectangular duct, the measurements acquired constitute a first database on the evolution of tracer homogenization, with a view to numerical simulations exploring more realistic conditions for in situ measurements. (author) [fr]

  6. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Georg, E-mail: georg.hartmann@tum.de [Department of Chemistry, Technische Universitaet Muenchen, 85748 Garching (Germany); Schuster, Michael, E-mail: michael.schuster@tum.de [Department of Chemistry, Technische Universitaet Muenchen, 85748 Garching (Germany)

    2013-01-25

    Highlights: • We optimized cloud point extraction and ET-AAS parameters for Au-NP measurement. • A selective ligand (sodium thiosulphate) is introduced for species separation. • A limit of detection of 5 ng Au-NP per L is achieved for aqueous samples. • Measurement of samples with high natural organic matter content is possible. • Real water samples including wastewater treatment plant effluent were analyzed. - Abstract: The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NP ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L-1 is achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L-1. The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L-1 is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples.

  7. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    Science.gov (United States)

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
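
    The density-aggregation step described above can be illustrated with a short sketch. The fixed-bandwidth Gaussian kernel below is a simplification of the paper's adaptive "super kernel density estimation", and the linear blend stands in for its density morphing algorithm; the grid size, bandwidth and synthetic point streams are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kde_heatmap(points, grid_size=128, bandwidth=0.05):
    """Aggregate 2D streaming points (in [0,1]^2) into a density grid with an
    isotropic Gaussian kernel; a fixed bandwidth stands in for the adaptive
    'super kernel' of StreamMap."""
    xs = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    density = np.zeros((grid_size, grid_size))
    inv2h2 = 1.0 / (2.0 * bandwidth ** 2)
    for px, py in points:
        density += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) * inv2h2)
    return density / max(len(points), 1)   # normalise so frames are comparable

def morph(frame_a, frame_b, n_steps=5):
    """Linear blending between two density frames, a stand-in for the paper's
    density morphing algorithm."""
    return [(1 - t) * frame_a + t * frame_b
            for t in np.linspace(0.0, 1.0, n_steps)]

# Example: two synthetic frames of streaming points
rng = np.random.default_rng(0)
frame1 = kde_heatmap(rng.random((500, 2)))
frame2 = kde_heatmap(rng.random((500, 2)) * 0.5 + 0.25)
frames = morph(frame1, frame2)
```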

  8. Determination of cadmium in real water samples by flame atomic absorption spectrometry after cloud point extraction

    International Nuclear Information System (INIS)

    Naeemullah, A.; Kazi, T.G.

    2011-01-01

    Water pollution is a global threat and a leading worldwide cause of death and disease. Awareness of the potential danger posed by heavy metals to ecosystems, and in particular to human health, has grown tremendously in the past decades. Separation and preconcentration procedures are considered of great importance in analytical and environmental chemistry. Cloud point extraction is one of the most reliable and sophisticated separation methods for the determination of trace quantities of heavy metals. Cloud point methodology was successfully employed for the preconcentration of trace quantities of cadmium prior to its determination by flame atomic absorption spectrometry (FAAS). The metal reacts with 8-hydroxyquinoline in a surfactant (Triton X-114) medium. Parameters such as pH, concentrations of the reagent and of Triton X-114, equilibration temperature and centrifugation time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. After phase separation, the surfactant-rich phase was diluted with acidified ethanol and the cadmium content was measured by FAAS. The procedure was validated by spike addition. The method was applied to the determination of Cd in water samples from different ecosystems (lake and river). (author)

  9. Determination of rhodium in metallic alloy and water samples using cloud point extraction coupled with spectrophotometric technique

    Science.gov (United States)

    Kassem, Mohammed A.; Amin, Alaa S.

    2015-02-01

    A new method to estimate rhodium at trace levels in different samples has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) in an aqueous medium, and the complex was preconcentrated by cloud point extraction with the nonionic surfactant Triton X-114 from aqueous solutions at pH 4.75. After phase separation at 50 °C, the surfactant-rich phase was decanted, heated at 100 °C to remove residual water, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range of 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.

  10. Finger image quality based on singular point localization

    DEFF Research Database (Denmark)

    Wang, Jinghua; Olsen, Martin A.; Busch, Christoph

    2014-01-01

    Singular points are important global features of fingerprints and singular point localization is a crucial step in biometric recognition. Moreover the presence and position of the core point in a captured fingerprint sample can reflect whether the finger is placed properly on the sensor. Therefore...... and analyze the importance of singular points on biometric accuracy. The experiment is based on large scale databases and conducted by relating the measured quality of a fingerprint sample, given by the positions of core points, to the biometric performance. The experimental results show the positions of core...

  11. Determination of geostatistically representative sampling locations in Porsuk Dam Reservoir (Turkey)

    Science.gov (United States)

    Aksoy, A.; Yenilmez, F.; Duzgun, S.

    2013-12-01

    Several factors such as wind action, bathymetry and shape of a lake/reservoir, inflows, outflows, and point and diffuse pollution sources result in spatial and temporal variations in the water quality of lakes and reservoirs. The guides by the United Nations Environment Programme and the World Health Organization on designing and implementing water quality monitoring programs suggest that even a single monitoring station near the center or at the deepest part of a lake will be sufficient to observe long-term trends if there is good horizontal mixing. In stratified water bodies, several samples may be required. According to the guide on sampling and analysis under the Turkish Water Pollution Control Regulation, a minimum of five sampling locations should be employed to characterize the water quality in a reservoir or a lake. The European Union Water Framework Directive (2000/60/EC) requires selection of a sufficient number of monitoring sites to assess the magnitude and impact of point and diffuse sources and hydromorphological pressures when designing a monitoring program. Although existing regulations and guidelines include frameworks for the determination of sampling locations in surface waters, most of them do not specify a procedure for establishing representative sampling locations in lakes and reservoirs. In this study, geostatistical tools are used to determine representative sampling locations in the Porsuk Dam Reservoir (PDR). Kernel density estimation and kriging were used in combination to select the representative sampling locations. Dissolved oxygen and specific conductivity were measured at 81 points, sixteen of which were used for validation. In selecting the representative sampling locations, care was taken to preserve the spatial structure of the measured parameter distributions, and a procedure was proposed for that purpose. Results indicated that the spatial structure was lost under 30 sampling points. This was as a result of varying water
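
    As a hedged illustration of the geostatistical machinery involved (not the authors' actual workflow), the sketch below computes an empirical semivariogram from scattered measurements, the usual first step before kriging; the coordinates, measured values and lag binning are synthetic.

```python
import numpy as np

def empirical_semivariogram(coords, values, n_lags=10, max_lag=None):
    """Empirical semivariogram gamma(h) = 0.5 * mean[(z_i - z_j)^2] over
    point pairs binned by separation distance h."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(values), k=1)          # each pair once
    dist = d[iu]
    sq_diff = 0.5 * (values[:, None] - values[None, :])[iu] ** 2
    max_lag = max_lag or dist.max()
    edges = np.linspace(0.0, max_lag, n_lags + 1)
    lags, gammas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        if mask.any():
            lags.append(dist[mask].mean())
            gammas.append(sq_diff[mask].mean())
    return np.array(lags), np.array(gammas)

# Example with synthetic conductivity-like data at 81 sites
rng = np.random.default_rng(1)
xy = rng.random((81, 2)) * 1000.0                    # metres
z = 400 + 50 * np.sin(xy[:, 0] / 200.0) + rng.normal(0, 5, 81)
lag, gamma = empirical_semivariogram(xy, z)
```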

  12. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC 0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC 0-t of saroglitazar. Only models with regression coefficients (R 2 ) >0.90 were screened for further evaluation. The best R 2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC 0-t of saroglitazar and verification of precision and bias using Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R 2 > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R 2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R 2 = 0.9323) and 0.75, 2, and 8 hours (R 2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were prediction of saroglitazar. The same models, when applied to the AUC 0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error model predicts the exposure of
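
    The model-building and validation steps named in the abstract can be sketched as follows. The time points, simulated concentrations and coefficients are illustrative assumptions; only the structure (a least-squares fit of AUC on three sampled concentrations, followed by MPE, MAPE and RMSE) mirrors the described approach.

```python
import numpy as np

def fit_limited_sampling_model(conc_matrix, auc):
    """Least-squares fit AUC ~ b0 + b1*C(t1) + b2*C(t2) + b3*C(t3).
    conc_matrix: (n_subjects, 3) concentrations at the chosen time points."""
    X = np.column_stack([np.ones(len(auc)), conc_matrix])
    coef, *_ = np.linalg.lstsq(X, auc, rcond=None)
    pred = X @ coef
    ss_res = np.sum((auc - pred) ** 2)
    ss_tot = np.sum((auc - auc.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot              # coefficients, R^2

def validation_metrics(observed, predicted):
    err = predicted - observed
    mpe = 100.0 * np.mean(err / observed)           # mean prediction error, %
    mape = 100.0 * np.mean(np.abs(err) / observed)  # mean absolute prediction error, %
    rmse = np.sqrt(np.mean(err ** 2))               # root mean square error
    return mpe, mape, rmse

# Illustrative use with simulated training data (concentrations at 0.5, 2, 8 h)
rng = np.random.default_rng(2)
C = rng.lognormal(mean=1.0, sigma=0.3, size=(25, 3))
auc_train = 2.0 + C @ np.array([1.5, 3.0, 6.0]) + rng.normal(0, 0.5, 25)
coef, r2 = fit_limited_sampling_model(C, auc_train)
```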

  13. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    Science.gov (United States)

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico activation was simulated on a 4×4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2, P density (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2.

  14. Tracer measured substrate turnover requires arterial sampling downstream of infusion site

    International Nuclear Information System (INIS)

    Stanley, W.C.; Neese, R.A.; Gertz, E.W.; Wisneski, J.A.; Morris, D.L.; Brooks, G.A.

    1986-01-01

    Measurement of metabolite turnover (Rt) with radioactive tracers is done either by infusing tracer venously and sampling specific activity (SA) arterially (V-A mode), or by infusing into the aorta and sampling venous blood (A-V mode). Using the Fick principle, the necessity of the V-A mode can be demonstrated. If tracer is infused into the left ventricle, then in a steady state Rt is the product of the arterial metabolite concentration, the cardiac output, and the whole-body tracer extraction ratio. This is expressed as Rt = Ca x Q x ((*Ca - *Cv)/*Ca) (Eq. 1), where C = metabolite concentration (μmol/ml), *C = tracer concentration (dpm/ml), a = arterial, v = mixed venous, and Q = cardiac output (ml/min). Rearranging the equation gives Rt = Q x (*Ca - *Cv)/SAa = F/SAa (Eq. 2), where SAa is *Ca/Ca and Q x (*Ca - *Cv) equals the infusion rate (F). The authors compared Eqs. 1 and 2 (Rt = F/SAa) in 3 anesthetized dogs in which [1-14C]lactate was infused into the left ventricle, and blood was sampled arterially downstream from the infusion site and in the pulmonary artery. Eqs. 1 and 2 gave similar results for Rt (45.9 vs. 43.9 μmol/kg·min), while substituting SAv for SAa (A-V mode) in Eq. 2 gave a higher Rt (53.6). When SAv (A-V mode) is used, the specific activity seen by the tissues (SAa) is not considered in the calculation of Rt. Therefore, only the V-A mode meets the requirements for tracer-measured metabolite turnover.
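
    Restated in consistent notation (asterisks denoting tracer quantities), the two expressions compared above are:

```latex
\begin{align}
  R_t &= C_a \, Q \, \frac{C_a^{*} - C_v^{*}}{C_a^{*}} && \text{(Eq.~1, Fick form)} \\
  R_t &= \frac{Q\,(C_a^{*} - C_v^{*})}{\mathrm{SA}_a} = \frac{F}{\mathrm{SA}_a},
  \qquad \mathrm{SA}_a = \frac{C_a^{*}}{C_a} && \text{(Eq.~2, V--A mode)}
\end{align}
```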

  15. Determination of carcinogenic herbicides in milk samples using green non-ionic silicone surfactant of cloud point extraction and spectrophotometry.

    Science.gov (United States)

    Mohd, N I; Zain, N N M; Raoov, M; Mohamad, S

    2018-04-01

    A new cloud point methodology was successfully used for the extraction of carcinogenic pesticides in milk samples as a prior step to their determination by spectrophotometry. In this work, non-ionic silicone surfactant, also known as 3-(3-hydroxypropyl-heptatrimethylxyloxane), was chosen as a green extraction solvent because of its structure and properties. The effect of different parameters, such as the type of surfactant, concentration and volume of surfactant, pH, salt, temperature, incubation time and water content on the cloud point extraction of carcinogenic pesticides such as atrazine and propazine, was studied in detail and a set of optimum conditions was established. A good correlation coefficient ( R 2 ) in the range of 0.991-0.997 for all calibration curves was obtained. The limit of detection was 1.06 µg l -1 (atrazine) and 1.22 µg l -1 (propazine), and the limit of quantitation was 3.54 µg l -1 (atrazine) and 4.07 µg l -1 (propazine). Satisfactory recoveries in the range of 81-108% were determined in milk samples at 5 and 1000 µg l -1 , respectively, with low relative standard deviation, n  = 3 of 0.301-7.45% in milk matrices. The proposed method is very convenient, rapid, cost-effective and environmentally friendly for food analysis.

  16. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis; FINAL

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements pursuant to Washington State regulations that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations because the regulations are constantly changing, and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. Data needs require samples for both radionuclides and nonradionuclide analytes of air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not for health and safety issues. This document does not address health or safety regulations or requirements (those of the Occupational Safety and Health Administration or the National Institute of Occupational Safety and Health) or continuous emission monitoring systems. This DQO is applicable to all equipment, facilities, and operations under the jurisdiction of RPP that emit or have the potential to emit regulated air pollutants.

  17. Prospects for direct neutron capture measurements on s-process branching point isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, C.; Lerendegui-Marco, J.; Quesada, J.M. [Universidad de Sevilla, Dept. de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Domingo-Pardo, C. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Kaeppeler, F. [Karlsruhe Institute of Technology, Institut fuer Kernphysik, Karlsruhe (Germany); Palomo, F.R. [Universidad de Sevilla, Dept. de Ingenieria Electronica, Sevilla (Spain); Reifarth, R. [Goethe-Universitaet Frankfurt am Main, Frankfurt am Main (Germany)

    2017-05-15

    The neutron capture cross sections of several unstable key isotopes acting as branching points in the s-process are crucial for stellar nucleosynthesis studies, but they are very challenging to measure directly due to the difficult production of sufficient sample material, the high activity of the resulting samples, and the actual (n, γ) measurement, where high neutron fluxes and effective background rejection capabilities are required. At present there are about 21 relevant s-process branching point isotopes whose cross section could not be measured yet over the neutron energy range of interest for astrophysics. However, the situation is changing with some very recent developments and upcoming technologies. This work introduces three techniques that will change the current paradigm in the field: the use of γ-ray imaging techniques in (n, γ) experiments, the production of moderated neutron beams using high-power lasers, and double capture experiments in Maxwellian neutron beams. (orig.)

  18. 40 CFR 141.24 - Organic chemicals, sampling and analytical requirements.

    Science.gov (United States)

    2010-07-01

    ...-point source of contamination. Point sources include spills and leaks of chemicals at or near a water... sources include spills and leaks of chemicals at or near a water treatment facility or at manufacturing... the well casing. (v) Elevated nitrate levels at the water supply source. (vi) Use of PCBs in equipment...

  19. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
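
    A minimal sketch of the baseline idea follows: given matched feature points (targets or virtual points) in two epochs, compare the lengths of the lines connecting them rather than registering the clouds. Point correspondences are assumed to be known, and the coordinates are invented for illustration.

```python
import numpy as np
from itertools import combinations

def baseline_changes(points_before, points_after, labels=None):
    """Compare baseline lengths (distances between pairs of feature points)
    in two epochs of the same scene. Registration is not needed because the
    distances are invariant to rigid motion of the whole cloud."""
    p0 = np.asarray(points_before, float)
    p1 = np.asarray(points_after, float)
    labels = labels or [str(i) for i in range(len(p0))]
    changes = []
    for i, j in combinations(range(len(p0)), 2):
        d0 = np.linalg.norm(p0[i] - p0[j])
        d1 = np.linalg.norm(p1[i] - p1[j])
        changes.append((labels[i], labels[j], d1 - d0))
    return changes   # (point A, point B, length change in scan units)

# Example: three virtual points (e.g. brick centres); one moves 3 cm between scans
before = [[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.5, 0.0]]
after  = [[0.0, 0.0, 0.0], [2.0, 0.03, 0.0], [0.0, 1.5, 0.0]]
for a, b, dz in baseline_changes(before, after):
    print(f"baseline {a}-{b}: change {dz * 100:.1f} cm")
```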

  20. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing that resulted in damage on the order of several centimeters. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  1. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    Science.gov (United States)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.

  2. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    Science.gov (United States)

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples, while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load, while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, herewith decreasing the measurement time, and, thus, improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards and reduce measurement time significantly compared to state-of-the-art methods.

  3. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

    A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....

  4. Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Jamali

    2013-01-01

    Full Text Available A new, simple, and versatile cloud-point extraction (CPE methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-Benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time were investigated and optimized. Under the optimal conditions, the limit of detection (LOD for cobalt was 0.5 μg L-1, with sensitivity enhancement factor (EF of 67. Calibration curve was linear in the range of 2–150 μg L-1, and relative standard deviation was 3.2% (c=100 μg L-1; n=10. The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.

  5. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...

  6. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser.

    Science.gov (United States)

    Hamilton, Matthew T; Finger, John W; Winzeler, Megan E; Tuberville, Tracey D

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to delay sample analysis and the ability to use banked samples. In this study, we examined fresh whole blood, fresh plasma and frozen plasma (sample type) pH, partial pressure of carbon dioxide (PCO2), bicarbonate (HCO3 (-)), total carbon dioxide (TCO2), base excess (BE), partial pressure of oxygen (PO2), oxygen saturation (sO2) and lactate concentrations in 23 juvenile American alligators (Alligator mississippiensis) using an i-STAT CG4+ cartridge. Our results indicate that sample type had no effect on lactate concentration values (F 2,65 = 0.37, P = 0.963), suggesting that the i-STAT analyser can be used reliably to quantify lactate concentrations in fresh and frozen plasma samples. In contrast, the other seven blood parameters measured by the CG4+ cartridge were significantly affected by sample type. Lastly, we were able to collect blood samples from all alligators within 2 min of capture to establish preliminary reference ranges for juvenile alligators based on values obtained using fresh whole blood.

  7. Optical biosensor technologies for molecular diagnostics at the point-of-care

    Science.gov (United States)

    Schotter, Joerg; Schrittwieser, Stefan; Muellner, Paul; Melnik, Eva; Hainberger, Rainer; Koppitsch, Guenther; Schrank, Franz; Soulantika, Katerina; Lentijo-Mozo, Sergio; Pelaz, Beatriz; Parak, Wolfgang; Ludwig, Frank; Dieckhoff, Jan

    2015-05-01

    Label-free optical schemes for molecular biosensing hold a strong promise for point-of-care applications in medical research and diagnostics. Apart from diagnostic requirements in terms of sensitivity, specificity, and multiplexing capability, other aspects such as ease of use and manufacturability also have to be considered in order to pave the way to a practical implementation. We present integrated optical waveguide as well as magnetic nanoparticle based molecular biosensor concepts that address these aspects. The integrated optical waveguide devices are based on low-loss photonic wires made of silicon nitride deposited by a CMOS compatible plasma-enhanced chemical vapor deposition (PECVD) process that allows for backend integration of waveguides on optoelectronic CMOS chips. The molecular detection principle relies on evanescent wave sensing in the 0.85 μm wavelength regime by means of Mach-Zehnder interferometers, which enables on-chip integration of silicon photodiodes and, thus, the realization of system-on-chip solutions. Our nanoparticle-based approach is based on optical observation of the dynamic response of functionalized magnetic-core/noble-metal-shell nanorods ('nanoprobes') to an externally applied time-varying magnetic field. As target molecules specifically bind to the surface of the nanoprobes, the observed dynamics of the nanoprobes changes, and the concentration of target molecules in the sample solution can be quantified. This approach is suitable for dynamic real-time measurements and only requires minimal sample preparation, thus presenting a highly promising point-of-care diagnostic system. In this paper, we present a prototype of a diagnostic device suitable for highly automated sample analysis by our nanoparticle-based approach.

  8. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
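
    For the common case of comparing two group means, the considerations above reduce to a standard formula; the sketch below assumes a two-sided test, a normal approximation, and illustrative values for the expected difference and standard deviation.

```python
import math
from scipy.stats import norm

def sample_size_two_means(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for detecting a mean difference `delta` between
    two groups with common standard deviation `sd` (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Example: expecting a 10-unit difference with SD 20 -> 63 subjects per group
print(sample_size_two_means(delta=10, sd=20))
```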

  9. Sample and injection manifolds used to in-place test of nuclear air-cleaning system

    International Nuclear Information System (INIS)

    Qiu Dangui; Li Xinzhi; Hou Jianrong; Qiao Taifei; Wu Tao; Zhang Jirong; Han Lihong

    2012-01-01

    Objective: According to nuclear safety regulations and related standards, in-place tests of nuclear air-cleaning systems should be carried out before and during operation of nuclear facilities to ensure that they remain in good condition. In some special conditions, sample and injection manifolds are required so that the test tracer and the air in the ventilation duct are fully mixed and a representative on-spot sample can be obtained. Methods: This paper introduces the technology and application of sample and injection manifolds in nuclear air-cleaning systems. Results: Multi-point injection and multi-point sampling technology, as an effective experimental method, has been used in a number of domestic and international nuclear facilities. Conclusion: The technology solved the problem of uniformity of on-spot injection and sampling, which plays an important role in objectively evaluating the function of nuclear air-cleaning systems. (authors)

  10. Four-point probe measurements using current probes with voltage feedback to measure electric potentials

    Science.gov (United States)

    Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert

    2018-02-01

    We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for ...
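
    The software feedback loop described above can be sketched as a root-finding problem: adjust the tip voltage until the measured current vanishes, and report that voltage as the local sample potential. The bisection search and the example current-voltage relation below are illustrative assumptions, not the instrument's actual controller.

```python
def null_current_potential(measure_current, v_lo=-1.0, v_hi=1.0, v_tol=1e-6):
    """Adjust the tip voltage until no net current flows to the sample; the
    zero-current voltage equals the local sample potential at the tip position.
    `measure_current(v)` must be monotonic in v over the search window."""
    i_lo = measure_current(v_lo)
    if i_lo * measure_current(v_hi) > 0:
        raise ValueError("zero-current point not bracketed by the voltage window")
    while v_hi - v_lo > v_tol:
        v_mid = 0.5 * (v_lo + v_hi)
        i_mid = measure_current(v_mid)
        if i_lo * i_mid <= 0:
            v_hi = v_mid
        else:
            v_lo, i_lo = v_mid, i_mid
    return 0.5 * (v_lo + v_hi)

# Example: a sample region sitting at 12.3 mV behind a 1 MOhm contact resistance
local_potential = null_current_potential(lambda v: (v - 0.0123) / 1e6)
print(f"{local_potential * 1e3:.3f} mV")   # ~12.300 mV
```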

  11. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser

    OpenAIRE

    Hamilton, Matthew T.; Finger, John W.; Winzeler, Megan E.; Tuberville, Tracey D.

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to d...

  12. Collection and control of tritium bioassay samples at Pantex

    International Nuclear Information System (INIS)

    Fairrow, N.L.; Ivie, W.E.

    1992-01-01

    Pantex is the final assembly/disassembly point for US nuclear weapons. The Pantex internal dosimetry section monitors radiation workers once a month for tritium exposure. In order to manage collection and control of the bioassay specimens efficiently, a bar code system for collection of samples was developed and implemented to speed up the process and decrease the number of errors probable when transferring data. In the past, all the bioassay data from samples were entered manually into a computer database. Transferring the bioassay data from the liquid scintillation counter to each individual's dosimetry record required as much as two weeks of concentrated effort

  13. Monte carlo sampling of fission multiplicity.

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly 3He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k-eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
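
    The sampling schemes discussed above are easy to sketch. The snippet below contrasts the traditional integer split about the mean with naive Gaussian sampling in which negative draws are rejected, making the upward bias visible; the Gaussian width is an illustrative assumption, and neither of the two correction methods described in the abstract is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_traditional(nu_bar, size):
    """Integer multiplicities that average to nu_bar: choose floor or ceil
    with the appropriate probabilities (e.g. nu_bar = 2.7 -> 3 with p = 0.7)."""
    lo = np.floor(nu_bar)
    frac = nu_bar - lo
    return lo + (rng.random(size) < frac)

def sample_gaussian_rejected(nu_bar, width, size):
    """Gaussian sampling about nu_bar with negative draws rejected.
    Rejecting the negative tail biases the mean upward, which is the problem
    the correction methods in the abstract are designed to remove."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        draw = rng.normal(nu_bar, width, size - filled)
        draw = draw[draw >= 0.0]
        out[filled:filled + draw.size] = draw
        filled += draw.size
    return np.rint(out)

n = 1_000_000
print(sample_traditional(2.7, n).mean())             # ~2.70
print(sample_gaussian_rejected(2.7, 1.2, n).mean())  # above 2.70; bias grows as width -> nu_bar
```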

  14. Determination of ultra trace arsenic species in water samples by hydride generation atomic absorption spectrometry after cloud point extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ulusoy, Halil Ibrahim, E-mail: hiulusoy@yahoo.com [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey); Akcay, Mehmet; Ulusoy, Songuel; Guerkan, Ramazan [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey)

    2011-10-10

    Graphical abstract: The possible complex formation mechanism for ultra-trace As determination. Highlights: • A CPE/HGAAS system for arsenic determination and speciation in real samples is applied for the first time. • The proposed method has the lowest detection limit when compared with those of similar CPE studies in the literature. • The linear range of the method is very wide and suitable for application to real samples. - Abstract: Cloud point extraction (CPE) methodology has successfully been employed for the preconcentration of ultra-trace arsenic species in aqueous samples prior to hydride generation atomic absorption spectrometry (HGAAS). As(III) formed an ion-pairing complex with Pyronine B in the presence of sodium dodecyl sulfate (SDS) at pH 10.0 and was extracted into the non-ionic surfactant polyethylene glycol tert-octylphenyl ether (Triton X-114). After phase separation, the surfactant-rich phase was diluted with 2 mL of 1 M HCl and 0.5 mL of 3.0% (w/v) Antifoam A. Under the optimized conditions, a preconcentration factor of 60 and a detection limit of 0.008 μg L-1 with a correlation coefficient of 0.9918 were obtained with a calibration curve in the range of 0.03-4.00 μg L-1. The proposed preconcentration procedure was successfully applied to the determination of As(III) ions in certified standard water samples (TMDA-53.3 and NIST 1643e, a low level fortified standard for trace elements) and some real samples including natural drinking water and tap water samples.

  15. Robust identification of noncoding RNA from transcriptomes requires phylogenetically-informed sampling.

    Directory of Open Access Journals (Sweden)

    Stinus Lindgreen

    2014-10-01

    Full Text Available Noncoding RNAs are integral to a wide range of biological processes, including translation, gene regulation, host-pathogen interactions and environmental sensing. While genomics is now a mature field, our capacity to identify noncoding RNA elements in bacterial and archaeal genomes is hampered by the difficulty of de novo identification. The emergence of new technologies for characterizing transcriptome outputs, notably RNA-seq, are improving noncoding RNA identification and expression quantification. However, a major challenge is to robustly distinguish functional outputs from transcriptional noise. To establish whether annotation of existing transcriptome data has effectively captured all functional outputs, we analysed over 400 publicly available RNA-seq datasets spanning 37 different Archaea and Bacteria. Using comparative tools, we identify close to a thousand highly-expressed candidate noncoding RNAs. However, our analyses reveal that capacity to identify noncoding RNA outputs is strongly dependent on phylogenetic sampling. Surprisingly, and in stark contrast to protein-coding genes, the phylogenetic window for effective use of comparative methods is perversely narrow: aggregating public datasets only produced one phylogenetic cluster where these tools could be used to robustly separate unannotated noncoding RNAs from a null hypothesis of transcriptional noise. Our results show that for the full potential of transcriptomics data to be realized, a change in experimental design is paramount: effective transcriptomics requires phylogeny-aware sampling.

  16. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
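
    The core placement step of systematic random sampling can be sketched in a few lines: a regular grid with a single random offset, clipped to the region of interest. The mask-based region, the spacing and the example geometry are illustrative assumptions, not RandomSpot's implementation.

```python
import numpy as np

def systematic_random_points(roi_mask, spacing, rng=None):
    """Place equidistant sample points over an ROI with one random grid offset.
    `roi_mask` is a boolean array (rows, cols) marking the region of interest;
    `spacing` is the grid period in pixels. Returns (row, col) coordinates."""
    rng = rng or np.random.default_rng()
    rows, cols = roi_mask.shape
    dr, dc = rng.uniform(0, spacing, size=2)   # one random offset for the whole grid
    rr = np.arange(dr, rows, spacing)
    cc = np.arange(dc, cols, spacing)
    return [(int(r), int(c)) for r in rr for c in cc if roi_mask[int(r), int(c)]]

# Example: 200-pixel spacing over a toy circular ROI
yy, xx = np.mgrid[0:2000, 0:2000]
mask = (yy - 1000) ** 2 + (xx - 1000) ** 2 < 900 ** 2
points = systematic_random_points(mask, spacing=200)
```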

  17. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    Full Text Available We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
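
    A hedged sketch of the cut-and-project construction in one dimension (the Fibonacci chain) is given below: lattice points of Z² falling inside a strip are projected onto a line of slope 1/φ, yielding a uniformly discrete, relatively dense, non-periodic point set whose two spacings are in the golden ratio. The window placement and scaling are illustrative choices, not the paper's exact construction.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2

def fibonacci_quasicrystal(n_max=50):
    """1D cut-and-project quasicrystal (Fibonacci chain): keep the points of
    the integer lattice Z^2 whose component along the internal direction lies
    in an acceptance window, and project them onto a line of slope 1/phi."""
    theta = np.arctan(1.0 / PHI)
    e_par = np.array([np.cos(theta), np.sin(theta)])    # physical direction
    e_perp = np.array([-np.sin(theta), np.cos(theta)])  # internal direction
    window = np.cos(theta) + np.sin(theta)              # width of the acceptance strip
    pts = []
    for i in range(-n_max, n_max + 1):
        for j in range(-n_max, n_max + 1):
            p = np.array([i, j], float)
            if 0.0 <= p @ e_perp < window:
                pts.append(p @ e_par)
    return np.sort(np.array(pts))

x = fibonacci_quasicrystal()
gaps = np.round(np.diff(x), 6)
print(sorted(set(gaps)))   # two distinct spacings, ratio ~ phi
```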

  18. Point-of-care hemoglobin testing for postmortem diagnosis of anemia.

    Science.gov (United States)

    Na, Joo-Young; Park, Ji Hye; Choi, Byung Ha; Kim, Hyung-Seok; Park, Jong-Tae

    2018-03-01

    An autopsy involves examination of a body using invasive methods such as dissection, and includes various tests using samples procured during dissection. During medicolegal autopsies, the blood carboxyhemoglobin concentration is commonly measured using the AVOXimeter® 4000 as a point-of-care test. When evaluating the body following hypovolemic shock, characteristics such as reduced livor mortis or an anemic appearance of the viscera can be identified, but these observations are quite subjective. Thus, a more objective test is required for the postmortem diagnosis of anemia. In the present study, the AVOXimeter® 4000 was used to investigate the utility of point-of-care hemoglobin testing. Hemoglobin tests were performed in 93 autopsy cases. The AVOXimeter® 4000 and the BC-2800 Auto Hematology Analyzer were used to test identical samples in 29 of these cases. The results of hemoglobin tests performed with these two devices were statistically similar (r = 0.969). The results of hemoglobin tests using postmortem blood were compared with antemortem test results from medical records from 31 cases, and these results were similar. In 13 of 17 cases of death from internal hemorrhage, hemoglobin levels were lower in the cardiac blood than in blood from the affected body cavity, likely due to compensatory changes induced by antemortem hemorrhage. It is concluded that blood hemoglobin testing may be useful as a point-of-care test for diagnosing postmortem anemia.

  19. Hardening in AlN induced by point defects

    International Nuclear Information System (INIS)

    Suematsu, H.; Mitchell, T.E.; Iseki, T.; Yano, T.

    1991-01-01

    Pressureless-sintered AlN was neutron irradiated and the hardness change was examined by Vickers indentation. The hardness was increased by irradiation. When the samples were annealed at high temperature, the hardness gradually decreased. Length was also found to increase and to change in the same way as the hardness. A considerable density of dislocation loops still remained, even after the hardness completely recovered to the value of the unirradiated sample. Thus, it is concluded that the hardening in AlN is caused by isolated point defects and small clusters of point defects, rather than by dislocation loops. Hardness was found to increase in proportion to the length change. If the length change is assumed to be proportional to the point defect density, then the curve could be fitted qualitatively to that predicted by models of solution hardening in metals. Furthermore, the curves for three samples irradiated at different temperatures and fluences are identical. There should be different kinds of defect clusters in samples irradiated at different conditions, e.g., the fraction of single point defects is the highest in the sample irradiated at the lowest temperature. Thus, hardening is insensitive to the kind of defects remaining in the sample and is influenced only by those which contribute to length change.

  20. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    The process of collecting, analyzing, and assessing the data needed to make decisions concerning the cleanup of hazardous waste sites is quite complex and often very expensive. This is due to the many elements that must be considered during remedial investigations. The decision maker must have sufficient data to determine the potential risks to human health and the environment and to verify compliance with regulatory requirements, given the availability of resources allocated for a site, and time constraints specified for the completion of the decision making process. It is desirable to simplify the remedial investigation procedure as much as possible to conserve both time and resources while, simultaneously, minimizing the probability of error associated with each decision to be made. With this in mind, it is necessary to have a practical and statistically valid technique for estimating the number of on-site samples required to "guarantee" that the correct decisions are made with a specified precision and confidence level. Here, we will examine existing methodologies and then develop our own approach for determining a statistically defensible sample size based on specific guidelines that have been established for the risk assessment process.

  1. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim

    2017-01-01

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.

  2. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan

    2017-12-28

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
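
    The full SPDE/INLA machinery is beyond a short example, but the generative structure assumed by the model — a log-Gaussian Cox process thinned by a half-normal detection function of distance to the survey transects — can be sketched as follows. The smoothed-white-noise field, transect layout and all parameter values are illustrative stand-ins, not the fitted model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

def simulate_thinned_cox(mu=-3.5, sigma_lgcp=0.7, transect_x=(25.0, 50.0, 75.0),
                         esw_sigma=2.0, size=100.0, n_cells=100):
    """Simulate a crude log-Gaussian Cox process on a square region and thin it
    with a half-normal detection function of perpendicular distance to the
    nearest transect. Smoothed white noise stands in for the SPDE/Matern field."""
    cell = size / n_cells
    field = gaussian_filter(rng.normal(0.0, 1.0, (n_cells, n_cells)), sigma=5)
    field *= sigma_lgcp / field.std()
    lam = np.exp(mu + field)                      # intensity per unit area
    counts = rng.poisson(lam * cell * cell)       # animals per grid cell
    ys, xs = np.nonzero(counts)
    reps = counts[ys, xs]
    ys, xs = np.repeat(ys, reps), np.repeat(xs, reps)
    pts = np.column_stack([(xs + rng.random(xs.size)) * cell,
                           (ys + rng.random(ys.size)) * cell])
    dist = np.min(np.abs(pts[:, [0]] - np.asarray(transect_x)), axis=1)
    p_detect = np.exp(-dist ** 2 / (2.0 * esw_sigma ** 2))   # half-normal detection
    detected = pts[rng.random(len(pts)) < p_detect]
    return pts, detected

animals, sightings = simulate_thinned_cox()
print(len(animals), "simulated animals,", len(sightings), "detected")
```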

  3. Cloud Point Extraction and Determination of Silver Ion in Real Sample using Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane

    Directory of Open Access Journals (Sweden)

    Farshid Ahmadi

    2011-01-01

    Full Text Available Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane (BHIS) was used as a complexing agent in cloud point extraction for the first time and applied for the selective pre-concentration of trace amounts of silver. The method is based on the extraction of silver at pH 8.0 using the non-ionic surfactant Triton X-114 and bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane as a chelating agent. The adopted concentrations of BHIS, Triton X-114 and HNO3, the bath temperature, and the centrifuge rate and time were optimized. A detection limit (3SDb/m) of 1.7, along with an enrichment factor of 39, was achieved for silver ion. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method was successfully applied to the ultra-trace determination of silver in real samples.

  4. High resolution x-ray microtomography of biological samples: Requirements and strategies for satisfying them

    Energy Technology Data Exchange (ETDEWEB)

    Loo, B.W. Jr. [Univ. of California, San Francisco, CA (United States)]|[Univ. of California, Davis, CA (United States)]|[Lawrence Berkeley National Lab., CA (United States); Rothman, S.S. [Univ. of California, San Francisco, CA (United States)]|[Lawrence Berkeley National Lab., CA (United States)

    1997-02-01

    High resolution x-ray microscopy has been made possible in recent years primarily by two new technologies: microfabricated diffractive lenses for soft x-rays with about 30-50 nm resolution, and high brightness synchrotron x-ray sources. X-ray microscopy occupies a special niche in the array of biological microscopic imaging methods. It extends the capabilities of existing techniques mainly in two areas: a previously unachievable combination of sub-visible resolution and multi-micrometer sample size, and new contrast mechanisms. Because of the soft x-ray wavelengths used in biological imaging (about 1-4 nm), XM is intermediate in resolution between visible light and electron microscopies. Similarly, the penetration depth of soft x-rays in biological materials is such that the ideal sample thickness for XM falls in the range of 0.25-10 μm, between that of VLM and EM. XM is therefore valuable for imaging of intermediate level ultrastructure, requiring sub-visible resolutions, in intact cells and subcellular organelles, without artifacts produced by thin sectioning. Many of the contrast producing and sample preparation techniques developed for VLM and EM also work well with XM. These include, for example, molecule specific staining by antibodies with heavy metal or fluorescent labels attached, and sectioning of both frozen and plastic embedded tissue. However, there is also a contrast mechanism unique to XM that exists naturally because a number of elemental absorption edges lie in the wavelength range used. In particular, between the oxygen and carbon absorption edges (2.3 and 4.4 nm wavelength), organic molecules absorb photons much more strongly than does water, permitting element-specific imaging of cellular structure in aqueous media, with no artificially introduced contrast agents. For three-dimensional imaging applications requiring the capabilities of XM, an obvious extension of the technique would therefore be computerized x-ray microtomography (XMT).

  5. Measurements of plutonium in environmental samples

    International Nuclear Information System (INIS)

    D'Alberti, F.; Risposi, L.

    1996-01-01

    Within the activities connected with the start up of the PETRA Laboratory (Processo per l'Estrazione di Terre Rare ed Attinidi, i.e. process for extraction of rare earths and actinides), the Radiation Protection Unit of the J.R.C.-Ispra has carried out a well planned set of experimental measurements aimed at evaluating the zero point of the isotopes of plutonium in environmental samples by alpha spectrometry. After the International Moratorium in 1963, no release of plutonium has occurred in the environment apart from the burn up of the SNAP 9A satellite in April 1964. Since then the plutonium concentration in air and in fallout samples has been continuously decreasing, therefore requiring optimization of both instrumentation and experimental measurement procedures in order to obtain better sensitivities. In this work, the experimental methodology followed at the J.R.C.-Ispra for measurements of plutonium concentration in air, deposition and soil is described and the plutonium behaviour in these samples is reported and discussed starting from 1961

  6. Measurements of plutonium in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    D' Alberti, F; Risposi, L [Instituto di Fisica Applicata, University of Milan, Milan (Italy)

    1996-01-01

    Within the activities connected with the start up of the PETRA Laboratory (Processo per l'Estrazione di Terre Rare ed Attinidi, i.e. process for extraction of rare earths and actinides), the Radiation Protection Unit of the J.R.C.-Ispra has carried out a well planned set of experimental measurements aimed at evaluating the zero point of the isotopes of plutonium in environmental samples by alpha spectrometry. After the International Moratorium in 1963, no release of plutonium has occurred in the environment apart from the burn up of the SNAP 9A satellite in April 1964. Since then the plutonium concentration in air and in fallout samples has been continuously decreasing, therefore requiring optimization of both instrumentation and experimental measurement procedures in order to obtain better sensitivities. In this work, the experimental methodology followed at the J.R.C.-Ispra for measurements of plutonium concentration in air, deposition and soil is described and the plutonium behaviour in these samples is reported and discussed starting from 1961.

  7. Zoonoses action plan Salmonella monitoring programme: an investigation of the sampling protocol.

    Science.gov (United States)

    Snary, E L; Munday, D K; Arnold, M E; Cook, A J C

    2010-03-01

    The Zoonoses Action Plan (ZAP) Salmonella Programme was established by the British Pig Executive to monitor Salmonella prevalence in quality-assured British pigs at slaughter by testing a sample of pigs with a meat juice enzyme-linked immunosorbent assay for antibodies against group B and C(1) Salmonella. Farms were assigned a ZAP level (1 to 3) depending on the monitored prevalence, and ZAP 2 or 3 farms were required to act to reduce the prevalence. The ultimate goal was to reduce the risk of human salmonellosis attributable to British pork. A mathematical model has been developed to describe the ZAP sampling protocol. Results show that the probability of assigning a farm the correct ZAP level was high, except for farms that had a seroprevalence close to the cutoff points between different ZAP levels. Sensitivity analyses identified that the probability of assigning a farm to the correct ZAP level was dependent on the sensitivity and specificity of the test, the number of batches taken to slaughter each quarter, and the number of samples taken per batch. The variability of the predicted seroprevalence was reduced as the number of batches or samples increased and, away from the cutoff points, the probability of being assigned the correct ZAP level increased as the number of batches or samples increased. In summary, the model described here provided invaluable insight into the ZAP sampling protocol. Further work is required to understand the impact of the program for Salmonella infection in British pig farms and therefore on human health.
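
    The record above reports that the probability of assigning a farm the correct ZAP level depends on test sensitivity and specificity and on the numbers of batches and samples. A minimal Monte Carlo sketch of that kind of calculation is given below; the cutoff values, sampling sizes and test characteristics are illustrative assumptions, not the published ZAP parameters.

```python
# Monte Carlo sketch: probability of assigning the correct (hypothetical) ZAP level.
import numpy as np

rng = np.random.default_rng(1)

def simulated_correct_assignment(true_prev, n_batches=12, n_per_batch=15,
                                 sensitivity=0.8, specificity=0.95,
                                 cutoffs=(0.5, 0.75), n_sim=20_000):
    """Probability that the apparent seroprevalence implies the correct level."""
    def level(prev):
        # ZAP 1 below the first cutoff, ZAP 2 between, ZAP 3 above (assumed scheme).
        return 1 + (prev >= cutoffs[0]) + (prev >= cutoffs[1])

    true_level = level(true_prev)
    n_total = n_batches * n_per_batch
    # Probability a sampled pig tests positive, allowing for imperfect test accuracy.
    p_test_pos = true_prev * sensitivity + (1 - true_prev) * (1 - specificity)
    positives = rng.binomial(n_total, p_test_pos, size=n_sim)
    apparent_prev = positives / n_total
    return np.mean([level(p) == true_level for p in apparent_prev])

# Correct assignment is hardest when the true prevalence lies near a cutoff.
for prev in (0.2, 0.48, 0.52, 0.9):
    print(prev, simulated_correct_assignment(prev))
```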

  8. Hazardous Waste Remedial Actions Program requirements for quality control of analytical data

    International Nuclear Information System (INIS)

    Miller, M.S.; Zolyniak, J.W.

    1988-08-01

    The Hazardous Waste Remedial Action Program (HAZWRAP) is involved in performing field investigations and sample analysis pursuant to the NCP for the Department of Energy and other federal agencies. The purpose of this document is to specify the requirements for the control of the accuracy, precision and completeness of the samples, and data from the point of collection through analysis. The requirements include data reduction and reporting of the resulting environmentally related data. Because every instance and concern may not be addressed in this document, HAZWRAP subcontractors are encouraged to discuss any questions with the HAZWRAP Project Manager hereafter identified as the Project Manager

  9. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    Science.gov (United States)

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

    In this paper, a simple and cost-effective method was developed for extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as extracting solvent. The effects of main parameters such as solution pH, surfactant and salt concentrations, incubation time and temperature were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
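
    Several of the cloud-point extraction records above quote detection and quantification limits of the 3SD/m and 10SD/m form. The short sketch below shows that calibration arithmetic on made-up data (the standard concentrations, absorbances and blank standard deviation are not values from the cited study).

```python
# Generic calibration-curve arithmetic: slope from a linear fit, then
# LOD = 3*SD_blank/slope and LOQ = 10*SD_blank/slope. All numbers are invented.
import numpy as np

conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])                    # ug/mL (assumed standards)
absorbance = np.array([0.012, 0.021, 0.100, 0.195, 0.402, 0.985])   # assumed readings
blank_sd = 0.0008                                                   # SD of replicate blanks (assumed)

slope, intercept = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]

lod = 3 * blank_sd / slope      # limit of detection
loq = 10 * blank_sd / slope     # limit of quantification
print(f"slope={slope:.4f}, r={r:.4f}, LOD={lod:.4f} ug/mL, LOQ={loq:.4f} ug/mL")
```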

  10. C-point and V-point singularity lattice formation and index sign conversion methods

    Science.gov (United States)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincare-Hopf index of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further there is no change in the lattice structure and the C- and V-points appear at locations where they were present earlier. Hence to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly the positive V-point can be converted to negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an

  11. Point to point multispectral light projection applied to cultural heritage

    Science.gov (United States)

    Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.

    2017-09-01

    Use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these works of art available to the next generations according to sustainability principles. The goal of this work is to develop light systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization process implies maximizing the color fidelity of reproduction while at the same time minimizing photochemical damage. Perceived color under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the works of art. Depending on the fragility of the exposed art objects (i.e. the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. (Figure: spectral reflectance of a reference painting.) The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and the color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "woman walking at the beach".
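
    As a rough illustration of the merit-function idea described above, the sketch below chooses non-negative intensities for a few hypothetical LED channels so that their combined spectrum approximates a target spectrum while an assumed spectral damage term is penalised; the spectra, damage function and weight are all invented.

```python
# Toy merit-function optimization: fidelity to a target spectrum plus a weighted
# damage penalty, solved as a non-negative least-squares problem. All spectra,
# the damage function and the weight are assumptions for illustration only.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(380, 780, 81)                       # nm

def gaussian_led(center, width=25.0):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

leds = np.stack([gaussian_led(c) for c in (450, 520, 590, 630)], axis=1)   # 4 channels
target = np.exp(-0.5 * ((wavelengths - 560) / 120.0) ** 2)    # stand-in "ideal" spectrum
damage = np.exp(-(wavelengths - 380) / 80.0)                  # assumed: short wavelengths damage more
w_damage = 0.5                                                # relative weight of the damage term

# Stack the fidelity rows and a single damage-penalty row into one system A x ~ b.
A = np.vstack([leds, (w_damage * (damage @ leds)).reshape(1, -1)])
b = np.concatenate([target, [0.0]])
intensities, residual = nnls(A, b)
print("optimized channel intensities:", np.round(intensities, 3))
```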

  12. Acid dew point measurement in flue gases

    Energy Technology Data Exchange (ETDEWEB)

    Struschka, M.; Baumbach, G.

    1986-06-01

    The operation of modern boiler plants requires the continuous measurement of the acid dew point in flue gases. An existing measuring instrument was modified in such a way that it can determine acid dew points reliably, reproducibly and continuously. The authors present the mechanisms of the dew point formation, the dew point measuring principle, the modification and the operational results.

  13. Cloud point extraction and spectrophotometric determination of mercury species at trace levels in environmental samples.

    Science.gov (United States)

    Ulusoy, Halil İbrahim; Gürkan, Ramazan; Ulusoy, Songül

    2012-01-15

    A new micelle-mediated separation and preconcentration method was developed for ultra-trace quantities of mercury ions prior to spectrophotometric determination. The method is based on cloud point extraction (CPE) of Hg(II) ions with polyethylene glycol tert-octylphenyl ether (Triton X-114) in the presence of chelating agents such as 1-(2-pyridylazo)-2-naphthol (PAN) and 4-(2-thiazolylazo) resorcinol (TAR). Hg(II) ions react with both PAN and TAR in a surfactant solution yielding a hydrophobic complex at pH 9.0 and 8.0, respectively. The phase separation was accomplished by centrifugation for 5 min at 3500 rpm. The calibration graphs obtained from the Hg(II)-PAN and Hg(II)-TAR complexes were linear in the concentration ranges of 10-1000 μg L(-1) and 50-2500 μg L(-1) with detection limits of 1.65 and 14.5 μg L(-1), respectively. The relative standard deviations (RSDs) were 1.85% and 2.35% in determinations of 25 and 250 μg L(-1) Hg(II), respectively. The interference effects of several ions were studied, and it was seen that ions commonly present in water samples had no significant effect on the determination of Hg(II). The developed methods were successfully applied to determine mercury concentrations in environmental water samples. The accuracy and validity of the proposed methods were tested by means of five replicate analyses of certified standard materials such as QC Metal LL3 (VWR, drinking water) and IAEA W-4 (NIST, simulated fresh water). Copyright © 2011 Elsevier B.V. All rights reserved.

  14. The utility of point count surveys to predict wildlife interactions with wind energy facilities: An example focused on golden eagles

    Science.gov (United States)

    Sur, Maitreyi; Belthoff, James R.; Bjerre, Emily R.; Millsap, Brian A.; Katzner, Todd

    2018-01-01

    Wind energy development is rapidly expanding in North America, often accompanied by requirements to survey potential facility locations for existing wildlife. Within the USA, golden eagles (Aquila chrysaetos) are among the most high-profile species of birds that are at risk from wind turbines. To minimize golden eagle fatalities in areas proposed for wind development, modified point count surveys are usually conducted to estimate use by these birds. However, it is not always clear what drives variation in the relationship between on-site point count data and actual use by eagles of a wind energy project footprint. We used existing GPS-GSM telemetry data, collected at 15 min intervals from 13 golden eagles in 2012 and 2013, to explore the relationship between point count data and eagle use of an entire project footprint. To do this, we overlaid the telemetry data on hypothetical project footprints and simulated a variety of point count sampling strategies for those footprints. We compared the time an eagle was found in the sample plots with the time it was found in the project footprint using a metric we called “error due to sampling”. Error due to sampling for individual eagles appeared to be influenced by interactions between the size of the project footprint (20, 40, 90 or 180 km2) and the sampling type (random, systematic or stratified) and was greatest on 90 km2 plots. However, use of random sampling resulted in lowest error due to sampling within intermediate sized plots. In addition sampling intensity and sampling frequency both influenced the effectiveness of point count sampling. Although our work focuses on individual eagles (not the eagle populations typically surveyed in the field), our analysis shows both the utility of simulations to identify specific influences on error and also potential improvements to sampling that consider the context-specific manner that point counts are laid out on the landscape.
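
    A toy version of the "error due to sampling" comparison described above can be written in a few lines: simulated telemetry fixes are compared between sampled point-count plots and the whole project footprint. The footprint size, plot layout and movement model below are assumptions, not the eagle data used in the study.

```python
# Compare use of sampled plots with use of the whole footprint for simulated fixes.
import numpy as np

rng = np.random.default_rng(7)

footprint = 9.5            # square footprint of ~90 km^2 -> side ~9.5 km (assumed)
plot_radius = 0.8          # point-count plot radius in km (assumed)
n_plots = 9
n_fixes = 5000             # 15-min GPS fixes over the season (assumed)

# Simulated locations over a larger landscape that contains the footprint.
fixes = rng.uniform(-5.0, footprint + 5.0, size=(n_fixes, 2))

# Systematic 3x3 grid of plot centres inside the footprint.
centres = np.array([(x, y) for x in np.linspace(1.5, footprint - 1.5, 3)
                            for y in np.linspace(1.5, footprint - 1.5, 3)])

in_footprint = np.all((fixes >= 0) & (fixes <= footprint), axis=1)
dists = np.linalg.norm(fixes[:, None, :] - centres[None, :, :], axis=2)
in_plots = dists.min(axis=1) <= plot_radius

use_footprint = in_footprint.mean()
use_plots = in_plots.mean()
# Scale plot-based use by the sampled fraction of the footprint area.
sampled_fraction = n_plots * np.pi * plot_radius**2 / footprint**2
predicted_use = use_plots / sampled_fraction
print(f"footprint use {use_footprint:.3f}, plot-scaled prediction {predicted_use:.3f}")
print(f"error due to sampling {predicted_use - use_footprint:+.3f}")
```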

  15. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are an inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have been previously demonstrated to offer satisfactory results, under both homogeneous area and heterogeneous area situations. Studies have shown that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
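
    The following simplified sketch illustrates the Direct Sampling idea referenced above: each gap pixel is filled by scanning randomly chosen informed pixels and copying the value whose neighbourhood best matches the gap pixel's partially informed neighbourhood. It is a toy for a small 2D array, not the full multiple-point geostatistical implementation evaluated in the study; the image, gap geometry and search parameters are invented.

```python
# Toy Direct Sampling style gap filling on a small 2D array.
import numpy as np

rng = np.random.default_rng(3)

def direct_sampling_fill(img, mask, n_candidates=200, radius=2):
    """img: 2D float array with NaN in the gap; mask: True where the value is missing."""
    out = img.copy()
    known = ~mask
    rows, cols = np.where(mask)
    kr, kc = np.where(known)
    offsets = [(dr, dc) for dr in range(-radius, radius + 1)
                         for dc in range(-radius, radius + 1) if (dr, dc) != (0, 0)]

    def neighbourhood(r, c, grid, valid):
        vals = []
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1] and valid[rr, cc]:
                vals.append(grid[rr, cc])
            else:
                vals.append(np.nan)
        return np.array(vals)

    for r, c in zip(rows, cols):
        target = neighbourhood(r, c, out, known)
        best_val, best_dist = out[kr[0], kc[0]], np.inf
        for idx in rng.choice(len(kr), size=min(n_candidates, len(kr)), replace=False):
            cand = neighbourhood(kr[idx], kc[idx], out, known)
            both = ~np.isnan(target) & ~np.isnan(cand)
            if both.any():
                d = np.mean((target[both] - cand[both]) ** 2)
                if d < best_dist:
                    best_dist, best_val = d, out[kr[idx], kc[idx]]
        out[r, c] = best_val
        known[r, c] = True        # the simulated value becomes conditioning data
    return out

# Demonstration on a smooth synthetic "image" with a vertical gap stripe.
x, y = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
image = np.sin(4 * x) + np.cos(3 * y)
gap = np.zeros_like(image, dtype=bool)
gap[:, 14:16] = True
filled = direct_sampling_fill(np.where(gap, np.nan, image), gap)
print("max abs error inside the gap:", np.abs(filled[gap] - image[gap]).max())
```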

  16. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of the sampling required to estimate the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated using multiple regression models with 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating the AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application, according to requirements.
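
    A hedged illustration of a limited sampling strategy of the type described above is sketched below: AUC0-60t is regressed on a small number of concentration-time points and the absolute prediction error is checked. The concentration data are simulated from an assumed one-compartment profile, not the study's gliclazide data.

```python
# Limited sampling strategy sketch: regress AUC on the 2, 8 and 12 h concentrations.
import numpy as np

rng = np.random.default_rng(11)

t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24, 36, 48, 60])        # sampling times (h)
n_subjects = 40
ka = 1.0                                                       # absorption rate, 1/h (assumed)
ke = rng.lognormal(np.log(0.06), 0.25, n_subjects)             # elimination rate (assumed)
dose_f_v = rng.lognormal(np.log(4.0), 0.2, n_subjects)         # dose*F/V scale (assumed)

conc = dose_f_v[:, None] * ka / (ka - ke[:, None]) \
       * (np.exp(-ke[:, None] * t) - np.exp(-ka * t))
# "True" AUC over the sampled profile by the trapezoidal rule.
auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(t), axis=1)

idx = [np.where(t == h)[0][0] for h in (2, 8, 12)]             # one of the cited combinations
X = np.column_stack([np.ones(n_subjects), conc[:, idx]])
beta, *_ = np.linalg.lstsq(X, auc, rcond=None)
auc_pred = X @ beta

ape = 100 * np.abs(auc_pred - auc) / auc
print(f"subjects within 15% APE: {(ape < 15).mean():.0%}")
```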

  17. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples

    Science.gov (United States)

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-01

    A novel, green, simple and fast method was developed for spectrophotometric determination of Malachite green, Crystal violet, and Rhodamine B in water samples based on Micro-cloud Point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47 and 28.36, respectively, for Malachite green, Crystal violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L.

  18. Translating silicon nanowire BioFET sensor-technology to embedded point-of-care medical diagnostics

    DEFF Research Database (Denmark)

    Pfreundt, Andrea; Zulfiqar, Azeem; Patou, François

    2013-01-01

    Silicon nanowire and nanoribbon biosensors have shown great promise in the detection of biomarkers at very low concentrations. Their high sensitivity makes them ideal candidates for use in early-stage medical diagnostics and further disease monitoring where low amounts of biomarkers need to be detected. However, in order to translate this technology from the bench to the bedside, a number of key issues need to be taken into consideration: integrating nanobiosensor-based technology requires overcoming the difficult tradeoff between imperatives for high device reproducibility and associated rising fabrication costs. Also, the translation of nano-scale sensor technology into daily-use point-of-care devices requires acknowledgement of the end-user requirements, making device portability and human-interfacing a focus point in device development. Sample handling or purification, for instance...

  19. A general method to determine sampling windows for nonlinear mixed effects models with an application to population pharmacokinetic studies.

    Science.gov (United States)

    Foo, Lee Kien; McGree, James; Duffull, Stephen

    2012-01-01

    Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.

  20. 40 CFR Table 5 to Subpart Eeee of... - Requirements for Performance Tests and Design Evaluations

    Science.gov (United States)

    2010-07-01

    ... 40 CFR part 60, as appropriate (A) Sampling port locations and the required number of traverse points... or 3B in appendix A-2 of 40 CFR part 60, as appropriate (A) Concentration of CO2 and O2 and dry...

  1. Requirements for the retrofitting and extension of the maximum voltage power grid from the point of view of environmental protection and cultivated landscape work

    International Nuclear Information System (INIS)

    2013-01-01

    The project on the requirements for the retrofitting and extension of the maximum voltage power grid from the point of view of environmental protection and cultivated landscape work includes contributions on the following topics: the development of the European transmission grid, the grid extension law, restrictions for the power grid and their infrastructure, requirements for the regulations concerning the realization of the transnational grid extension, inclusion of the public - public acceptance - communication, requirements concerning the environmental compensation law, overhead line - underground cable - health hazards, ecological effects of overhead lines and underground cables, infrastructural projects, power supply in the future, structural relief by photovoltaics.

  2. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    Science.gov (United States)

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    To determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (including bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofenethy, nitrofen, oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse phase HPLC with an ultraviolet detector at 300 nm. The conditions for the pretreatment of the water samples and the parameters of the chromatographic separation were optimized. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 microg/L to 0.50 microg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements of simultaneous determination of multiple biphenyl ether herbicides in natural waters.

  3. Requirements for quality control of analytical data

    International Nuclear Information System (INIS)

    Westmoreland, R.D.; Bartling, M.H.

    1990-07-01

    The National Contingency Plan (NCP) of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) provides procedures for the identification, evaluation, and remediation of past hazardous waste disposal sites. The Hazardous Materials Response section of the NCP consists of several phases: Preliminary Assessment, Site Inspection, Remedial Investigation, Feasibility Study, Remedial Design, and Remedial Action. During any of these phases, analysis of soil, water, and waste samples may be performed. The Hazardous Waste Remedial Actions Program (HAZWRAP) is involved in performing field investigations and sample analyses pursuant to the NCP for the US Department of Energy and other federal agencies. The purpose of this document is to specify the requirements of Martin Marietta Energy Systems, Inc., for the control of accuracy, precision, and completeness of samples and data from the point of collection through analysis. Requirements include data reduction and reporting of resulting environmentally related data. Because every instance and concern may not be addressed in this document, HAZWRAP subcontractors are encouraged to discuss any questions with the Analytical Quality Control Specialist (AQCS) and the HAZWRAP Project Manager. This revision supersedes all other versions of this document

  4. 40 CFR 86.1310-90 - Exhaust gas sampling and analytical system; diesel engines.

    Science.gov (United States)

    2010-07-01

    ... avoid moisture condensation. A filter pair loading of 1 mg is typically proportional to a 0.1 g/bhp-hr..., the temperatures where condensation of water in the exhaust gases could occur. This may be achieved by... sampling zone in the primary dilution tunnel and as required to prevent condensation at any point in the...

  5. Point Coulomb solutions of the Dirac equation: analytical results required for the evaluation of the bound electron propagator in quantum electrodynamics

    International Nuclear Information System (INIS)

    Whittingham, I.B.

    1977-12-01

    The bound electron propagator in quantum electrodynamics is reviewed and the Brown and Schaefer angular momentum representation of the propagator discussed. Regular and irregular solutions of the radial Dirac equations for both |E| < mc² and |E| ≥ mc² are required for the computation of the propagator. Analytical expressions for these solutions, and their corresponding Wronskians, are obtained for a point Coulomb potential. Some computational aspects are discussed in an appendix

  6. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    Science.gov (United States)

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
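
    As a generic, hedged sketch of received-signal-strength based access point localization (not the collaborative swarm algorithm of the paper), the code below fits a log-distance path-loss model to RSS samples taken at known positions and solves for the access point coordinates by nonlinear least squares; the propagation parameters and sample positions are assumptions.

```python
# RSS-based access point localization via a log-distance path-loss fit.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

ap_true = np.array([12.0, 7.0])          # unknown AP position (m), used only to simulate data
p0, n_exp = -40.0, 2.2                   # RSS at 1 m and path-loss exponent (assumed)

robot_xy = rng.uniform(0, 20, size=(25, 2))                         # sample positions
d = np.linalg.norm(robot_xy - ap_true, axis=1)
rss = p0 - 10 * n_exp * np.log10(d) + rng.normal(0, 2.0, d.size)    # noisy RSS samples

def residuals(params):
    x, y, p1m, n = params                # AP position plus path-loss parameters
    dist = np.linalg.norm(robot_xy - np.array([x, y]), axis=1)
    return rss - (p1m - 10 * n * np.log10(np.maximum(dist, 0.1)))

fit = least_squares(residuals, x0=[10.0, 10.0, -45.0, 2.0])
print("estimated AP position:", fit.x[:2], "true:", ap_true)
```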

  7. 30 CFR 90.201 - Sampling; general requirements.

    Science.gov (United States)

    2010-07-01

    ... operational during the entire shift or for 8 hours, whichever time is less. (c) Upon request from the District... respirable dust samples of the concentration of respirable dust in the active workings of the mine as... of the normal working position; or (3) At a location that represents the maximum concentration of...

  8. Neutral-point current modeling and control for Neutral-Point Clamped three-level converter drive with small DC-link capacitors

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, Sergio

    2011-01-01

    A Neutral-Point-Clamped (NPC) three-level inverter with small DC-link capacitors is presented in this paper. This inverter requires zero average neutral-point current for stable neutral-point potential. A simple carrier-based modulation strategy is proposed for achieving zero average neutral-point current ... drive with only 14 μF DC-link capacitors. A fast and stable performance of the neutral-point voltage controller is achieved and verified by experiments....

  9. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented
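
    The point kinetics equations referred to above can be illustrated with a one-delayed-group sketch; the parameter values below are generic illustrative numbers, not those of the plutonium solution excursion problem.

```python
# One-delayed-group point kinetics with a step reactivity insertion.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.0065        # delayed neutron fraction (assumed)
Lambda = 1.0e-4      # neutron generation time, s (assumed)
lam = 0.08           # effective precursor decay constant, 1/s (assumed)
rho = 0.5 * beta     # step reactivity insertion of 0.5 dollars

def point_kinetics(t, y):
    n, c = y                                   # normalized power and precursor density
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (Lambda * lam)]              # equilibrium initial conditions
sol = solve_ivp(point_kinetics, (0.0, 10.0), y0, max_step=0.01)
print(f"power after 10 s: {sol.y[0, -1]:.2f} times initial")
```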

  10. Planetary Protection Requirements for Mars Sample Return Missions: Recommendations from a 2009 NRC Report

    Science.gov (United States)

    Race, Margaret; Farmer, Jack

    A 2009 report by the National Research Council (NRC) reviewed a previous study on Mars Sample Return (1997) and provided updated recommendations for future sample return missions based on our current understanding about Mars and its biological potential, as well as advances in technology and analytical capabilities. The committee* made 12 specific recommendations that fall into three general categories: one related to current scientific understanding, ten based on changes in the technical and/or policy environment, and one aimed at public communication. Substantive changes from the 1997 report relate mainly to protocols and methods, technology and infrastructure, and general oversight. This presentation provides an overview of the 2009 report and its recommendations and analyzes how they may impact mission designs and plans. The full report, Assessment of Planetary Protection Requirements for Mars Sample Return Missions, is available online at: http://www.nap.edu/catalog.php?record_id=12576 * Study participants: Jack D. Farmer, Arizona State University (chair) James F. Bell III, Cornell University Kathleen C. Benison, Central Michigan University William V. Boynton, University of Arizona Sherry L. Cady, Portland State University F. Grant Ferris, University of Toronto Duncan MacPherson, Jet Propulsion Laboratory Margaret S. Race, SETI Institute Mark H. Thiemens, University of California, San Diego Meenakshi Wadhwa, Arizona State University

  11. Engineering assessment of inactive uranium mill tailings, Ray Point Site, Ray Point, Texas. Phase II, Title I

    International Nuclear Information System (INIS)

    1977-12-01

    Results are reported from an engineering assessment of the problems resulting from the existence of radioactive uranium mill tailings at Ray Point, Texas. The Phase II--Title I services generally include the preparation of topographic maps, the performance of soil sampling and radiometric measurements sufficient to determine areas and volumes of tailings and other radium-contaminated materials, the evaluation of resulting radiation exposures of individuals and nearby populations, the investigation of site hydrology and meteorology and the evaluation and costing of alternative corrective actions. About 490,000 tons of ore were processed at this mill with all of the uranium sold on the commercial market. None was sold to the AEC; therefore, this report focuses on a physical description of the site and the identification of radiation pathways. No remedial action options were formulated for the site, inasmuch as none of the uranium was sold to the AEC and Exxon Corporation has agreed to perform all actions required by the State of Texas. Radon gas release from the tailings at the Ray Point site constitutes the most significant environmental impact. Windblown tailings, external gamma radiation and localized contamination of surface waters are other environmental effects. Exxon is also studying the feasibility of reprocessing the tailings

  12. Two General Extension Algorithms of Latin Hypercube Sampling

    Directory of Open Access Journals (Sweden)

    Zhi-zhao Liu

    2015-01-01

    Full Text Available To reserve original sampling points and thereby reduce the number of simulation runs, two general extension algorithms of Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that contains as many of the original points as possible. In order to get a strict LHS of larger size, some original points might be deleted. The relationship of the original sampling points in the new LHS structure is shown by a simple undirected acyclic graph. The basic general extension algorithm reserves the most original points, but it costs too much time. Therefore, a general extension algorithm based on a greedy algorithm is proposed to reduce the extension time, although it cannot guarantee that the most original points are retained. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
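
    A simplified sketch of the extension idea described above (not the paper's exact algorithms) is given below: an original point is kept when it still occupies a free bin in every dimension of the finer m+n grid, and the remaining bins are filled with new random points.

```python
# Toy Latin Hypercube extension from size m to m+n that reuses original points.
import numpy as np

def extend_lhs(original, n_new, rng):
    m, d = original.shape
    total = m + n_new
    bins = np.floor(original * total).astype(int)       # bin index of each old point
    used = np.zeros((total, d), dtype=bool)
    keep = []
    for i in range(m):                                   # greedily keep non-clashing points
        if not used[bins[i], np.arange(d)].any():
            used[bins[i], np.arange(d)] = True
            keep.append(i)
    n_fill = total - len(keep)
    new_points = np.empty((n_fill, d))
    for j in range(d):                                   # fill the remaining bins axis by axis
        free_bins = np.where(~used[:, j])[0]
        rng.shuffle(free_bins)
        new_points[:, j] = (free_bins + rng.uniform(size=n_fill)) / total
    return np.vstack([original[keep], new_points])

rng = np.random.default_rng(2)
lhs0 = (np.stack([rng.permutation(5) for _ in range(2)], axis=1)
        + rng.uniform(size=(5, 2))) / 5                  # an original LHS of size 5 in 2D
extended = extend_lhs(lhs0, n_new=5, rng=rng)
print(extended.shape)                                    # (10, 2), one point per bin per axis
```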

  13. Criteria required for an acceptable point-of-care test for UTI detection: Obtaining consensus using the Delphi technique.

    Science.gov (United States)

    Weir, Nichola-Jane M; Pattison, Sally H; Kearney, Paddy; Stafford, Bob; Gormley, Gerard J; Crockard, Martin A; Gilpin, Deirdre F; Tunney, Michael M; Hughes, Carmel M

    2018-01-01

    Urinary Tract Infections (UTIs) are common bacterial infections, second only to respiratory tract infections and particularly prevalent within primary care. Conventional detection of UTIs is culture, however, return of results can take between 24 and 72 hours. The introduction of a point of care (POC) test would allow for more timely identification of UTIs, facilitating improved, targeted treatment. This study aimed to obtain consensus on the criteria required for a POC UTI test, to meet patient need within primary care. Criteria for consideration were compiled by the research team. These criteria were validated through a two-round Delphi process, utilising an expert panel of healthcare professionals from across Europe and United States of America. Using web-based questionnaires, panellists recorded their level of agreement with each criterion based on a 5-point Likert Scale, with space for comments. Using median response, interquartile range and comments provided, criteria were accepted/rejected/revised depending on pre-agreed cut-off scores. The first round questionnaire presented thirty-three criteria to the panel, of which 22 were accepted. Consensus was not achieved for the remaining 11 criteria. Following response review, one criterion was removed, while after revision, the remaining 10 criteria entered the second round. Of these, four were subsequently accepted, resulting in 26 criteria considered appropriate for a POC test to detect urinary infections. This study generated an approved set of criteria for a POC test to detect urinary infections. Criteria acceptance and comments provided by the healthcare professionals also supports the development of a multiplex point of care UTI test.

  14. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    Directory of Open Access Journals (Sweden)

    Wen-long Li

    Full Text Available The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the engine. Rapid advancements in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to restrain the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  15. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Derivations and Verification of Plans. Volume 1

    Science.gov (United States)

    Johnson, Kenneth L.; White, K, Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
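
    As a hedged illustration of acceptance sampling by variables (the plan parameters, specification limit and lot distribution below are assumptions for demonstration only), the sketch below evaluates the probability of accepting a lot under a one-sided k-method plan by Monte Carlo.

```python
# Monte Carlo check of a one-sided variables acceptance-sampling plan (n, k):
# sample n units and accept the lot when (USL - xbar) / s >= k.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

def prob_accept(p_defective, n=30, k=1.9, usl=10.0, sigma=1.0, n_sim=50_000):
    """P(accept) for normal lots whose true fraction above the USL is p_defective."""
    # Place the lot mean so that the true nonconforming fraction equals p_defective.
    mu = usl - norm.ppf(1 - p_defective) * sigma
    samples = rng.normal(mu, sigma, size=(n_sim, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    return np.mean((usl - xbar) / s >= k)

for p in (0.01, 0.05, 0.10):
    print(f"true nonconforming fraction {p:.2f}: P(accept) = {prob_accept(p):.3f}")
```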

  16. Remote tritium-in-air sampling in reactor building at NAPS

    International Nuclear Information System (INIS)

    Mitra, S.R.; Lal Chand

    2000-01-01

    Tritium-in-air activity is an important parameter in PHW reactors from the point of view of internal exposure and heavy water escape from the system. The sampling technique in vogue in PHWRs, for measurement of tritium-in-air activity, requires collection of on-the-spot samples from different areas using a portable sampler. This sampler uses the bubbler method of sampling. As the areas of sampling are numerous, this technique is time consuming, laborious and can lead to significant internal exposure in areas where tritium-in-air activity is high. This technique is also error prone due to the heavy workload involved. A new scheme, in which the sampling of all the areas of the reactor building is done through a sampling station, has been introduced for the first time in NAPS. This sampling station facilitates collection of samples from all the areas of the reactor building, remotely and simultaneously at one place, thereby reducing time, labour, exposure and error. This paper gives the details of the sampling system installed at NAPS. (author)

  17. Inhibition effect of calcium hydroxide point and chlorhexidine point on root canal bacteria of necrosis teeth

    Directory of Open Access Journals (Sweden)

    Andry Leonard Je

    2006-03-01

    Full Text Available Calcium Hydroxide point and Chlorhexidine point are new drugs for eliminating bacteria in the root canal. The points slowly and in a controlled manner release Calcium Hydroxide and Chlorhexidine into the root canal. The purpose of the study was to determine the effectiveness of Calcium hydroxide point (Calcium hydroxide plus point) and Chlorhexidine point in eliminating the root canal bacteria of necrotic teeth. In this study 14 subjects were divided into 2 groups. The first group was treated with Calcium hydroxide point and the second was treated with Chlorhexidine point. The bacteriological samples were measured with spectrophotometry. The Paired T Test analysis (before and after) showed a significant difference in the first and second groups. The Independent T Test, which analysed the effectiveness of the two groups, showed no significant difference between them. Although there was no significant difference in the statistical test, the second group eliminated more bacteria than the first group. The present findings indicated that the use of Chlorhexidine point was better than Calcium hydroxide point over a seven-day period. The conclusion is that Chlorhexidine point and Calcium hydroxide point as root canal medicaments effectively eliminate root canal bacteria of necrotic teeth.

  18. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    Directory of Open Access Journals (Sweden)

    Fahed Awad

    2018-01-01

    Full Text Available Localization of access points has become an important research problem due to the wide range of applications it addresses such as dismantling critical security threats caused by rogue access points or optimizing wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment in hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point’s received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.

  19. Integrated sampling and analysis plan for samples measuring >10 mrem/hour

    International Nuclear Information System (INIS)

    Haller, C.S.

    1992-03-01

    This integrated sampling and analysis plan was prepared to assist in planning and scheduling of Hanford Site sampling and analytical activities for all waste characterization samples that measure greater than 10 mrem/hour. This report also satisfies the requirements of the renegotiated Interim Milestone M-10-05 of the Hanford Federal Facility Agreement and Consent Order (the Tri-Party Agreement). For purposes of comparing the various analytical needs with the Hanford Site laboratory capabilities, the analytical requirements of the various programs were normalized by converting required laboratory effort for each type of sample to a common unit of work, the standard analytical equivalency unit (AEU). The AEU approximates the amount of laboratory resources required to perform an extensive suite of analyses on five core segments individually plus one additional suite of analyses on a composite sample derived from a mixture of the five core segments and prepare a validated RCRA-type data package

  20. On the preparation of as-produced and purified single-walled carbon nanotube samples for standardized X-ray diffraction characterization

    International Nuclear Information System (INIS)

    Allaf, Rula M.; Rivero, Iris V.; Spearman, Shayla S.; Hope-Weeks, Louisa J.

    2011-01-01

    The aim of this research was to specify proper sample conditioning for acquiring representative X-ray diffraction (XRD) profiles for single-walled carbon nanotube (SWCNT) samples. In doing so, a specimen preparation method for quantitative XRD characterization of as-produced and purified arc-discharge SWCNT samples has been identified. Series of powder XRD profiles were collected at different temperatures, states, and points of time to establish appropriate conditions for acquiring XRD profiles without inducing much change to the specimen. It was concluded that heating in the 300-450 deg. C range for 20 minutes, preferably vacuum-assisted, and then sealing the sample is an appropriate XRD specimen preparation technique for purified arc-discharge SWCNT samples, while raw samples do not require preconditioning for characterization. - Graphical Abstract: A sample preparation method for XRD characterization of as-produced and purified arc-discharge SWCNT samples is identified. The preparation technique seeks to acquire representative XRD profiles without inducing changes to the samples. Purified samples required 20 minutes of heating at (300-450)deg. C, while raw samples did not require preconditioning for characterization. Highlights: → Purification routines may induce adsorption onto the SWCNT samples. → Heating a SWCNT sample may result in material loss, desorption, and SWCNTs closing. → Raw arc-discharge samples do not require preparation for XRD characterization. → Heating is appropriate specimen preparation for purified and heat-treated samples. → XRD data fitting is required for structural analysis of SWCNT bundles.

  1. Novel analytical reagent for the application of cloud-point preconcentration and flame atomic absorption spectrometric determination of nickel in natural water samples

    International Nuclear Information System (INIS)

    Suvardhan, K.; Rekha, D.; Kumar, K. Suresh; Prasad, P. Reddy; Kumar, J. Dilip; Jayaraj, B.; Chiranjeevi, P.

    2007-01-01

    Cloud-point extraction was applied for the preconcentration of nickel after formation of a complex with the newly synthesized N-quino[8,7-b]azin-5-yl-2,3,5,6,8,9,11,12octahydrobenzo[b][1,4,7,10,13] pentaoxacyclopentadecin-15-yl-methanimine, and nickel was later determined by flame atomic absorption spectrometry (FAAS) using octyl phenoxy polyethoxy ethanol (Triton X-114) as surfactant. Nickel was complexed with N-quino[8,7-b]azin-5-yl-2,3,5,6,8,9,11,12 octahydrobenzo[b][1,4,7,10,13]pentaoxacyclopentadecin-15-yl-methanimine in an aqueous phase and was kept for 15 min in a thermo-stated bath at 40 deg. C. Separation of the two phases was accomplished by centrifugation for 15 min at 4000 rpm. The chemical variables affecting the cloud-point extraction were evaluated, optimized and successfully applied to the nickel determination in various water samples. Under the optimized conditions, the preconcentration system of a 100 ml sample permitted an enhancement factor of 50-fold. The detailed study of various interferences made the method more selective. The detection limit obtained under optimal conditions was 0.042 ng ml(-1). The extraction efficiency was investigated at different nickel concentrations (20-80 ng ml(-1)) and good recoveries (99.05-99.93%) were obtained using the present method. The proposed method has been applied successfully for the determination of nickel in various water samples and compared with a reported method in terms of Student's t-test and the variance ratio f-test, which indicate the significance of the present method over the reported and spectrophotometric methods at the 95% confidence level

  2. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    Full Text Available A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful ones in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of (what became known as) "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying existence of its solutions, our objective is to determine conditions which are both necessary and sufficient that an arbitrary point x⋆ is a fixed point, i.e., that it satisfies f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of Calculus. However, characterizing fixed points x⋆, i.e., providing answers to the question of finding both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several pieces of feedback on the preliminary report and on parts of the project

  3. Fixed-point signal processing

    CERN Document Server

    Padgett, Wayne T

    2009-01-01

    This book is intended to fill the gap between the "ideal precision" digital signal processing (DSP) that is widely taught, and the limited precision implementation skills that are commonly required in fixed-point processors and field programmable gate arrays (FPGAs). These skills are often neglected at the university level, particularly for undergraduates. We have attempted to create a resource both for a DSP elective course and for the practicing engineer with a need to understand fixed-point implementation. Although we assume a background in DSP, Chapter 2 contains a review of basic theory

  4. Impact of habitat diversity on the sampling effort required for the assessment of river fish communities and IBI

    NARCIS (Netherlands)

    Van Liefferinge, C.; Simoens, I.; Vogt, C.; Cox, T.J.S.; Breine, J.; Ercken, D.; Goethals, P.; Belpaire, C.; Meire, P.

    2010-01-01

    The spatial variation in the fish communities of four small Belgian rivers with variable habitat diversity was investigated by electric fishing to define the minimum sampling distance required for optimal fish stock assessment and determination of the Index of Biotic Integrity. This study shows that

  5. Point-of-care technologies for molecular diagnostics using a drop of blood.

    Science.gov (United States)

    Song, Yujun; Huang, Yu-Yen; Liu, Xuewu; Zhang, Xiaojing; Ferrari, Mauro; Qin, Lidong

    2014-03-01

    Molecular diagnostics is crucial for prevention, identification, and treatment of disease. Traditional technologies for molecular diagnostics using blood are limited to laboratory use because they rely on sample purification and sophisticated instruments, are labor and time intensive, expensive, and require highly trained operators. This review discusses the frontiers of point-of-care (POC) diagnostic technologies using a drop of blood obtained from a finger prick. These technologies, including emerging biotechnologies, nanotechnologies, and microfluidics, hold the potential for rapid, accurate, and inexpensive disease diagnostics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Point-of-sale alcohol promotions in the Perth and Sydney metropolitan areas.

    Science.gov (United States)

    Jones, Sandra C; Barrie, Lance; Robinson, Laura; Allsop, Steve; Chikritzhs, Tanya

    2012-09-01

    Point-of-sale (POS) is increasingly being used as a marketing tool for alcohol products, and there is a growing body of evidence suggesting that these materials are positively associated with drinking and contribute to creating a pro-alcohol environment. The purpose of the present study was to document the nature and extent of POS alcohol promotions in bottle shops in two Australian capital cities. A purposive sample of 24 hotel bottle shops and liquor stores was selected across Sydney (New South Wales) and Perth (Western Australia) and audited for the presence and nature of POS marketing. Point-of-sale promotions were found to be ubiquitous, with an average of 33 promotions per outlet. Just over half were classified as 'non-price' promotions (e.g. giveaways and competitions). Spirits were the most commonly promoted type of alcohol. The average number of standard drinks required to participate in the promotions ranged from 12 for ready-to-drink products to 22 for beer. Alcohol outlets that were part of supermarket chains had a higher number of promotions, more price-based promotions, and required a greater quantity of alcohol to be purchased to participate in the promotion. The data collected in this study provides a starting point for our understanding of POS promotions in Australia, and poses important questions for future research in this area. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  7. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

  8. Iterative closest normal point for 3D face recognition.

    Science.gov (United States)

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, have seven and four times less error rates, respectively, compared to the best existing methods on this database.

  9. Determination of cadmium(II), cobalt(II), nickel(II), lead(II), zinc(II), and copper(II) in water samples using dual-cloud point extraction and inductively coupled plasma emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); Chen, Jianrong, E-mail: cjr@zjnu.cn [College of Chemistry and Life Sciences, Zhejiang Normal University, Jinhua 321004 (China); College of Geography and Environmental Sciences, Zhejiang Normal University, Jinhua 321004 (China)

    2012-11-15

    Highlights: • A dual-cloud point extraction (d-CPE) procedure was developed for the first time for simultaneous pre-concentration and separation of trace metal ions combined with ICP-OES. • The developed d-CPE can significantly eliminate the surfactant Triton X-114 and was successfully extended to the determination of water samples with good performance. • The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ) in the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, and the detected ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10⁻⁴ mol L⁻¹, HNO₃ = 0.8 mol L⁻¹), the detection limits for Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ ions were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 µg L⁻¹, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 µg L⁻¹ were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ ions in water samples.

  10. Miniaturized Protein Microarray with Internal Calibration as Point-of-Care Device for Diagnosis of Neonatal Sepsis

    Directory of Open Access Journals (Sweden)

    Hedvig Toth-Székély

    2012-02-01

    Full Text Available Neonatal sepsis is still a leading cause of death among newborns. Therefore a protein-microarray for point-of-care testing that simultaneously quantifies the sepsis associated serum proteins IL-6, IL-8, IL-10, TNF alpha, S-100, PCT, E-Selectin, CRP and Neopterin has been developed. The chip works with only a 4 µL patient serum sample and hence minimizes excessive blood withdrawal from newborns. The 4 µL patient samples are diluted with 36 µL assay buffer and distributed to four slides for repetitive measurements. Streptavidin coated magnetic particles that act as distinct stirring detection components are added, not only to stir the sample, but also to detect antibody antigen binding events. We demonstrate that the test is complete within 2.5 h using a single step assay. S-100 conjugated to BSA is spotted in increasing concentrations to create an internal calibration. The presented low volume protein-chip fulfills the requirements of point-of-care testing for accurate and repeatable (CV < 14% quantification of serum proteins for the diagnosis of neonatal sepsis.

  11. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
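    The effect described above can be illustrated with a short calculation. The sketch below is not the authors' analysis; it uses the standard design factor (1 + (k − 1)ρ)/k for the mean of k equally correlated measurements, and the effect size d and within-patient correlation ρ are invented for illustration.

        # Illustrative sketch only: how averaging k correlated repeated measurements
        # changes the per-group sample size of a two-arm trial. The effect size d and
        # the within-patient correlation rho are hypothetical, not values from the study.
        from math import ceil
        from scipy.stats import norm

        def n_per_group(d, k, rho, alpha=0.05, power=0.80):
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            n_single = 2 * (z / d) ** 2              # classical two-sample formula
            design_factor = (1 + (k - 1) * rho) / k  # variance of the mean of k measures
            return ceil(n_single * design_factor)

        for k in range(1, 6):
            print(k, "measurements ->", n_per_group(d=0.5, k=k, rho=0.7), "per group")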

  12. Pumping time required to obtain tube well water samples with aquifer characteristic radon concentrations

    International Nuclear Information System (INIS)

    Ricardo, Carla Pereira; Oliveira, Arno Heeren de

    2011-01-01

    Radon is an inert noble gas, which comes from the natural radioactive decay of uranium and thorium in soil, rock and water. Radon isotopes emanated from radium-bearing grains of a rock or soil are released into the pore space. Radon that reaches the pore space is partitioned between the gaseous and aqueous phases. Thus, the groundwater presents a radon signature from the rock that is characteristic of the aquifer. The characteristic radon concentration of an aquifer, which is mainly related to the emanation, is also influenced by the degree of subsurface degassing, especially in the vicinity of a tube well, where the radon concentration is strongly reduced. To determine the pumping time required to obtain a tube well water sample with the characteristic radon concentration of the aquifer, an experiment was conducted in an 80 m deep tube well. In this experiment, after twenty-four hours without extraction, water samples were collected periodically, at about ten-minute intervals, during two hours of pumping. The radon concentrations of the samples were determined by using the RAD7 Electronic Radon Detector from Durridge Company, a solid state alpha spectrometric detector. The results showed that the time necessary to reach the maximum radon concentration, that is, the characteristic radon concentration of the aquifer, is about sixty minutes. (author)

  13. Determination of gold nanoparticles in environmental water samples by second-order optical scattering using dithiotreitol-functionalized CdS quantum dots after cloud point extraction

    Energy Technology Data Exchange (ETDEWEB)

    Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L., E-mail: dgiokas@cc.uoi.gr

    2017-02-05

    Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly that is formed by hydrogen bond interactions between gold nanoparticles and dithiotreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized and evaluated to the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under different conditions reminiscent to those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L⁻¹ levels), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and it is easy-to-operate thus providing an additional tool for the monitoring and the assessment of nanoparticles toxicity and hazards in the environment.

  14. Determination of gold nanoparticles in environmental water samples by second-order optical scattering using dithiotreitol-functionalized CdS quantum dots after cloud point extraction

    International Nuclear Information System (INIS)

    Mandyla, Spyridoula P.; Tsogas, George Z.; Vlessidis, Athanasios G.; Giokas, Dimosthenis L.

    2017-01-01

    Highlights: • A new method has been developed to determine gold nanoparticles in water samples. • Extraction was achieved by cloud point extraction. • A nano-hybrid assembly between AuNPs and dithiol-coated quantum dots was formulated. • Detection was accomplished at pico-molar levels by second-order light scattering. • The method was selective against ionic gold and other nanoparticle species. - Abstract: This work presents a new method for the sensitive and selective determination of gold nanoparticles in water samples. The method combines a sample preparation and enrichment step based on cloud point extraction with a new detection motif that relies on the optical incoherent light scattering of a nano-hybrid assembly that is formed by hydrogen bond interactions between gold nanoparticles and dithiotreitol-functionalized CdS quantum dots. The experimental parameters affecting the extraction and detection of gold nanoparticles were optimized and evaluated to the analysis of gold nanoparticles of variable size and surface coating. The selectivity of the method against gold ions and other nanoparticle species was also evaluated under different conditions reminiscent to those usually found in natural water samples. The developed method was applied to the analysis of gold nanoparticles in natural waters and wastewater with satisfactory results in terms of sensitivity (detection limit at the low pmol L −1 levels), recoveries (>80%) and reproducibility (<9%). Compared to other methods employing molecular spectrometry for metal nanoparticle analysis, the developed method offers improved sensitivity and it is easy-to-operate thus providing an additional tool for the monitoring and the assessment of nanoparticles toxicity and hazards in the environment.

  15. Analysis of Arbitrary Reflector Antennas Applying the Geometrical Theory of Diffraction Together with the Master Points Technique

    Directory of Open Access Journals (Sweden)

    María Jesús Algar

    2013-01-01

    Full Text Available An efficient approach for the analysis of surface conformed reflector antennas fed arbitrarily is presented. The near field in a large number of sampling points in the aperture of the reflector is obtained applying the Geometrical Theory of Diffraction (GTD. A new technique named Master Points has been developed to reduce the complexity of the ray-tracing computations. The combination of both GTD and Master Points reduces the time requirements of this kind of analysis. To validate the new approach, several reflectors and the effects on the radiation pattern caused by shifting the feed and introducing different obstacles have been considered concerning both simple and complex geometries. The results of these analyses have been compared with the Method of Moments (MoM results.

  16. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137 Cs and other fallout radionuclides, such as excess 210 Pb and 7 Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137 Cs in erosion studies has been widely developed, while the application of fallout 210 Pb and 7 Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137 Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137 Cs. However, fallout 210 Pb and 7 Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth-distribution of fallout nuclides on slopes and depositional sites as well as their total inventories

  17. Identification of driving network of cellular differentiation from single sample time course gene expression data

    Science.gov (United States)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential in driving network identification for complex human diseases, including the work by Chen, et al. Recently bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: time course cellular differentiation studies often contain only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We will discuss the advantages and limitations of our proposed methods, as well as potential further improvements of our methods.

  18. Determining the sample size required to establish whether a medical device is non-inferior to an external benchmark.

    Science.gov (United States)

    Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W

    2017-08-28

    The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or how many, implants should be statistically compared with a benchmark to assess whether or not that implant is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. Simulation study. Simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluate the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess if a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining if an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferential to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No
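    As a hedged illustration of the sample-size question raised above (the paper itself evaluates 1-Kaplan-Meier and competing-risks estimators by simulation), a simple one-sample normal-approximation calculation for a non-inferiority margin on a failure proportion might look like the sketch below; the benchmark rate and margin are invented.

        # Rough sketch, not the paper's simulation framework: approximate number of
        # procedures needed to show an implant's failure proportion is non-inferior
        # to an external benchmark, using a one-sample normal approximation.
        from math import ceil
        from scipy.stats import norm

        def n_noninferiority(p_benchmark, margin, alpha=0.05, power=0.80):
            """Observations needed so that, if the true failure rate equals the
            benchmark, the one-sided test excludes benchmark + margin with the
            requested power."""
            p0 = p_benchmark + margin            # failure rate at the non-inferiority limit
            p1 = p_benchmark                     # assumed true failure rate
            za, zb = norm.ppf(1 - alpha), norm.ppf(power)
            num = (za * (p0 * (1 - p0)) ** 0.5 + zb * (p1 * (1 - p1)) ** 0.5) ** 2
            return ceil(num / margin ** 2)

        # hypothetical benchmark failure rate of 5% and a 2% margin
        print(n_noninferiority(p_benchmark=0.05, margin=0.02))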

  19. The Relationship between Black Point and Fungi Species and Effects of Black Point on Seed Germination Properties in Bread Wheat

    OpenAIRE

    TOKLU, Faruk; AKGÜL, Davut Soner; BİÇİCİ, Mehmet; KARAKÖY, Tolga

    2014-01-01

    This study was undertaken to investigate the relationship between some fungi species and black point incidence and the effect of black point on seed weight, germination percentage, seedling emergence, seedling establishment, number of embryonic roots, and coleoptile length under field conditions in bread wheat. In this research, black-pointed and black point-free kernel samples of 5 bread wheat cultivars, namely Ceyhan-99, Doğankent-1, Yüreğir-89, Seyhan-95, and Adana-99 - commonly grown unde...

  20. 33 CFR 117.1001 - Cat Point Creek.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Cat Point Creek. 117.1001 Section 117.1001 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Virginia § 117.1001 Cat Point Creek. The draw of the...

  1. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
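    A much-simplified, single-group version of this idea can be sketched as follows; it ignores the judgmental/random two-group structure of the proposed model and assumes a Beta prior on the per-item defect probability, so it should be read as an illustration rather than the authors' method.

        # Single-group illustration only (the paper's model also handles a
        # judgmentally sampled high-risk group): with a Beta(a, b) prior on the
        # per-item defect probability and n sampled items all found acceptable,
        # estimate the posterior probability that at least `frac` of the
        # unsampled items are also acceptable.
        import numpy as np

        def prob_clean_fraction(n_sampled, n_unsampled, frac=0.99,
                                a=1.0, b=1.0, draws=100_000, rng=None):
            rng = np.random.default_rng(rng)
            # posterior on the defect probability after n acceptable observations
            p_defect = rng.beta(a, b + n_sampled, size=draws)
            defects = rng.binomial(n_unsampled, p_defect)
            acceptable_frac = (n_unsampled - defects) / n_unsampled
            return (acceptable_frac >= frac).mean()

        # hypothetical facility: 50 clean samples, 1000 locations left unsampled
        print(prob_clean_fraction(n_sampled=50, n_unsampled=1000))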

  2. Reduction of bias in neutron multiplicity assay using a weighted point model

    Energy Technology Data Exchange (ETDEWEB)

    Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)

    2004-01-01

    Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions, such as uniform multiplication, in the point model used to derive the multiplicity equations cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations in the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample, and underestimates the plutonium mass. The weighted point model potentially can provide assay accuracy of approximately 2% (1σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the

  3. Ballistic electron transport in mesoscopic samples

    International Nuclear Information System (INIS)

    Diaconescu, D.

    2000-01-01

    In the framework of this thesis, electron transport in the ballistic regime has been studied. Ballistic means that the lateral sample dimensions are smaller than the mean free path of the electrons, i.e. the electrons can travel through the whole device without being scattered. This leads to transport characteristics that differ significantly from the diffusive regime, which is realised in most experiments. Making use of samples with a high mean free path, features of ballistic transport have been observed in samples with sizes up to 100 μm. The basic device used in ballistic electron transport is the point contact, from which a collimated beam of ballistic electrons can be injected. Such point contacts were realised with focused ion beam (FIB) implantation and the collimating properties were analysed using a configuration of two opposite point contacts. The typical angular width at half maximum is around 50°, which is comparable with that of point contacts defined by other methods. (orig.)

  4. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever-increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  5. Sample Transport for a European Sample Curation Facility

    Science.gov (United States)

    Berthoud, L.; Vrublevskis, J. B.; Bennett, A.; Pottage, T.; Bridges, J. C.; Holt, J. M. C.; Dirri, F.; Longobardo, A.; Palomba, E.; Russell, S.; Smith, C.

    2018-04-01

    This work has looked at the recovery of Mars Sample Return capsule once it arrives on Earth. It covers possible landing sites, planetary protection requirements, and transportation from the landing site to a European Sample Curation Facility.

  6. Sampling plans for pest mites on physic nut.

    Science.gov (United States)

    Rosado, Jander F; Sarmento, Renato A; Pedro-Neto, Marçal; Galdino, Tarcísio V S; Marques, Renata V; Erasmo, Eduardo A L; Picanço, Marcelo C

    2014-08-01

    The starting point for generating a pest control decision-making system is a conventional sampling plan. Because the mites Polyphagotarsonemus latus and Tetranychus bastosi are among the most important pests of the physic nut (Jatropha curcas), in the present study we aimed to establish sampling plans for these mite species on physic nut. Mite densities were monitored in 12 physic nut crops. Based on the results obtained, sampling of P. latus and T. bastosi should be performed by assessing the number of mites per cm(2) in 160 samples using a handheld 20× magnifying glass. The optimal sampling region for T. bastosi is the abaxial surface of the 4th most apical leaf on a branch of the middle third of the canopy. On the abaxial surface, T. bastosi should then be observed on the side parts of the middle portion of the leaf, near its edge. As for P. latus, the optimal sampling region is the abaxial surface of the 4th most apical leaf on a branch of the apical third of the canopy. Polyphagotarsonemus latus should then be assessed on the side parts of the leaf's petiole insertion. Each sampling procedure requires 4 h and costs US$ 7.31.

  7. Implementation of an Enhanced Measurement Control Program for handling nuclear safety samples at WSRC

    International Nuclear Information System (INIS)

    Boler-Melton, C.; Holland, M.K.

    1991-01-01

    In the separation and purification of nuclear material, nuclear criticality safety (NCS) is of primary concern. The primary nuclear criticality safety controls utilized by the Savannah River Site (SRS) Separations Facilities involve administrative and process equipment controls. Additional assurance of NCS is obtained by identifying key process hold points where sampling is used to independently verify the effectiveness of production control. Nuclear safety measurements of samples from these key process locations provide a high degree of assurance that processing conditions are within administrative and procedural nuclear safety controls. An enhanced procedure management system aimed at making improvements in the quality, safety, and conduct of operation was implemented for Nuclear Safety Sample (NSS) receipt, analysis, and reporting. All procedures with nuclear safety implications were reviewed for accuracy and adequate detail to perform the analytical measurements safely, efficiently, and with the utmost quality. Laboratory personnel worked in a "Deliberate Operating" mode (a systematic process requiring continuous expert oversight during all phases of training, testing, and implementation) to initiate the upgrades. Thus, the effort to revise and review nuclear safety sample procedures involved a team comprised of a supervisor, chemist, and two technicians for each procedure. Each NSS procedure was upgraded to a "Use Every Time" (UET) procedure with sign-off steps to ensure compliance with each step for every nuclear safety sample analyzed. The upgrade program met and exceeded both the long and short term customer needs by improving measurement reliability, providing objective evidence of rigid adherence to program principles and requirements, and enhancing the system for independent verification of representative sampling from designated NCS points

  8. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    Science.gov (United States)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit

  10. 1999 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1999_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  11. 1997 Baseline Sampling and Analysis Sample Locations, Geographic NAD83, LOSCO (2004) [BSA_1997_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis (BSA) program coordinated by the Louisiana Oil Spill Coordinator's Office....

  12. 1998 Baseline Sampling and Analysis Sampling Locations, Geographic NAD83, LOSCO (2004) [BSA_1998_sample_locations_LOSCO_2004

    Data.gov (United States)

    Louisiana Geographic Information Center — The monitor point data set was produced as a part of the Baseline Sampling and Analysis program coordinated by the Louisiana Oil Spill Coordinator's Office. This...

  13. Technologies for pre-screening IAEA swipe samples

    International Nuclear Information System (INIS)

    Smith, Nicholas A.; Steeb, Jennifer L.; Lee, Denise L.; Huckabay, Heath A.; Ticknor, Brian W.

    2015-01-01

    During the course of International Atomic Energy Agency (IAEA) inspections, many samples are taken for the purpose of verifying the declared facility activities and identifying any possible undeclared activities. One of these sampling techniques is the environmental swipe sample. Due to the large number of samples collected, and the amount of time that is required to analyze them, prioritizing these swipes in the field or upon receipt at the Network of Analytical Laboratories (NWAL) will allow sensitive or mission-critical analyses to be performed sooner. As a result of this study, technologies were placed into one of three categories: recommended, promising, or not recommended. Both neutron activation analysis (NAA) and X-ray fluorescence (XRF) are recommended for further study and possible field deployment. These techniques performed the best in initial trials for pre-screening and prioritizing IAEA swipes. We learned that for NAA more characterization of cold elements (such as calcium and magnesium) would need to be emphasized, and for XRF it may be appropriate to move towards a benchtop XRF versus a handheld XRF due to the increased range of elements available on benchtop equipment. Promising techniques that will require additional research and development include confocal Raman microscopy, fluorescence microscopy, and infrared (IR) microscopy. These techniques showed substantive responses to uranium compounds, but expensive instrumentation upgrades (confocal Raman) or university engagement (fluorescence microscopy) may be necessary to investigate the utility of the techniques completely. Point-and-shoot (handheld) Raman and attenuated total reflectance–infrared (ATR-IR) measurements are not recommended, as they have not shown enough promise to continue investigations.

  14. Technologies for pre-screening IAEA swipe samples

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Nicholas A. [Argonne National Lab. (ANL), Argonne, IL (United States); Steeb, Jennifer L. [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, Denise L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Huckabay, Heath A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ticknor, Brian W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-11-09

    During the course of International Atomic Energy Agency (IAEA) inspections, many samples are taken for the purpose of verifying the declared facility activities and identifying any possible undeclared activities. One of these sampling techniques is the environmental swipe sample. Due to the large number of samples collected, and the amount of time that is required to analyze them, prioritizing these swipes in the field or upon receipt at the Network of Analytical Laboratories (NWAL) will allow sensitive or mission-critical analyses to be performed sooner. As a result of this study, technologies were placed into one of three categories: recommended, promising, or not recommended. Both neutron activation analysis (NAA) and X-ray fluorescence (XRF) are recommended for further study and possible field deployment. These techniques performed the best in initial trials for pre-screening and prioritizing IAEA swipes. We learned that for NAA more characterization of cold elements (such as calcium and magnesium) would need to be emphasized, and for XRF it may be appropriate to move towards a benchtop XRF versus a handheld XRF due to the increased range of elements available on benchtop equipment. Promising techniques that will require additional research and development include confocal Raman microscopy, fluorescence microscopy, and infrared (IR) microscopy. These techniques showed substantive responses to uranium compounds, but expensive instrumentation upgrades (confocal Raman) or university engagement (fluorescence microscopy) may be necessary to investigate the utility of the techniques completely. Point-and-shoot (handheld) Raman and attenuated total reflectance–infrared (ATR-IR) measurements are not recommended, as they have not shown enough promise to continue investigations.

  15. Wilsonville wastewater sampling program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1983-10-01

    As part of its contract to design, build and operate the SRC-1 Demonstration Plant in cooperation with the US Department of Energy (DOE), International Coal Refining Company (ICRC) was required to collect and evaluate data related to wastewater streams and wastewater treatment procedures at the SRC-1 Pilot Plant facility. The pilot plant is located at Wilsonville, Alabama and is operated by Catalytic, Inc. under the direction of Southern Company Services. The plant is funded in part by the Electric Power Research Institute and the DOE. ICRC contracted with Catalytic, Inc. to conduct wastewater sampling. Tasks 1 through 5 included sampling and analysis of various wastewater sources and points at different steps in the biological treatment facility at the plant. The sampling program ran from May 1 to July 31, 1982. Also included in the sampling program was the generation and analysis of leachate from SRC product using standard laboratory leaching procedures. For Task 6, available plant wastewater data covering the period from February 1978 to December 1981 was analyzed to gain information that might be useful for a demonstration plant design basis. This report contains a tabulation of the analytical data, a summary tabulation of the historical operating data that was evaluated and comments concerning the data. The procedures used during the sampling program are also documented.

  16. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
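    A toy simulation along these lines, with an invented decision rule and intracluster correlation rather than the authors' parameters, can be used to estimate the classification error of a 67×3 cluster design:

        # Hypothetical illustration, not the study's simulation: classification
        # error of a 67x3 cluster LQAS design for a prevalence threshold, with
        # intracluster correlation introduced via beta-distributed cluster prevalences.
        import numpy as np

        def misclassification_rate(true_prev, threshold=0.10, decision_rule=20,
                                   n_clusters=67, m=3, icc=0.1, sims=10_000, seed=1):
            rng = np.random.default_rng(seed)
            # beta parameters giving mean = true_prev and the requested ICC
            kappa = (1 - icc) / icc
            a, b = true_prev * kappa, (1 - true_prev) * kappa
            errors = 0
            for _ in range(sims):
                cluster_p = rng.beta(a, b, size=n_clusters)
                cases = rng.binomial(m, cluster_p).sum()
                classified_high = cases > decision_rule   # invented decision rule
                truly_high = true_prev > threshold
                errors += classified_high != truly_high
            return errors / sims

        print(misclassification_rate(true_prev=0.15))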

  17. A pilot evaluation of whole blood finger-prick sampling for point-of-care HIV viral load measurement: the UNICORN study.

    Science.gov (United States)

    Fidler, Sarah; Lewis, Heather; Meyerowitz, Jodi; Kuldanek, Kristin; Thornhill, John; Muir, David; Bonnissent, Alice; Timson, Georgina; Frater, John

    2017-10-20

    There is a global need for HIV viral load point-of-care (PoC) assays to monitor patients receiving antiretroviral therapy. UNICORN was the first study of an off-label protocol using whole blood finger-prick samples tested with and without a simple three minute spin using a clinic-room microcentrifuge. Two PoC assays were evaluated in 40 HIV-positive participants, 20 with detectable and 20 with undetectable plasma viral load (pVL) (<20 copies/ml). Using 100 µl finger-prick blood samples, the Cepheid Xpert HIV-1 Viral Load and HIV-1 Qual cartridges were compared with laboratory pVL assessment (TaqMan, Roche). For participants with undetectable viraemia by TaqMan, there was poor concordance without centrifugation with the TaqMan platform with only 40% 'undetectable' using Xpert VL and 25% 'not detected' using the Qual assay. After a 3 minute spin, 100% of samples were undetectable using either assay, showing full concordance with the TaqMan assay. Defining a lower limit of detection of 1000 copies/ml when including a spin, there was 100% concordance with the TaqMan platform with strong correlation (rho 0.95 and 0.94; p < 0.0001 for both assays). When including a simple microcentrifugation step, finger-prick PoC testing was a quick and accurate approach for assessing HIV viraemia, with excellent concordance with validated laboratory approaches.

  18. A Fault Sample Simulation Approach for Virtual Testability Demonstration Test

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng

    2012-01-01

    Virtual testability demonstration test has many advantages, such as low cost, high efficiency, low risk and few restrictions. It brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration test based on stochastic process theory is proposed. First, the similarities and differences in fault sample generation between physical testability demonstration test and virtual testability demonstration test are discussed. Second, it is pointed out that the fault occurrence process subject to perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given. Steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the sample size and structure of the fault sample simulation results are more similar to the actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration test.
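    The renewal-process idea can be sketched in a few lines; the Weibull interarrival distribution and its parameters below are assumptions chosen only for illustration, not those of the paper.

        # Minimal sketch of the renewal-process idea under perfect repair: generate
        # fault occurrence times within a test window. The Weibull interarrival
        # distribution and its parameters are hypothetical.
        import numpy as np

        def simulate_fault_times(shape, scale, horizon, rng=None):
            rng = np.random.default_rng(rng)
            times, t = [], 0.0
            while True:
                t += scale * rng.weibull(shape)   # next interarrival time
                if t > horizon:
                    break
                times.append(t)
            return np.array(times)

        faults = simulate_fault_times(shape=1.5, scale=200.0, horizon=1000.0, rng=42)
        print(len(faults), "faults at hours:", np.round(faults, 1))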

  19. Reviving common standards in point-count surveys for broad inference across studies

    Science.gov (United States)

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and

  20. School Audits and School Improvement: Exploring the Variance Point Concept in Kentucky's... Schools

    Directory of Open Access Journals (Sweden)

    Robert Lyons

    2011-01-01

    Full Text Available As a diagnostic intervention (Bowles, Churchill, Effrat, & McDermott, 2002) for schools failing to meet school improvement goals, Kentucky used a scholastic audit process based on nine standards and 88 associated indicators called the Standards and Indicators for School Improvement (SISI). Schools are rated on a scale of 1–4 on each indicator, with a score of 3 considered fully functional (Kentucky Department of Education [KDE], 2002). As part of enacting the legislation, KDE was required to also audit a random sample of schools that did meet school improvement goals, thereby identifying practices present in improving schools that are not present in those failing to improve. These practices were referred to as variance points and were reported to school leaders annually. Variance points have differed from year to year, and the methodology used by KDE was unclear. Moreover, variance points were reported for all schools without differentiating based upon the level of school (elementary, middle, or high). In this study, we established a transparent methodology for variance point determination that differentiates between elementary, middle, and high schools.

  1. Point cloud processing for smart systems

    Directory of Open Access Journals (Sweden)

    Jaromír Landa

    2013-01-01

    Full Text Available High population as well as economic tension emphasises the necessity of effective city management – from land use planning to urban green maintenance. The management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data the state of the roads, buildings, trees and other objects important for this decision-making process can be obtained. Generally, they can support the idea of "smart" or at least "smarter" cities. Unfortunately, the point clouds do not provide this type of information automatically. It has to be extracted. This extraction is done by expert personnel or by object recognition software. As the point clouds can represent large areas (streets or even cities), usage of expert personnel to identify the required objects can be very time-consuming and therefore cost-ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically. The first part of the article reviews and analyses the current state of the art in point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed. Therefore, the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can be later directly used in various GIS systems for further analyses.
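    As a generic illustration of the kind of preprocessing such pipelines rely on (not a tool discussed in the article), a voxel-grid downsampling step over an (N, 3) point cloud can be written directly in NumPy; the input cloud below is a random stand-in.

        # Illustrative preprocessing step for large street-scale scans: voxel-grid
        # downsampling that keeps one representative point (the centroid) per voxel.
        import numpy as np

        def voxel_downsample(points, voxel_size=0.1):
            keys = np.floor(points / voxel_size).astype(np.int64)
            # group points by voxel using a lexicographic sort on the voxel indices
            order = np.lexsort(keys.T)
            keys, points = keys[order], points[order]
            boundaries = np.any(np.diff(keys, axis=0) != 0, axis=1)
            starts = np.concatenate(([0], np.flatnonzero(boundaries) + 1, [len(points)]))
            return np.array([points[s:e].mean(axis=0)
                             for s, e in zip(starts[:-1], starts[1:])])

        cloud = np.random.rand(100_000, 3) * 50.0   # stand-in for a scanned street
        print(voxel_downsample(cloud, voxel_size=0.5).shape)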

  2. The minimum information required for a glycomics experiment (MIRAGE) project: sample preparation guidelines for reliable reporting of glycomics datasets.

    Science.gov (United States)

    Struwe, Weston B; Agravat, Sanjay; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P; Costello, Catherine E; Dell, Anne; Ten Feizi; Haslam, Stuart M; Karlsson, Niclas G; Khoo, Kay-Hooi; Kolarich, Daniel; Liu, Yan; McBride, Ryan; Novotny, Milos V; Packer, Nicolle H; Paulson, James C; Rapp, Erdmann; Ranzinger, Rene; Rudd, Pauline M; Smith, David F; Tiemeyer, Michael; Wells, Lance; York, William S; Zaia, Joseph; Kettner, Carsten

    2016-09-01

    The minimum information required for a glycomics experiment (MIRAGE) project was established in 2011 to provide guidelines to aid in data reporting from all types of experiments in glycomics research including mass spectrometry (MS), liquid chromatography, glycan arrays, data handling and sample preparation. MIRAGE is a concerted effort of the wider glycomics community that considers the adaptation of reporting guidelines as an important step towards critical evaluation and dissemination of datasets as well as broadening of experimental techniques worldwide. The MIRAGE Commission published reporting guidelines for MS data and here we outline guidelines for sample preparation. The sample preparation guidelines include all aspects of sample generation, purification and modification from biological and/or synthetic carbohydrate material. The application of MIRAGE sample preparation guidelines will lead to improved recording of experimental protocols and reporting of understandable and reproducible glycomics datasets. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Perceptions of point-of-care infectious disease testing among European medical personnel, point-of-care test kit manufacturers, and the general public

    NARCIS (Netherlands)

    W.E. Kaman (Wendy); E-R. Andrinopoulou (Eleni-Rosalina); J.P. Hays (John)

    2013-01-01

    textabstractBackground: The proper development and implementation of point-of-care (POC) diagnostics requires knowledge of the perceived requirements and barriers to their implementation. To determine the current requirements and perceived barriers to the introduction of POC diagnostics in the field

  4. SymPix: A Spherical Grid for Efficient Sampling of Rotationally Invariant Operators

    Science.gov (United States)

    Seljebotn, D. S.; Eriksen, H. K.

    2016-02-01

    We present SymPix, a special-purpose spherical grid optimized for efficiently sampling rotationally invariant linear operators. This grid is conceptually similar to the Gauss-Legendre (GL) grid, aligning sample points with iso-latitude rings located on Legendre polynomial zeros. Unlike the GL grid, however, the number of grid points per ring varies as a function of latitude, avoiding expensive oversampling near the poles and ensuring nearly equal sky area per grid point. The ratio between the number of grid points in two neighboring rings is required to be a low-order rational number (3, 2, 1, 4/3, 5/4, or 6/5) to maintain a high degree of symmetries. Our main motivation for this grid is to solve linear systems using multi-grid methods, and to construct efficient preconditioners through pixel-space sampling of the linear operator in question. As a benchmark and representative example, we compute a preconditioner for a linear system that involves the operator D̂ + B̂ᵀ N⁻¹ B̂, where B̂ and D̂ may be described as both local and rotationally invariant operators, and N is diagonal in the pixel domain. For a bandwidth limit of ℓ_max = 3000, we find that our new SymPix implementation yields average speed-ups of 360 and 23 for B̂ᵀ N⁻¹ B̂ and D̂, respectively, compared with the previous state-of-the-art implementation.
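    The ring construction can be approximated in a few lines. The sketch below only places rings on Gauss-Legendre nodes and scales the per-ring point count with the ring circumference; it does not reproduce SymPix's low-order rational ratio constraint, so it is an illustration rather than the published algorithm.

        # Illustration only, not the SymPix construction: iso-latitude rings on
        # Gauss-Legendre nodes, with per-ring point counts roughly proportional to
        # the ring circumference so that pixels have similar sky area.
        import numpy as np

        def gl_rings(lmax, points_at_equator=256):
            nrings = lmax + 1
            nodes, _ = np.polynomial.legendre.leggauss(nrings)  # zeros of P_{nrings}
            colatitudes = np.arccos(nodes)                      # ring positions
            counts = np.maximum(4, np.rint(points_at_equator * np.sin(colatitudes)))
            return colatitudes, counts.astype(int)

        theta, npix = gl_rings(lmax=64)
        print(theta[:3], npix[:3], "total points:", npix.sum())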

  5. Point-Structured Human Body Modeling Based on 3D Scan Data

    Directory of Open Access Journals (Sweden)

    Ming-June Tsai

    2018-01-01

    Full Text Available A novel point-structured geometrical model of a realistic human body is introduced in this paper. The technique is based on feature extraction from 3D body scan data. Anatomic features such as the neck, the arm pits, the crotch points, and other major feature points are recognized. The body data are then segmented into 6 major parts. A body model is then constructed by re-sampling the scanned data to create a point-structured mesh. The body model contains body geodetic landmarks in latitudinal and longitudinal curves passing through those feature points. The body model preserves the perfect body shape and all the body dimensions but requires little storage space. Therefore, the body model can be used as a mannequin in the garment industry, or as a manikin in various human factors designs, but the most important application is its use as a virtual character to animate body motion in mocap (motion capture) systems. By adding suitable joint freedoms between the segmented body links, kinematic and dynamic properties from motion theories can be applied to the body model. As a result, a 3D virtual character that fully resembles the original scanned individual vividly animates the body motions. The gaps between the body segments due to motion can be filled by a skin blending technique using the characteristics of the point-structured model. The model has the potential to serve as a standardized data type to archive body information for all custom-made products.

  6. Detection of Bordetella pertussis from Clinical Samples by Culture and End-Point PCR in Malaysian Patients.

    Science.gov (United States)

    Ting, Tan Xue; Hashim, Rohaidah; Ahmad, Norazah; Abdullah, Khairul Hafizi

    2013-01-01

    Pertussis or whooping cough is a highly infectious respiratory disease caused by Bordetella pertussis. In vaccinating countries, infants, adolescents, and adults are the relevant patient groups. A total of 707 clinical specimens were received from major hospitals in Malaysia in 2011. These specimens were cultured on Regan-Lowe charcoal agar and subjected to end-point PCR, which amplified the repetitive insertion sequence IS481 and the pertussis toxin promoter gene. Out of these specimens, 275 were positive: 4 by culture only, 6 by both end-point PCR and culture, and 265 by end-point PCR only. The majority of the positive cases were from ≤3 months old patients (77.1%) (P 0.05). Our study showed that the end-point PCR technique was able to pick up more positive cases compared to the culture method.

  7. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applications...
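    The idea can be made concrete with a toy sketch. Below is a minimal ensemble-sampling loop for a Bernoulli bandit: a small ensemble of perturbed reward models is maintained, one member is drawn uniformly at each step, and the greedy arm under that member is played. The Gaussian perturbation scheme, the parameters and the function name are illustrative assumptions, not the algorithm specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_sampling_bandit(true_means, n_models=10, horizon=1000):
    """Toy ensemble sampling for a Bernoulli bandit.

    Each ensemble member keeps its own perturbed estimate of the arm means
    (a bootstrap-like Gaussian perturbation of the observed rewards).
    At every step one member is drawn uniformly and its greedy arm is played,
    approximating the posterior-sampling step of Thompson sampling."""
    n_arms = len(true_means)
    # per-model sufficient statistics: perturbed reward sums and pull counts
    sums = rng.normal(0.0, 1.0, size=(n_models, n_arms))   # random prior perturbation
    counts = np.ones((n_models, n_arms))
    total_reward = 0.0
    for _ in range(horizon):
        m = rng.integers(n_models)                      # sample one ensemble member uniformly
        arm = int(np.argmax(sums[m] / counts[m]))       # act greedily under that member
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        total_reward += reward
        # every member sees the observation, plus its own noise perturbation
        sums[:, arm] += reward + rng.normal(0.0, 0.5, size=n_models)
        counts[:, arm] += 1.0
    return total_reward

print(ensemble_sampling_bandit([0.2, 0.5, 0.7]))
```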

  8. Mars Sample Return: The Next Step Required to Revolutionize Knowledge of Martian Geological and Climatological History

    Science.gov (United States)

    Mittlefehldt, D. W.

    2012-01-01

    The capability of scientific instrumentation flown on planetary orbiters and landers has made great advances since the signature Viking mission of the seventies. At some point, however, the science return from orbital remote sensing, and even in situ measurements, becomes incremental rather than revolutionary. This is primarily caused by the low spatial resolution of such measurements, even for landed instrumentation, the incomplete mineralogical record derived from such measurements, the inability to do the detailed textural, mineralogical and compositional characterization needed to demonstrate equilibrium or reaction paths, and the lack of chronological characterization. For the foreseeable future, flight instruments will suffer from this limitation. In order to make the next revolutionary breakthrough in understanding the early geological and climatological history of Mars, samples must be available for interrogation using the full panoply of laboratory-housed analytical instrumentation. Laboratory studies of samples allow for determination of parageneses of rocks through microscopic identification of mineral assemblages, evaluation of equilibrium through electron microbeam analyses of mineral compositions and structures, and determination of formation temperatures through secondary ion or thermal ionization mass spectrometry (SIMS or TIMS) analyses of stable isotope compositions. Such details are poorly constrained by orbital data (e.g. phyllosilicate formation at Mawrth Vallis), and incompletely described by in situ measurements (e.g. genesis of Burns formation sediments at Meridiani Planum). Laboratory studies can determine formation, metamorphism and/or alteration ages of samples through SIMS or TIMS of radiogenic isotope systems, a capability well beyond flight instrumentation. Ideally, sample return should be from a location first scouted by landers such that fairly mature hypotheses have been formulated that can be tested. However, samples from clastic

  9. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study on a 50 m × 50 m plot in a marshland of the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method proposed in this study (SOPA) had the smallest absolute error (0.2138), whereas the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
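    The allocation step can be illustrated with a generic stratified-sampling calculation. The sketch below uses Neyman allocation as a stand-in, since the Hammond-McCullagh equation itself is not reproduced here; the stratum sizes and standard deviations are made-up example values.

```python
import numpy as np

def neyman_allocation(stratum_sizes, stratum_sds, total_samples):
    """Allocate a fixed sampling budget across strata proportionally to
    N_h * S_h (Neyman allocation). Used here only as a generic stand-in for
    the Hammond-McCullagh optimal-sample-size step described in the paper."""
    sizes = np.asarray(stratum_sizes, dtype=float)
    sds = np.asarray(stratum_sds, dtype=float)
    weights = sizes * sds
    alloc = total_samples * weights / weights.sum()
    return np.maximum(1, np.round(alloc).astype(int))  # at least one point per stratum

# Example: 5 plant-abundance strata in a 50 m x 50 m plot (hypothetical values)
print(neyman_allocation(stratum_sizes=[400, 300, 150, 100, 50],
                        stratum_sds=[0.8, 1.2, 1.5, 2.0, 2.4],
                        total_samples=60))
```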

  10. Review of light water reactor regulatory requirements: Assessment of selected regulatory requirements that may have marginal importance to risk: Postaccident sampling system, Turbine missiles, Combustible gas control, Charcoal filters

    International Nuclear Information System (INIS)

    Scott, W.B.; Jamison, J.D.; Stoetzel, G.A.; Tabatabai, A.S.; Vo, T.V.

    1987-05-01

    In a study commissioned by the Nuclear Regulatory Commission (NRC), Pacific Northwest Laboratory (PNL) evaluated the costs and benefits of modifying regulatory requirements in the areas of the postaccident sampling system, turbine rotor design reviews and inspections, combustible gas control for inerted Boiling Water Reactor (BWR) containments, and impregnated charcoal filters in certain plant ventilation systems. The basic framework for the analyses was that presented in the Regulatory Analysis Guidelines (NUREG/BR-0058) and in the Handbook for Value-Impact Assessment (NUREG/CR-3568). The effects of selected modifications to regulations were evaluated in terms of such factors as public risk and costs to industry and NRC. The results indicate that potential modifications of the regulatory requirements in three of the four areas would have little impact on public risk. In the fourth area, impregnated charcoal filters in building ventilation systems do appear to limit risks to the public and plant staff. Revisions in the severe accident source term assumptions, however, may reduce the theoretical value of charcoal filters. The cost analysis indicated that substantial savings in operating costs may be realized by changing the interval of turbine rotor inspections. Small to moderate operating cost savings may be realized through postulated modifications to the postaccident sampling system requirements and to the requirements for combustible gas control in inerted BWR containments. Finally, the use of impregnated charcoal filters in ventilation systems appears to be the most cost-effective method of reducing radioiodine concentrations

  11. Efficacy of microbial sampling recommendations and practices in sub-Saharan Africa.

    Science.gov (United States)

    Taylor, David D J; Khush, Ranjiv; Peletz, Rachel; Kumpel, Emily

    2018-05-01

    Current guidelines for testing drinking water quality recommend that the sampling rate, which is the number of samples tested for fecal indicator bacteria (FIB) per year, increases as the population served by the drinking water system increases. However, in low-resource settings, prevalence of contamination tends to be higher, potentially requiring higher sampling rates and different statistical methods not addressed by current sampling recommendations. We analyzed 27,930 tests for FIB collected from 351 piped water systems in eight countries in sub-Saharan Africa to assess current sampling rates, observed contamination prevalences, and the ability of monitoring agencies to complete two common objectives of sampling programs: determine regulatory compliance and detect a change over time. Although FIB were never detected in samples from 75% of piped water systems, only 14% were sampled often enough to conclude with 90% confidence that the true contamination prevalence met an example guideline (≤5% chance of any sample positive for FIB). Similarly, after observing a ten percentage point increase in contaminated samples, 43% of piped water systems would still require more than a year before their monitoring agency could be confident that contamination had actually increased. We conclude that current sampling practices in these settings may provide insufficient information because they collect too few samples. We also conclude that current guidelines could be improved by specifying how to increase sampling after contamination has been detected. Our results suggest that future recommendations should explicitly consider the regulatory limit and desired confidence in results, and adapt when FIB is detected. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
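    The compliance question can be restated as a simple binomial calculation: if the true contamination prevalence exceeded the limit p, the chance that n independent samples are all negative is (1 - p)^n, so a confidence of 1 - (1 - p)^n is reached only after enough negative samples. The sketch below computes that minimum n for the example guideline (≤5% prevalence, 90% confidence); it is a generic back-of-the-envelope calculation, not the authors' analysis code.

```python
import math

def samples_for_compliance(max_prevalence=0.05, confidence=0.90):
    """Smallest n such that, if all n independent samples are negative,
    a true contamination prevalence above `max_prevalence` would have been
    detected with probability >= `confidence`:
    P(all negative | p = max_prevalence) = (1 - p)^n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_prevalence))

# Roughly 45 consecutive negative samples are needed for 90% confidence at the 5% limit
print(samples_for_compliance())
```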

  12. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
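    A minimal sketch of the spring-force redistribution idea is shown below: sampling points that come closer than a rest length push each other apart over a few relaxation iterations, smoothing the point distribution. The force law, rest length and step size are illustrative assumptions and not the values used in the PIV algorithm.

```python
import numpy as np

def relax_sampling_points(points, n_iters=50, rest_length=0.05, step=0.1):
    """Redistribute 2-D sampling points with pairwise spring-like repulsion.
    Points closer than `rest_length` push each other apart, which smooths the
    sampling distribution. Toy illustration of the adaptive-PIV idea only."""
    pts = np.array(points, dtype=float)
    for _ in range(n_iters):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1) + 1e-12
        overlap = np.maximum(0.0, rest_length - dist)     # only compressed "springs" push
        force = (diff / dist[..., None]) * overlap[..., None]
        np.fill_diagonal(force[..., 0], 0.0)              # no self-force
        np.fill_diagonal(force[..., 1], 0.0)
        pts += step * force.sum(axis=1)
        pts = np.clip(pts, 0.0, 1.0)                      # keep points inside the unit image
    return pts

print(relax_sampling_points(np.random.default_rng(1).random((100, 2)))[:3])
```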

  13. Point prevalence of vertigo and dizziness in a sample of 2672 subjects and correlation with headaches.

    Science.gov (United States)

    Teggi, R; Manfrin, M; Balzanelli, C; Gatti, O; Mura, F; Quaglieri, S; Pilolli, F; Redaelli de Zinis, L O; Benazzo, M; Bussi, M

    2016-06-01

    Vertigo and dizziness are common symptoms in the general population, with an estimated prevalence between 20% and 56%. The aim of our work was to assess the point prevalence of these symptoms in a population of 2672 subjects. Subjects were asked to answer a questionnaire: in the first part they were asked about demographic data and previous vertigo and/or dizziness; in the second part they were asked about the characteristics of the vertigo (age at the first episode, rotational vertigo, relapsing episodes, positional exacerbation, presence of cochlear symptoms) and the lifetime presence of moderate to severe headache and its clinical features (hemicranial, pulsatile, associated with phono- and photophobia, worse on effort). The mean age of the sample was 48.3 ± 15 years, and 46.7% were males. A total of 1077 (40.3%) subjects reported vertigo/dizziness during their lifetime, and the mean age at the first vertigo attack was 39.2 ± 15.4 years. An age and sex effect was demonstrated, with symptoms 4.4 times more frequent in females and 1.8 times more frequent in people over 50 years. In the total sample of 2672 responders, 13.7% reported a sensation of spinning, 26.3% relapsing episodes, 12.9% positional exacerbation and 4.8% cochlear symptoms; 34.8% reported headache during their lifetime. Subjects suffering from headache presented an increased rate of relapsing episodes, positional exacerbation and cochlear symptoms, and a lower age of occurrence of the first vertigo/dizziness episode. In the discussion, our data are compared with those of previous studies, and we underline the relationship between vertigo/dizziness on the one hand and headache with migrainous features on the other. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  14. 45 CFR 1356.84 - Sampling.

    Science.gov (United States)

    2010-10-01

    45 CFR 1356.84 (Public Welfare): Sampling. (a) The State agency may collect and report the information required in section 1356.83(e) of this part on a sample of the baseline population consistent with the sampling requirements...

  15. On the influence of extrinsic point defects on irradiation-induced point-defect distributions in silicon

    International Nuclear Information System (INIS)

    Vanhellemont, J.; Romano-Rodriguez, A.

    1994-01-01

    A semi-quantitative model describing the influence of interfaces and stress fields on {113}-defect generation in silicon during 1-MeV electron irradiation is further developed to also take into account the role of extrinsic point defects. It is shown that the observed distribution of {113}-defects in high-flux electron-irradiated silicon, and its dependence on irradiation temperature and dopant concentration, can be understood by taking into account not only the influence of the surfaces and interfaces as sinks for intrinsic point defects but also the thermal stability of the bulk sinks for intrinsic point defects. In heavily doped silicon the bulk sinks are related to pairing reactions of the dopant atoms with the generated intrinsic point defects, or to enhanced recombination of vacancies and self-interstitials at extrinsic point defects. The theoretical results are correlated with published experimental data on boron- and phosphorus-doped silicon and are illustrated with observations obtained by irradiating cross-section transmission electron microscopy samples of wafers with highly doped surface layers. (orig.)

  16. Surface reconstruction through Poisson disk sampling.

    Directory of Open Access Journals (Sweden)

    Wenguang Hou

    Full Text Available This paper generates an approximate Voronoi diagram in the geodesic metric for a set of unbiased samples selected from the original points; the mesh model of the seeds is then constructed on the basis of that Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy circumvents the problem that geodesic distances among neighboring points are sensitive to the nearest-neighbor definition. The reconstructed model is thus a level-of-detail representation of the original points, and our main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select the seeds and to help produce the Voronoi diagram; adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behavior of the method is investigated and accuracy evaluations are performed. Experimental results show that the proposed method is reliable and effective.
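    A minimal dart-throwing version of Poisson disk selection is sketched below: candidate points are visited in random order and kept only if they lie at least a minimum radius from every previously accepted seed. The sketch uses Euclidean distances in the plane, whereas the paper works with geodesic distances on the scanned surface.

```python
import numpy as np

def poisson_disk_sample(points, radius, rng=None):
    """Greedy dart-throwing Poisson disk selection of seeds from `points`.
    A point is kept only if it lies at least `radius` away from every seed
    accepted so far (Euclidean distance; the paper uses geodesic distance)."""
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    order = rng.permutation(len(pts))
    seeds = []
    for i in order:
        p = pts[i]
        if all(np.linalg.norm(p - s) >= radius for s in seeds):
            seeds.append(p)
    return np.array(seeds)

# Thin a dense random point set to an approximately uniform subset of seeds
dense = np.random.default_rng(2).random((2000, 2))
print(len(poisson_disk_sample(dense, radius=0.05)))
```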

  17. In-situ determination of cross-over point for overcoming plasma-related matrix effects in inductively coupled plasma-atomic emission spectrometry

    International Nuclear Information System (INIS)

    Chan, George C.-Y.; Hieftje, Gary M.

    2008-01-01

    A novel method is described for overcoming plasma-related matrix effects in inductively coupled plasma-atomic emission spectrometry (ICP-AES). The method is based on measurement of the vertically resolved atomic emission of analyte within the plasma and therefore requires the addition of no reagents to the sample solution or to the plasma. Plasma-related matrix effects enhance analyte emission intensity low in the plasma but depress the same emission signal at higher positions. Such bipolar behavior is true for all emission lines and matrices that induce plasma-related interferences. The transition where the enhancement is balanced by the depression (the so-called cross-over point) results in a spatial region with no apparent matrix effects. Although it would be desirable always to perform determinations at this cross-over point, its location varies between analytes and from matrix to matrix, so it would have to be found separately for every analyte and for every sample. Here, a novel approach is developed for the in-situ determination of the location of this cross-over point. It was found that the location of the cross-over point is practically invariant for a particular analyte emission line when the concentration of the matrix was varied. As a result, it is possible to determine in-situ the location of the cross-over point for all analyte emission lines in a sample by means of a simple one-step sample dilution. When the original sample is diluted by a factor of 2 and the diluted sample is analyzed again, the extent of the matrix effect is identical (zero) between the original sample and the diluted sample at one and only one location - the cross-over point. This novel method was verified with several single-element matrices (0.05 M Na, Ca, Ba and La) and some mixed-element matrices (mixtures of Na-Ca, Ca-Ba, and a plant-sample digest). The inaccuracy in emission intensity due to the matrix effect could be as large as - 30% for conventional measurements in the

  18. Speeding up coarse point cloud registration by threshold-independent BaySAC match selection

    NARCIS (Netherlands)

    Kang, Z.; Lindenbergh, R.C.; Pu, S

    2016-01-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce...

  19. A carrier-based approach for overmodulation of three-level neutral-point-clamped inverter with zero neutral-point current

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, S.

    2012-01-01

    In a voltage source inverter, overmodulation is required to extend the range of operation and enhance the dc-link voltage utilization. A carrier-based implementation of a modulation strategy for the three-level neutral-point-clamped inverter is proposed for the overmodulation region. The modulation strategy ensures zero average neutral-point current in a switching period. A newly proposed boundary compression is used to regulate the dc-link voltage at all operating points. A description of the algorithm to implement the modulation strategy is also presented. The main advantage of the proposed...

  20. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may degrade the estimates of the interface tension forces, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over the last years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® by extending its standard VOF implementation, the interFoam solver.
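    The per-point geometric curvature step can be illustrated in two dimensions by fitting a circle to a point and its nearest neighbours and taking the reciprocal of the fitted radius (an algebraic Kasa fit). The sketch below does exactly that on a synthetic circle; the neighbourhood size and the projection onto the Eulerian grid are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def curvature_at_point(cloud, idx, k=8):
    """Estimate curvature at cloud[idx] by a least-squares circle fit (Kasa fit)
    through the point and its k nearest neighbours; returns 1/R of the circle.
    2-D toy version of the per-point geometric curvature used in the paper."""
    p = cloud[idx]
    d = np.linalg.norm(cloud - p, axis=1)
    nbrs = cloud[np.argsort(d)[:k + 1]]          # the point itself plus k neighbours
    x, y = nbrs[:, 0], nbrs[:, 1]
    # Solve  x^2 + y^2 + D*x + E*y + F = 0  in the least-squares sense
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    radius = np.sqrt(D ** 2 / 4 + E ** 2 / 4 - F)
    return 1.0 / radius

# Points on a circle of radius 2 should give curvature ~0.5
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)])
print(curvature_at_point(circle, 0))
```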

  1. 40 CFR 257.23 - Ground-water sampling and analysis requirements.

    Science.gov (United States)

    2010-07-01

    ...: (1) Sample collection; (2) Sample preservation and shipment; (3) Analytical procedures; (4) Chain of... theory test, then the data should be transformed or a distribution-free theory test should be used. If... chart and its associated parameter values shall be protective of human health and the environment. The...

  2. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as was the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set: one or two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
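    The resampling idea can be sketched generically: with one plasma and one tissue observation per subject at each time point, subjects are bootstrapped within each time point, the mean profiles are integrated by the trapezoidal rule, and the distribution of AUC ratios provides both a point estimate and an uncertainty interval. The code below is a plain one-stage bootstrap with synthetic data, not the paper's 2-phase algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def auc_trapz(times, conc):
    """Area under the curve by the trapezoidal rule."""
    times = np.asarray(times, dtype=float)
    conc = np.asarray(conc, dtype=float)
    return float(np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(times)))

def bootstrap_tp_ratio(times, plasma_by_time, tissue_by_time, n_boot=2000):
    """Bootstrap the tissue-to-plasma AUC ratio from sparse data.

    `plasma_by_time` / `tissue_by_time` are lists (one entry per time point)
    of the individual concentrations observed at that time. Each replicate
    resamples subjects within every time point, averages, and forms the AUC
    ratio. Generic bootstrap sketch, not the paper's 2-phase method."""
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        p_mean = [np.mean(rng.choice(v, size=len(v), replace=True)) for v in plasma_by_time]
        t_mean = [np.mean(rng.choice(v, size=len(v), replace=True)) for v in tissue_by_time]
        ratios[b] = auc_trapz(times, t_mean) / auc_trapz(times, p_mean)
    return np.mean(ratios), np.percentile(ratios, [2.5, 97.5])

# Synthetic sparse design: 5 subjects per time point (hypothetical values)
times = np.array([0.5, 1, 2, 4, 8])
plasma = [rng.lognormal(2.0 - 0.2 * t, 0.3, size=5) for t in times]
tissue = [rng.lognormal(2.5 - 0.2 * t, 0.3, size=5) for t in times]
print(bootstrap_tp_ratio(times, plasma, tissue))
```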

  3. Nanotexturing of surfaces to reduce melting point.

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Ernest J.; Zubia, David (University of Texas at El Paso, El Paso, TX); Mireles, Jose (Universidad Autónoma de Ciudad Juárez, Ciudad Juárez, Mexico); Marquez, Noel (University of Texas at El Paso, El Paso, TX); Quinones, Stella (University of Texas at El Paso, El Paso, TX)

    2011-11-01

    This investigation examined the use of nano-patterned structures on Silicon-on-Insulator (SOI) material to reduce the bulk material melting point (1414 °C). It has been found that sharp-tipped and other similar structures have a propensity to move to the lower energy states of spherical structures and as a result exhibit lower melting points than the bulk material. Such a reduction of the melting point would offer a number of interesting opportunities for bonding in microsystems packaging applications. Nano patterning process capabilities were developed to create the required structures for the investigation. One of the technical challenges of the project was understanding and creating the specialized conditions required to observe the melting and reshaping phenomena. Through systematic experimentation and review of the literature these conditions were determined and used to conduct phase change experiments. Melting temperatures as low as 1030 °C were observed.

  4. Sample Results From Tank 48H Samples HTF-48-14-158, -159, -169, and -170

    Energy Technology Data Exchange (ETDEWEB)

    Peters, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-04-28

    Savannah River National Laboratory (SRNL) analyzed samples from Tank 48H in support of determining the cause for the unusually high dose rates at the sampling points for this tank. A set of two samples was taken from the quiescent tank, and two additional samples were taken after the contents of the tank were mixed. The results of the analyses of all the samples show that the contents of the tank have changed very little since the analysis of the previous sample in 2012. The solids are almost exclusively composed of tetraphenylborate (TPB) salts, and there is no indication of acceleration in the TPB decomposition. The filtrate composition shows a moderate increase in salt concentration and density, which is attributable to the addition of NaOH for the purposes of corrosion control. An older modeling simulation of the TPB degradation was updated, and the supernate results from a 2012 sample were run in the model. This result was compared to the results from the 2014 recent sample results reported in this document. The model indicates there is no change in the TPB degradation from 2012 to 2014. SRNL measured the buoyancy of the TPB solids in Tank 48H simulant solutions. It was determined that a solution of density 1.279 g/mL (~6.5M sodium) was capable of indefinitely suspending the TPB solids evenly throughout the solution. A solution of density 1.296 g/mL (~7M sodium) caused a significant fraction of the solids to float on the solution surface. As the experiments could not include the effect of additional buoyancy elements such as benzene or hydrogen generation, the buoyancy measurements provide an upper bound estimate of the density in Tank 48H required to float the solids.

  5. Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, G.J.; Pritchard, F.R.

    1987-08-04

    This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector and the temperature of a portion of the sample gas stream to be investigated is lowered progressively prior to detection until the dew-point is reached. The presence of condensate within the flowing gas is detected and subsequently the supply gas sample is heated to above the dew-point. The procedure of cooling and heating the gas stream continuously in a cyclical manner is repeated.

  6. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to
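    The efficiency criterion discussed here is conventionally obtained from the slope of a standard curve (quantification cycle versus log10 of template amount), with efficiency = 10^(-1/slope) - 1. The snippet below applies that standard relationship to a made-up dilution series; it is a generic illustration rather than the authors' procedure.

```python
import numpy as np

def pcr_efficiency(log10_copies, cq_values):
    """Amplification efficiency from a real-time PCR standard curve.
    Fit Cq = slope * log10(copies) + intercept; an ideal assay has
    slope = -3.32, i.e. efficiency = 10**(-1/slope) - 1 = 1.0 (100 %)."""
    slope, intercept = np.polyfit(log10_copies, cq_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, efficiency

# Hypothetical dilution series: 10^6 ... 10^2 template copies
log10_copies = np.array([6, 5, 4, 3, 2])
cq = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
print(pcr_efficiency(log10_copies, cq))
```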

  7. On conjugate points and the Leitmann equivalent problem approach

    NARCIS (Netherlands)

    Wagener, F.O.O.

    2009-01-01

    This article extends the Leitmann equivalence method to a class of problems featuring conjugate points. The class is characterised by the requirement that the set of indifference points of a given problem forms a finite stratification.

  8. Minerals Intake Distributions in a Large Sample of Iranian at-Risk Population Using the National Cancer Institute Method: Do They Meet Their Requirements?

    Science.gov (United States)

    Heidari, Zahra; Feizi, Awat; Azadbakht, Leila; Sarrafzadegan, Nizal

    2015-01-01

    Minerals are required for the body's normal function. The current study assessed the intake distribution of minerals and estimated the prevalence of inadequacy and excess among a representative sample of healthy middle aged and elderly Iranian people. In this cross-sectional study, the second follow up to the Isfahan Cohort Study (ICS), 1922 generally healthy people aged 40 and older were investigated. Dietary intakes were collected using 24 hour recalls and two or more consecutive food records. Distribution of minerals intake was estimated using traditional (averaging dietary intake days) and National Cancer Institute (NCI) methods, and the results obtained from the two methods, were compared. The prevalence of minerals intake inadequacy or excess was estimated using the estimated average requirement (EAR) cut-point method, the probability approach and the tolerable upper intake levels (UL). There were remarkable differences between values obtained using traditional and NCI methods, particularly in the lower and upper percentiles of the estimated intake distributions. A high prevalence of inadequacy of magnesium (50 - 100 %), calcium (21 - 93 %) and zinc (30 - 55 % for males > 50 years) was observed. Significant gender differences were found regarding inadequate intakes of calcium (21 - 76 % for males vs. 45 - 93 % for females), magnesium (92 % vs. 100 %), iron (0 vs. 15 % for age group 40 - 50 years) and zinc (29 - 55 % vs. 0 %) (all; p < 0.05). Severely imbalanced intakes of magnesium, calcium and zinc were observed among the middle-aged and elderly Iranian population. Nutritional interventions and population-based education to improve healthy diets among the studied population at risk are needed.

  9. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

    It is well known that the change-point problem is an important part of statistical model analysis, yet most existing methods are not robust to the criteria used to evaluate it. In this article, we consider the "mean-shift" problem in change-point studies. A quantile test based on a single quantile is proposed using the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of every location of the sequence being a change point, so that the location of the change point can be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively on both large and small samples, on change points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper is of independent interest to readers in this research area.
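    For readers who want a concrete baseline for the mean-shift setting, the sketch below locates a single change point with the classical CUSUM statistic. This is only a generic illustration of the problem; it does not implement the saddlepoint-based single or composite quantile tests proposed in the article.

```python
import numpy as np

def cusum_changepoint(x):
    """Locate a single mean-shift change point by maximising the standard
    CUSUM statistic. Plain illustration of the 'mean-shift' setting only;
    the article's composite quantile / saddlepoint test is not reproduced."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, n)
    cum = np.cumsum(x)
    # |mean of first k - mean of the rest|, scaled by sqrt(k*(n-k)/n)
    stat = np.abs(cum[:-1] / ks - (cum[-1] - cum[:-1]) / (n - ks)) * np.sqrt(ks * (n - ks) / n)
    return int(np.argmax(stat)) + 1, float(stat.max())

rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(0, 1, 150), rng.normal(0.8, 1, 100)])
print(cusum_changepoint(series))   # detected change point should be near index 150
```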

  10. A fixed-point farrago

    CERN Document Server

    Shapiro, Joel H

    2016-01-01

    This text provides an introduction to some of the best-known fixed-point theorems, with an emphasis on their interactions with topics in analysis. The level of exposition increases gradually throughout the book, building from a basic requirement of undergraduate proficiency to graduate-level sophistication. Appendices provide an introduction to (or refresher on) some of the prerequisite material and exercises are integrated into the text, contributing to the volume’s ability to be used as a self-contained text. Readers will find the presentation especially useful for independent study or as a supplement to a graduate course in fixed-point theory. The material is split into four parts: the first introduces the Banach Contraction-Mapping Principle and the Brouwer Fixed-Point Theorem, along with a selection of interesting applications; the second focuses on Brouwer’s theorem and its application to John Nash’s work; the third applies Brouwer’s theorem to spaces of infinite dimension; and the fourth rests ...

  11. A simple method for simultaneous spectrophotometric determination of brilliant blue fcf and sunset yellow fcf in food samples after cloud point extraction

    International Nuclear Information System (INIS)

    Heydari, R.

    2016-01-01

    In this study, a simple and low-cost method for the extraction and pre-concentration of brilliant blue FCF and sunset yellow FCF in food samples using cloud point extraction (CPE) and spectrophotometric detection was developed. The effects of the main factors, such as solution pH, surfactant concentration, salt and its concentration, incubation time and temperature, on the CPE of both dyes were investigated and optimized. Under the optimum conditions, linear calibration graphs were obtained in the range of 16.0-1300 ng mL⁻¹ for brilliant blue FCF and 25.0-1300 ng mL⁻¹ for sunset yellow FCF. Limit of detection values for brilliant blue FCF and sunset yellow FCF were 3 and 6 ng mL⁻¹, respectively. The relative standard deviation (RSD) values of both dyes for repeated measurements (n=6) were less than 4.57%. The results demonstrated that the proposed method can be applied satisfactorily to determine these dyes in different food samples. (author)

  12. Extreme simplification and rendering of point sets using algebraic multigrid

    NARCIS (Netherlands)

    Reniers, D.; Telea, A.C.

    2009-01-01

    We present a novel approach for extreme simplification of point set models, in the context of real-time rendering. Point sets are often rendered using simple point primitives, such as oriented discs. However, this requires using many primitives to render even moderately simple shapes. Often, one

  13. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF6, Xe-127 and Ar-37 was metered into 4400 m³ of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.

  14. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING & SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    Energy Technology Data Exchange (ETDEWEB)

    GRIFFIN PW

    2009-08-27

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  15. SharePoint 2013 Implementation Strategy for Supporting KM System Requirements in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Siti Nurbahyah Hamdan; Abdul Muin Abdul Rahman

    2015-01-01

    A knowledge management system (KMS or KM System) is an important tool for a knowledge-intensive organization such as Nuclear Malaysia. In June 2010, MS SharePoint 2007 was deployed as a tool for the KM System in Nuclear Malaysia and functioned correctly until the end of 2013, when the system failed due to software malfunction and the inability of the infrastructure to support its continuous operation and usage expansion. This led to difficulties for users in accessing their operational data and information, hence hampering access to one of the most important tools for the KM System in Nuclear Malaysia. Recently, however, a newer and updated version of the system, namely SharePoint 2013, was deployed to meet the same objectives. Learning from previous failures, the tool was analyzed at various stages of technical and management reviews. The implementation of this newer version was designed to overcome most of the deficiencies faced by the older version, from both the software and infrastructure points of view. The tool has performed very well ever since its commissioning in December 2014. As it is still under warranty until March 2016, minimal maintenance issues have been experienced and any problems have been rectified promptly. This paper describes the implementation strategy in preparing the design information for the software and hardware architecture of the new tool to overcome the problems of the older version, in order to provide a better platform for the KM System in Nuclear Malaysia. (author)

  16. Determination of the solid-liquid-vapor triple point pressure of carbon

    International Nuclear Information System (INIS)

    Haaland, D.M.

    1976-01-01

    A detailed experimental study of the triple point pressure of carbon using laser heating techniques has been completed. Uncertainties and conflict in previous investigations have been addressed and substantial data presented which place the solid-liquid-vapor carbon triple point at 107 ± 2 atmospheres. This is in agreement with most investigations, which have located the triple point pressure between 100 and 120 atmospheres, but is in disagreement with recent low pressure carbon experiments. The absence of any significant polymorphs of carbon other than graphite suggests that the graphite-liquid-vapor triple point has been measured. Graphite samples were melted in a pressure vessel using a 400 W Nd:YAG continuous-wave laser focused to a maximum power density of approximately 80 kW/cm². Melt was confirmed by detailed microstructure analysis and x-ray diffraction of the recrystallized graphite. Experiments to determine the minimum melt pressure of carbon were completed as a function of sample size, type of inert gas, and laser power density to ensure that laser power densities were sufficient to produce melt at the triple point pressure of carbon, and that the pressure of carbon at the surface of the sample was identical to the measured pressure of the inert gas in the pressure vessel. High-speed color cinematography of the carbon heating revealed the presence of a laser-generated vapor or particle plume in front of the sample. The existence of this bright plume prevented the measurement of the carbon triple point temperature.

  17. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...

  18. A new dispersive liquid-liquid microextraction using ionic liquid based microemulsion coupled with cloud point extraction for determination of copper in serum and water samples.

    Science.gov (United States)

    Arain, Salma Aslam; Kazi, Tasneem Gul; Afridi, Hassan Imran; Arain, Mariam Shahzadi; Panhwar, Abdul Haleem; Khan, Naeemullah; Baig, Jameel Ahmed; Shah, Faheem

    2016-04-01

    A simple and rapid dispersive liquid-liquid microextraction procedure based on an ionic liquid assisted microemulsion (IL-µE-DLLME), combined with cloud point extraction, has been developed for the preconcentration of copper (Cu(2+)) in drinking water and in serum samples of adolescent female hepatitis C (HCV) patients. In this method a ternary system was developed to form a microemulsion (µE) by the phase inversion method (PIM), using the ionic liquid 1-butyl-3-methylimidazolium hexafluorophosphate ([C4mim][PF6]) and the nonionic surfactant TX-100 (as a stabilizer in aqueous media). The ionic liquid microemulsion (IL-µE) was evaluated through visual assessment, optical light microscopy and spectrophotometry. The Cu(2+) in real water and in aqueous acid-digested serum samples was complexed with 8-hydroxyquinoline (oxine) and extracted into the IL-µE medium. The phase separation of the stable IL-µE was carried out by the micellar cloud point extraction approach. The influence of different parameters such as pH, oxine concentration, centrifugation time and rate was investigated. At optimized experimental conditions, the limit of detection and enhancement factor were found to be 0.132 µg/L and 70, respectively, with relative standard deviation <5%. In order to validate the developed method, certified reference materials (SLRS-4 riverine water) and human serum (Sero-M10181) were analyzed. The resulting data indicated a non-significant difference between obtained and certified values of Cu(2+). The developed procedure was successfully applied for the preconcentration and determination of trace levels of Cu(2+) in environmental and biological samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Post analysis of AE data of seal plug leakage of NAPS-2 and fatigue crack initiation of three point bend sample using cluster and artificial neural network

    International Nuclear Information System (INIS)

    Singh, A.K.; Mehta, H.R.; Bhattacharya, S.

    2003-01-01

    Acoustic emission (AE) data are weak and passive in nature, which makes separating AE data from noise a challenging task. This paper describes the post-analysis of acoustic emission data from seal plug leakage of an operating PHWR (NAPS-2, Narora) and from fatigue crack initiation in a three-point bend sample, using cluster analysis and an artificial neural network (ANN). First, known AE data generated in the laboratory by PCB debonding and pencil-lead breaks were analyzed using the ANN to gain confidence in the approach. The AE data acquired by scanning all 306 coolant channels at NAPS-2 were then sorted into five separate clusters for different leakage rates and background noise. The fatigue crack initiation AE data generated in the MSD laboratory on a three-point bend sample were grouped into ten separate clusters, of which one contained 98% of the AE data from the crack initiation period (identified with the help of a travelling microscope), while the remaining clusters corresponded to other AE sources and noise. The above data were further analysed with a self-organizing map of the artificial neural network. (author)

  20. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time.
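    The rates in question come from fitting a single-exponential decay to peak heights across the relaxation delay series. The sketch below performs that fit on synthetic heights; the delays, noise level and true rate are made-up values standing in for RSS or NUS-reconstructed intensities.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_relaxation_rate(delays, heights):
    """Fit I(t) = I0 * exp(-R * t) to peak heights measured across a
    relaxation delay series and return the rate R (in s^-1)."""
    model = lambda t, i0, r: i0 * np.exp(-r * t)
    popt, _ = curve_fit(model, delays, heights, p0=(heights[0], 1.0))
    return popt[1]

# Synthetic series: true R = 12 s^-1 with 2 % noise on the heights
delays = np.array([0.01, 0.03, 0.05, 0.08, 0.12, 0.18, 0.25])
rng = np.random.default_rng(5)
heights = 1.0e6 * np.exp(-12.0 * delays) * (1 + 0.02 * rng.standard_normal(len(delays)))
print(fit_relaxation_rate(delays, heights))
```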

  1. An approach for analyzing the ensemble mean from a dynamic point of view

    OpenAIRE

    Pengfei, Wang

    2014-01-01

    Simultaneous ensemble mean equations (LEMEs) for the Lorenz model are obtained, enabling us to analyze the properties of the ensemble mean from a dynamical point of view. The qualitative analysis of the two-sample and n-sample LEMEs shows that the locations and number of stable points differ from those of the Lorenz equations (LEs), and the results are validated by numerical experiments. The analysis of the eigenmatrix at the stable points of the LEMEs indicates that the stability of these stable point...
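    The object under study can be reproduced numerically: integrate the standard Lorenz-63 equations for an ensemble of slightly perturbed initial states and average across members at every step. The sketch below does this with a fourth-order Runge-Kutta integrator; it illustrates the ensemble mean itself and does not reproduce the LEME derivation.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the standard Lorenz-63 equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def ensemble_mean_trajectory(n_members=10, dt=0.01, n_steps=5000, spread=0.1, seed=6):
    """Integrate Lorenz-63 for an ensemble of perturbed initial states
    (4th-order Runge-Kutta) and return the ensemble-mean trajectory."""
    rng = np.random.default_rng(seed)
    states = np.array([1.0, 1.0, 1.0]) + spread * rng.standard_normal((n_members, 3))
    means = np.empty((n_steps, 3))
    for i in range(n_steps):
        for m in range(n_members):
            s = states[m]
            k1 = lorenz_rhs(s)
            k2 = lorenz_rhs(s + 0.5 * dt * k1)
            k3 = lorenz_rhs(s + 0.5 * dt * k2)
            k4 = lorenz_rhs(s + dt * k3)
            states[m] = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        means[i] = states.mean(axis=0)
    return means

print(ensemble_mean_trajectory()[-1])
```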

  2. Beaconless Pointing for Deep-Space Optical Communication

    Science.gov (United States)

    Swank, Aaron J.; Aretskin-Hariton, Eliot; Le, Dzu K.; Sands, Obed S.; Wroblewski, Adam

    2016-01-01

    Free space optical communication is of interest to NASA as a complement to existing radio frequency communication methods. The potential for an increase in science data return capability over current radio-frequency communications is the primary objective. Deep space optical communication requires laser beam pointing accuracy on the order of a few microradians. The laser beam pointing approach discussed here operates without the aid of a terrestrial uplink beacon. Precision pointing is obtained from an on-board star tracker in combination with inertial rate sensors and an outgoing beam reference vector. The beaconless optical pointing system presented in this work is the current approach for the Integrated Radio and Optical Communication (iROC) project.

  3. Dynamics of Multibody Systems Near Lagrangian Points

    Science.gov (United States)

    Wong, Brian

    dynamics of two sample rigid bodies when they are in different periodic orbits around a collinear point, and the tether librations of a two-tether system in the same orbits. The results show that the rigid satellites and the tethered system experience greater attitude motions when they are in larger periodic orbits. The dynamics of variable length systems are also studied in order to determine the control cost associated with moving the end bodies in a gapless spiral to cover the area spanned by the system. The control cost is relatively low during tether deployment, and negligible effort is required to maintain the angular velocity of the tethered system after deployment. A set of recommendations for the applications of Lagrangian-point physically-connected systems are presented as well as some future research directions are suggested.

  4. Research on point source simulating the γ-ray detection efficiencies of standard source

    International Nuclear Information System (INIS)

    Tian Zining; Jia Mingyan; Shen Maoquan; Yang Xiaoyan; Cheng Zhiwei

    2010-01-01

    For a φ75 mm × 25 mm sample, the full-energy-peak efficiencies at different heights along the sample radius were obtained using point sources, and the function parameters describing the full-energy-peak efficiency of a point source as a function of radius were determined. The detection efficiencies for the 59.54 keV, 661.66 keV, 1173.2 keV and 1332.5 keV γ-rays at different sample heights were then obtained from the point-source full-energy-peak efficiencies and their heights, and the function parameters describing the full-energy-peak efficiency of a surface source as a function of sample height were determined. The detection efficiency of the φ75 mm × 25 mm calibration source can then be obtained by integration; the detection efficiencies simulated from point sources agree with the results for the standard source to within 10%. Therefore, calibration with a standard source can be substituted by the point-source simulation method, which is feasible when no standard source is available. (authors)
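    The final integration step amounts to averaging the fitted point-source efficiency ε(r, h) over the cylindrical sample volume with the volume element r dr dh. The sketch below does this numerically; the efficiency function used is an arbitrary illustrative stand-in, since the paper's fitted parameters are not given here.

```python
import numpy as np

def volume_averaged_efficiency(eps_point, radius_mm=37.5, height_mm=25.0, nr=60, nh=40):
    """Average a fitted point-source full-energy-peak efficiency eps_point(r, h)
    over a phi 75 mm x 25 mm cylindrical sample, weighting by the volume
    element r dr dh. Numerical stand-in for the integration step in the paper."""
    r = np.linspace(0.0, radius_mm, nr)
    h = np.linspace(0.0, height_mm, nh)
    R, H = np.meshgrid(r, h, indexing="ij")
    weights = R                                   # cylindrical volume element ~ r dr dh
    eff = eps_point(R, H)
    return np.sum(eff * weights) / np.sum(weights)

# Illustrative (made-up) efficiency falling off with radius and height
eps = lambda r, h: 0.05 * np.exp(-0.01 * r) * np.exp(-0.02 * h)
print(volume_averaged_efficiency(eps))
```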

  5. Evaluation of As, Se and Zn in octopus samples in different points of sales of the distribution chain in Brazil

    International Nuclear Information System (INIS)

    Marildes Josefina Lemos Neto; Elizabeth de Souza Nascimento; Mariza Landgraf; Vera Akiko Maihara; Silva, P.S.C.

    2014-01-01

    Shellfish such as squid and octopus (class Cephalopoda) have high commercial value in restaurants and for export. As, Se and Zn concentrations were determined in 117 octopus samples acquired at different points of the distribution chain in 4 coastal cities of São Paulo state, Brazil (Guarujá, Santos, São Vicente and Praia Grande). The elemental determinations were performed by Instrumental Neutron Activation Analysis (INAA). The element concentrations in the octopus samples (wet weight) ranged from 0.184 to 35.4 mg kg⁻¹ for As, 0.203 to 2.26 mg kg⁻¹ for Se and 4.73 to 37.4 mg kg⁻¹ for Zn. Arsenic and Se levels were above the limits for fish established by Brazilian legislation, while Zn concentrations were in accordance with literature values. (author)

  6. 40 CFR 141.703 - Sampling locations.

    Science.gov (United States)

    2010-07-01

    ... samples prior to the point of filter backwash water addition. (d) Bank filtration. (1) Systems that... applicable, must collect source water samples in the surface water prior to bank filtration. (2) Systems that use bank filtration as pretreatment to a filtration plant must collect source water samples from the...

  7. rCBF measurement by one-point venous sampling with the ARG method

    International Nuclear Information System (INIS)

    Yoshida, Nobuhiro; Okamoto, Toshiaki; Takahashi, Hidekado; Hattori, Teruo

    1997-01-01

    We investigated the possibility of using venous blood sampling instead of arterial blood sampling for the current method of ARG (autoradiography) used to determine regional cerebral blood flow (rCBF) on the basis of one session of arterial blood sampling and SPECT. For this purpose, the ratio of the arterial blood radioactivity count to the venous blood radioactivity count, the coefficient of variation, and the correlation and differences between arterial blood-based rCBF and venous blood-based rCBF were analyzed. The coefficient of variation was lowest (4.1%) 20 minutes after injection into the dorsum manus. When the relationship between venous and arterial blood counts was analyzed, arterial blood counts correlated well with venous blood counts collected at the dorsum manus 20 or 30 minutes after intravenous injection and with venous blood counts collected at the wrist 20 minutes after intravenous injection (r=0.97 or higher). The difference from rCBF determined on the basis of arterial blood was smallest (0.7) for rCBF determined on the basis of venous blood collected at the dorsum manus 20 minutes after intravenous injection. (author)

  8. New generation of gas infrared point heaters

    Energy Technology Data Exchange (ETDEWEB)

    Schink, Damian [Pintsch Aben B.V., Dinslaken (Germany)

    2011-11-15

    It is more than thirty years since gas infrared heating for points was introduced on the railway network of what is now Deutsche Bahn. These installations have remained in service right through to the present, with virtually no modifications. More stringent requirements as regards availability, maintainability and remote monitoring have, however, led to the development of a new system of gas infrared heating for points - truly a new generation. (orig.)

  9. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    Science.gov (United States)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several fields of industry, a product comes from more than one production line, which calls for comparative life tests. This problem requires sampling from the different production lines, and a joint censoring scheme therefore arises. In this article we consider the lifetime Pareto distribution with a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.

  10. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely the extremum points of the metamodel and the minimum points of a density function; more accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
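    A stripped-down version of the sequential idea is sketched below: fit a radial basis function interpolant to the samples gathered so far, then spend the next expensive evaluation at the interpolant's predicted minimiser over a candidate grid. The density-function points and the exact update rule of the paper are not reproduced; the test function and parameters are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_f(x):                       # stand-in for a costly simulation
    return (x - 0.3) ** 2 + 0.1 * np.sin(15 * x)

def sequential_rbf_sampling(n_init=5, n_add=10, seed=7):
    """Sequentially refine a 1-D RBF metamodel: at each step, fit the RBF to
    all samples so far and evaluate the expensive function at the metamodel's
    predicted minimum over a candidate grid. Simplified sketch only."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_init)
    y = expensive_f(x)
    grid = np.linspace(0.0, 1.0, 1001)
    for _ in range(n_add):
        rbf = RBFInterpolator(x[:, None], y, kernel="thin_plate_spline")
        x_new = grid[np.argmin(rbf(grid[:, None]))]
        if np.any(np.isclose(x_new, x)):  # avoid duplicate sample points
            x_new = rng.random()
        x = np.append(x, x_new)
        y = np.append(y, expensive_f(x_new))
    return x[np.argmin(y)], y.min()

print(sequential_rbf_sampling())
```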

  11. Survival End Points for Huntington Disease Trials Prior to a Motor Diagnosis.

    Science.gov (United States)

    Long, Jeffrey D; Mills, James A; Leavitt, Blair R; Durr, Alexandra; Roos, Raymund A; Stout, Julie C; Reilmann, Ralf; Landwehrmeyer, Bernhard; Gregory, Sarah; Scahill, Rachael I; Langbehn, Douglas R; Tabrizi, Sarah J

    2017-11-01

    Predictive genetic testing in Huntington disease (HD) enables therapeutic trials in HTT gene expansion mutation carriers prior to a motor diagnosis. Progression-free survival (PFS) is the composite of a motor diagnosis or a progression event, whichever comes first. To determine if PFS provides feasible sample sizes for trials with mutation carriers who have not yet received a motor diagnosis. This study uses data from the 2-phase, longitudinal cohort studies called Track and from a longitudinal cohort study called the Cooperative Huntington Observational Research Trial (COHORT). Track had 167 prediagnosis mutation carriers and 156 noncarriers, whereas COHORT had 366 prediagnosis mutation carriers and noncarriers. Track studies were conducted at 4 sites in 4 countries (Canada, France, England, and the Netherlands) from which data were collected from January 17, 2008, through November 17, 2014. The COHORT was conducted at 38 sites in 3 countries (Australia, Canada, and the United States) from which data were collected from February 14, 2006, through December 31, 2009. Results from the Track data were externally validated with data from the COHORT. The required sample size was estimated for a 2-arm prediagnosis clinical trial. Data analysis took place from May 1, 2016, to June 10, 2017. The primary end point is PFS. Huntington disease progression events are defined for the Unified Huntington's Disease Rating Scale total motor score, total functional capacity, symbol digit modalities test, and Stroop word test. Of Track's 167 prediagnosis mutation carriers, 93 (55.6%) were women, and the mean (SD) age was 40.06 (8.92) years; of the 156 noncarriers, 87 (55.7%) were women, and the mean (SD) age was 45.58 (10.30) years. Of the 366 COHORT participants, 229 (62.5%) were women and the mean (SD) age was 42.21 (12.48) years. The PFS curves of the Track mutation carriers showed good external validity with the COHORT mutation carriers after adjusting for initial progression. For

  12. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    Science.gov (United States)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
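
    A toy sketch of the KRR step on a one-dimensional model potential is given below, with the structure-based choice of training points approximated here by taking an evenly spaced subset of the pre-defined grid; the kernel width, regularization and model potential are assumptions and do not reproduce the published CH3Cl setup.

    ```python
    import numpy as np

    def krr_train(x, y, sigma=0.5, lam=1e-6):
        """Kernel ridge regression with a Gaussian kernel (1-D geometries)."""
        k = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
        return np.linalg.solve(k + lam * np.eye(len(x)), y)

    def krr_predict(x_train, alpha, x_query, sigma=0.5):
        k = np.exp(-(x_query[:, None] - x_train[None, :]) ** 2 / (2 * sigma ** 2))
        return k @ alpha

    # Morse-like model potential standing in for expensive ab initio energies
    pes = lambda r: (1.0 - np.exp(-(r - 1.5))) ** 2

    grid = np.linspace(1.0, 4.0, 2001)        # pre-defined grid of nuclear configurations
    train = grid[::200]                        # structure-based: small evenly spaced subset
    alpha = krr_train(train, pes(train))
    pred = krr_predict(train, alpha, grid)     # ML energies for all remaining points
    print(np.max(np.abs(pred - pes(grid))))    # worst-case error over the full grid
    ```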

  13. Pointing and control system performance and improvement strategies for the SOFIA Airborne Telescope

    Science.gov (United States)

    Graf, Friederike; Reinacher, Andreas; Jakob, Holger; Lampater, Ulrich; Pfueller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Fasoulas, Stefanos

    2016-07-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has already successfully conducted over 300 flights. In its early science phase, SOFIA's pointing requirements and especially the image jitter requirements of less than 1 arcsec rms have driven the design of the control system. Since the first observation flights, the image jitter has been gradually reduced by various control mechanisms. During smooth flight conditions, the current pointing and control system allows us to achieve the standards set for early science on SOFIA. However, the increasing demands on the image size require an image jitter of less than 0.4 arcsec rms during light turbulence to reach SOFIA's scientific goals. The major portion of the remaining image motion is caused by deformation and excitation of the telescope structure in a wide range of frequencies due to aircraft motion and aerodynamic and aeroacoustic effects. Therefore the so-called Flexible Body Compensation system (FBC) is used, a set of fixed-gain filters to counteract the structural bending and deformation. Thorough testing of the current system under various flight conditions has revealed a variety of opportunities for further improvements. The currently applied filters have solely been developed based on a FEM analysis. By implementing the inflight measurements in a simulation and optimization, an improved fixed-gain compensation method was identified. This paper will discuss promising results from various jitter measurements recorded with sampling frequencies of up to 400 Hz using the fast imaging tracking camera.

  14. Biopolymers for sample collection, protection, and preservation.

    Science.gov (United States)

    Sorokulova, Iryna; Olsen, Eric; Vodyanoy, Vitaly

    2015-07-01

    One of the principal challenges in the collection of biological samples from air, water, and soil matrices is that the target agents are not stable enough to be transferred from the collection point to the laboratory of choice without experiencing significant degradation and loss of viability. At present, there is no method to transport biological samples over considerable distances safely, efficiently, and cost-effectively without the use of ice or refrigeration. Current techniques of protection and preservation of biological materials have serious drawbacks. Many known techniques of preservation cause structural damage, so that biological materials lose their structural integrity and viability. We review applications of a novel bacterial preservation process, which is nontoxic and water soluble and allows for the storage of samples without refrigeration. The method is capable of protecting the biological sample from the effects of the environment for extended periods of time and then allows for the easy release of the collected biological materials from the protective medium without structural or DNA damage. Strategies for sample collection, preservation, and shipment of bacterial and viral samples are described. The water-soluble polymer is used to immobilize the biological material by replacing the water molecules within the sample with molecules of the biopolymer. The cured polymer results in a solid protective film that is stable to many organic solvents but quickly removed by the application of a water-based solution. The process of immobilization does not require the use of any additives, accelerators, or plasticizers and does not involve high temperature or radiation to promote polymerization.

  15. Development of an integrated pointing device driver for the disabled.

    Science.gov (United States)

    Shih, Ching-Hsiang; Shih, Ching-Tien

    2010-01-01

    To help people with disabilities, such as those with spinal cord injury (SCI), effectively use commercial pointing devices to operate computers, this study proposes a novel method to integrate the functions of commercial pointing devices, utilising software technology to develop an integrated pointing device driver (IPDD) for a computer operating system. The proposed IPDD has the following benefits: (1) it does not require additional hardware cost or circuit preservations, (2) it supports all standard interfaces of commercial pointing devices, including PS/2, USB and wireless interfaces, and (3) it can integrate any number of devices. Pointing devices can be selected and combined through the IPDD according to the user's physical restrictions. The IPDD is a novel method of integrating commercial pointing devices. Through the IPDD, people with disabilities can choose a suitable combination of commercial pointing devices to achieve full cursor control and optimise operational performance. In contrast with previous studies, the software-based solution does not require additional hardware or circuit preservations, and it can support an unlimited number of devices. In summary, the IPDD has the benefits of flexibility, low cost and high device compatibility.

  16. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been proposed previously. To implement the method smoothly, a calculation code named CREPT-MCNP has been developed. The code estimates the position of the representative point, which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  17. Visibility Analysis in a Point Cloud Based on the Medial Axis Transform

    NARCIS (Netherlands)

    Peters, R.; Ledoux, H.; Biljecki, F.

    2015-01-01

    Visibility analysis is an important application of 3D GIS data. Current approaches require 3D city models that are often derived from detailed aerial point clouds. We present an approach to visibility analysis that does not require a city model but works directly on the point cloud. Our approach is

  18. Nonlinear consider covariance analysis using a sigma-point filter formulation

    Science.gov (United States)

    Lisano, Michael E.

    2006-01-01

    The research reported here extends the mathematical formulation of nonlinear, sigma-point estimators to enable consider covariance analysis for dynamical systems. This paper presents a novel sigma-point consider filter algorithm, for consider-parameterized nonlinear estimation, following the unscented Kalman filter (UKF) variation on the sigma-point filter formulation, which requires no partial derivatives of dynamics models or measurement models with respect to the parameter list. It is shown that, consistent with the attributes of sigma-point estimators, a consider-parameterized sigma-point estimator can be developed entirely without requiring the derivation of any partial-derivative matrices related to the dynamical system, the measurements, or the considered parameters, which appears to be an advantage over the formulation of a linear-theory sequential consider estimator. It is also demonstrated that a consider covariance analysis performed with this 'partial-derivative-free' formulation yields equivalent results to the linear-theory consider filter, for purely linear problems.
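
    For reference, the sketch below generates the standard unscented-transform sigma points and weights for a given state mean and covariance; the scaling parameters are common textbook defaults, and the code is a generic illustration rather than the consider-filter formulation of the paper.

    ```python
    import numpy as np

    def unscented_sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        """Return the 2n+1 unscented-transform sigma points and their weights."""
        n = len(mean)
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)    # matrix square root, used column-wise
        pts = [mean]
        for i in range(n):
            pts.append(mean + sqrt_cov[:, i])
            pts.append(mean - sqrt_cov[:, i])
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))    # mean weights
        wc = wm.copy()                                    # covariance weights
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
        return np.array(pts), wm, wc

    pts, wm, wc = unscented_sigma_points(np.array([1.0, 0.0]), 0.04 * np.eye(2))
    print(pts.shape)   # (5, 2): 2n+1 sigma points for a 2-state system
    ```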

  19. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimens for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical

  20. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Science.gov (United States)

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495

  1. Microfluidic-integrated biosensors: prospects for point-of-care diagnostics.

    Science.gov (United States)

    Kumar, Suveen; Kumar, Saurabh; Ali, Md Azahar; Anand, Pinki; Agrawal, Ved Varun; John, Renu; Maji, Sagar; Malhotra, Bansi D

    2013-11-01

    There is a growing demand to integrate biosensors with microfluidics to provide miniaturized platforms with many favorable properties, such as reduced sample volume, decreased processing time, low cost analysis and low reagent consumption. These microfluidics-integrated biosensors would also have numerous advantages such as laminar flow, minimal handling of hazardous materials, multiple sample detection in parallel, portability and versatility in design. Microfluidics involves the science and technology of manipulation of fluids at the micro- to nano-liter level. It is predicted that combining biosensors with microfluidic chips will yield enhanced analytical capability, and widen the possibilities for applications in clinical diagnostics. The recent developments in microfluidics have helped researchers working in industries and educational institutes to adopt some of these platforms for point-of-care (POC) diagnostics. This review focuses on the latest advancements in the fields of microfluidic biosensing technologies, and on the challenges and possible solutions for translation of this technology for POC diagnostic applications. We also discuss the fabrication techniques required for developing microfluidic-integrated biosensors, recently reported biomarkers, and the prospects of POC diagnostics in the medical industry. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  3. Fiber design and realization of point-by-point written fiber Bragg gratings in polymer optical fibers

    DEFF Research Database (Denmark)

    Stefani, Alessio; Stecher, Matthias; Town, Graham E.

    2012-01-01

    the gratings make the point-by-point grating writing technique very interesting and would appear to be able to fill this technological gap. On the other hand this technique is hardly applicable to microstructured fibers because the writing beam is scattered by the air-holes. We report on the design...... and because they allow the guiding parameters to be tuned by modifying the microstructure. Nowadays the only technique used to write gratings in such fibers is the phase mask technique with UV light illumination. Despite the good results that have been obtained, the limited flexibility in grating design...... and the very long times required for the writing of FBGs raise some questions about the possibility of exporting POF FBGs, and the sensors based on them, from the laboratory bench to the mass production market. The possibility of arbitrary design of fiber Bragg gratings and the very short time required to write

  4. Pointo - a Low Cost Solution to Point Cloud Processing

    Science.gov (United States)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, becomes more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or with very time consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as a large package containing a variety of methods and tools. This results in software that is expensive to acquire and also difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a long list of features. The aim of these complex packages is to provide a powerful tool for a specific group of specialists, but most of their features are not required by the majority of upcoming average users of point clouds. In addition to their complexity and high cost, these packages generally rely on expensive, modern hardware and are compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts and are not willing to pay the high acquisition costs of this expensive software and hardware. In this paper we introduce a solution for low cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple, user oriented design improves the user experience and allows us to optimize our methods to create efficient software. In this paper we introduce the Pointo family as a series of connected programs that provide easy to use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family to provide a

  5. Opportunities and Challenges of Linking Scientific Core Samples to the Geoscience Data Ecosystem

    Science.gov (United States)

    Noren, A. J.

    2016-12-01

    Core samples generated in scientific drilling and coring are critical for the advancement of the Earth Sciences. The scientific themes enabled by analysis of these samples are diverse, and include plate tectonics, ocean circulation, Earth-life system interactions (paleoclimate, paleobiology, paleoanthropology), Critical Zone processes, geothermal systems, deep biosphere, and many others, and substantial resources are invested in their collection and analysis. Linking core samples to researchers, datasets, publications, and funding agencies through registration of globally unique identifiers such as International Geo Sample Numbers (IGSNs) offers great potential for advancing several frontiers. These include maximizing sample discoverability, access, reuse, and return on investment; a means for credit to researchers; and documentation of project outputs to funding agencies. Thousands of kilometers of core samples and billions of derivative subsamples have been generated through thousands of investigators' projects, yet the vast majority of these samples are curated at only a small number of facilities. These numbers, combined with the substantial similarity in sample types, make core samples a compelling target for IGSN implementation. However, differences between core sample communities and other geoscience disciplines continue to create barriers to implementation. Core samples involve parent-child relationships spanning 8 or more generations, an exponential increase in sample numbers between levels in the hierarchy, concepts related to depth/position in the sample, requirements for associating data derived from core scanning and lithologic description with data derived from subsample analysis, and publications based on tens of thousands of co-registered scan data points and thousands of analyses of subsamples. These characteristics require specialized resources for accurate and consistent assignment of IGSNs, and a community of practice to establish norms

  6. 21 CFR 203.38 - Sample lot or control numbers; labeling of sample units.

    Science.gov (United States)

    2010-04-01

    ... numbers; labeling of sample units. (a) Lot or control number required on drug sample labeling and sample... identifying lot or control number that will permit the tracking of the distribution of each drug sample unit... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Sample lot or control numbers; labeling of sample...

  7. A new cloud point extraction procedure for determination of inorganic antimony species in beverages and biological samples by flame atomic absorption spectrometry.

    Science.gov (United States)

    Altunay, Nail; Gürkan, Ramazan

    2015-05-15

    A new cloud-point extraction (CPE) method for the determination of antimony species in biological and beverage samples has been established with flame atomic absorption spectrometry (FAAS). The method is based on the formation of competitive ion-pairing complexes of Sb(III) and Sb(V) with Victoria Pure Blue BO (VPB(+)) at pH 10. The antimony species were individually detected by FAAS. Under the optimized conditions, the calibration range for Sb(V) is 1-250 μg L(-1) with a detection limit of 0.25 μg L(-1) and a sensitivity enhancement factor of 76.3, while the calibration range for Sb(III) is 10-400 μg L(-1) with a detection limit of 5.15 μg L(-1) and a sensitivity enhancement factor of 48.3. The precision, as relative standard deviation, is in the range of 0.24-2.35%. The method was successfully applied to the speciation analysis of antimony in the samples, and the validation was verified by analysis of certified reference materials (CRMs). Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
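
    The method-of-moments step referred to above can be illustrated with a short sketch that computes the classical (Matheron) empirical semivariogram from irregularly spaced point data; the lag binning and the synthetic, skewed data are illustrative assumptions.

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bin_edges):
        """Matheron (method-of-moments) estimator: half the mean squared difference
        over all point pairs whose separation distance falls in each lag bin."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)            # count each pair once
        d, sq = d[iu], sq[iu]
        centers, gamma = [], []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            mask = (d >= lo) & (d < hi)
            if mask.any():
                centers.append(0.5 * (lo + hi))
                gamma.append(sq[mask].mean())
        return np.array(centers), np.array(gamma)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 50, size=(150, 2))            # 150 collector locations on a 50 m plot
    values = rng.gamma(shape=2.0, scale=1.0, size=150)    # skewed, non-Gaussian throughfall-like data
    lags, gam = empirical_semivariogram(coords, values, np.arange(0.0, 30.0, 3.0))
    print(lags, gam)
    ```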

  9. A critical analysis of the tender points in fibromyalgia.

    Science.gov (United States)

    Harden, R Norman; Revivo, Gadi; Song, Sharon; Nampiaparampil, Devi; Golden, Gary; Kirincic, Marie; Houle, Timothy T

    2007-03-01

    To pilot methodologies designed to critically assess the American College of Rheumatology's (ACR) diagnostic criteria for fibromyalgia. Prospective, psychophysical testing. An urban teaching hospital. Twenty-five patients with fibromyalgia and 31 healthy controls (convenience sample). Pressure pain threshold was determined at the 18 ACR tender points and five sham points using an algometer (dolorimeter). The patients "algometric total scores" (sums of the patients' average pain thresholds at the 18 tender points) were derived, as well as pain thresholds across sham points. The "algometric total score" could differentiate patients with fibromyalgia from normals with an accuracy of 85.7% (P pain across sham points than across ACR tender points, sham points also could be used for diagnosis (85.7%; Ps tested vs other painful conditions. The points specified by the ACR were only modestly superior to sham points in making the diagnosis. Most importantly, this pilot suggests single points, smaller groups of points, or sham points may be as effective in diagnosing fibromyalgia as the use of all 18 points, and suggests methodologies to definitively test that hypothesis.

  10. Point-by-point written fiber-Bragg gratings and their application in complex grating designs.

    Science.gov (United States)

    Marshall, Graham D; Williams, Robert J; Jovanovic, Nemanja; Steel, M J; Withford, Michael J

    2010-09-13

    The point-by-point technique of fabricating fibre-Bragg gratings using an ultrafast laser enables complete control of the position of each index modification that comprises the grating. By tailoring the local phase, amplitude and spacing of the grating's refractive index modulations it is possible to create gratings with complex transmission and reflection spectra. We report a series of grating structures that were realized by exploiting these flexibilities. Such structures include gratings with controlled bandwidth, and amplitude- and phase-modulated sampled (or superstructured) gratings. A model based on coupled-mode theory provides important insights into the manufacture of such gratings. Our approach offers a quick and easy method of producing complex, non-uniform grating structures in both fibres and other mono-mode waveguiding structures.

  11. Application of cloud point preconcentration and flame atomic absorption spectrometry for the determination of cadmium and zinc ions in urine, blood serum and water samples

    Directory of Open Access Journals (Sweden)

    Ardeshir Shokrollahi

    2013-01-01

    A simple, sensitive and selective cloud point extraction procedure is described for the preconcentration and atomic absorption spectrometric determination of Zn2+ and Cd2+ ions in water and biological samples, after complexation with 3,3',3'',3'''-tetraindolyl(terephthaloyl)dimethane (TTDM) in basic medium, using Triton X-114 as a nonionic surfactant. Detection limits of 3.0 and 2.0 µg L-1 and quantification limits of 10.0 and 7.0 µg L-1 were obtained for Zn2+ and Cd2+ ions, respectively. The relative standard deviations were 2.9 and 3.3, and the enrichment factors 23.9 and 25.6, for Zn2+ and Cd2+ ions, respectively. The method enabled determination of low levels of Zn2+ and Cd2+ ions in urine, blood serum and water samples.

  12. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING and SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    International Nuclear Information System (INIS)

    Griffin, P.W.

    2009-01-01

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  13. Integrated approach for power quality requirements at the point of connection

    NARCIS (Netherlands)

    Cobben, J.F.G.; Bhattacharyya, S.; Myrzik, J.M.A.; Kling, W.L.

    2007-01-01

    Given the nature of electricity, every party connected to the power system influences voltage quality, which means that every party also should meet requirements. In this field, a sound coordination among technical standards (system-related, installation-related and product-related) is of paramount

  14. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    Science.gov (United States)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of the average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, the point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the estimated rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and cheaper computational cost when the hypothesis set is contaminated with more outliers.
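
    The error metric used above can be illustrated with a small sketch: for each point of the registered source cloud, fit a local plane (via SVD) to its nearest neighbours in the target cloud and average the absolute point-to-plane distances. The neighbourhood size and the brute-force neighbour search are simplifying assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def avg_point_to_surface_residual(source, target, k=8):
        """Average distance from each source point to the local plane fitted
        (via SVD) through its k nearest neighbours in the target cloud."""
        residuals = []
        for p in source:
            d = np.linalg.norm(target - p, axis=1)
            nn = target[np.argsort(d)[:k]]                # brute-force k nearest neighbours
            centroid = nn.mean(axis=0)
            _, _, vt = np.linalg.svd(nn - centroid)
            normal = vt[-1]                               # direction of least variance = plane normal
            residuals.append(abs(np.dot(p - centroid, normal)))
        return float(np.mean(residuals))

    rng = np.random.default_rng(1)
    target = rng.uniform(0.0, 1.0, size=(500, 3))
    source = target[:200] + rng.normal(scale=0.005, size=(200, 3))   # noisy, already aligned copy
    print(avg_point_to_surface_residual(source, target))
    ```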

  15. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Background: Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results: Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was

  16. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Science.gov (United States)

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary

  17. Spatial determination of magnetic avalanche ignition points

    International Nuclear Information System (INIS)

    Jaafar, Reem; McHugh, S.; Suzuki, Yoko; Sarachik, M.P.; Myasoedov, Y.; Zeldov, E.; Shtrikman, H.; Bagai, R.; Christou, G.

    2008-01-01

    Using time-resolved measurements of local magnetization in the molecular magnet Mn12-ac, we report studies of magnetic avalanches (fast magnetization reversals) with non-planar propagating fronts, where the curved nature of the magnetic fronts is reflected in the time-of-arrival at micro-Hall sensors placed at the surface of the sample. Assuming that the avalanche interface is a spherical bubble that grows with a radius proportional to time, we are able to locate the approximate ignition point of each avalanche in a two-dimensional cross-section of the crystal. We find that although in most samples the avalanches ignite at the long ends, as found in earlier studies, there are crystals in which ignition points are distributed throughout an entire weak region near the center, with a few avalanches still originating at the ends.

  18. Spatial determination of magnetic avalanche ignition points

    Energy Technology Data Exchange (ETDEWEB)

    Jaafar, Reem; McHugh, S.; Suzuki, Yoko [Physics Department, City College of the City University of New York, New York, NY 10031 (United States); Sarachik, M.P. [Physics Department, City College of the City University of New York, New York, NY 10031 (United States)], E-mail: sarachik@sci.ccny.cuny.edu; Myasoedov, Y.; Zeldov, E.; Shtrikman, H. [Department Condensed Matter Physics, Weizmann Institute of Science, Rehovot 76100 (Israel); Bagai, R.; Christou, G. [Department of Chemistry, University of Florida, Gainesville, FL 32611 (United States)

    2008-03-15

    Using time-resolved measurements of local magnetization in the molecular magnet Mn{sub 12}-ac, we report studies of magnetic avalanches (fast magnetization reversals) with non-planar propagating fronts, where the curved nature of the magnetic fronts is reflected in the time-of-arrival at micro-Hall sensors placed at the surface of the sample. Assuming that the avalanche interface is a spherical bubble that grows with a radius proportional to time, we are able to locate the approximate ignition point of each avalanche in a two-dimensional cross-section of the crystal. We find that although in most samples the avalanches ignite at the long ends, as found in earlier studies, there are crystals in which ignition points are distributed throughout an entire weak region near the center, with a few avalanches still originating at the ends.

  19. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  20. A simple vibrating sample magnetometer for macroscopic samples

    Science.gov (United States)

    Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.

    2018-03-01

    We here present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriated for macroscopic samples. The setup can be mounted with different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include here examples of curves obtained with our setup and comparison curves measured with a standard commercial VSM that confirms the reliability of our device.

  1. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. Because the sun's illumination changes with the angle of incidence of the solar radiation and with the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; no mathematical model is required, and therefore the control method is easy to implement in a real control system. In this paper, we present a simple, robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
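
    The perturbation and observation (hill climbing) baseline mentioned above can be summarized in a few lines; the panel power curve and step size are illustrative, and the fuzzy controller described in the paper would effectively replace the fixed perturbation step with rule-based step sizing.

    ```python
    def perturb_and_observe(measure_power, v_init=17.0, step=0.2, iterations=50):
        """Classic P&O MPPT: keep perturbing the operating voltage in the
        direction that last increased the measured output power."""
        v, direction = v_init, +1
        p_prev = measure_power(v)
        for _ in range(iterations):
            v += direction * step
            p = measure_power(v)
            if p < p_prev:                 # power dropped: reverse the perturbation direction
                direction = -direction
            p_prev = p
        return v

    # Hypothetical panel power curve with a maximum power point near 17.5 V
    panel_power = lambda v: max(0.0, 120.0 - 0.8 * (v - 17.5) ** 2)
    print(perturb_and_observe(panel_power))
    ```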

  2. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the likelihood function. Each sampling point is then adaptively selected as the candidate position most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on the nominal surface and measurements on a manufactured nano-structured surface have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large area structured surfaces.
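
    A minimal sketch of this idea (not the authors' code) is shown below: fit a Gaussian-process regression to the measured heights, then choose the next probe position as the candidate whose posterior probability of lying outside a tolerance band is largest. The kernel, tolerance band and surface model are assumed values.

    ```python
    import math
    import numpy as np

    def gp_posterior(x_train, y_train, x_query, ell=0.5, noise=1e-6):
        """Gaussian-process regression posterior mean/variance, squared-exponential kernel."""
        k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))
        K = k(x_train, x_train) + noise * np.eye(len(x_train))
        Ks = k(x_query, x_train)
        mean = Ks @ np.linalg.solve(K, y_train)
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        return mean, np.maximum(var, 1e-12)

    norm_cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    surface = lambda x: 0.1 * np.sin(4.0 * x)     # stand-in for the true surface deviation
    cand = np.linspace(0.0, 2.0, 201)             # candidate probe positions
    measured = [0, 100, 200]                      # indices of the initial measurement points
    tol = 0.05                                    # tolerance band on the deviation (assumed)

    for _ in range(10):
        x = cand[measured]
        mean, var = gp_posterior(x, surface(x), cand)
        std = np.sqrt(var)
        # posterior probability that the deviation lies outside [-tol, +tol] at each candidate
        p_out = np.array([1.0 - (norm_cdf((tol - m) / s) - norm_cdf((-tol - m) / s))
                          for m, s in zip(mean, std)])
        p_out[measured] = -1.0                    # never re-select an already measured position
        measured.append(int(np.argmax(p_out)))

    print(sorted(cand[measured]))                 # adaptively chosen sampling positions
    ```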

  3. Direct sampling methods for inverse elastic scattering problems

    Science.gov (United States)

    Ji, Xia; Liu, Xiaodong; Xi, Yingxia

    2018-03-01

    We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using different components of the far field patterns. Only inner products are involved in the computation, thus the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functionals decay like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain those data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.
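
    To convey the flavour of a direct sampling indicator in the simpler scalar (acoustic-type) setting, rather than the elastic case treated in the paper, the sketch below evaluates the usual inner-product indicator of a sampling point against far-field data given at a set of observation directions; the wavenumber, directions and synthetic point-scatterer data are assumptions.

    ```python
    import numpy as np

    k = 10.0                                              # wavenumber (assumed)
    angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # observation directions on the circle

    # Synthetic far-field pattern of a small scatterer at z0 (point-source model)
    z0 = np.array([0.3, -0.2])
    u_inf = np.exp(-1j * k * dirs @ z0)

    def indicator(z):
        """Direct sampling indicator: |inner product of far field with a plane-wave test function|."""
        return np.abs(np.sum(u_inf * np.exp(1j * k * dirs @ z))) / len(dirs)

    grid = [np.array([x, y]) for x in np.linspace(-1, 1, 81) for y in np.linspace(-1, 1, 81)]
    values = np.array([indicator(z) for z in grid])
    print(grid[int(np.argmax(values))])                   # peaks near the scatterer location z0
    ```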

  4. Effective sample labeling

    International Nuclear Information System (INIS)

    Rieger, J.T.; Bryce, R.W.

    1990-01-01

    Ground-water samples collected for hazardous-waste and radiological monitoring have come under strict regulatory and quality assurance requirements as a result of laws such as the Resource Conservation and Recovery Act. To comply with these laws, the labeling system used to identify environmental samples had to be upgraded to ensure proper handling and to protect collection personnel from exposure to sample contaminants and sample preservatives. The sample label now used at the Pacific Northwest Laboratory is a complete sample document. In the event that other paperwork on a labeled sample were lost, the necessary information could be found on the label

  5. Hanford analytical services quality assurance requirements documents. Volume 1: Administrative Requirements

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1997-01-01

    Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services, Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order Tri-Party Agreement and regulatory permit applications and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This ensures a uniform quality umbrella to analytical site activities predicated on the concepts contained in the HASQARD. Using HASQARD will ensure data of known quality and technical defensibility of the methods used to obtain that data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements

  6. A comparative study of simple methods to quantify cerebral blood flow with acetazolamide challenge by using iodine-123-IMP SPECT with one-point arterial sampling

    Energy Technology Data Exchange (ETDEWEB)

    Ohkubo, Masaki [Niigata Univ. (Japan). School of Health Sciences; Odano, Ikuo

    2000-04-01

    The aim of this study was to compare the accuracy of simplified methods for quantifying rCBF with acetazolamide challenge by using {sup 123}I-N-isopropyl-p-iodoamphetamine (IMP) and SPECT with one-point arterial sampling. After acetazolamide administration we quantified rCBF in 12 subjects by the following three methods: (a) the modified microsphere method, (b) the IMP-autoradiographic (ARG) method based on a two-compartment one-parameter model, and (c) the simplified method based on a two-compartment two-parameter model (functional IMP method). The accuracy of these methods was validated by comparing rCBF values with those obtained by the standard method: the super-early microsphere method with continuous withdrawal of arterial blood. On analyzing rCBF in each flow range (0-0.25, 0.25-0.5, 0.5-0.75 and more than 0.75 ml/g/min), rCBF values obtained by both methods (a) and (c) showed significant correlations (p<0.01) with those obtained by the standard method in every range, but rCBF values obtained by method (b) did not correlate significantly in the high flow range (0.5-0.75 and more than 0.75 ml/g/min). Method (c) was found to be the most accurate, even though it needs two serial SPECT scans. When only one SPECT scan is required, method (a) was considered to be superior to method (b) because of its accuracy, especially in high flow regions loaded with acetazolamide. (author)

  7. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. An adaptive sampling scheme is then derived to obtain a better solution with the information achieved. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples at the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to make the number of samples of emitted particles proportional to that importance function. The numerical results show that the adaptive scheme with the emission point as a station can, to some degree, overcome the difficulty of underestimating the result, and that the adaptive importance sampling method gives satisfactory results as well. (authors)
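
    The benefit of biased sampling for a deep-penetration estimate can be seen in a deliberately simplified model (straight-ahead transport through a slab with exponential attenuation): the analogue estimator almost never scores, while an exponentially stretched sampling density scores on a large fraction of histories and is corrected by weights. This toy example is not the authors' adaptive scheme; the thickness, cross-section and biased rate are assumed values.

    ```python
    import math
    import random

    mu, d, n = 1.0, 20.0, 100_000          # total cross-section, slab thickness, number of histories
    exact = math.exp(-mu * d)              # analytic penetration probability for this toy model

    # Analogue Monte Carlo: essentially never scores for a 20-mean-free-path slab
    analogue = sum(random.expovariate(mu) > d for _ in range(n)) / n

    # Importance sampling: draw path lengths from a stretched exponential and weight the scores
    mu_biased = 0.05                       # biased rate chosen to favour deep penetration
    est = 0.0
    for _ in range(n):
        x = random.expovariate(mu_biased)
        if x > d:
            est += (mu / mu_biased) * math.exp(-(mu - mu_biased) * x)   # likelihood ratio weight
    est /= n

    print(exact, analogue, est)
    ```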

  8. Is Mars Sample Return Required Prior to Sending Humans to Mars?

    Science.gov (United States)

    Carr, Michael; Abell, Paul; Allwood, Abigail; Baker, John; Barnes, Jeff; Bass, Deborah; Beaty, David; Boston, Penny; Brinkerhoff, Will; Budney, Charles; hide

    2012-01-01

    Prior to potentially sending humans to the surface of Mars, it is fundamentally important to return samples from Mars. Analysis in Earth's extensive scientific laboratories would significantly reduce the risk of human Mars exploration and would also support the science and engineering decisions relating to the Mars human flight architecture. The importance of measurements of any returned Mars samples ranges from critical to desirable, and in all cases these samples would enhance our understanding of the Martian environment before potentially sending humans to that alien locale. For example, Mars sample return (MSR) could yield information that would enable human exploration related to 1) enabling forward and back planetary protection, 2) characterizing properties of Martian materials relevant for in situ resource utilization (ISRU), 3) assessing any toxicity of Martian materials with respect to human health and performance, and 4) identifying information related to engineering surface hazards such as the corrosive effect of the Martian environment. In addition, MSR would be an engineering 'proof of concept' for a potential round trip human mission to the planet, and a potential model for international Mars exploration.

  9. Hierarchical sampling of multiple strata: an innovative technique in exposure characterization

    International Nuclear Information System (INIS)

    Ericson, J.E.; Gonzalez, Elisabeth J.

    2003-01-01

    Sampling of multiple strata, or hierarchical sampling of various exposure sources and activity areas, has been tested and is suggested as a method to sample (or to locate) areas with a high prevalence of elevated blood lead in children. Hierarchical sampling was devised to supplement traditional soil lead sampling of a single stratum, either residential or fixed point source, using a multistep strategy. Blood lead (n=1141) and soil lead (n=378) data collected under the USEPA/UCI Tijuana Lead Project (1996-1999) were analyzed to evaluate the usefulness of sampling soil lead from background sites, schools and parks, point sources, and residences. Results revealed that industrial emissions have been a contributing factor to soil lead contamination in Tijuana. At the regional level, point source soil lead was associated with mean blood lead levels and concurrent high background, and point source soil lead levels were predictive of a high percentage of subjects with blood lead equal to or greater than 10 μg/dL (pe 10). Significant relationships were observed between mean blood lead level and fixed point source soil lead (r=0.93; P 2 =0.72 using a quadratic model) and between residential soil lead and fixed point source soil lead (r=0.90; P 2 =0.86 using a cubic model). This study suggests that point sources alone are not sufficient for predicting the relative risk of exposure to lead in the urban environment. These findings will be useful in defining regions for targeted or universal soil lead sampling by site type. Point sources have been observed to be predictive of mean blood lead at the regional level; however, this relationship alone was not sufficient to predict pe 10. It is concluded that when apparently undisturbed sites reveal high soil lead levels in addition to local point sources, dispersion of lead is widespread and will be associated with a high prevalence of elevated blood lead in children. Multiple strata sampling was shown to be useful in

  10. Determination of cadmium(II), cobalt(II), nickel(II), lead(II), zinc(II), and copper(II) in water samples using dual-cloud point extraction and inductively coupled plasma emission spectrometry

    International Nuclear Information System (INIS)

    Zhao, Lingling; Zhong, Shuxian; Fang, Keming; Qian, Zhaosheng; Chen, Jianrong

    2012-01-01

    Highlights: ► A dual-cloud point extraction (d-CPE) procedure was developed for the first time for simultaneous pre-concentration and separation of trace metal ions in combination with ICP-OES. ► The developed d-CPE can significantly eliminate the Triton X-114 surfactant and was successfully extended to the determination of water samples with good performance. ► The designed method is simple, highly efficient, low cost, and in accordance with the green chemistry concept. - Abstract: A dual-cloud point extraction (d-CPE) procedure has been developed for simultaneous pre-concentration and separation of heavy metal ions (Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺) in water samples by inductively coupled plasma optical emission spectrometry (ICP-OES). The procedure is based on forming complexes of the metal ions with 8-hydroxyquinoline (8-HQ), which partition into the as-formed Triton X-114 surfactant-rich phase. Instead of direct injection or analysis, the surfactant-rich phase containing the complexes was treated with nitric acid, the detected ions were back-extracted into the aqueous phase in a second cloud point extraction stage, and they were finally determined by ICP-OES. Under the optimum conditions (pH = 7.0, Triton X-114 = 0.05% (w/v), 8-HQ = 2.0 × 10⁻⁴ mol L⁻¹, HNO₃ = 0.8 mol L⁻¹), the detection limits for Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ ions were 0.01, 0.04, 0.01, 0.34, 0.05, and 0.04 μg L⁻¹, respectively. Relative standard deviation (RSD) values for 10 replicates at 100 μg L⁻¹ were lower than 6.0%. The proposed method could be successfully applied to the determination of Cd²⁺, Co²⁺, Ni²⁺, Pb²⁺, Zn²⁺, and Cu²⁺ ions in water samples.

  11. BIOLOGICAL AND ENVIRONMENTAL RADIATION EXPERIENCE AT INDIAN POINT STATION

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, H. F.

    1963-09-15

    The environs monitoring program at Indian Point Station is presented. Thirty sampling stations within a circle of approximately 10 miles of the station are used for the collection of samples of air, water, vegetation, and soil that are then analyzed for gross beta-gamma activity. Data are tabulated. (P.C.H.)

  12. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distribution of the points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample the irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
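
    The one-point update at the heart of such sequential approximation schemes can be sketched with a Kaczmarz-style correction on the coefficients of a fixed feature basis. This is a generic illustration rather than the paper's SA/NNR algorithm; the Gaussian feature basis, the update rule and the irregular beta-distributed point set are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(x):
    return np.sin(2 * np.pi * x) + 0.3 * x             # "unknown" function (assumed)

centers = np.linspace(0, 1, 12)
def features(x):
    return np.exp(-((x - centers) / 0.12) ** 2)         # assumed Gaussian feature basis

# Highly irregular point distribution, the setting the paper targets.
points = np.sort(rng.beta(0.4, 0.4, size=400))
values = target(points)

coeffs = np.zeros(centers.size)
for _ in range(30):                                     # several passes over the data
    for x, y in zip(points, values):
        phi = features(x)
        residual = y - coeffs @ phi
        # One-point (Kaczmarz-style) correction: no matrix assembly or inversion.
        coeffs += residual * phi / (phi @ phi)

grid = np.linspace(0, 1, 200)
approx = np.array([coeffs @ features(g) for g in grid])
print(f"max error on a uniform grid: {np.abs(approx - target(grid)).max():.3e}")
```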

  13. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small rating Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and larger power rated systems. Other advantages include optimal sizing and system monitor and control
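
    The hill-climbing tracking mentioned in the abstract is commonly realized as a perturb-and-observe loop. The sketch below substitutes a made-up power-versus-duty-cycle curve for real panel measurements, so it only illustrates the control logic, not the converter reported in the paper.

```python
def panel_power(duty):
    """Toy PV model: converter output power versus duty cycle, peaking at 0.62.
    Stand-in for real voltage/current measurements (assumed, illustrative only)."""
    return max(0.0, 120.0 * (1.0 - ((duty - 0.62) / 0.35) ** 2))

def perturb_and_observe(duty=0.3, step=0.01, iterations=200):
    """Generic hill-climbing MPPT: keep stepping the operating point in the
    direction that increased the measured power, reverse when power drops."""
    last_power = panel_power(duty)
    direction = +1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        power = panel_power(duty)
        if power < last_power:
            direction = -direction      # stepped past the maximum: reverse
        last_power = power
    return duty

print(f"converged duty cycle: {perturb_and_observe():.2f}")  # settles near the peak
```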

  14. Boiling point measurement of a small amount of brake fluid by thermocouple and its application.

    Science.gov (United States)

    Mogami, Kazunari

    2002-09-01

    This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy that was within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles dropped to approximately 140 °C from about 230 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.

  15. Mars Sample Handling Functionality

    Science.gov (United States)

    Meyer, M. A.; Mattingly, R. L.

    2018-04-01

    The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH). This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).

  16. Hanford site transuranic waste sampling plan

    International Nuclear Information System (INIS)

    GREAGER, T.M.

    1999-01-01

    This sampling plan (SP) describes the selection of containers for sampling of homogeneous solids and soil/gravel and for visual examination of transuranic and mixed transuranic (collectively referred to as TRU) waste generated at the U.S. Department of Energy (DOE) Hanford Site. The activities described in this SP will be conducted under the Hanford Site TRU Waste Certification Program. This SP is designed to meet the requirements of the Transuranic Waste Characterization Quality Assurance Program Plan (CAO-94-1010) (DOE 1996a) (QAPP), site-specific implementation of which is described in the Hanford Site Transuranic Waste Characterization Program Quality Assurance Project Plan (HNF-2599) (Hanford 1998b) (QAPP). The QAPP defines the quality assurance (QA) requirements and protocols for TRU waste characterization activities at the Hanford Site. In addition, the QAPP identifies responsible organizations, describes required program activities, outlines sampling and analysis strategies, and identifies procedures for characterization activities. The QAPP identifies specific requirements for TRU waste sampling plans. Table 1-1 presents these requirements and indicates sections in this SP where these requirements are addressed

  17. A voting-based statistical cylinder detection framework applied to fallen tree mapping in terrestrial laser scanning point clouds

    Science.gov (United States)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2017-07-01

    This paper introduces a statistical framework for detecting cylindrical shapes in dense point clouds. We target the application of mapping fallen trees in datasets obtained through terrestrial laser scanning. This is a challenging task due to the presence of ground vegetation, standing trees, DTM artifacts, as well as the fragmentation of dead trees into non-collinear segments. Our method shares the concept of voting in parameter space with the generalized Hough transform, however two of its significant drawbacks are improved upon. First, the need to generate samples on the shape's surface is eliminated. Instead, pairs of nearby input points lying on the surface cast a vote for the cylinder's parameters based on the intrinsic geometric properties of cylindrical shapes. Second, no discretization of the parameter space is required: the voting is carried out in continuous space by means of constructing a kernel density estimator and obtaining its local maxima, using automatic, data-driven kernel bandwidth selection. Furthermore, we show how the detected cylindrical primitives can be efficiently merged to obtain object-level (entire tree) semantic information using graph-cut segmentation and a tailored dynamic algorithm for eliminating cylinder redundancy. Experiments were performed on 3 plots from the Bavarian Forest National Park, with ground truth obtained through visual inspection of the point clouds. It was found that relative to sample consensus (SAC) cylinder fitting, the proposed voting framework can improve the detection completeness by up to 10 percentage points while maintaining the correctness rate.
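
    The continuous-space voting can be mimicked in a few lines: pooled votes replace the discretized Hough accumulator, a kernel density estimate with automatic bandwidth smooths them, and its local maxima give the detected parameters. The toy below votes for a single one-dimensional parameter (for instance a cylinder radius) using synthetic votes; it shows the principle only, not the full detection framework.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Toy votes for one shape parameter (say, radius in metres): two true cylinders
# with radii 0.25 m and 0.60 m plus clutter votes from ground vegetation.
votes = np.concatenate([
    rng.normal(0.25, 0.02, 500),
    rng.normal(0.60, 0.03, 300),
    rng.uniform(0.05, 1.0, 400),
])

# Continuous accumulator: kernel density estimate with data-driven bandwidth.
kde = gaussian_kde(votes)                 # Scott's rule by default
grid = np.linspace(0.05, 1.0, 1000)
density = kde(grid)

# Local maxima of the density play the role of Hough peaks.
is_peak = (density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])
peaks = grid[1:-1][is_peak & (density[1:-1] > 0.5 * density.max())]
print("detected parameter values:", np.round(peaks, 3))
```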

  18. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. The treatment neglects fluctuations in energy deposition, an effect which was later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
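
    The step from monoenergetic kernels to a nuclide-specific kernel is a spectrum-weighted integration, K_beta(r) = ∫ N(E) K(r, E) dE. The quadrature below uses a placeholder monoenergetic kernel and a placeholder allowed-shape spectrum with the 1.71 MeV endpoint of ³²P; both functional forms are assumptions, not the analytic fits of the paper.

```python
import numpy as np

def mono_kernel(r, energy):
    """Placeholder monoenergetic dose point kernel K(r, E): exponential falloff
    with an energy-dependent range. Illustrative only, not the paper's fit."""
    rng_cm = 0.5 * energy                   # crude range (cm) per MeV, assumed
    return np.exp(-r / rng_cm) / (4 * np.pi * r**2 * rng_cm)

def beta_spectrum(energy, e_max=1.71):
    """Placeholder allowed-shape beta spectrum for a 1.71 MeV endpoint;
    a simple polynomial stand-in, not the Bethe-Bacher form."""
    return np.where((energy > 0) & (energy < e_max),
                    energy * (e_max - energy) ** 2, 0.0)

energies = np.linspace(1e-3, 1.71, 500)     # MeV
weights = beta_spectrum(energies)
weights /= np.trapz(weights, energies)      # normalize the spectrum

radii = np.linspace(0.05, 0.8, 50)          # cm
# Spectrum-weighted kernel: K_beta(r) = integral of N(E) K(r, E) dE
k_beta = np.array([np.trapz(weights * mono_kernel(r, energies), energies)
                   for r in radii])
print(k_beta[:5])
```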

  19. A step towards standardization: A method for end-point titer determination by fluorescence index of an automated microscope. End-point titer determination by fluorescence index.

    Science.gov (United States)

    Carbone, Teresa; Gilio, Michele; Padula, Maria Carmela; Tramontano, Giuseppina; D'Angelo, Salvatore; Pafundi, Vito

    2018-05-01

    Indirect Immunofluorescence (IIF) is widely considered the Gold Standard for Antinuclear Antibody (ANA) screening. However, the high inter-reader variability remains the major disadvantage associated with ANA testing and the main reason for the increasing demand for the computer-aided immunofluorescence microscope. Previous studies proposed the quantification of the fluorescence intensity as an alternative to the classical end-point titer evaluation. However, the different distribution of bright/dark light linked to the nature of the self-antigen and its location in the cells results in different mean fluorescence intensities. The aim of the present study was to correlate the Fluorescence Index (F.I.) with end-point titers for each well-defined ANA pattern. Routine serum samples were screened for ANA testing on HEp-2000 cells using the Immuno Concepts Image Navigator System, and positive samples were serially diluted to assign the end-point titer. A comparison between F.I. and end-point titers related to 10 different staining patterns was made. According to our analysis, good technical performance of F.I. (97% sensitivity and 94% specificity) was found. A significant correlation between quantitative reading of F.I. and end-point titer groups was observed using Spearman's test and regression analysis. A conversion scale from F.I. to end-point titers for each recognized ANA pattern was obtained. The Image Navigator offers the opportunity to improve worldwide harmonization of ANA test results. In particular, digital F.I. allows quantifying ANA titers by using just one sample dilution. It could represent a valuable support for the routine laboratory and an effective tool to reduce inter- and intra-laboratory variability. Copyright © 2018. Published by Elsevier B.V.
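
    The conversion scale mentioned above amounts to a per-pattern regression between the fluorescence index measured at the screening dilution and the logarithm of the end-point titer, which can then be inverted to predict a titer from a single dilution. The sketch below uses invented calibration pairs and a simple linear fit on log2 titers; the Image Navigator's actual calibration is not given in this record.

```python
import numpy as np

# Invented calibration pairs for one ANA pattern (e.g. homogeneous):
# fluorescence index at the screening dilution vs. measured end-point titer.
fluorescence_index = np.array([55, 80, 120, 190, 300, 480])
end_point_titer = np.array([80, 160, 320, 640, 1280, 2560])

# Fit log2(titer) as a linear function of the fluorescence index.
slope, intercept = np.polyfit(fluorescence_index, np.log2(end_point_titer), 1)

def titer_from_fi(fi):
    """Predict an end-point titer from one screening dilution's FI (assumed model)."""
    raw = 2.0 ** (slope * fi + intercept)
    # Snap to the usual doubling-dilution series (1:80, 1:160, ...).
    series = 80 * 2 ** np.arange(0, 8)
    return int(series[np.argmin(np.abs(series - raw))])

print(titer_from_fi(250))
```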

  20. Operating point considerations for the Reference Theta-Pinch Reactor (RTPR)

    International Nuclear Information System (INIS)

    Krakowski, R.A.; Miller, R.L.; Hagenson, R.L.

    1976-01-01

    Aspects of the continuing engineering design-point reassessment and optimization of the Reference Theta-Pinch Reactor (RTPR) are discussed. An updated interim design point which achieves a favorable energy balance and involves relaxed technological requirements, which nonetheless satisfy more rigorous physics and engineering constraints, is presented

  1. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium(II), lead(II), palladium(II) and silver(I) in environmental samples

    International Nuclear Information System (INIS)

    Ghaedi, Mehrorang; Shokrollahi, Ardeshir; Niknam, Khodabakhsh; Niknam, Ebrahim; Najibi, Asma; Soylak, Mustafa

    2009-01-01

    The phase-separation phenomenon of non-ionic surfactants occurring in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the metals under study with bis((1H-benzo[d]imidazol-2-yl)ethyl) sulfane (BIES), which were quantitatively extracted into the phase rich in octylphenoxypolyethoxyethanol (Triton X-114) after centrifugation. Methanol acidified with 1 mol L⁻¹ HNO₃ was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, the pH and the amount of surfactant (Triton X-114) were optimized. At optimum conditions, detection limits (3 sdb/m) of 1.4, 2.8, 1.6 and 1.4 ng mL⁻¹ for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺, along with preconcentration factors of 30 and enrichment factors of 48, 39, 32 and 42 for Cd²⁺, Pb²⁺, Pd²⁺ and Ag⁺, respectively, were obtained. The proposed cloud point extraction has been successfully applied to the determination of metal ions in real samples with complicated matrices such as radiology waste, vegetable, blood and urine samples.

  2. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.

  3. Elemental composition at different points of the rainwater harvesting system

    International Nuclear Information System (INIS)

    Morrow, A.C.; Dunstan, R.H.; Coombes, P.J.

    2010-01-01

    Entry of contaminants, such as metals and non-metals, into rainwater harvesting systems can occur directly from rainfall with contributions from collection surfaces, accumulated debris and leachate from storage systems, pipes and taps. Ten rainwater harvesting systems on the east coast of Australia were selected for sampling of roof runoff, storage systems and tap outlets to investigate the variations in rainwater composition as it moved throughout the system, and to identify potential points of contribution to elemental loads. A total of 26 elements were screened at each site. Iron was the only element which was present in significantly higher concentrations in roof runoff samples compared with tank tap samples (P < 0.05). At one case study site, results suggested that piping and tap material can contribute to contaminant loads of harvested rainwater. Increased loads of copper were observed in hot tap samples supplied by the rainwater harvesting system via copper piping and a storage hot water system (P < 0.05). Similarly, zinc, lead, arsenic, strontium and molybdenum were significantly elevated in samples collected from a polyvinyl chloride pipe sampling point that does not supply household uses, compared with corresponding roof runoff samples (P < 0.05). Elemental composition was also found to vary significantly between the tank tap and an internal cold tap at one of the sites investigated, with several elements fluctuating significantly between the two outlets of interest at this site, including potassium, zinc, manganese, barium, copper, vanadium, chromium and arsenic. These results highlighted the variability in the elemental composition of collected rainwater between different study sites and between different sampling points. Atmospheric deposition was not a major contributor to the rainwater contaminant load at the sites tested. Piping materials, however, were shown to contribute significantly to the total elemental load at some locations.

  4. Modeling low-thrust transfers between periodic orbits about five libration points: Manifolds and hierarchical design

    Science.gov (United States)

    Zeng, Hao; Zhang, Jingrui

    2018-04-01

    The low-thrust version of the fuel-optimal transfers between periodic orbits with different energies in the vicinity of five libration points is exploited deeply in the Circular Restricted Three-Body Problem. Indirect optimization technique incorporated with constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy dividing the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. Meanwhile, the solution procedure and the value ranges of used variables are summarized. To highlight the effectivity of the transfer scheme and aim at different types of libration point orbits, transfer trajectories between some sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits and butterfly orbits for collinear and triangular libration points, are investigated with various time of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, related to the locations and the types of mission orbits as well as the corresponding invariant manifold structures, and indicates that the low-thrust transfers may be a beneficial option for the extended science missions around different libration points.

  5. Determination of toxic trace elements in body fluid reference samples

    International Nuclear Information System (INIS)

    Gills, T.E.; McClendon, L.T.; Maienthal, E.J.; Becker, D.A.; Durst, R.A.; LaFleur, P.D.

    1974-01-01

    The measurement of elemental concentration in body fluids has been widely used to give indication of exposures to certain toxic materials and/or a measure of body burden. To understand fully the toxicological effect of these trace elements on our physiological system, meaningful analytical data are required along with accurate standards or reference samples. The National Bureau of Standards has prepared for the National Institute for Occupational Safety and Health (NIOSH) a number of reference samples containing selected toxic trace elements in body fluids. The reference samples produced include mercury in urine at three concentration levels, five elements (Se, Cu, As, Ni and Cr) in freeze-dried urine at two levels, fluorine in freeze-dried urine at two levels and lead in blood at two concentration levels. These reference samples have been found to be extremely useful for the evaluation of field and laboratory analytical methods for the analysis of toxic trace elements. In particular the use of at least two calibration points (i.e., ''normal'' and ''elevated'' levels) for a given matrix provides a more positive calibration for most analytical techniques over the range of interest for occupational toxicological levels of exposure. (U.S.)

  6. Comparing identified and statistically significant lipids and polar metabolites in 15-year old serum and dried blood spot samples for longitudinal studies: Comparing lipids and metabolites in serum and DBS samples

    Energy Technology Data Exchange (ETDEWEB)

    Kyle, Jennifer E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Casey, Cameron P. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Stratton, Kelly G. [National Security Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zink, Erika M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Kim, Young-Mo [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zheng, Xueyun [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Monroe, Matthew E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Weitz, Karl K. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bloodsworth, Kent J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Orton, Daniel J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Ibrahim, Yehia M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Moore, Ronald J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Lee, Christine G. [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Research Service, Portland Veterans Affairs Medical Center, Portland OR USA; Pedersen, Catherine [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Orwoll, Eric [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Smith, Richard D. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Burnum-Johnson, Kristin E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Baker, Erin S. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA

    2017-02-05

    The use of dried blood spots (DBS) has many advantages over traditional plasma and serum samples such as smaller blood volume required, storage at room temperature, and ability for sampling in remote locations. However, understanding the robustness of different analytes in DBS samples is essential, especially in older samples collected for longitudinal studies. Here we analyzed DBS samples collected in 2000-2001 and stored at room temperature and compared them to matched serum samples stored at -80°C to determine if they could be effectively used as specific time points in a longitudinal study following metabolic disease. Four hundred small molecules were identified in both the serum and DBS samples using gas chromatograph-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) and LC-ion mobility spectrometry-MS (LC-IMS-MS). The identified polar metabolites overlapped well between the sample types, though only one statistically significant polar metabolite in a case-control study was conserved, indicating degradation occurs in the DBS samples affecting quantitation. Differences in the lipid identifications indicated that some oxidation occurs in the DBS samples. However, thirty-six statistically significant lipids correlated in both sample types indicating that lipid quantitation was more stable across the sample types.

  7. Dual-cloud point extraction coupled to high performance liquid chromatography for simultaneous determination of trace sulfonamide antimicrobials in urine and water samples.

    Science.gov (United States)

    Nong, Chunyan; Niu, Zongliang; Li, Pengyao; Wang, Chunping; Li, Wanyu; Wen, Yingying

    2017-04-15

    Dual-cloud point extraction (dCPE) was successfully developed for simultaneous extraction of trace sulfonamides (SAs), including sulfamerazine (SMZ), sulfadoxin (SDX) and sulfathiazole (STZ), in urine and water samples. Several parameters affecting the extraction were optimized, such as sample pH, concentration of Triton X-114, extraction temperature and time, centrifugation rate and time, back-extraction solution pH, back-extraction temperature and time, and back-extraction centrifugation rate and time. High performance liquid chromatography (HPLC) was applied for the SAs analysis. Under the optimum extraction and detection conditions, successful separation of the SAs was achieved within 9 min, and excellent analytical performance was attained. Good linear relationships (R² ≥ 0.9990) between peak area and concentration were obtained for SMZ and STZ from 0.02 to 10 μg/mL, and for SDX from 0.01 to 10 μg/mL. Detection limits of 3.0-6.2 ng/mL were achieved. Satisfactory recoveries ranging from 85 to 108% were determined with urine, lake and tap water spiked at 0.2, 0.5 and 1 μg/mL, respectively, with relative standard deviations (RSDs, n=6) of 1.5-7.7%. This method was demonstrated to be convenient, rapid, cost-effective and environmentally benign, and could be used as an alternative tool to existing methods for analysing trace residues of SAs in urine and water samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. More practical critical height sampling.

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2015-01-01

    Critical Height Sampling (CHS) (Kitamura 1964) can be used to predict cubic volumes per acre without using volume tables or equations. The critical height is defined as the height at which the tree stem appears to be in borderline condition using the point-sampling angle gauge (e.g. prism). An estimate of cubic volume per acre can be obtained from multiplication of the...
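
    The record is cut off just before the estimator is stated, but a common formulation of the CHS estimate multiplies the gauge's basal area factor by the sum of the critical heights of the tallied trees; the snippet below assumes that formulation and uses invented field values.

```python
# Assumed CHS estimator: volume per acre ~ basal area factor (F) times the
# sum of critical heights of the trees tallied "in" with the angle gauge.
# F and the critical heights below are made-up field values.
basal_area_factor = 10.0                     # ft^2/acre per tallied tree (assumed)
critical_heights = [42.5, 37.0, 55.3, 28.8]  # ft, one per tallied tree (assumed)

volume_per_acre = basal_area_factor * sum(critical_heights)
print(f"estimated cubic volume: {volume_per_acre:.1f} ft^3/acre")
```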

  9. Feasibility of Smartphone Based Photogrammetric Point Clouds for the Generation of Accessibility Maps

    Science.gov (United States)

    Angelats, E.; Parés, M. E.; Kumar, P.

    2018-05-01

    Accessible cities with accessible services are a long-standing demand of people with reduced mobility. But this demand is still far from becoming a reality, as a lot of work remains to be done. The first step towards accessible cities is to know the real situation of the cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessibility maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution to obtain the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.

  10. Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation

    Science.gov (United States)

    Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.

    2004-01-01

    Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.

  11. Isotopic effects in the neon fixed point: uncertainty of the calibration data correction

    Science.gov (United States)

    Steur, Peter P. M.; Pavese, Franco; Fellmuth, Bernd; Hermier, Yves; Hill, Kenneth D.; Seog Kim, Jin; Lipinski, Leszek; Nagao, Keisuke; Nakano, Tohru; Peruzzi, Andrea; Sparasci, Fernando; Szmyrka-Grzebyk, Anna; Tamura, Osamu; Tew, Weston L.; Valkiers, Staf; van Geel, Jan

    2015-02-01

    The neon triple point is one of the defining fixed points of the International Temperature Scale of 1990 (ITS-90). Although recognizing that natural neon is a mixture of isotopes, the ITS-90 definition only states that the neon should be of ‘natural isotopic composition’, without any further requirements. A preliminary study in 2005 indicated that most of the observed variability in the realized neon triple point temperatures within a range of about 0.5 mK can be attributed to the variability in isotopic composition among different samples of ‘natural’ neon. Based on the results of an International Project (EUROMET Project No. 770), the Consultative Committee for Thermometry decided to improve the realization of the neon fixed point by assigning the ITS-90 temperature value 24.5561 K to neon with the isotopic composition recommended by IUPAC, accompanied by a quadratic equation to take the deviations from the reference composition into account. In this paper, the uncertainties of the equation are discussed and an uncertainty budget is presented. The resulting standard uncertainty due to the isotopic effect (k = 1) after correction of the calibration data is reduced to (4 to 40) μK when using neon of ‘natural’ isotopic composition or to 30 μK when using 20Ne. For comparison, an uncertainty component of 0.15 mK should be included in the uncertainty budget for the neon triple point if the isotopic composition is unknown, i.e. whenever the correction cannot be applied.

  12. Species selective preconcentration and quantification of gold nanoparticles using cloud point extraction and electrothermal atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Hartmann, Georg; Schuster, Michael

    2013-01-01

    Highlights: ► We optimized cloud point extraction and ET-AAS parameters for Au-NPs measurement. ► A selective ligand (sodium thiosulphate) is introduced for species separation. ► A limit of detection of 5 ng Au-NP per L is achieved for aqueous samples. ► Measurement of samples with high natural organic matter content is possible. ► Real water samples including wastewater treatment plant effluent were analyzed. - Abstract: The determination of metallic nanoparticles in environmental samples requires sample pretreatment that ideally combines pre-concentration and species selectivity. With cloud point extraction (CPE) using the surfactant Triton X-114 we present a simple and cost effective separation technique that meets both criteria. Effective separation of ionic gold species and Au nanoparticles (Au-NPs) is achieved by using sodium thiosulphate as a complexing agent. The extraction efficiency for Au-NPs ranged from 1.01 ± 0.06 (particle size 2 nm) to 0.52 ± 0.16 (particle size 150 nm). An enrichment factor of 80 and a low limit of detection of 5 ng L⁻¹ are achieved using electrothermal atomic absorption spectrometry (ET-AAS) for quantification. TEM measurements showed that the particle size is not affected by the CPE process. Natural organic matter (NOM) is tolerated up to a concentration of 10 mg L⁻¹. The precision of the method expressed as the standard deviation of 12 replicates at an Au-NP concentration of 100 ng L⁻¹ is 9.5%. A relation between particle concentration and the extraction efficiency was not observed. Spiking experiments showed a recovery higher than 91% for environmental water samples.

  13. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    Science.gov (United States)

    2010-06-01

    …Sampling (MIS)? • Technique of combining many increments of soil from a number of points within an exposure area • Developed by Enviro Stat (trademarked) … • Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous … into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  14. Arrests for child pornography production: data at two time points from a national sample of U.S. law enforcement agencies.

    Science.gov (United States)

    Wolak, Janis; Finkelhor, David; Mitchell, Kimberly J; Jones, Lisa M

    2011-08-01

    This study collected information on arrests for child pornography (CP) production at two points (2000-2001 and 2006) from a national sample of more than 2,500 law enforcement agencies. In addition to providing descriptive data about an understudied crime, the authors examined whether trends in arrests suggested increasing CP production, shifts in victim populations, and challenges to law enforcement. Arrests for CP production more than doubled from an estimated 402 in 2000-2001 to an estimated 859 in 2006. Findings suggest the increase was related to increased law enforcement activity rather than to growth in the population of CP producers. Adolescent victims increased, but there was no increase in the proportion of arrest cases involving very young victims or violent images. Producers distributed images in 23% of arrest cases, a proportion that did not change over time. This suggests that much CP production may be primarily for private use. Proactive law enforcement operations increased, as did other features consistent with a robust law enforcement response.

  15. Validation of intermediate end points in cancer research.

    Science.gov (United States)

    Schatzkin, A; Freedman, L S; Schiffman, M H; Dawsey, S M

    1990-11-21

    Investigations using intermediate end points as cancer surrogates are quicker, smaller, and less expensive than studies that use malignancy as the end point. We present a strategy for determining whether a given biomarker is a valid intermediate end point between an exposure and incidence of cancer. Candidate intermediate end points may be selected from case series, ecologic studies, and animal experiments. Prospective cohort and sometimes case-control studies may be used to quantify the intermediate end point-cancer association. The most appropriate measure of this association is the attributable proportion. The intermediate end point is a valid cancer surrogate if the attributable proportion is close to 1.0, but not if it is close to 0. Usually, the attributable proportion is close to neither 1.0 nor 0; in this case, valid surrogacy requires that the intermediate end point mediate an established exposure-cancer relation. This would in turn imply that the exposure effect would vanish if adjusted for the intermediate end point. We discuss the relative advantages of intervention and observational studies for the validation of intermediate end points. This validation strategy also may be applied to intermediate end points for adverse reproductive outcomes and chronic diseases other than cancer.
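
    As a numerical illustration of the validation criterion, the attributable proportion can be computed from the prevalence of the intermediate end point and its relative risk for cancer using a standard population-attributable-fraction formula; the paper's exact definition may differ, and the figures below are invented.

```python
def attributable_proportion(prevalence, relative_risk):
    """Levin-style population attributable fraction:
    AP = p(RR - 1) / (1 + p(RR - 1)). One common formulation, used here for
    illustration of the surrogacy criterion described in the abstract."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Invented example: biomarker present in 30% of the cohort, RR of 6 for cancer.
ap = attributable_proportion(prevalence=0.30, relative_risk=6.0)
print(f"attributable proportion: {ap:.2f}")   # a value close to 1 would support surrogacy
```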

  16. Environmental monitoring of phenolic pollutants in water by cloud point extraction prior to micellar electrokinetic chromatography.

    Science.gov (United States)

    Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F

    2009-05-01

    Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of a 50-mL sample volume were 0.10 μg L⁻¹ for PNP, 0.20 μg L⁻¹ for PAP, and 0.16 μg L⁻¹ for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.

  17. The features of ballistic electron transport in a suspended quantum point contact

    International Nuclear Information System (INIS)

    Shevyrin, A. A.; Budantsev, M. V.; Bakarov, A. K.; Toropov, A. I.; Pogosov, A. G.; Ishutkin, S. V.; Shesterikov, E. V.

    2014-01-01

    A suspended quantum point contact and the effects of the suspension are investigated by performing identical electrical measurements on the same experimental sample before and after the suspension. In both cases, the sample demonstrates conductance quantization. However, the suspended quantum point contact shows certain features not observed before the suspension, namely, plateaus at the conductance values being non-integer multiples of the conductance quantum, including the “0.7-anomaly.” These features can be attributed to the strengthening of electron-electron interaction because of the electric field confinement within the suspended membrane. Thus, the suspended quantum point contact represents a one-dimensional system with strong electron-electron interaction

  18. Radiocarbon measurements of tree-ring samples from Japanese woods

    International Nuclear Information System (INIS)

    Ozaki, Hiromasa; Sakamoto, Minoru; Imamura, Mineo; Mitsutani, Takumi

    2008-01-01

    Since radiocarbon age is a model age based on the constancy of atmospheric radiocarbon concentration and a provisional value of 5568 years for the ¹⁴C half-life, calibration to calendar age is required for practical dating. The dataset used for the calibration, called IntCal, has been constructed by an international consortium. Most parts of IntCal have been based on the measurement of radiocarbon in dendrochronologically dated tree-ring samples from woods in Europe and North America. Regional offsets, which are defined as differences of the local atmospheric radiocarbon from IntCal, have been pointed out based on recent radiocarbon measurements for tree-ring samples from a few regions. We have also measured radiocarbon in tree-ring samples from Japanese woods in order to investigate regional offsets in Japan. In this study, radiocarbon measurements for tree-ring samples from three different Japanese woods at around AD 500 were carried out. Consequently, differences from IntCal04 at around AD 500 were confirmed, although no systematic offset is found. However, the results obtained in this study agree with the raw data used for the construction of IntCal04. This could raise a question about the calculation method of IntCal04. (author)

  19. Simple and efficient way of speeding up transmission calculations with k-point sampling

    Directory of Open Access Journals (Sweden)

    Jesper Toft Falkenberg

    2015-07-01

    The transmissions as functions of energy are central to electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally “cheap” post-processing scheme to interpolate transmission functions over k-points to get smooth, well-converged average transmission functions. This is relevant for data obtained using typical “expensive” first principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
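
    A minimal version of the post-processing idea, interpolating each energy's transmission over a dense k grid before averaging instead of computing more k-points from first principles, can be written as follows. The coarse T(E, k) data are synthetic and the periodic linear interpolation is an assumption; the published scheme may differ in detail.

```python
import numpy as np

# Synthetic coarse data: transmission T(E, k) on 8 k-points, standing in for
# a first-principles calculation with periodic electrodes.
energies = np.linspace(-1.0, 1.0, 200)
k_coarse = np.linspace(0.0, 1.0, 8, endpoint=False)
T_coarse = np.array([1.0 / (1.0 + np.exp(-10 * (energies - 0.3 * np.cos(2 * np.pi * k))))
                     for k in k_coarse])            # shape (n_k, n_E)

# Post-processing: interpolate over k (periodically) onto a dense grid, then
# average, giving a smoother T(E) than the raw 8-point average.
k_dense = np.linspace(0.0, 1.0, 256, endpoint=False)
k_wrap = np.append(k_coarse, 1.0)                   # wrap point for periodicity
T_dense = np.empty((k_dense.size, energies.size))
for j in range(energies.size):
    col = np.append(T_coarse[:, j], T_coarse[0, j])
    T_dense[:, j] = np.interp(k_dense, k_wrap, col)

T_avg_raw = T_coarse.mean(axis=0)
T_avg_interp = T_dense.mean(axis=0)
print(np.abs(T_avg_interp - T_avg_raw).max())       # deviation from the crude average
```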

  20. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
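
    A stripped-down one-dimensional analogue of the idea, placing a fixed budget of characterization samples by direct search so that the interpolated device response matches a dense reference as closely as possible, is sketched below. The "printer" response is a made-up tone curve and the search is a simple pattern/coordinate perturbation, not the authors' algorithm.

```python
import numpy as np

def device_response(x):
    """Made-up nonlinear tone curve standing in for printed-color measurements."""
    return x ** 2.2 + 0.05 * np.sin(6 * x)

dense = np.linspace(0.0, 1.0, 501)
reference = device_response(dense)

def characterization_error(nodes):
    """Max interpolation error when the device is characterized only at `nodes`."""
    nodes = np.clip(np.sort(nodes), 0.0, 1.0)
    model = np.interp(dense, nodes, device_response(nodes))
    return np.abs(model - reference).max()

# Start from uniform sampling and improve it by direct search: perturb one
# sample position at a time and keep the move if the error decreases.
nodes = np.linspace(0.0, 1.0, 8)
step = 0.05
for _ in range(200):                                # overall search budget
    improved = False
    for i in range(1, len(nodes) - 1):              # endpoints stay fixed
        for delta in (+step, -step):
            trial = nodes.copy()
            trial[i] += delta
            if characterization_error(trial) < characterization_error(nodes):
                nodes, improved = trial, True
    if not improved:
        step *= 0.5                                 # shrink the search pattern
        if step < 1e-3:
            break

print("optimized sample positions:", np.round(np.sort(nodes), 3))
print("max error:", round(characterization_error(nodes), 4))
```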

  1. Impact of confinement housing on study end-points in the calf model of cryptosporidiosis.

    Science.gov (United States)

    Graef, Geneva; Hurst, Natalie J; Kidder, Lance; Sy, Tracy L; Goodman, Laura B; Preston, Whitney D; Arnold, Samuel L M; Zambriski, Jennifer A

    2018-04-01

    Diarrhea is the second leading cause of death in children under 5 years of age. ... Two sample collection approaches are compared here: CFC, which requires confinement housing, and Interval Collection (IC), which permits use of box stalls. CFC mimics human challenge model methodology, but it is unknown if confinement housing impacts study end-points and if data gathered via this method are suitable for generalization to human populations. Using a modified crossover study design we compared CFC and IC and evaluated the impact of housing on study end-points. At birth, calves were randomly assigned to confinement (n = 14) or box stall housing (n = 9), were challenged with 5 × 10⁷ C. parvum oocysts, and were followed for 10 days. Study end-points included fecal oocyst shedding, severity of diarrhea, degree of dehydration, and plasma cortisol. Calves in confinement had no significant differences in mean log oocysts enumerated per gram of fecal dry matter between CFC and IC samples (P = 0.6), nor were there diurnal variations in oocyst shedding (P = 0.1). Confinement-housed calves shed significantly more oocysts (P = 0.05), had higher plasma cortisol (P = 0.001), and required more supportive care (P = 0.0009) than calves in box stalls. Housing method confounds study end-points in the calf model of cryptosporidiosis. Due to increased stress, data collected from calves in confinement housing may not accurately estimate the efficacy of chemotherapeutics targeting C. parvum.

  2. Sampling, storage and sample preparation procedures for X ray fluorescence analysis of environmental materials

    International Nuclear Information System (INIS)

    1997-06-01

    X ray fluorescence (XRF) method is one of the most commonly used nuclear analytical technique because of its multielement and non-destructive character, speed, economy and ease of operation. From the point of view of quality assurance practices, sampling and sample preparation procedures are the most crucial steps in all analytical techniques, (including X ray fluorescence) applied for the analysis of heterogeneous materials. This technical document covers recent modes of the X ray fluorescence method and recent developments in sample preparation techniques for the analysis of environmental materials. Refs, figs, tabs

  3. The masking breakdown point of multivariate outlier identification rules

    OpenAIRE

    Becker, Claudia; Gather, Ursula

    1997-01-01

    In this paper, we consider one-step outlier identification rules for multivariate data, generalizing the concept of so-called alpha outlier identifiers, as presented in Davies and Gather (1993) for the case of univariate samples. We investigate how the finite-sample breakdown points of estimators used in these identification rules influence the masking behaviour of the rules.

  4. Environmental sampling for trace analysis

    International Nuclear Information System (INIS)

    Markert, B.

    1994-01-01

    Often too little attention is given to the sampling before and after actual instrumental measurement. This leads to errors, despite increasingly sensitive analytical systems. This is one of the first books to pay proper attention to representative sampling. It offers an overview of the most common techniques used today for taking environmental samples. The techniques are clearly presented, yield accurate and reproducible results, and can be used to sample air, water, soil and sediments, and plants and animals. A comprehensive handbook, this volume provides an excellent starting point for researchers in the rapidly expanding field of environmental analysis. (orig.)

  5. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper represents application of the asymptotic sampling on various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable … is minimized. Next, the method is applied on different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.

  6. Pressure Points in Reading Comprehension: A Quantile Multiple Regression Analysis

    Science.gov (United States)

    Logan, Jessica

    2017-01-01

    The goal of this study was to examine how selected pressure points or areas of vulnerability are related to individual differences in reading comprehension and whether the importance of these pressure points varies as a function of the level of children's reading comprehension. A sample of 245 third-grade children were given an assessment battery…
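
    The kind of analysis described here can be reproduced in outline with quantile regression, fitting the same predictor set at several conditional quantiles of comprehension. The variable names and data below are simulated placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 245
# Placeholder data with the flavor of the study: two "pressure point" predictors
# of reading comprehension (all variables are simulated).
data = pd.DataFrame({
    "decoding": rng.normal(0, 1, n),
    "vocabulary": rng.normal(0, 1, n),
})
data["comprehension"] = (0.4 * data["decoding"] + 0.6 * data["vocabulary"]
                         + rng.normal(0, 1, n))

# Quantile multiple regression: how predictor weights change across the
# distribution of comprehension (low, median, and high performers).
model = smf.quantreg("comprehension ~ decoding + vocabulary", data)
for q in (0.1, 0.5, 0.9):
    fit = model.fit(q=q)
    print(f"quantile {q}: decoding={fit.params['decoding']:.2f}, "
          f"vocabulary={fit.params['vocabulary']:.2f}")
```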

  7. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    Science.gov (United States)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

    The Ozone Monitoring Instrument (OMI) on NASA Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from top 100 pollution sources in North America. The technique requires averaging of several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area - within 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. TROPOMI instrument on the ESA Sentinel 5 precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to once a day measurement. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking the measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulating GEOCAPE measurement requirements.

  8. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei; Wonka, Peter; Nan, Liangliang

    2016-01-01

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypothesis from the points followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  10. Accuracy Constraint Determination in Fixed-Point System Design

    Directory of Open Access Journals (Sweden)

    Serizel R

    2008-01-01

    Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and the determination of this constraint is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
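
    The iterative optimization under an accuracy constraint can be illustrated crudely: model the fixed-point system as its infinite-precision version plus quantization noise at the output, then grow the fractional word length until the output SNR meets the constraint derived from the application performance. The FIR filter, the test signal and the 60 dB target below are all assumptions, not the MP3 case study of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
taps = np.array([0.1, 0.25, 0.3, 0.25, 0.1])        # assumed FIR filter
signal = rng.uniform(-1.0, 1.0, 10_000)             # assumed test stimulus
reference = np.convolve(signal, taps, mode="same")  # "infinite precision" output

def quantize(x, frac_bits):
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def output_snr_db(frac_bits):
    """Fixed-point model: quantize data and coefficients, compare to reference;
    the difference acts as a single equivalent noise source at the output."""
    out = np.convolve(quantize(signal, frac_bits), quantize(taps, frac_bits),
                      mode="same")
    noise = out - reference
    return 10 * np.log10(np.mean(reference**2) / np.mean(noise**2))

accuracy_constraint_db = 60.0                        # assumed application requirement
bits = 2
while output_snr_db(bits) < accuracy_constraint_db:  # iterative refinement
    bits += 1
print(f"minimum fractional bits meeting {accuracy_constraint_db} dB: {bits}")
```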

  11. Radiological analysis of environmental samples in some points of the coast of the Gulf of Mexico and Coast of Quintana Roo, Mexico

    International Nuclear Information System (INIS)

    Salas Mar, Bernardo; Martinez Negrete, Marco Antonio; Ruiz Chavarria, Gerardo; Abarca Munguia, Jose

    2008-01-01

    Full text: We describe in this paper the results obtained by the project 'Radiological analysis of environmental samples in some points of the coast of the Gulf of Mexico and coast of Quintana Roo, Mexico'. The purpose of the study is to identify and quantify the natural and anthropogenic radionuclides present in sediments, sand and seawater from several sites located along the coast of the Gulf of Mexico and the Caribbean Sea. The samples are analysed in a Canberra multichannel analyzer system for gamma spectrometry, equipped with a hyper-pure germanium detector and Genie 2000 software, in the 'Laboratory of Radiological Analysis of Environmental Samples' of the Physics Department, Faculty of Sciences, National Autonomous University of Mexico (UNAM). The geographic sites where samples were taken include the states of Tamaulipas, Veracruz, Tabasco, Campeche, Yucatan and Quintana Roo. The results of this study will be published at the end of the project, and we hope they will be useful for the national health and industrial sectors. Until now we have identified and measured the presence of natural radionuclides such as Potassium-40 (K-40), Bismuth-212 (Bi-212), Lead-212 (Pb-212), Bismuth-214 (Bi-214), Lead-214 (Pb-214), Radium-226 (Ra-226), Actinium-228 (Ac-228) and Uranium-235 (U-235), as well as some anthropogenic radionuclides found near the Laguna Verde Nuclear Power Plant. The project is scheduled to last for three years, finishing in 2009. At its end we shall be able to present conclusions and identify some tendencies in connection with the background and possible radioactive contamination of the studied zones. This project takes place under the auspices of the 'Program of Support to Projects of Research and Technological Innovation' of the National Autonomous University of Mexico. (author)

  12. Sample management implementation plan: Salt Repository Project

    International Nuclear Information System (INIS)

    1987-01-01

    The purpose of the Sample Management Implementation Plan is to define management controls and building requirements for handling materials collected during the site characterization of the Deaf Smith County, Texas, site. This work will be conducted for the US Department of Energy Salt Repository Project Office (SRPO). The plan provides for controls mandated by the US Nuclear Regulatory Commission and the US Environmental Protection Agency. Salt Repository Project (SRP) Sample Management will interface with program participants who request, collect, and test samples. SRP Sample Management will be responsible for the following: (1) preparing samples; (2) ensuring documentation control; (3) providing for uniform forms, labels, data formats, and transportation and storage requirements; and (4) identifying sample specifications to ensure sample quality. The SRP Sample Management Facility will be operated under a set of procedures that will impact numerous program participants. Requesters of samples will be responsible for definition of requirements in advance of collection. Sample requests for field activities will be approved by the SRPO, aided by an advisory group, the SRP Sample Allocation Committee. This document details the staffing, building, storage, and transportation requirements for establishing an SRP Sample Management Facility. Materials to be managed in the facility include rock core and rock discontinuities, soils, fluids, biota, air particulates, cultural artifacts, and crop and food stuffs. 39 refs., 3 figs., 11 tabs

  13. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
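
    As a concrete instance of the kind of calculation the book covers (a textbook formula, not taken from the book itself), the sample size needed to estimate a population proportion p to within a margin of error e at a given confidence level is n = z^2 p(1-p)/e^2. A small Python sketch:

      from math import ceil
      from statistics import NormalDist

      def sample_size_proportion(p, margin, confidence=0.95):
          """n = z^2 * p*(1-p) / e^2, rounded up (simple random sampling, large population)."""
          z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
          return ceil(z * z * p * (1 - p) / (margin * margin))

      # Worst case p = 0.5, margin of +/- 3 percentage points, 95% confidence
      print(sample_size_proportion(0.5, 0.03))   # -> 1068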

  14. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric obtained by mapping data that are non-separable in the original space into homogeneous groups in the high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
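
    A small Python sketch of the kernel-induced distance the abstract refers to (a standard formula; the Gaussian kernel and bandwidth here are assumptions, not the paper's settings): d_K(x, y)^2 = K(x,x) - 2 K(x,y) + K(y,y), i.e. the squared distance between the images of x and y in the kernel-induced feature space.

      import numpy as np

      def gaussian_kernel(x, y, sigma=1.0):
          return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

      def kernel_distance(x, y, kernel=gaussian_kernel):
          """Distance between x and y in the feature space induced by the kernel."""
          d2 = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
          return np.sqrt(max(d2, 0.0))

      a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
      print(kernel_distance(a, b))   # grows with separation but saturates for distant points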

  15. The BD Onclarity HPV assay on SurePath collected samples meets the International Guidelines for Human Papillomavirus Test Requirements for Cervical Screening

    DEFF Research Database (Denmark)

    Ejegod, Ditte; Bottari, Fabio; Pedersen, Helle

    2016-01-01

    This study describes a validation of the BD Onclarity HPV (Onclarity) assay using the international guidelines for HPV test requirements for cervical cancer screening of women 30 years and above, using Danish SurePath screening samples. The clinical specificity (0.90, 95% CI: 0.88-0.91) and sensitivity (0.97, 95% CI: 0.87-1.0) of the Onclarity assay were shown to be non-inferior to the reference assay (specificity 0.90, 95% CI: 0.88-0.92; sensitivity 0.98, 95% CI: 0.91-1.0). The intra-laboratory reproducibility of Onclarity was 97% with a lower confidence bound of 96% (kappa value: 0...

  16. Forecasting the Number of Soil Samples Required to Reduce Remediation Cost Uncertainty

    OpenAIRE

    Demougeot-Renard, Hélène; de Fouquet, Chantal; Renard, Philippe

    2008-01-01

    Sampling scheme design is an important step in the management of polluted sites. It largely controls the accuracy of remediation cost estimates. In practice, however, sampling is seldom designed to comply with a given level of remediation cost uncertainty. In this paper, we present a new technique that allows one to estimate the number of samples that should be taken at a given stage of investigation to reach a forecasted level of accuracy. The uncertainty is expressed both in terms of vol...

  17. MIMIC: An Innovative Methodology for Determining Mobile Laser Scanning System Point Density

    Directory of Open Access Journals (Sweden)

    Conor Cahalane

    2014-08-01

    Full Text Available Understanding how various Mobile Mapping System (MMS) laser hardware configurations and operating parameters exert different influences on point density is important for assessing system performance, which in turn facilitates system design and MMS benchmarking. Point density also influences data processing, as objects that can be recognised using automated algorithms generally require a minimum point density. Although obtaining the necessary point density impacts on hardware costs, survey time and data storage requirements, a method for accurately and rapidly assessing MMS performance is lacking for generic MMSs. We have developed a method for quantifying point clouds collected by an MMS with respect to known objects at specified distances using 3D surface normals, 2D geometric formulae and line drawing algorithms. These algorithms were combined in a system called the Mobile Mapping Point Density Calculator (MIMIC) and were validated using point clouds captured by both a single scanner and a dual scanner MMS. Results from MIMIC were promising: when considering the number of scan profiles striking the target, the average error equated to less than 1 point per scan profile. These tests highlight that MIMIC is capable of accurately calculating point density for both single and dual scanner MMSs.
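
    A much-simplified Python version of the kind of geometric estimate MIMIC automates (the scanner rate, angular step, vehicle speed, and target size below are illustrative assumptions, not the MIMIC algorithms): the number of scan profiles striking a target depends on vehicle speed and scan rate, while the points per profile depend on range and angular resolution.

      import math

      def points_on_target(target_width_m, target_height_m, range_m,
                           scan_rate_hz=100.0, angular_step_deg=0.1, speed_mps=10.0):
          """Rough point count on a flat target facing a 2D profiling scanner (assumed parameters)."""
          # Profiles striking the target while the vehicle drives past its width
          profiles = (target_width_m / speed_mps) * scan_rate_hz
          # Points per profile across the target height at this range
          angle_subtended = math.degrees(2.0 * math.atan(target_height_m / (2.0 * range_m)))
          points_per_profile = angle_subtended / angular_step_deg
          return profiles * points_per_profile

      print(points_on_target(0.5, 2.0, 10.0))   # e.g. a 0.5 m x 2 m sign at 10 m range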

  18. Hot sample archiving. Revision 3

    International Nuclear Information System (INIS)

    McVey, C.B.

    1995-01-01

    This Engineering Study revision evaluated alternatives for providing tank waste characterization analytical samples for the time period recommended by the Tank Waste Remediation Systems Program. The recommendation is to store 40-ml segment samples for a period of approximately 18 months (6 months past the approval date of the Tank Characterization Report) and then to composite the core segment material in 125-ml containers for a period of five years. The study considers storage at the 222-S facility. It was determined that the critical storage problem is in the hot cell area. The 40-ml sample container holds approximately 3 times the amount of material required for a complete laboratory re-analysis. The final result is that 222-S can meet the sample archive storage requirements. At a 100% capture rate the capacity of the hot cell area is exceeded, but quick, inexpensive options are available to meet the requirements.

  19. Writing for Distance Education. Samples Booklet.

    Science.gov (United States)

    International Extension Coll., Cambridge (England).

    Approaches to the format, design, and layout of printed instructional materials for distance education are illustrated in 36 samples designed to accompany the manual, "Writing for Distance Education." Each sample is presented on a single page with a note pointing out its key features. Features illustrated include use of typescript layout, a comic…

  20. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having their own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance, and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed within-10% concordance with the multiple-sample method of 66.4%, 83.6%, 91.4% and 96.0% at the 120, 180, 240 and ≥300 min time points, respectively. Concordance was poorer at lower GFR levels, and this trend parallels increasing age. Results were similar in males and females. Some discordance was found in obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
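
    For orientation, a Python sketch of the multiple-sample slope-intercept calculation with the Bröchner-Mortensen correction (standard, commonly quoted formulas; the dose and sample values below are invented for illustration and the BM coefficients should be checked against the original reference): the late plasma concentrations are fitted to a mono-exponential, GFR is dose divided by the extrapolated area under the curve, and the BM polynomial corrects for the neglected fast compartment.

      import numpy as np

      def slope_intercept_gfr(dose, times_min, conc, bm_correct=True):
          """GFR (mL/min) from late plasma samples via the slope-intercept method."""
          t = np.asarray(times_min, float)
          c = np.asarray(conc, float)
          slope, intercept = np.polyfit(t, np.log(c), 1)      # ln C = ln C0 - k t
          k, c0 = -slope, np.exp(intercept)
          gfr = dose * k / c0                                  # dose / AUC of the fitted exponential
          if bm_correct:
              gfr = 0.990778 * gfr - 0.001218 * gfr ** 2       # adult Brochner-Mortensen correction
          return gfr

      # Invented example: dose in activity units, samples at 120/180/240 min in activity/mL
      print(slope_intercept_gfr(4.0e6, [120, 180, 240], [210.0, 160.0, 122.0]))  # ~46 mL/min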

  1. Ultrasound-guided thoracenthesis: the V-point as a site for optimal drainage positioning.

    Science.gov (United States)

    Zanforlin, A; Gavelli, G; Oboldi, D; Galletti, S

    2013-01-01

    In recent years the use of lung ultrasound has been increasing in the evaluation of pleural effusions, because it makes follow-up easier and drainage more efficient by providing guidance on the most appropriate sampling site. However, no standardized approach for ultrasound-guided thoracentesis is currently available. To evaluate our usual ultrasonographic landmark as a possible standard site for thoracentesis by assessing its value in terms of safety and efficiency (success at first attempt, drainage as complete as possible). Hospitalized patients with non-organized pleural effusion underwent thoracentesis after ultrasound evaluation. The point showing the maximum thickness of the effusion on ultrasound (the "V-point") was chosen for drainage. 45 ultrasound-guided thoracenteses were performed in 12 months. In 22 cases there were no complications; there were 16 cases of cough, 2 cases of mild dyspnea without desaturation, and 4 cases of mild pain; 2 complications requiring medical intervention occurred. No case of pneumothorax related to the procedure was detected. In all cases drainage was successful on the first attempt. The collected values of maximum thickness at the V-point (min 3.4 cm - max 15.3 cm) and drained fluid volume (min 70 ml - max 2000 ml) showed a significant correlation. Measurement of the maximum thickness at the V-point provides high efficiency for ultrasound-guided thoracentesis and allows the amount of fluid in the pleural cavity to be estimated. It is also an easy parameter that makes the proposed method quick to learn and apply.

  2. Integrated modeling and analysis methodology for precision pointing applications

    Science.gov (United States)

    Gutierrez, Homero L.

    2002-07-01

    Space-based optical systems that perform tasks such as laser communications, Earth imaging, and astronomical observations require precise line-of-sight (LOS) pointing. A general approach is described for integrated modeling and analysis of these types of systems within the MATLAB/Simulink environment. The approach can be applied during all stages of program development, from early conceptual design studies to hardware implementation phases. The main objective is to predict the dynamic pointing performance subject to anticipated disturbances and noise sources. Secondary objectives include assessing the control stability, levying subsystem requirements, supporting pointing error budgets, and performing trade studies. The integrated model resides in Simulink, and several MATLAB graphical user interfaces (GUIs) allow the user to configure the model, select analysis options, run analyses, and process the results. A convenient parameter naming and storage scheme, as well as model conditioning and reduction tools and run-time enhancements, are incorporated into the framework. This enables the proposed architecture to accommodate models of realistic complexity.

  3. Simple and efficient way of speeding up transmission calculations with k-point sampling

    DEFF Research Database (Denmark)

    Falkenberg, Jesper Toft; Brandbyge, Mads

    2015-01-01

    The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over k-points to get smooth well-converged average transmission...

  4. Portable Dew Point Mass Spectrometry System for Real-Time Gas and Moisture Analysis

    Science.gov (United States)

    Arkin, C.; Gillespie, Stacey; Ratzel, Christopher

    2010-01-01

    A portable instrument incorporates both mass spectrometry and dew point measurement to provide real-time, quantitative gas measurements of helium, nitrogen, oxygen, argon, and carbon dioxide, along with real-time, quantitative moisture analysis. The Portable Dew Point Mass Spectrometry (PDP-MS) system comprises a single quadrupole mass spectrometer and a high vacuum system consisting of a turbopump and a diaphragm-backing pump. A capacitive membrane dew point sensor was placed upstream of the MS, but still within the pressure-flow control pneumatic region. Pressure-flow control was achieved with an upstream precision metering valve, a capacitance diaphragm gauge, and a downstream mass flow controller. User configurable LabVIEW software was developed to provide real-time concentration data for the MS, dew point monitor, and sample delivery system pressure control, pressure and flow monitoring, and recording. The system has been designed to include in situ, NIST-traceable calibration. Certain sample tubing retains sufficient water that even if the sample is dry, the sample tube will desorb water to an amount resulting in moisture concentration errors up to 500 ppm for as long as 10 minutes. It was determined that Bev-A-Line IV was the best sample line to use. As a result of this issue, it is prudent to add a high-level humidity sensor to PDP-MS so such events can be prevented in the future.

  5. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45 min. Information related to the farm, the sows, and their living conditions was collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2 min 42 s on average; the type of floor, swab size, and operator were associated with a sampling time >2 min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10 min. After 15, 30 and 45 min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time elapsed after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood-sample the sows was 1 min 16 s, or 2 min 52 s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time longer than 1 min 30 s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Determination of the Number of Fixture Locating Points for Sheet Metal By Grey Model

    Directory of Open Access Journals (Sweden)

    Yang Bo

    2017-01-01

    Full Text Available In the process of traditional fixture design for sheet metal parts based on the "N-2-1" locating principle, the number of fixture locating points is determined by trial and error or by the experience of the designer. To address this, a new design method based on grey theory is proposed in this paper to determine the number of sheet metal fixture locating points. Firstly, the training sample set is generated by Latin hypercube sampling (LHS) and finite element analysis (FEA). Secondly, the GM(1,1) grey model is constructed from the established training sample set to approximate the mapping relationship between the number of fixture locating points and the maximum deformation of the sheet metal concerned. Thirdly, the final number of fixture locating points for the sheet metal can be inversely calculated under the allowable maximum deformation. Finally, a sheet metal case study is conducted and the results indicate that the proposed approach is effective and efficient in determining the number of fixture locating points for sheet metal.
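
    For reference, a compact Python sketch of a standard GM(1,1) fit (the generic grey-model equations with made-up training data; the paper's LHS/FEA sample generation and inverse calculation are not reproduced): the raw series is accumulated, the whitening-equation parameters (a, b) are estimated by least squares, and predictions follow from the accumulated-series solution.

      import numpy as np

      def gm11_fit_predict(x0, n_ahead=1):
          """Fit a GM(1,1) grey model to series x0 and predict n_ahead further values."""
          x0 = np.asarray(x0, float)
          x1 = np.cumsum(x0)                                   # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # x0(k) = -a z1(k) + b
          k = np.arange(len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # solution of the whitening equation
          x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
          return x0_hat[len(x0):]

      # Made-up example: maximum deformation (mm) observed for 4, 5, 6 and 7 locating points
      deformation = [0.82, 0.55, 0.41, 0.33]
      print(gm11_fit_predict(deformation, n_ahead=1))          # predicted value for 8 points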

  7. Visibility of noisy point cloud data

    KAUST Repository

    Mehra, Ravish

    2010-06-01

    We present a robust algorithm for estimating visibility from a given viewpoint for a point set containing concavities, non-uniformly spaced samples, and possibly corrupted with noise. Instead of performing an explicit surface reconstruction for the point set, visibility is computed based on a construction involving a convex hull in a dual space, an idea inspired by the work of Katz et al. [26]. We derive theoretical bounds on the behavior of the method in the presence of noise and concavities, and use the derivations to develop a robust visibility estimation algorithm. In addition, computing visibility from a set of adaptively placed viewpoints allows us to generate locally consistent partial reconstructions. Using a graph based approximation algorithm we couple such reconstructions to extract globally consistent reconstructions. We test our method on a variety of 2D and 3D point sets of varying complexity and noise content. © 2010 Elsevier Ltd. All rights reserved.
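
    The construction this work builds on, visibility via a convex hull in a dual space (the hidden-point-removal operator of Katz et al.), can be sketched in Python as follows. This is a bare-bones version without the noise handling that is the paper's contribution, and the radius factor is an assumption.

      import numpy as np
      from scipy.spatial import ConvexHull

      def visible_points(points, viewpoint, radius_factor=100.0):
          """Indices of points estimated visible from viewpoint via spherical flipping + convex hull."""
          p = np.asarray(points, float) - np.asarray(viewpoint, float)
          norms = np.linalg.norm(p, axis=1, keepdims=True)
          R = radius_factor * norms.max()
          flipped = p + 2.0 * (R - norms) * (p / norms)        # spherical flipping about radius R
          hull = ConvexHull(np.vstack([flipped, np.zeros(p.shape[1])]))
          return sorted(i for i in hull.vertices if i < len(p))  # drop the added viewpoint vertex

      # Points on a circle: roughly the half facing the viewpoint should be returned
      theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
      circle = np.c_[np.cos(theta), np.sin(theta)]
      print(len(visible_points(circle, viewpoint=[3.0, 0.0])))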

  8. A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)

    Science.gov (United States)

    Rochblatt, David J.

    2009-01-01

    The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).

  9. A handheld point-of-care genomic diagnostic system.

    Directory of Open Access Journals (Sweden)

    Frank B Myers

    Full Text Available The rapid detection and identification of infectious disease pathogens is a critical need for healthcare in both developed and developing countries. As we gain more insight into the genomic basis of pathogen infectivity and drug resistance, point-of-care nucleic acid testing will likely become an important tool for global health. In this paper, we present an inexpensive, handheld, battery-powered instrument designed to enable pathogen genotyping in the developing world. Our Microfluidic Biomolecular Amplification Reader (µBAR) represents the convergence of molecular biology, microfluidics, optics, and electronics technology. The µBAR is capable of carrying out isothermal nucleic acid amplification assays with real-time fluorescence readout at a fraction of the cost of conventional benchtop thermocyclers. Additionally, the µBAR features cell phone data connectivity and GPS sample geotagging, which can enable epidemiological surveying and remote healthcare delivery. The µBAR controls assay temperature through an integrated resistive heater and monitors real-time fluorescence signals from 60 individual reaction chambers using LEDs and phototransistors. Assays are carried out on PDMS disposable microfluidic cartridges which require no external power for sample loading. We characterize the fluorescence detection limits, heater uniformity, and battery life of the instrument. As a proof-of-principle, we demonstrate the detection of the HIV-1 integrase gene with the µBAR using the Loop-Mediated Isothermal Amplification (LAMP) assay. Although we focus on the detection of purified DNA here, LAMP has previously been demonstrated with a range of clinical samples, and our eventual goal is to develop a microfluidic device which includes on-chip sample preparation from raw samples. The µBAR is based entirely around open source hardware and software, and in the accompanying online supplement we present a full set of schematics, bill of materials, PCB layouts

  10. Applicability of cloud point extraction for the separation trace amount of lead ion in environmental and biological samples prior to determination by flame atomic absorption spectrometry

    Directory of Open Access Journals (Sweden)

    Sayed Zia Mohammadi

    2016-09-01

    Full Text Available A sensitive cloud point extraction procedure (CPE) for the preconcentration of trace lead prior to its determination by flame atomic absorption spectrometry (FAAS) has been developed. The CPE method is based on the complex of the Pb(II) ion with 1-(2-pyridylazo)-2-naphthol (PAN), which is then entrapped in the non-ionic surfactant Triton X-114. The main factors affecting CPE efficiency, such as pH of the sample solution, concentration of PAN and Triton X-114, and equilibration temperature and time, were investigated in detail. A preconcentration factor of 30 was obtained for the preconcentration of the Pb(II) ion from a 15.0 mL solution. Under the optimal conditions, the calibration curve was linear in the range of 7.5 ng mL−1–3.5 μg mL−1 of lead with R2 = 0.9998 (n = 10). The detection limit, based on three times the standard deviation of the blank (3Sb), was 5.27 ng mL−1. Eight replicate determinations of 1.0 μg mL−1 lead gave a mean absorbance of 0.275 with a relative standard deviation of 1.6%. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method has been applied to the determination of trace amounts of lead in biological and water samples with satisfactory results.

  11. Development of SYVAC sampling techniques

    International Nuclear Information System (INIS)

    Prust, J.O.; Dalrymple, G.J.

    1985-04-01

    This report describes the requirements of a sampling scheme for use with the SYVAC radiological assessment model. The constraints on the number of samples that may be taken are considered. The conclusions from earlier studies using the deterministic generator sampling scheme are summarised. The method of Importance Sampling and a High Dose algorithm, which are designed to preferentially sample the high dose region of the parameter space, are reviewed in the light of experience gained from earlier studies and the requirements of a site assessment and sensitivity analyses. In addition, the use of an alternative numerical integration method for estimating risk is discussed. It is recommended that the method of Importance Sampling be developed and tested for use with SYVAC. An alternative numerical integration method is not recommended for investigation at this stage but should be the subject of future work. (author)
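
    To make the idea of Importance Sampling concrete, here is a generic textbook sketch in Python (a toy dose model, not the SYVAC implementation): instead of sampling parameters from their nominal distribution, samples are drawn from a biased distribution that favours the high-dose region, and each result is re-weighted by the likelihood ratio so the estimate remains unbiased.

      import numpy as np

      rng = np.random.default_rng(1)

      def dose(x):
          """Toy dose model: significant only in the rare upper tail of the parameter."""
          return np.where(x > 3.0, x - 3.0, 0.0)

      n = 10_000
      # Plain Monte Carlo: x ~ N(0, 1); the tail x > 3 is rarely hit
      x_mc = rng.normal(0.0, 1.0, n)
      est_mc = dose(x_mc).mean()

      # Importance sampling: draw from N(3, 1), re-weight by the density ratio
      x_is = rng.normal(3.0, 1.0, n)
      w = np.exp(-0.5 * x_is**2) / np.exp(-0.5 * (x_is - 3.0)**2)   # phi(x)/q(x), constants cancel
      est_is = (w * dose(x_is)).mean()

      print(est_mc, est_is)   # both estimate E[dose], but the IS estimate has far lower variance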

  12. Real World SharePoint 2010 Indispensable Experiences from 22 MVPs

    CERN Document Server

    Hillier, Scot; Bishop, Darrin; Bleeker, Todd; Bogue, Robert; Bosch, Karine; Brotto, Claudio; Buenz, Adam; Connell, Andrew; Drisgill, Randy; Lapointe, Gary; Medero, Jason; Molnar, Agnes; O'Brien, Chris; Klindt, Todd; Poelmans, Joris; Rehmani, Asif; Ross, John; Swan, Nick; Walsh, Mike; Williams, Randy; Young, Shane; Macori, Igor

    2010-01-01

    Proven real-world best practices from leading Microsoft SharePoint MVPsSharePoint enables Web sites to host shared workspaces and is a leading solution for Enterprise Content Management. The newest version boasts significant changes, impressive enhancements, and new features, requiring developers and administrators of all levels of experience to quickly get up to speed on the latest changes. This book is a must-have anthology of current best practices for SharePoint 2010 from 20 of the top SharePoint MVPs. They offer insider advice on everything from installation, workflow, and Web parts to bu

  13. Isotopic analysis of bullet lead samples

    International Nuclear Information System (INIS)

    Sankar Das, M.; Venkatasubramanian, V.S.; Sreenivas, K.

    1976-01-01

    The possibility of using the isotopic composition of lead for the identification of bullet lead is investigated. Lead from several spent bullets was converted to lead sulphide and analysed for isotopic abundances using an MS-7 mass spectrometer. The variation in the abundance of Pb-204 was too small to permit differentiation, while the range of variation of Pb-206 and Pb-207, together with the better precision of their analyses, permitted differentiating samples from one another. The correlation among the samples examined is pointed out. The method is complementary to characterisation of bullet leads by trace element composition. The possibility of using isotopically enriched lead for tagging bullet lead is pointed out. (author)

  14. Equipment-free nucleic acid extraction and amplification on a simple paper disc for point-of-care diagnosis of rotavirus A.

    Science.gov (United States)

    Ye, Xin; Xu, Jin; Lu, Lijuan; Li, Xinxin; Fang, Xueen; Kong, Jilie

    2018-08-14

    The use of paper-based methods for clinical diagnostics is a rapidly expanding research topic attracting a great deal of interest. Some groups have attempted to realize an integrated nucleic acid test on a single microfluidic paper chip, including extraction, amplification, and readout functions. However, these studies were not able to overcome complex modification and fabrication requirements, long turn-around times, or the need for sophisticated equipment like pumps, thermal cyclers, or centrifuges. Here, we report an extremely simple paper-based test for the point-of-care diagnosis of rotavirus A, one of the most common pathogens that causes pediatric gastroenteritis. This paper-based test could perform nucleic acid extraction within 5 min, took 25 min to amplify the target sequence, and the result was visible to the naked eye immediately afterward or could be quantified by UV-Vis absorbance. This low-cost method does not require extra equipment and is easy to use either in a lab or at the point-of-care. The detection limit for rotavirus A was found to be 1 × 10^3 copies/mL. In addition, 100% sensitivity and specificity were achieved when testing 48 clinical stool samples. In conclusion, the present paper-based test fulfills the main requirements for a point-of-care diagnostic tool, and has the potential to be applied to disease prevention, control, and precision diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Developing control points for halal slaughtering of poultry.

    Science.gov (United States)

    Shahdan, I A; Regenstein, J M; Shahabuddin, A S M; Rahman, M T

    2016-07-01

    Halal (permissible or lawful) poultry meat production must meet industry, economic, and production needs, and government health requirements without compromising the Islamic religious requirements derived from the Qur'an and the Hadiths (the actions and sayings of the Prophet Muhammad, peace and blessings be upon him). Halal certification authorities may vary in their interpretation of these teachings, which leads to differences in halal slaughter requirements. The current study proposes 6 control points (CP) for halal poultry meat production based on the most commonly used halal production systems. CP 1 describes what is allowed and prohibited, such as blood and animal manure, and feed ingredients for halal poultry meat production. CP 2 describes the requirements for humane handling during lairage. CP 3 describes different methods for immobilizing poultry, when immobilization is used, such as water bath stunning. CP 4 describes the importance of intention, details of the halal slaughter, and the equipment permitted. CP 5 and CP 6 describe the requirements after the neck cut has been made such as the time needed before the carcasses can enter the scalding tank, and the potential for meat adulteration with fecal residues and blood. It is important to note that the proposed halal CP program is presented as a starting point for any individual halal certifying body to improve its practices. © 2016 Poultry Science Association Inc.

  16. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Chen, Li; Zhang, Xiaopeng; Deussen, Oliver; Wonka, Peter

    2016-01-01

    -distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform
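
    For intuition, a bare-bones Poisson-disk (dart-throwing) sampler in a unit square, written in Python. This is a simple rejection version, not the maximal sampling or the boundary/interior pipeline of the paper.

      import numpy as np

      def poisson_disk_darts(r, n_attempts=5_000, seed=0):
          """Dart throwing: accept a random point only if no accepted point lies within radius r."""
          rng = np.random.default_rng(seed)
          accepted = []
          for _ in range(n_attempts):
              p = rng.random(2)
              if all(np.hypot(*(p - q)) >= r for q in accepted):
                  accepted.append(p)
          return np.array(accepted)

      samples = poisson_disk_darts(r=0.05)
      print(len(samples), "points with pairwise spacing >= 0.05")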

  17. On the influence of sample and target properties on the results of energy-dependent cross section measurements

    International Nuclear Information System (INIS)

    Winkler, G.

    1988-01-01

    The impact of sample and target properties on the accuracy of experimental nuclear cross section data is discussed in the context of the basic requirements in order to obtain reliable results from the respective measurements from the user's point of view. Special emphasis is put on activation measurements with fast neutrons. Some examples are given and suggestions are made based on experiences and recent investigations by the author and his coworkers. (author). Abstract only

  18. Retrospective research: What are the ethical and legal requirements?

    Science.gov (United States)

    Junod, V; Elger, B

    2010-07-25

    Retrospective research is conducted on already available data and/or biologic material. Whether such research requires that patients specifically consent to the use of "their" data continues to stir controversy. From a legal and ethical point of view, it depends on several factors. The main criteria to be considered are whether the data or the sample is anonymous, whether the researcher is the one who collected it and whether the patient was told of the possible research use. In Switzerland, several laws delineate the procedure to be followed. The definition of "anonymous" is open to some interpretation. In addition, it is debatable whether consent waivers that are legally admissible for data extend to research involving human biological samples. In a few years, a new Swiss federal law on human research could clarify the regulatory landscape. Meanwhile, hospital-internal guidelines may impose stricter conditions than required by federal or cantonal law. Conversely, Swiss and European ethical texts may suggest greater flexibility and call for a looser interpretation of existing laws. The present article provides an overview of the issues for physicians, scientists, ethics committee members and policy makers involved in retrospective research in Switzerland. It aims at provoking more open discussions of the regulatory problems and possible future legal and ethical solutions.

  19. Accounting for professionalism: an innovative point system to assess resident professionalism

    Directory of Open Access Journals (Sweden)

    Gary L. Malakoff

    2014-04-01

    Full Text Available Background: Professionalism is a core competency for residency required by the Accreditation Council of Graduate Medical Education. We sought a means to objectively assess professionalism among internal medicine and transitional year residents. Innovation: We established a point system to document unprofessional behaviors demonstrated by internal medicine and transitional year residents along with opportunities to redeem such negative points by deliberate positive professional acts. The intent of the policy is to assist residents in becoming aware of what constitutes unprofessional behavior and to provide opportunities for remediation by accruing positive points. A committee of core faculty and department leadership including the program director and clinic nurse manager determines professionalism points assigned. Negative points might be awarded for tardiness to mandatory or volunteered for events without a valid excuse, late evaluations or other paperwork required by the department, non-attendance at meetings prepaid by the department, and inappropriate use of personal days or leave. Examples of actions through which positive points can be gained to erase negative points include delivery of a mentored pre-conference talk, noon conference, medical student case/shelf review session, or a written reflection. Results: Between 2009 and 2012, 83 residents have trained in our program. Seventeen categorical internal medicine and two transitional year residents have been assigned points. A total of 55 negative points have been assigned and 19 points have been remediated. There appears to be a trend of fewer negative points and more positive points being assigned over each of the past three academic years. Conclusion: Commitment to personal professional behavior is a lifelong process that residents must commit to during their training. A professionalism policy, which employs a point system, has been instituted in our programs and may be a novel tool to

  20. Pointing control using a moving base of support.

    Science.gov (United States)

    Hondzinski, Jan M; Kwon, Taegyong

    2009-07-01

    The purposes of this study were to determine whether gaze direction provides a control signal for movement direction for a pointing task requiring a step and to gain insight into why discrepancies previously identified in the literature for endpoint accuracy with gaze directed eccentrically exist. Straight arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

  1. End points and assessments in esthetic dental treatment.

    Science.gov (United States)

    Ishida, Yuichi; Fujimoto, Keiko; Higaki, Nobuaki; Goto, Takaharu; Ichikawa, Tetsuo

    2015-10-01

    There are two key considerations for successful esthetic dental treatments. This article systematically describes the two key considerations: the end points of esthetic dental treatments and assessments of esthetic outcomes, which are also important for acquiring clinical skill in esthetic dental treatments. The end point and assessment of esthetic dental treatment were discussed through literature reviews and clinical practices. Before designing a treatment plan, the end point of dental treatment should be established. The section entitled "End point of esthetic dental treatment" discusses treatments for maxillary anterior teeth and the restoration of facial profile with prostheses. The process of assessing treatment outcomes entitled "Assessments of esthetic dental treatment" discusses objective and subjective evaluation methods. Practitioners should reach an agreement regarding desired end points with patients through medical interviews, and continuing improvements and developments of esthetic assessments are required to raise the therapeutic level of esthetic dental treatments. Copyright © 2015 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  2. On the fairness of the main galaxy sample of SDSS

    International Nuclear Information System (INIS)

    Meng Kelai; Pan Jun; Feng Longlong; Ma Bin

    2011-01-01

    Flux-limited and volume-limited galaxy samples are constructed from the Sloan Digital Sky Survey (SDSS) data releases DR4, DR6 and DR7 for statistical analysis. The two-point correlation function ξ(s), the monopole of the three-point correlation function ζ_0, the projected two-point correlation function w_p and the pairwise velocity dispersion σ_12 are measured to test whether the galaxy samples are fair for these statistics. We find that with the increasing sky coverage of subsequent data releases in SDSS, ξ(s) of the flux-limited sample is extremely robust and insensitive to local structures at low redshift. However, for volume-limited samples fainter than L* at large scales s ≳ 10 h^-1 Mpc, the deviation of ξ(s) between different SDSS data releases (DR7, DR6 and DR4) increases with increasing absolute magnitude. The case of ζ_0(s) is similar to that of ξ(s). In the weakly nonlinear regime, there is no agreement between ζ_0 of different data releases in any of the luminosity bins. Furthermore, w_p of volume-limited samples of DR7 in luminosity bins fainter than -M_r,0.1 = [18.5, 19.5] is significantly larger, and σ_12 of the two faintest volume-limited samples of DR7 displays a very different scale dependence than results from DR4 and DR6. Our findings call for caution in interpreting clustering analysis results of SDSS faint galaxy samples and higher-order statistics of SDSS volume-limited samples in the weakly nonlinear regime. The first zero-crossing points of ξ(s) from volume-limited samples are also investigated and discussed. (research papers)
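
    As a reminder of what ξ(s) measures, a minimal pair-counting estimator in Python (the simple natural estimator DD/RR - 1 on toy data; the paper's measurements use survey geometry and more robust estimators):

      import numpy as np
      from scipy.spatial.distance import pdist

      def xi_natural(data, randoms, bins):
          """Two-point correlation function via the natural estimator DD/RR - 1."""
          dd, _ = np.histogram(pdist(data), bins=bins)
          rr, _ = np.histogram(pdist(randoms), bins=bins)
          # Normalise by the number of pairs in each catalogue
          dd = dd / (len(data) * (len(data) - 1) / 2)
          rr = rr / (len(randoms) * (len(randoms) - 1) / 2)
          return dd / rr - 1.0

      rng = np.random.default_rng(2)
      data = rng.random((500, 3)) * 100.0        # toy "galaxies" in a 100 h^-1 Mpc box
      randoms = rng.random((2000, 3)) * 100.0    # unclustered random catalogue
      bins = np.linspace(1.0, 30.0, 11)
      print(xi_natural(data, randoms, bins))     # ~0 everywhere for an unclustered toy sample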

  3. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium(II), lead(II), palladium(II) and silver(I) in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Ghaedi, Mehrorang, E-mail: m_ghaedi@mail.yu.ac.ir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Shokrollahi, Ardeshir [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Niknam, Khodabakhsh [Chemistry Department, Persian Gulf University, Bushehr (Iran, Islamic Republic of); Niknam, Ebrahim; Najibi, Asma [Chemistry Department, Yasouj University, Yasouj 75914-353 (Iran, Islamic Republic of); Soylak, Mustafa [Chemistry Department, University of Erciyes, 38039 Kayseri (Turkey)

    2009-09-15

    The phase-separation phenomenon of non-ionic surfactants occurring in aqueous solution was used for the extraction of cadmium(II), lead(II), palladium(II) and silver(I). The analytical procedure involved the formation of complexes of the metals under study with bis((1H-benzo[d]imidazol-2-yl)ethyl) sulfane (BIES), which were quantitatively extracted into the phase rich in octylphenoxypolyethoxyethanol (Triton X-114) after centrifugation. Methanol acidified with 1 mol L−1 HNO3 was added to the surfactant-rich phase prior to its analysis by flame atomic absorption spectrometry (FAAS). The concentration of BIES, pH and amount of surfactant (Triton X-114) were optimized. At optimum conditions, detection limits (3Sb/m) of 1.4, 2.8, 1.6 and 1.4 ng mL−1 for Cd2+, Pb2+, Pd2+ and Ag+ were obtained, along with preconcentration factors of 30 and enrichment factors of 48, 39, 32 and 42 for Cd2+, Pb2+, Pd2+ and Ag+, respectively. The proposed cloud point extraction has been successfully applied for the determination of metal ions in real samples with complicated matrices such as radiology waste, vegetable, blood and urine samples.

  4. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations that typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore if biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview.Using data from the total population, and the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon, and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods, and also of presentation for interview if offered a coupon by age and socioeconomic status group.Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher socioeconomic status men was due in part to them being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared-errors by 19-29%, but had little effect for sexual activity or HIV status.Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.

  5. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention is paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... to determine how many languages from each phylum should be selected, given any required sample size....

  6. Development of a High Temperature Antenna Pointing Mechanism for BepiColombo Planetary Orbiter

    Science.gov (United States)

    Campo, Pablo; Barrio, Aingeru; Puente, Nicolas; Kyle, Robert

    2013-09-01

    BepiColombo is an ESA mission to Mercury. Its planetary orbiter (MPO) has two antenna pointing mechanisms: the high gain antenna (HGA) pointing mechanism steers and points a large reflector, which is integrated at system level by TAS-I Rome, and the medium gain antenna (MGA) APM points a 1.5 m boom with a horn antenna. Both radiating elements are exposed to sun fluxes as high as 10 solar constants without protections. The pointing mechanisms are a major challenge, as high performance is required in a harsh environment. This has required the development of new technologies and components specially dedicated to the mission needs. Some of the state of the art required for the mission was achieved during the preparatory technology development activities [1]. However, the number of critical elements involved and the difficulties of some areas have required the continuation of the developments, and new research activities had to be launched in phase C/D. Some of the major concerns and related areas of development are: high temperature and long-life requirements for the gearhead motors (up to 15500 equivalent APM revolutions, 19 million motor revolutions); low thermal distortion of the mechanical chain, which must at the same time insulate from the external environment and interfaces (55 arcsec pointing error); low heat leak to the spacecraft (on the order of 50 W per APM); high-precision position control, low microvibration noise and error stability in motion (16 arcsec/s); high radio-frequency power (18 W in Ka band, 30 W in X band) with phase stability for use in radio science (3 mm in Ka band, 5 deg in X band); and a wide range of motion (full 360 deg with end-stops). Currently the HGA APM EQM azimuth and elevation stages are assembled and ready for test at actuator level.

  7. Sampling and analyses report for June 1992 semiannual postburn sampling at the RM1 UCG site, Hanna, Wyoming

    International Nuclear Information System (INIS)

    Lindblom, S.R.

    1992-08-01

    The Rocky Mountain 1 (RM1) underground coal gasification (UCG) test was conducted from November 16, 1987 through February 26, 1988 (United Engineers and Constructors 1989) at a site approximately one mile south of Hanna, Wyoming. The test consisted of dual-module operation to evaluate the controlled retracting injection point (CRIP) technology, the elongated linked well (ELW) technology, and the interaction of closely spaced modules operating simultaneously. The test caused two cavities to be formed in the Hanna No. 1 coal seam and associated overburden. The Hanna No. 1 coal seam is approximately 30 ft thick and lies at depths between 350 ft and 365 ft below the surface in the test area. The coal seam is overlain by sandstones, siltstones and claystones deposited by various fluvial environments. The groundwater monitoring was designed to satisfy the requirements of the Wyoming Department of Environmental Quality (WDEQ) in addition to providing research data toward the development of UCG technology that minimizes environmental impacts. The June 1992 semiannual groundwater sampling took place from June 10 through June 13, 1992. This event occurred nearly 34 months after the second groundwater restoration at the RM1 site and was the fifteenth sampling event since UCG operations ceased. Samples were collected for analyses of a limited suite of parameters as listed in Table 1. With a few exceptions, the groundwater is near baseline conditions. Data from the field measurements and analysis of samples are presented. Benzene concentrations in the groundwater were below analytical detection limits.

  8. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    Science.gov (United States)

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.

  9. Soil Sampling to Demonstrate Compliance with Department of Energy Radiological Clearance Requirements for the ALE Unit of the Hanford Reach National Monument

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G.; Dirkes, Roger L.; Napier, Bruce A.

    2007-04-01

    The Hanford Reach National Monument consists of several units, one of which is the Fitzner/Eberhardt Arid Lands Ecology Reserve (ALE) Unit. This unit is approximately 311 km2 of shrub-steppe habitat located to the south and west of Highway 240. To fulfill internal U. S. Department of Energy (DOE) requirements prior to any radiological clearance of land, DOE must evaluate the potential for residual radioactive contamination on this land and determine compliance with the requirements of DOE Order 5400.5. Historical soil monitoring conducted on ALE indicated soil concentrations of radionuclides were well below the Authorized Limits. However, the historical sampling was done at a limited number of sampling locations. Therefore, additional soil sampling was conducted to determine if the concentrations of radionuclides in soil on the ALE Unit were below the Authorized Limits. This report contains the results of 50 additional soil samples. The 50 soil samples collected from the ALE Unit all had concentrations of radionuclides far below the Authorized Limits. The average concentrations for all detectable radionuclides were less than the estimated Hanford Site background. Furthermore, the maximum observed soil concentrations for the radionuclides included in the Authorized Limits would result in a potential annual dose of 0.14 mrem assuming the most probable use scenario, a recreational visitor. This potential dose is well below the DOE 100-mrem per year dose limit for a member of the public. Spatial analysis of the results indicated no observable statistically significant differences between radionuclide concentrations across the ALE Unit. Furthermore, the results of the biota dose assessment screen, which used the ResRad Biota code, indicated that the concentrations of radionuclides in ALE Unit soil pose no significant health risk to biota.

  10. Adding-point strategy for reduced-order hypersonic aerothermodynamics modeling based on fuzzy clustering

    Science.gov (United States)

    Chen, Xin; Liu, Li; Zhou, Sida; Yue, Zhenjiang

    2016-09-01

    Reduced-order models (ROMs) based on snapshots from high-fidelity CFD simulations have received great attention recently due to their capability of capturing the features of complex geometries and flow configurations. To improve the efficiency and precision of ROMs, it is indispensable to add extra sampling points to the initial snapshots, since the number of sampling points needed to achieve an adequately accurate ROM is generally unknown a priori, while a large number of initial sampling points reduces the parsimony of the ROMs. A fuzzy-clustering-based adding-point strategy is proposed, in which the fuzzy clustering acts as an indicator of the regions where the precision of the ROM is relatively low. The proposed method is applied to construct ROMs for benchmark mathematical examples and a numerical example of hypersonic aerothermodynamics prediction for a typical control surface. The proposed method achieves a 34.5% improvement in efficiency over the estimated-mean-squared-error prediction algorithm while showing the same level of prediction accuracy.
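
    A rough Python sketch of how fuzzy clustering can flag where to add snapshots (generic fuzzy c-means applied to ROM errors at test points; this shows only the clustering-as-indicator idea, not the paper's aerothermodynamic workflow, and the error field is invented):

      import numpy as np

      def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
          """Plain fuzzy c-means; returns cluster centres and membership matrix U (n x c)."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), n_clusters))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(n_iter):
              W = U ** m
              centres = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
              U = 1.0 / (d ** (2 / (m - 1)))
              U /= U.sum(axis=1, keepdims=True)
          return centres, U

      # Test points in a 2-D parameter space with an (assumed) ROM error at each point
      params = np.random.default_rng(1).random((200, 2))
      errors = np.exp(-10 * np.sum((params - [0.8, 0.2]) ** 2, axis=1))   # error hot-spot

      centres, U = fuzzy_c_means(params, n_clusters=4)
      labels = U.argmax(axis=1)
      worst = max(range(4),
                  key=lambda c: errors[labels == c].mean() if np.any(labels == c) else -1.0)
      print("add new sampling points near cluster centre:", centres[worst])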

  11. Single point estimation of phenytoin dosing: a reappraisal.

    Science.gov (United States)

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to predict dosage requirements consistently. Additional simulations indicate that two samples obtained during the 24-hour interval following the intravenous loading dose could be used to predict the phenytoin dose requirement more reliably. Because of the nonlinear relationship between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in the predicted required RO result in much larger errors in CSS.
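
    The error amplification described above is easy to reproduce numerically. The sketch below assumes the usual Michaelis-Menten steady-state relation CSS = Km·RO/(Vmax − RO); the Km and Vmax values are illustrative placeholders, not patient data or values from the paper.

      # CSS = Km * RO / (Vmax - RO): small errors in RO become large errors in CSS.
      Km = 4.0      # mg/L (assumed)
      Vmax = 500.0  # mg/day (assumed)

      def css(ro):
          return Km * ro / (Vmax - ro)

      ro_target = 400.0                # mg/day, gives CSS = 16 mg/L
      for err in (-0.05, 0.0, 0.05):   # +/- 5% error in the predicted dose rate
          ro = ro_target * (1 + err)
          print(f"RO error {err:+.0%} -> CSS = {css(ro):5.1f} mg/L")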

  12. Sterile paper points as a bacterial DNA-contamination source in microbiome profiles of clinical samples

    NARCIS (Netherlands)

    van der Horst, J.; Buijs, M.J.; Laine, M.L.; Wismeijer, D.; Loos, B.G.; Crielaard, W.; Zaura, E.

    2013-01-01

    Objectives High throughput sequencing of bacterial DNA from clinical samples provides untargeted, open-ended information on the entire microbial community. The downside of this approach is the vulnerability to DNA contamination from other sources than the clinical sample. Here we describe

  13. Coding and decoding in a point-to-point communication using the polarization of the light beam.

    Science.gov (United States)

    Kavehvash, Z; Massoumian, F

    2008-05-10

    A new technique for coding and decoding of optical signals through the use of polarization is described. In this technique the concept of coding is translated to polarization. In other words, coding is done in such a way that each code represents a unique polarization. This is done by implementing a binary pattern on a spatial light modulator in such a way that the reflected light has the required polarization. Decoding is done by the detection of the received beam's polarization. By linking the concept of coding to polarization we can use each of these concepts in measuring the other one, attaining some gains. In this paper the construction of a simple point-to-point communication where coding and decoding is done through polarization will be discussed.

  14. Use of digital image analysis to estimate fluid permeability of porous materials: Application of two-point correlation functions

    International Nuclear Information System (INIS)

    Berryman, J.G.; Blair, S.C.

    1986-01-01

    Scanning electron microscope images of cross sections of several porous specimens have been digitized and analyzed using image processing techniques. The porosity and specific surface area may be estimated directly from measured two-point spatial correlation functions. The measured values of porosity and image specific surface were combined with known values of electrical formation factors to estimate fluid permeability using one version of the Kozeny-Carman empirical relation. For glass bead samples with measured permeability values in the range of a few darcies, our estimates agree well (±10-20%) with the measurements. For samples of Ironton-Galesville sandstone with a permeability in the range of hundreds of millidarcies, our best results agree with the laboratory measurements again within about 20%. For Berea sandstone with still lower permeability (tens of millidarcies), our predictions from the images agree within 10-30%. Best results for the sandstones were obtained by using the porosities obtained at magnifications of about 100× (since less resolution and better statistics are required) and the image specific surface obtained at magnifications of about 500× (since greater resolution is required).
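
    The chain from a digitized image to a permeability estimate can be sketched as follows. The binary image, pixel size and Kozeny constant are invented for illustration, and the textbook Kozeny-Carman form used here is not necessarily the exact variant (with electrical formation factors) used by the authors.

      import numpy as np

      rng = np.random.default_rng(0)
      img = (rng.random((256, 256)) < 0.35).astype(float)   # pore = 1, solid = 0

      def two_point(img, max_lag=20):
          """S2(r) along one axis: probability that two points r pixels apart are both pore."""
          return np.array([(img * np.roll(img, r, axis=1)).mean() for r in range(max_lag)])

      s2 = two_point(img)
      phi = s2[0]                        # porosity = S2(0)
      pixel = 1e-6                       # assumed pixel size, metres
      slope = (s2[1] - s2[0]) / pixel    # finite-difference estimate of S2'(0)
      s = -4.0 * slope                   # specific surface, using S2'(0) = -s/4
      c = 5.0                            # Kozeny constant (assumed)
      k = phi ** 3 / (c * s ** 2 * (1.0 - phi) ** 2)
      print(f"porosity {phi:.3f}, specific surface {s:.3e} 1/m, k ~ {k:.3e} m^2")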

  15. Asymptotic stability estimates near an equilibrium point

    Science.gov (United States)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  16. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    Science.gov (United States)

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  17. Candidate sample acquisition systems for the Rosetta

    International Nuclear Information System (INIS)

    Magnani, P.G.; Gerli, C.; Colombina, G.; Vielmo, P.

    1989-01-01

    The Comet Nucleus Sample Return (CNSR) mission, one of the four cornerstones of the ESA scientific program, is one of the most complex space ventures of the next century, from both a technological and a deep-space-exploration point of view. In the Rosetta scenario, the sample acquisition phase represents the most critical point for the overall success of the mission. This paper illustrates the main results obtained in the context of the ongoing CNSR-SAS activity. The main areas covered are: (1) sample property characterization (comet soil model, physical/chemical properties, reference material for testing); (2) concept identification for coring, shovelling, harpooning and anchoring; (3) the preferred concept (trade-off among concepts, identification of the preferred configuration); and (4) the proposed development activity for gaining the necessary confidence before finalizing the CNSR mission. Particular emphasis is given to the robotic and flexibility aspects of the identified sample acquisition system (SAS) configuration, intended as a means of enhancing the overall system performance

  18. The chaotic points and XRD analysis of Hg-based superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Aslan, Oe [Anatuerkler Educational Consultancy and Trading Company, Orhan Veli Kanik Cad., 6/1, Kavacik 34810 Beykoz, Istanbul (Turkey); Oezdemir, Z Gueven [Physics Department, Yildiz Technical University, Davutpasa Campus, Esenler 34210, Istanbul (Turkey); Keskin, S S [Department of Environmental Eng., University of Marmara, Ziverbey, 34722, Istanbul (Turkey); Onbasli, Ue, E-mail: ozdenaslan@yahoo.co [Physics Department, University of Marmara, Ridvan Pasa Cad. 3. Sok. 85/12 Goztepe, Istanbul (Turkey)

    2009-03-01

    In this article, high-T_c mercury-based cuprate superconductors with different oxygen doping rates have been examined by means of magnetic susceptibility (magnetization) versus temperature data and X-ray diffraction pattern analysis. The under-, optimally and over-oxygen-doped states have been identified from the magnetic susceptibility versus temperature data of the superconducting samples by extracting the Meissner critical transition temperature, T_c, and the paramagnetic Meissner temperature, T_PME, the so-called critical quantum chaos points. Moreover, the optimally oxygen-doped samples have been investigated under both a.c. and d.c. magnetic fields. The a.c. data for virgin (uncut) and cut samples with optimal doping were obtained under an a.c. magnetic field of 1 Gauss. For the cut sample of rectangular shape, the chaotic points were found to occur at 122 and 140 K, respectively. The Meissner critical temperature of 140 K is a new world record for high-temperature oxide superconductors at normal atmospheric pressure. Moreover, the crystallographic lattice parameters of the superconducting samples, determined from the XRD patterns, are of crucial importance in calculating the Josephson penetration depth. From the XRD data obtained for the under- and optimally doped samples, the crystal symmetry was found to be tetragonal.

  19. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Directory of Open Access Journals (Sweden)

    Menard Daniel

    2006-01-01

    Full Text Available Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.

  20. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    Science.gov (United States)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
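
    The core trade-off the methodology automates - word length and scaling versus computation accuracy - can be illustrated with a simple quantisation experiment in Python. The 16-bit word length, the Q-format sweep and the test signal are arbitrary choices for illustration, not part of the methodology itself.

      import numpy as np

      def to_fixed(x, n_frac, n_word=16):
          """Quantise to signed fixed point with n_frac fractional bits."""
          scale = 1 << n_frac
          lo, hi = -(1 << (n_word - 1)), (1 << (n_word - 1)) - 1
          return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

      def to_float(q, n_frac):
          return q / float(1 << n_frac)

      x = np.sin(np.linspace(0, 2 * np.pi, 1000))      # floating-point reference signal
      for n_frac in (7, 11, 15):
          err = x - to_float(to_fixed(x, n_frac), n_frac)
          sqnr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
          print(f"Q{15 - n_frac}.{n_frac}: quantisation SQNR = {sqnr:5.1f} dB")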

  1. Cloud point extraction-flame atomic absorption spectrometry for pre-concentration and determination of trace amounts of silver ions in water samples.

    Science.gov (United States)

    Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun

    2017-03-01

    A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomenon of the non-ionic surfactant Triton X-114, with the Ag(I)/diethyldithiocarbamate (DDTC) complexes solubilized in the micellar phase formed by the surfactant. When the temperature rises above the cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including the pH of the aqueous solution, the concentration of DDTC, the amount of surfactant, and the incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL⁻¹ Ag⁺ in the presence of various cations below the maximum concentrations allowed in this method, for instance 50 μg·mL⁻¹ for both Zn²⁺ and Cu²⁺, 80 μg·mL⁻¹ for Pb²⁺, 1000 μg·mL⁻¹ for Mn²⁺, and 100 μg·mL⁻¹ for both Cd²⁺ and Ni²⁺. The calibration curve was linear in the range of 1-500 ng·mL⁻¹ with a limit of detection (LOD) of 0.3 ng·mL⁻¹. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.

  2. Procedures for sampling and sample-reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-04-15

    The bias introduced when sampling solid biofuels from stockpiles or containers instead of from moving streams is assessed, as well as the number and size of samples required to represent the bulk sample accurately, the variations introduced when reducing bulk samples into samples for testing, and the usefulness of sample-reduction methods. Details are given of the experimental work carried out in Sweden and Denmark using sawdust, wood chips, wood pellets, forestry residues and straw. The production of a model European Standard for quality assurance of solid biofuels is examined.

  3. Environmental and public interface for Point Aconi generating station, Point Aconi, Nova Scotia

    Energy Technology Data Exchange (ETDEWEB)

    Toner, T P

    1993-01-01

    Nova Scotia Power's most recent generating station is a 165 MW coal-fired circulating fluidized bed (CFB) unit located at Point Aconi on the northern tip of Boularderie Island. This paper discusses the environmental and public interfaces associated with this project, focusing on the unique items and issues that required delicate and/or innovative approaches for their successful completion. Specific issues discussed include clarification of the process, the turnkey arrangement, the community liaison committee, freshwater supply, air emissions and dealings with commercial growers, dealings with lobster fishermen, dealings with Native peoples, and the transmission line.

  4. General Constraints on Sampling Wildlife on FIA Plots

    Science.gov (United States)

    Larissa L. Bailey; John R. Sauer; James D. Nichols; Paul H. Geissler

    2005-01-01

    This paper reviews the constraints on sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species...

  5. Mathematical estimation of the level of microbial contamination on spacecraft surfaces by volumetric air sampling

    Science.gov (United States)

    Oxborrow, G. S.; Roark, A. L.; Fields, N. D.; Puleo, J. R.

    1974-01-01

    Microbiological sampling methods presently used for enumeration of microorganisms on spacecraft surfaces require contact with easily damaged components. Estimation of viable particles on surfaces using air sampling methods in conjunction with a mathematical model would be desirable. Parameters necessary for the mathematical model are the effect of angled surfaces on viable particle collection and the number of viable cells per viable particle. Deposition of viable particles on angled surfaces closely followed a cosine function, and the number of viable cells per viable particle was consistent with a Poisson distribution. Other parameters considered by the mathematical model included deposition rate and fractional removal per unit time. A close nonlinear correlation between volumetric air sampling and airborne fallout on surfaces was established with all fallout data points falling within the 95% confidence limits as determined by the mathematical model.
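
    Two of the model ingredients named above - cosine-law deposition on angled surfaces and a Poisson number of viable cells per viable particle - are shown in the sketch below; the deposition rate and the Poisson mean are invented placeholders, not values from the study.

      import numpy as np

      rng = np.random.default_rng(0)
      fallout_horizontal = 200.0    # viable particles per m^2 per hour on a horizontal surface (assumed)
      for angle_deg in (0, 30, 60, 85):
          rate = fallout_horizontal * np.cos(np.radians(angle_deg))
          print(f"{angle_deg:2d} deg surface: expected deposition {rate:6.1f} particles/m^2/h")

      mean_cells = 1.7              # assumed mean viable cells per viable particle
      cells = rng.poisson(mean_cells, size=10_000)
      print("mean cells/particle:", cells.mean(), " fraction with >1 cell:", (cells > 1).mean())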

  6. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations in the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a final step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory
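
    The notion of correspondence probabilities can be illustrated with a small Python sketch: every observation point is softly assigned to every mean-shape point with a Gaussian weight, instead of to a single nearest neighbour as in classical ICP. The point sets and the kernel width sigma are invented, and the full EM-ICP algorithm additionally iterates over the registration transformation and the mean shape.

      import numpy as np

      rng = np.random.default_rng(0)
      mean_shape = rng.random((50, 3))                                  # unstructured point set
      observation = mean_shape + 0.02 * rng.standard_normal((50, 3))    # noisy observation

      sigma = 0.05
      d2 = ((observation[:, None, :] - mean_shape[None, :, :]) ** 2).sum(axis=2)
      W = np.exp(-d2 / (2.0 * sigma ** 2))
      W /= W.sum(axis=1, keepdims=True)        # correspondence probabilities, rows sum to 1

      # Probability-weighted target for each observation point (E-step style usage)
      soft_targets = W @ mean_shape
      print("mean soft-assignment error:",
            np.linalg.norm(soft_targets - observation, axis=1).mean())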

  7. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability hold back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method, widely used to solve optimization problems in the soil and geo-sciences, mainly because of its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
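
    A from-scratch Python sketch of spatial simulated annealing for the MSSD criterion is given below to make the perturb/accept/cool loop concrete. It is not the spsann package itself, and the grid, number of points, cooling schedule and iteration count are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                                  np.linspace(0, 1, 40)), -1).reshape(-1, 2)
      pts = rng.random((20, 2))                  # initial sample pattern

      def mssd(pts):
          d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
          return (d.min(axis=1) ** 2).mean()     # mean squared shortest distance

      energy, temp = mssd(pts), 1e-3
      for it in range(2000):
          cand = pts.copy()
          i = rng.integers(len(pts))
          cand[i] = np.clip(cand[i] + 0.05 * rng.standard_normal(2), 0, 1)  # perturb one point
          e_new = mssd(cand)
          if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
              pts, energy = cand, e_new          # accept the perturbation
          temp *= 0.999                          # cooling schedule
      print("optimised MSSD:", energy)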

  8. Points requiring elucidation” about Hawaiian volcanism: Chapter 24

    Science.gov (United States)

    Poland, Michael P.; Carey, Rebecca; Cayol, Valérie; Poland, Michael P.; Weis, Dominique

    2015-01-01

    Hawaiian volcanoes, which are easily accessed and observed at close range, are among the most studied on the planet and have spurred great advances in the geosciences, from understanding deep Earth processes to forecasting volcanic eruptions. More than a century of continuous observation and study of Hawai‘i's volcanoes has also sharpened focus on those questions that remain unanswered. Although there is good evidence that volcanism in Hawai‘i is the result of a high-temperature upwelling plume from the mantle, the source composition and dynamics of the plume are controversial. Eruptions at the surface build the volcanoes of Hawai‘i, but important topics, including how the volcanoes grow and collapse and how magma is stored and transported, continue to be subjects of intense research. Forecasting volcanic activity is based mostly on pattern recognition, but determining and predicting the nature of eruptions, especially in serving the critical needs of hazards mitigation, require more realistic models and a greater understanding of what drives eruptive activity. These needs may be addressed by better integration among disciplines as well as by developing dynamic physics- and chemistry-based models that more thoroughly relate the physiochemical behavior of Hawaiian volcanism, from the deep Earth to the surface, to geological, geochemical, and geophysical data.

  9. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
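
    The two sampling schemes being compared can be written down in a few lines. The sketch below uses a short synthetic sequence and arbitrary k, s and w; it only illustrates which k-mer start positions each scheme keeps, not the index construction or the maximal-exact-match search itself.

      def fixed_sampling(seq, k, s):
          """Keep every s-th k-mer start position."""
          return {i for i in range(0, len(seq) - k + 1, s)}

      def minimizer_sampling(seq, k, w):
          """Keep the lexicographically smallest k-mer in every window of w consecutive k-mers."""
          kept = set()
          for start in range(len(seq) - k - w + 2):
              window = range(start, start + w)
              kept.add(min(window, key=lambda i: seq[i:i + k]))
          return kept

      seq = "ACGTACGTGGCTTACGATCGATCGGCTAGCTAGGATCCA" * 10
      k = 11
      fixed = fixed_sampling(seq, k, s=5)
      mins = minimizer_sampling(seq, k, w=5)
      print(f"{len(fixed)} fixed positions vs {len(mins)} minimizer positions "
            f"out of {len(seq) - k + 1} k-mers")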

  10. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Directory of Open Access Journals (Sweden)

    Meznah Almutairy

    Full Text Available Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.

  11. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    Science.gov (United States)

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  12. Lateral sample motion in the plate-rod impact experiments

    International Nuclear Information System (INIS)

    Zaretsky, Eugene; Levi-Hevroni, David; Shvarts, Dov; Ofer, Dror

    2000-01-01

    The velocity of the lateral motion of cylindrical samples (9 mm diameter, 20 mm length) impacted by WHA impactors of 5-mm thickness was monitored by VISAR at different points on the sample surface, at distances of 1 to 4 mm from the impacted edge of the sample. The impactors were accelerated in a 25-mm pneumatic gun up to velocities of about 300 m/sec. Integrating the VISAR data recorded at the different surface points after impact at the same velocity makes it possible to obtain the changes in sample shape during the initial period of deformation. It was found that the character of the lateral motion is different for samples made of WHA and of the commercial titanium alloy Ti-6Al-4V. A 2-D numerical simulation of the impact leads to the conclusion that the work hardening of the alloys is responsible for this difference.

  13. UHE point source survey at Cygnus experiment

    International Nuclear Information System (INIS)

    Lu, X.; Yodh, G.B.; Alexandreas, D.E.; Allen, R.C.; Berley, D.; Biller, S.D.; Burman, R.L.; Cady, R.; Chang, C.Y.; Dingus, B.L.; Dion, G.M.; Ellsworth, R.W.; Gilra, M.K.; Goodman, J.A.; Haines, T.J.; Hoffman, C.M.; Kwok, P.; Lloyd-Evans, J.; Nagle, D.E.; Potter, M.E.; Sandberg, V.D.; Stark, M.J.; Talaga, R.L.; Vishwanath, P.R.; Zhang, W.

    1991-01-01

    A new method of searching for UHE point sources has been developed. With a data sample of 150 million events, we have surveyed the sky for point sources over 3314 locations (1.4° < δ < 70.4°). Their distribution was found to be consistent with random fluctuations. In addition, fifty-two known potential sources, including pulsars and binary X-ray sources, were studied. The source with the largest positive excess is the Crab Nebula. An excess of 2.5 sigma above the background is observed in a bin of 2.3° by 2.5° in declination and right ascension, respectively.

  14. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks.
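
    The reference-selection step can be sketched as follows with synthetic spectra: for each sample spectrum, the squared point-to-point differences over a chosen spectral window are computed against every reference spectrum, and the closest reference is subtracted as the background. The spectra, the window limits and the band positions are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      wavenumbers = np.linspace(900, 1800, 400)
      window = (wavenumbers > 1550) & (wavenumbers < 1750)   # eluent-dominated region (assumed)

      # Reference set: eluent-only spectra recorded along the gradient
      refs = np.array([g * np.exp(-((wavenumbers - 1650) / 60.0) ** 2)
                       for g in np.linspace(0.2, 1.0, 25)])
      # Sample spectrum: eluent background plus an analyte band near 1200 cm-1
      sample = 0.63 * np.exp(-((wavenumbers - 1650) / 60.0) ** 2) \
               + 0.10 * np.exp(-((wavenumbers - 1200) / 15.0) ** 2)

      scores = ((refs[:, window] - sample[window]) ** 2).sum(axis=1)  # point-to-point comparison
      corrected = sample - refs[np.argmin(scores)]                    # background-corrected spectrum
      print("selected reference index:", int(np.argmin(scores)))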

  15. Application of the Monte Carlo method for the efficiency calibration of CsI and NaI detectors for gamma-ray measurements from terrestrial samples

    Energy Technology Data Exchange (ETDEWEB)

    Baccouche, S., E-mail: souad.baccouche@cnstn.rnrt.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); Al-Azmi, D., E-mail: ds.alazmi@paaet.edu.kw [Department of Applied Sciences, College of Technological Studies, Public Authority for Applied Education and Training, Shuwaikh, P.O. Box 42325, Code 70654 (Kuwait); Karunakara, N., E-mail: karunakara_n@yahoo.com [University Science Instrumentation Centre, Mangalore University, Mangalagangotri 574199 (India); Trabelsi, A., E-mail: adel.trabelsi@fst.rnu.tn [UR-MDTN, National Center for Nuclear Sciences and Technology, Technopole Sidi Thabet, 2020 Sidi Thabet (Tunisia); UR-UPNHE, Faculty of Sciences of Tunis, El-Manar University, 2092 Tunis (Tunisia)

    2012-01-15

    Gamma-ray measurements of terrestrial/environmental samples require the use of highly efficient detectors because of the low radionuclide activity concentrations in the samples; thus scintillators are suitable for this purpose. Two scintillation detectors of identical size, CsI(Tl) and NaI(Tl), were studied in this work for the measurement of terrestrial samples and their performance compared. This work describes a Monte Carlo method for producing the full-energy efficiency calibration curves for both detectors using gamma-ray energies associated with the decay of ¹³⁷Cs (661 keV) and of the naturally occurring radionuclides ⁴⁰K (1460 keV), ²³⁸U (²¹⁴Bi, 1764 keV) and ²³²Th (²⁰⁸Tl, 2614 keV), which are found in terrestrial samples. The magnitude of the coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation. The method provides an efficient tool for producing the full-energy efficiency calibration curve of scintillation detectors for any sample geometry and volume in order to determine accurate activity concentrations in terrestrial samples. - Highlights: • CsI(Tl) and NaI(Tl) detectors were studied for the measurement of terrestrial samples. • A Monte Carlo method was used for efficiency calibration using gamma-emitting radionuclides found in terrestrial samples. • The coincidence summing effect occurring for the 2614 keV emission of ²⁰⁸Tl is assessed by simulation.
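
    Once simulated (or measured) full-energy-peak efficiencies are available at the four gamma lines, a calibration curve can be interpolated between them; a common choice is a straight line in log-log space. The efficiency values in the sketch below are invented placeholders standing in for the Monte Carlo results.

      import numpy as np

      energies = np.array([661.0, 1460.0, 1764.0, 2614.0])          # keV
      efficiencies = np.array([3.2e-2, 1.6e-2, 1.35e-2, 9.5e-3])    # assumed full-energy-peak efficiencies

      # Fit ln(eff) = a + b * ln(E) through the four points
      b, a = np.polyfit(np.log(energies), np.log(efficiencies), 1)

      def eff(E_keV):
          return np.exp(a + b * np.log(E_keV))

      print("interpolated efficiency at 1120 keV:", eff(1120.0))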

  16. The Apache Point Observatory Galactic Evolution Experiment (APOGEE)

    DEFF Research Database (Denmark)

    Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.

    2017-01-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the...

  17. Consistency of Trend Break Point Estimator with Underspecified Break Number

    Directory of Open Access Journals (Sweden)

    Jingjing Yang

    2017-01-01

    Full Text Available This paper discusses the consistency of trend break point estimators when the number of breaks is underspecified. The consistency of break point estimators in a simple location model with level shifts has been well documented by researchers under various settings, including extensions such as allowing a time trend in the model. Despite the consistency of break point estimators of level shifts, there are few papers on the consistency of trend shift break point estimators in the presence of an underspecified break number. The simulation study and asymptotic analysis in this paper show that the trend shift break point estimator does not converge to the true break points when the break number is underspecified. In the case of two trend shifts, the inconsistency problem worsens if the magnitudes of the breaks are similar and the breaks are either both positive or both negative. The limiting distribution for the trend break point estimator is developed and closely approximates the finite sample performance.
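
    The flavour of the inconsistency result can be reproduced with a small Monte Carlo experiment: the data contain two positive trend breaks of equal size, a single-break model is fitted by least squares over all candidate break dates, and the estimated date tends to settle between the two true breaks. Sample size, break dates, slopes and noise level are arbitrary illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      n, T1, T2 = 300, 100, 200

      def one_break_estimate(y):
          """Least-squares break date of a single-trend-break model."""
          t = np.arange(n)
          best_ssr, best_tb = np.inf, None
          for tb in range(10, n - 10):
              X = np.column_stack([np.ones(n), t, np.where(t > tb, t - tb, 0.0)])
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              ssr = ((y - X @ beta) ** 2).sum()
              if ssr < best_ssr:
                  best_ssr, best_tb = ssr, tb
          return best_tb

      estimates = []
      for _ in range(50):
          t = np.arange(n)
          y = (0.02 * t + 0.05 * np.maximum(t - T1, 0) + 0.05 * np.maximum(t - T2, 0)
               + rng.standard_normal(n))
          estimates.append(one_break_estimate(y))
      print("true breaks:", T1, T2, " mean single-break estimate:", np.mean(estimates))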

  18. Laboratory Guide for Residual Stress Sample Alignment and Experiment Planning-October 2011 Version

    Energy Technology Data Exchange (ETDEWEB)

    Cornwell, Paris A [ORNL; Bunn, Jeffrey R [ORNL; Schmidlin, Joshua E [ORNL; Hubbard, Camden R [ORNL

    2012-04-01

    The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the neutron beam time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction in gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment, in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface (the 'point cloud'), specific features and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20, and fiducial points are attached to the sample or frame and then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as the mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud the

  19. Laboratory Guide for Residual Stress Sample Alignment and Experiment Planning-October 2011 Version

    International Nuclear Information System (INIS)

    Cornwell, Paris A.; Bunn, Jeffrey R.; Schmidlin, Joshua E.; Hubbard, Camden R.

    2012-01-01

    The December 2010 version of the guide, ORNL/TM-2008/159, by Jeff Bunn, Josh Schmidlin, Camden Hubbard, and Paris Cornwell, has been further revised due to a major change in the GeoMagic Studio software for constructing a surface model. The Studio software update also includes a plug-in module to operate the FARO Scan Arm. Other revisions for clarity were also made. The purpose of this revision document is to guide the reader through the process of laser alignment used by NRSF2 at HFIR and VULCAN at SNS. This system was created to increase the spatial accuracy of the measurement points in a sample, reduce the neutron beam time used for alignment, improve experiment planning, and reduce operator error. The need for spatial resolution has been driven by the reduction in gauge volumes to the sub-millimeter level, steep strain gradients in some samples, and requests to mount multiple samples within a few days for relating data from each sample to a common sample coordinate system. The first step in this process involves mounting the sample on an indexer table in a laboratory set up for offline sample mounting and alignment, in the same manner it would be mounted at either instrument. In the shared laboratory, a FARO ScanArm is used to measure the coordinates of points on the sample surface (the 'point cloud'), specific features and fiducial points. A Sample Coordinate System (SCS) needs to be established first. This is an advantage of the technique because the SCS can be defined in such a way as to facilitate simple definition of measurement points within the sample. Next, samples are typically mounted to a frame of 80/20, and fiducial points are attached to the sample or frame and then measured in the established sample coordinate system. The laser scan probe on the ScanArm can then be used to scan in an 'as-is' model of the sample as well as the mounting hardware. GeoMagic Studio 12 is the software package used to construct the model from the point cloud the scan arm creates. Once

  20. Scattering and absorption of particles emitted by a point source in a cluster of point scatterers

    International Nuclear Information System (INIS)

    Liljequist, D.

    2012-01-01

    A theory for the scattering and absorption of particles isotropically emitted by a point source in a cluster of point scatterers is described and related to the theory for the scattering of an incident particle beam. The quantum mechanical probability of escape from the cluster in different directions is calculated, as well as the spatial distribution of absorption events within the cluster. A source strength renormalization procedure is required. The average quantum scattering in clusters with randomly shifting scatterer positions is compared to trajectory simulation with the aim of studying the validity of the trajectory method. Differences between the results of the quantum and trajectory methods are found primarily for wavelengths larger than the average distance between nearest neighbour scatterers. The average quantum results include, for example, a local minimum in the number of absorption events at the location of the point source and interference patterns in the angle-dependent escape probability as well as in the distribution of absorption events. The relative error of the trajectory method is in general, though not generally, of similar magnitude as that obtained for beam scattering.