WorldWideScience

Sample records for sample 50-point objective

  1. A comparison of point counts with a new acoustic sampling method ...

    African Journals Online (AJOL)

    We showed that the estimates of species richness, abundance and community composition based on point counts and post-hoc laboratory listening to acoustic samples are very similar, especially for a distance limit of up to 50 m. Species that were frequently missed during both point counts and listening to acoustic samples ...

  2. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  3. 50 CFR 270.21 - Petition of objection.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Petition of objection. 270.21 Section 270... § 270.21 Petition of objection. (a) Filing a petition. Any person issued a notice of assessment under... promotion plan by filing a written petition of objection with NMFS. Petitions of objection may be filed: (1...

  4. A simple method for measurement of cerebral blood flow using ¹²³I-IMP SPECT with calibrated standard input function by one-point blood sampling. Validation of calibration by one-point venous blood sampling as a substitute for arterial blood sampling

    International Nuclear Information System (INIS)

    Ito, Hiroshi; Akaizawa, Takashi; Goto, Ryoui

    1994-01-01

    In a simplified method for measurement of cerebral blood flow using one ¹²³I-IMP SPECT scan and one-point arterial blood sampling (the autoradiography method), the input function is obtained by calibrating a standard input function with a one-point arterial blood sample. The purpose of this study was to validate calibration by one-point venous blood sampling as a substitute for one-point arterial blood sampling. After intravenous infusion of ¹²³I-IMP, frequent arterial and venous blood samples were taken simultaneously from 12 patients with CNS disease but no heart or lung disease and from 5 normal volunteers. The ratios of venous whole-blood radioactivity, sampled from the cutaneous cubital vein, to arterial whole-blood radioactivity were 0.76±0.08, 0.80±0.05, 0.81±0.06 and 0.83±0.11 at 10, 20, 30 and 50 min after ¹²³I-IMP infusion, respectively: the venous radioactivity remained about 20% lower than the arterial radioactivity throughout the 50 min. However, the corresponding vein-to-artery ratios for the cutaneous dorsal hand vein were 0.93±0.02, 0.94±0.05, 0.98±0.04 and 0.98±0.03 at 10, 20, 30 and 50 min, i.e., the venous radioactivity was consistent with the arterial values. These results indicate that the arterio-venous difference in radioactivity is minimal in a peripheral cutaneous vein such as the dorsal hand vein, owing to arteriovenous shunting in the palm, so blood sampling from a cutaneous dorsal hand vein can substitute for arterial sampling. The optimal time for venous blood sampling, evaluated by error analysis, was 20 min after ¹²³I-IMP infusion, 10 min later than that for arterial blood sampling. (author)
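
    The calibration step described above amounts to scaling a population-standard input function so that it passes through the single measured blood count. The sketch below is a minimal Python illustration of that idea, not the authors' implementation; the function name and all numbers are hypothetical.

```python
import numpy as np

def calibrate_input_function(std_times, std_input, sample_time, sample_count):
    """Scale a standard input function so it passes through the one measured
    blood count (the calibration step of the IMP autoradiography method).
    Assumes the count is already in the same units as the standard curve."""
    std_value = np.interp(sample_time, std_times, std_input)
    return std_input * (sample_count / std_value)

# Hypothetical numbers: a 20-min dorsal-hand-vein sample is used directly,
# since the abstract reports vein-to-artery ratios of 0.93-0.98 at that site.
std_times = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 50.0])  # minutes
std_input = np.array([0.0, 1.8, 1.2, 0.9, 0.7, 0.5])      # arbitrary units
calibrated = calibrate_input_function(std_times, std_input, 20.0, 0.85)
```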

  5. Distribution majorization of corner points by reinforcement learning for moving object detection

    Science.gov (United States)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by preceding and following frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning and regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks separated from the original whole image. Experimentally, we select a conventional method that uses matching and the Random Sample Consensus (RANSAC) algorithm to obtain objects as the main framework and utilize our algorithm to improve its results. The comparison between the conventional method and the same method with our algorithm shows that our algorithm reduces false detections by 70%.

  6. 3-D OBJECT RECOGNITION FROM POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    W. Smith

    2012-09-01

    Full Text Available The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex

  7. 3-D Object Recognition from Point Cloud Data

    Science.gov (United States)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
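
    The first step listed above (identifying and grouping 3-D object points into regions using the DSM and DEM) can be pictured with a toy raster version: threshold the normalized surface model and label connected components. This is a sketch of the general technique only; function names and thresholds are illustrative, not from the paper.

```python
import numpy as np
from scipy import ndimage

def object_regions(dsm, dem, min_height=2.5, min_cells=20):
    """Group cells that stand high enough above bare earth into candidate
    3-D object regions via connected-component labeling. Separating houses
    from trees and regularizing boundaries would follow in later steps."""
    ndsm = dsm - dem                      # height above bare earth
    mask = ndsm > min_height              # candidate object cells
    labels, n = ndimage.label(mask)       # group into connected regions
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = 1 + np.flatnonzero(sizes >= min_cells)   # drop tiny regions
    return np.where(np.isin(labels, keep), labels, 0)
```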

  8. KNOWLEDGE-BASED OBJECT DETECTION IN LASER SCANNING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    F. Boochs

    2012-07-01

    Full Text Available Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This “understanding” enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, specialists’ knowledge of the scene, and algorithmic processing.

  9. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    Science.gov (United States)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  10. 50 CFR 260.58 - Accessibility for sampling.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Accessibility for sampling. 260.58 Section... Fishery Products for Human Consumption Sampling § 260.58 Accessibility for sampling. Each applicant shall cause the processed products for which inspection is requested to be made accessible for proper sampling...

  11. 46 CFR 30.10-50 - Pilot boarding equipment and point of access.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Pilot boarding equipment and point of access. 30.10-50... Definitions § 30.10-50 Pilot boarding equipment and point of access. (a) Pilot boarding equipment means a... boarding equipment. [CGD 79-032, 49 FR 25455, June 21, 1984] ...

  12. Extracting Corresponding Point Based on Texture Synthesis for Nearly Flat Textureless Object Surface

    Directory of Open Access Journals (Sweden)

    Min Mao

    2015-01-01

    Full Text Available Since image feature points always cluster in regions of significant intensity change, such as textured portions or edges of an image, which can be detected by state-of-the-art intensity-based point detectors, classical interest-point detectors find almost no points in low-texture areas. In this paper we describe a novel algorithm based on affine transforms and graph cuts for interest-point detection and matching in wide-baseline image pairs containing weakly textured objects. The detection and matching mechanism can be separated into three steps: first, the information in large textureless areas is enhanced by adding texture through the proposed texture-synthesis algorithm TSIQ; second, the initial interest-point set is detected by classical interest-point detectors; finally, graph cuts are used to find the globally optimal set of matching points on the stereo pairs. The efficacy of the proposed algorithm is verified by three kinds of experiments: the influence of point detection from synthetic textures with different texture samples, the stability under different geometric transformations, and the performance improvement of a quasi-dense matching algorithm.

  13. Object tracking system using a VSW algorithm based on color and point features

    Directory of Open Access Journals (Sweden)

    Lim Hye-Youn

    2011-01-01

    Full Text Available An object tracking system using a variable search window (VSW) algorithm based on color and feature points is proposed. The mean-shift algorithm is an object tracking technique that works according to color probability distributions. An advantage of this color-based algorithm is that it is robust to objects of a specific color; a disadvantage is that it is sensitive to non-specific-color objects due to illumination and noise. To offset this weakness, we present the VSW algorithm, based on robust feature points, for accurate tracking of moving objects. The proposed method extracts the feature points of a detected object, which is the region of interest (ROI), and generates a VSW using the positions of the extracted feature points. The goal of this paper is an efficient and effective object tracking system that tracks moving objects accurately. Experiments show that the implemented object tracking system performs more precisely than existing techniques.
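
    As a rough sketch of the idea above (mean shift on a color probability image, with the search window re-derived from feature-point positions), the following uses standard OpenCV primitives. It is a generic reading of the abstract, not the authors' exact VSW formulation, and the thresholds are assumptions.

```python
import cv2
import numpy as np

def track_step(hsv_frame, gray_frame, hue_hist, window):
    """One tracking step: mean shift on a hue back-projection, then the
    search window is resized to the bounding box of feature points found
    inside it (a simplified 'variable search window')."""
    backproj = cv2.calcBackProject([hsv_frame], [0], hue_hist, [0, 180], 1)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(backproj, window, crit)
    x, y, w, h = window
    pts = cv2.goodFeaturesToTrack(gray_frame[y:y + h, x:x + w],
                                  maxCorners=50, qualityLevel=0.01,
                                  minDistance=5)
    if pts is not None and len(pts) >= 4:
        px, py, pw, ph = cv2.boundingRect(pts.reshape(-1, 2).astype(np.int32))
        window = (x + px, y + py, pw, ph)  # window follows the feature cloud
    return window
```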

  14. 50 CFR 222.404 - Observer program sampling.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Observer program sampling. 222.404 Section 222.404 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC... Requirement § 222.404 Observer program sampling. (a) During the program design, NMFS would be guided by the...

  15. A Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    Science.gov (United States)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization develops rapidly nowadays, urban ground objects also change frequently, and conventional photogrammetric methods cannot satisfy the requirement of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the point clouds corresponding to these curves are classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31%. The result can help us quickly understand the distribution of various ground objects.
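
    The grid/layer pipeline above is compact enough to sketch end to end: per-grid vertical density curves, PCA compression, then K-means into three classes. The code below is illustrative only; it borrows the abstract's reported settings (3 m grids, 1 m layers, 11 PCA dimensions), and the helper name is hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_grids(points, grid=3.0, layer=1.0, n_layers=30, n_components=11):
    """Bin an (N, 3) point cloud into horizontal grids and vertical layers,
    treat the per-layer point density as each grid's characteristic curve,
    compress the curves with PCA, and cluster them into 3 classes."""
    ij = np.floor(points[:, :2] / grid).astype(int)           # grid indices
    k = ((points[:, 2] - points[:, 2].min()) / layer).astype(int)
    k = np.clip(k, 0, n_layers - 1)                           # layer indices
    cells, inverse = np.unique(ij, axis=0, return_inverse=True)
    curves = np.zeros((len(cells), n_layers))
    np.add.at(curves, (inverse, k), 1.0)                      # count points
    curves /= curves.sum(axis=1, keepdims=True)               # density curve
    feats = PCA(n_components=n_components).fit_transform(curves)
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats)
    return cells, labels  # one class (e.g. vegetation/building/road) per grid
```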

  16. Point of Injury Sampling Technology for Battlefield Molecular Diagnostics

    Science.gov (United States)

    2011-11-14

    Injury" Sampling Technology for Battlefield Molecular Diagnostics November 14, 2011 Sponsored by Defense Advanced Research Projects Agency (DOD...Date of Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract...PHASE I FINAL REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-11-C-0222 (UNCLASSIFIED) P.I: Bernardo

  17. Dew Point modelling using GEP based multi objective optimization

    OpenAIRE

    Shroff, Siddharth; Dabhi, Vipul

    2013-01-01

    Different techniques are used to model the relationship between temperatures, dew point and relative humidity. Gene expression programming is capable of modelling complex realities with great accuracy, allowing at the same time, the extraction of knowledge from the evolved models compared to other learning algorithms. We aim to use Gene Expression Programming for modelling of dew point. Generally, accuracy of the model is the only objective used by selection mechanism of GEP. This will evolve...

  18. Roadside Multiple Objects Extraction from Mobile Laser Scanning Point Cloud Based on DBN

    Directory of Open Access Journals (Sweden)

    LUO Haifeng

    2018-02-01

    Full Text Available This paper proposes a novel algorithm exploring deep belief network (DBN) architectures to extract and recognize roadside facilities (trees, cars and traffic poles) from mobile laser scanning (MLS) point clouds. The proposed method first partitions the raw MLS point cloud into blocks and then removes the ground and building points. In order to partition the off-ground objects into individual objects, off-ground points are organized into an octree structure and clustered into candidate objects based on connected components. To improve segmentation performance on clusters containing overlapped objects, a refining step using a voxel-based normalized cut is then applied. In addition, a multi-view feature descriptor is generated for each independent roadside facility based on binary images. Finally, a deep belief network (DBN) is trained to extract tree, car and traffic pole objects. Experiments were undertaken to evaluate the validity of the proposed method with two datasets acquired by a Lynx Mobile Mapper system. The precision of the tree, car and traffic pole extraction results was 97.31%, 97.79% and 92.78%, respectively; the recall was 98.30%, 98.75% and 96.77%; the quality was 95.70%, 93.81% and 90.00%; and the F1 measure was 97.80%, 96.81% and 94.73%.

  19. Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations

    Science.gov (United States)

    Mirloo, Mahsa; Ebrahimnezhad, Hosein

    2018-03-01

    In this paper, a novel method is proposed to detect 3D object salient points robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points from object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient points, a new point is added to the set in each iteration. As every salient point is added, a decision function is updated; this creates a condition for selecting the next point that prevents it from being extracted from the same protrusion part, so that a representative point is drawn from every protrusion part. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, because it uses a feature robust to isometric variations and considers the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.

  20. Neutron-rich isotopes around the r-process 'waiting-point' nuclei ⁷⁹₂₉Cu₅₀ and ⁸⁰₃₀Zn₅₀

    International Nuclear Information System (INIS)

    Kratz, K.L.; Gabelmann, H.; Pfeiffer, B.; Woehr, A.

    1991-01-01

    Beta-decay half-lives (T₁/₂) and delayed-neutron emission probabilities (Pₙ) of very neutron-rich Cu to As nuclei have been measured, among them the new isotopes ⁷⁷Cu₄₈, ⁷⁹Cu₅₀, ⁸¹Zn₅₁ and ⁸⁴Ga₅₃. With the T₁/₂ and Pₙ values of four N≅50 'waiting-point' nuclei now known, our hypothesis that the r-process has attained a local β-flow equilibrium around A≅80 is further strengthened. (orig.)

  1. Water Sample Points, Navajo Nation, 2000, USACE

    Data.gov (United States)

    U.S. Environmental Protection Agency — This point shapefile presents the locations and results for water samples collected on the Navajo Nation by the US Army Corps of Engineers (USACE) for the US...

  2. File list: Unc.Kid.50.AllAg.Nephrectomy_sample [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available Unc.Kid.50.AllAg.Nephrectomy_sample hg19 Unclassified Kidney Nephrectomy sample http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/assembled/Unc.Kid.50.AllAg.Nephrectomy_sample.bed ...

  3. File list: DNS.Kid.50.AllAg.Nephrectomy_sample [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available DNS.Kid.50.AllAg.Nephrectomy_sample hg19 DNase-seq Kidney Nephrectomy sample http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/assembled/DNS.Kid.50.AllAg.Nephrectomy_sample.bed ...

  4. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs the optimal value of each objective function to be calculated in advance, and it does not consider constraint conditions other than the objective functions; therefore, it cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions, and a new reference solution named the "provisional ideal point" to search for the solution preferred by a decision maker. In this way, we eliminate the preliminary calculations and the limited scope of application. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to a DDP: the delivery path combining one drone with one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
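
    The selection idea sketched in the abstract, a penalty for infeasible candidates plus distance to a provisional ideal point assembled from the best objective values seen so far, can be written compactly; the snippet below is a generic reading of that mechanism, not the authors' exact operator.

```python
import numpy as np

def select_preferred(objectives, violations):
    """Pick one candidate from a population. `objectives` is (n, m) with
    objectives to minimize; `violations` is (n,) total constraint violation
    (0 means feasible). Infeasible candidates compete on penalty alone;
    feasible ones compete on distance to the provisional ideal point."""
    feasible = violations == 0.0
    if not feasible.any():
        return int(np.argmin(violations))      # least-infeasible candidate
    fobj = objectives[feasible]
    ideal = fobj.min(axis=0)                   # provisional ideal point
    dist = np.linalg.norm(fobj - ideal, axis=1)
    return int(np.flatnonzero(feasible)[np.argmin(dist)])
```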

  5. File list: Oth.Kid.50.AllAg.Nephrectomy_sample [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available Oth.Kid.50.AllAg.Nephrectomy_sample hg19 TFs and others Kidney Nephrectomy sample http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/assembled/Oth.Kid.50.AllAg.Nephrectomy_sample.bed ...

  6. File list: His.Kid.50.AllAg.Nephrectomy_sample [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available His.Kid.50.AllAg.Nephrectomy_sample hg19 Histone Kidney Nephrectomy sample SRX95646...X1037580,SRX1037579,SRX956473 http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/assembled/His.Kid.50.AllAg.Nephrectomy_sample.bed ...

  7. Restoration of an object from its complex cross sections and surface smoothing of the object

    International Nuclear Information System (INIS)

    Agui, Takeshi; Arai, Kiyoshi; Nakajima, Masayuki

    1990-01-01

    In clinical medicine, restoring the surface of a three-dimensional object from a set of parallel cross sections obtained by CT or MRI is useful for diagnosis. A method of connecting pairs of contours on neighboring cross sections by triangular patches is generally used for this restoration. This method, however, involves a complex triangulation algorithm and requires numerous calculations when surface smoothing is executed. In our new method, the positions of sampling points are expressed in cylindrical coordinates. Sampling points, including auxiliary points, are extracted and connected using a simple algorithm, and surface smoothing is executed by moving the sampling points. This method extends the scope of application of restoring objects by triangulation. (author)

  8. Sequential sampling of visual objects during sustained attention.

    Directory of Open Access Journals (Sweden)

    Jianrong Jia

    2017-06-01

    Full Text Available In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest

  9. Comprehensive Interpretation of a Three-Point Gauss Quadrature with Variable Sampling Points and Its Application to Integration for Discrete Data

    Directory of Open Access Journals (Sweden)

    Young-Doo Kwon

    2013-01-01

    Full Text Available This study examined the characteristics of a variable three-point Gauss quadrature using a variable set of weighting factors and corresponding optimal sampling points. The major findings were as follows. The one-point, two-point, and three-point Gauss quadratures that adopt the Legendre sampling points and the well-known Simpson’s 1/3 rule were found to be special cases of the variable three-point Gauss quadrature. In addition, the three-point Gauss quadrature may have out-of-domain sampling points beyond the domain end points. By applying the quadratically extrapolated integrals and nonlinearity index, the accuracy of the integration could be increased significantly for evenly acquired data, which is popular with modern sophisticated digital data acquisition systems, without using higher-order extrapolation polynomials.
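
    The family described above is easy to reproduce: for symmetric nodes {-a, 0, a} on [-1, 1], requiring exactness for constants and x² fixes the weights, and particular choices of a recover Gauss-Legendre and Simpson's 1/3 rule. The sketch below is standard quadrature algebra, not the paper's code.

```python
import numpy as np

def variable_gauss3(f, a):
    """Three-point rule on [-1, 1] with movable symmetric nodes {-a, 0, a}.
    Exactness for 1 and x^2 gives w1 = 1/(3a^2) and w0 = 2 - 2/(3a^2).
    a = sqrt(3/5): 3-point Gauss-Legendre (weights 5/9, 8/9, 5/9).
    a = 1:         Simpson's 1/3 rule    (weights 1/3, 4/3, 1/3).
    a > 1 places sampling points outside the domain end points."""
    w1 = 1.0 / (3.0 * a * a)
    w0 = 2.0 - 2.0 * w1
    return w1 * f(-a) + w0 * f(0.0) + w1 * f(a)

exact = np.e - 1.0 / np.e                                   # integral of exp
print(variable_gauss3(np.exp, np.sqrt(3.0 / 5.0)) - exact)  # Gauss error
print(variable_gauss3(np.exp, 1.0) - exact)                 # Simpson error
```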

  10. [Anxiety in a representative sample of the Spanish population over 50 years-old].

    Science.gov (United States)

    Carreira Capeáns, Cecilia; Facal, David

    Anxiety is common throughout the ageing process. The objective of this study is to estimate the prevalence of anxiety in a representative sample of the Spanish population over 50 years-old. The data come from the Pilot Study developed within the Longitudinal Ageing Study in Spain (ELES), in which a representative sample of the non-institutionalised Spanish population was evaluated. An analysis was performed on the data of 1086 people who answered the question «I am now going to read a list with a series of diseases or health problems. I would like you to tell me if your doctor has diagnosed any of them». The tools used were a questionnaire consisting of 218 questions, along with standardised tests, such as the Spanish version of the Mini-Mental State Examination. Anxiety was reported to have been diagnosed at some time in 14.3% of the sample. The prevalence was higher in women than in men (77.8 vs. 22.2%), decreased with age, and was related to different chronic diseases. The results show that the prevalence of anxiety over the lifespan is noticeable in people over 50 years, and should be taken into account, especially in the female population and in those with chronic diseases. Copyright © 2017 SEGG. Published by Elsevier España, S.L.U. All rights reserved.

  11. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.

  12. ‘Point of Injury’ Sampling Technology for Battlefield Molecular Diagnostics

    Science.gov (United States)

    2012-03-17

    Injury" Sampling Technology for Battlefield Molecular Diagnostics March 17,2012 Sponsored by Defense Advanced Research Projects Agency (DOD) Defense...Contract: April 25, 2011 Short Title of Work: "Point of Injury" Sampling Technology for Battlefield Molecular Diagnostics " Contract Expiration Date...SBIR PHASE I OPTION REPORT: Point of Injury, Sampling Technology for Battlefield Molecular Diagnostics . W31P4Q-1 l-C-0222 (UNCLASSIFIED) P.I

  13. A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LIDAR Point Clouds

    Science.gov (United States)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification owing to their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
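
    A generic bag-of-features rendering of the two-step idea, with a K-means visual vocabulary over local descriptors (such as point feature histograms) and two SVM stages, is sketched below. All names, sizes, and classifier settings are illustrative assumptions; the paper's actual features and parameters may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bof_histograms(descriptor_sets, vocab):
    """Encode each object (an array of local descriptors) as a normalized
    histogram of visual-word assignments."""
    hists = []
    for d in descriptor_sets:
        h = np.bincount(vocab.predict(d), minlength=vocab.n_clusters)
        hists.append(h / h.sum())
    return np.array(hists)

def train_two_step(descriptor_sets, coarse_labels, fine_labels):
    """descriptor_sets: list of (n_i, d) arrays, one per segmented object.
    coarse_labels merges lamp post / street light / traffic sign into one
    'pole-like' class; fine_labels keeps the three apart."""
    vocab = KMeans(n_clusters=64, n_init=10).fit(np.vstack(descriptor_sets))
    X = bof_histograms(descriptor_sets, vocab)
    step1 = SVC().fit(X, coarse_labels)            # tree / vehicle / pole-like
    pole = np.asarray(coarse_labels) == "pole-like"
    step2 = SVC().fit(X[pole], np.asarray(fine_labels)[pole])  # finer split
    return vocab, step1, step2
```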

  14. The nature of 50 Palermo Swift-BAT hard X-ray objects through optical spectroscopy

    Science.gov (United States)

    Rojas, A. F.; Masetti, N.; Minniti, D.; Jiménez-Bailón, E.; Chavushyan, V.; Hau, G.; McBride, V. A.; Bassani, L.; Bazzano, A.; Bird, A. J.; Galaz, G.; Gavignaud, I.; Landi, R.; Malizia, A.; Morelli, L.; Palazzi, E.; Patiño-Álvarez, V.; Stephen, J. B.; Ubertini, P.

    2017-06-01

    We present the nature of 50 hard X-ray emitting objects unveiled through an optical spectroscopy campaign performed at seven telescopes in the northern and southern hemispheres. These objects were detected with the Burst Alert Telescope (BAT) instrument onboard the Swift satellite and listed as of unidentified nature in the 54-month Palermo BAT catalogue. In detail, 45 sources in our sample are identified as active galactic nuclei, of which 27 are classified as type 1 (with broad and narrow emission lines) and 18 are classified as type 2 (with only narrow emission lines). Among the broad-line emission objects, one is a type 1 high-redshift quasi-stellar object; among the narrow-line emission objects, one is a starburst galaxy, one is an X-ray bright optically normal galaxy, and one is a low-ionization nuclear emission-line region. We report 30 new redshift measurements, 13 confirmations and 2 more accurate redshift values. The remaining five objects are galactic sources: three are cataclysmic variables, one is an X-ray binary, probably with a low-mass secondary star, and one is an active star. Based on observations obtained from the following observatories: Cerro Tololo Interamerican Observatory (Chile); Astronomical Observatory of Bologna in Loiano (Italy); Observatorio Astronómico Nacional (San Pedro Mártir, Mexico); Radcliffe telescope of the South African Astronomical Observatory (Sutherland, South Africa); Sloan Digital Sky Survey; Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias (Canary Islands, Spain) and New Technology Telescope (NTT) of La Silla Observatory, Chile.

  15. Groundwater sampling with well-points

    International Nuclear Information System (INIS)

    Laubacher, R.C.; Bailey, W.M.

    1992-01-01

    This paper reports a groundwater investigation that BP Oil Company and Engineering-Science (ES) conducted at a BP oil distribution facility in the coastal plain of south central Alabama. The predominant lithologies include unconsolidated Quaternary-aged gravels, sands, silts and clays. Well-points were used to determine the vertical and horizontal extent of volatile hydrocarbons in the water table aquifer. To determine the vertical extent of contaminant migration, hollow-stem augers were advanced approximately 10 feet into the aquifer near a suspected source. The drill stem and bit were removed very slowly to prevent sand heaving. The well-point was again driven ahead of the augers and four volumes (18 liters) of groundwater were purged. A sample was collected and the headspace vapor was analyzed as before. Groundwater from a total of seven borings was analyzed using these techniques. Permanent monitoring wells were installed at four boring locations that had volatile concentrations of less than 1 part per million. Later groundwater sampling and laboratory analysis confirmed that the wells had been installed near or beyond both the horizontal and vertical plume boundaries.

  16. File list: InP.Kid.50.AllAg.Nephrectomy_sample [Chip-atlas[Archive

    Lifescience Database Archive (English)

    Full Text Available InP.Kid.50.AllAg.Nephrectomy_sample hg19 Input control Kidney Nephrectomy sample SR...84,SRX1037589,SRX1037590,SRX1037583 http://dbarchive.biosciencedbc.jp/kyushu-u/hg19/assembled/InP.Kid.50.AllAg.Nephrectomy_sample.bed ...

  17. A TWO-STEP CLASSIFICATION APPROACH TO DISTINGUISHING SIMILAR OBJECTS IN MOBILE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    H. He

    2017-09-01

    Full Text Available Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable for object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification owing to their mutual similarity compared with trees and vehicles. A finer classification of the lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  18. Pacific Northwest National Laboratory Facility Radionuclide Emission Points and Sampling Systems

    International Nuclear Information System (INIS)

    Barfuss, Brad C.; Barnett, J. M.; Ballinger, Marcel Y.

    2009-01-01

    Battelle-Pacific Northwest Division operates numerous research and development laboratories in Richland, Washington, including those associated with the Pacific Northwest National Laboratory (PNNL) on the Department of Energy's Hanford Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all effluent release points that have the potential for radionuclide emissions. Potential emissions are assessed annually. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission point system performance, operation, and design information. A description of the buildings, exhaust points, control technologies, and sample extraction details is provided for each registered or deregistered facility emission point. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided

  19. Pacific Northwest National Laboratory Facility Radionuclide Emission Points and Sampling Systems

    Energy Technology Data Exchange (ETDEWEB)

    Barfuss, Brad C.; Barnett, J. Matthew; Ballinger, Marcel Y.

    2009-04-08

    Battelle—Pacific Northwest Division operates numerous research and development laboratories in Richland, Washington, including those associated with the Pacific Northwest National Laboratory (PNNL) on the Department of Energy’s Hanford Site that have the potential for radionuclide air emissions. The National Emission Standard for Hazardous Air Pollutants (NESHAP 40 CFR 61, Subparts H and I) requires an assessment of all effluent release points that have the potential for radionuclide emissions. Potential emissions are assessed annually. Sampling, monitoring, and other regulatory compliance requirements are designated based upon the potential-to-emit dose criteria found in the regulations. The purpose of this document is to describe the facility radionuclide air emission sampling program and provide current and historical facility emission point system performance, operation, and design information. A description of the buildings, exhaust points, control technologies, and sample extraction details is provided for each registered or deregistered facility emission point. Additionally, applicable stack sampler configuration drawings, figures, and photographs are provided.

  20. Results for the first quarter calendar year 2017 tank 50H salt solution sample

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, C. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-04-12

    In this memorandum, the chemical and radionuclide contaminant results from the First Quarter Calendar Year 2017 (CY17) sample of Tank 50H salt solution are presented in tabulated form. The First Quarter CY17 Tank 50H samples [a 200 mL sample obtained 6” below the surface (HTF-50-17-7) and a 1 L sample obtained 66” from the tank bottom (HTF-50-17-8)] were obtained on January 15, 2017 and received at Savannah River National Laboratory (SRNL) on January 16, 2017. Prior to obtaining the samples from Tank 50H, a single pump was run at least 4.4 hours and the samples were pulled immediately after pump shut down. All volatile organic analysis (VOA) and semi-volatile organic analysis (SVOA) were performed on the surface sample and all other analyses were performed on the variable depth sample. The information from this characterization will be used by Savannah River Remediation (SRR) for the transfer of aqueous waste from Tank 50H to the Saltstone Production Facility, where the waste will be treated and disposed of in the Saltstone Disposal Facility. This memorandum compares results, where applicable, to Saltstone Waste Acceptance Criteria (WAC) limits and targets. The chemical and radionuclide contaminant results from the characterization of the First Quarter CY17 sampling of Tank 50H were requested by SRR personnel, and details of the testing are presented in the SRNL Task Technical and Quality Assurance Plan (TTQAP). This memorandum is part of Deliverable 2 of the SRR request. Data pertaining to the regulatory limits for Resource Conservation and Recovery Act (RCRA) metals will be documented at a later time per the TTQAP for the Tank 50H saltstone task.

  1. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  2. The notion of point efficiency in multi-objective programming

    International Nuclear Information System (INIS)

    Kampempe, B.J.D.; Manya, N.L.

    2010-01-01

    The approaches proposed so far in the literature for stochastic multi-objective linear programming (PLMS) are not really satisfactory (9,11), so in this article we want to approach the PLMS problem using the concept of point efficiency. It is then necessary to define what is meant by point efficiency in the context of PLMS, and that is precisely the purpose of this article. In fact, it seeks to provide precise definitions of efficient solutions that are not only mathematically consistent, but also meaningful to a decision maker faced with such a decision problem. As a result, we have to use the concept of dominance for PLMS in a context where one has ordinal preferences but no utility function. In this paper, we propose to further explore the concepts of dominance and point efficiency. Indeed, the whole set P of efficient solutions is usually very broad and, as we shall see, it can be identical to X. Accordingly, we will try to relax the definition of the dominance relation >p in order to obtain other, less demanding types of point dominance that generate smaller subsets of efficient solutions, which may be especially interesting for a decision maker. We distinguish two other families of point dominance relations: scenario dominance and test dominance; within the sets of efficient solutions produced by these last two relations, we will focus on subsets of efficient solutions called sponsored and unanimous. We study the properties of these various relations and the possible links between the resulting sets of efficient solutions in order to characterize and compute them explicitly. Finally, we establish some connections between the different notions of point efficiency and the concept of a Pareto-efficient solution in the deterministic case (PLMD).

  3. Using the method of ideal point to solve dual-objective problem for production scheduling

    Directory of Open Access Journals (Sweden)

    Mariia Marko

    2016-07-01

    Full Text Available In practice, there are often problems in which several criteria must be optimized simultaneously: so-called multi-objective optimization problems. In this article we consider the use of the ideal point method to solve a two-objective optimization problem in production planning. The process of finding a solution consists of a series of steps in which, using the simplex method, we find the ideal point. After that, to solve the resulting scalar problems, we use the method of Lagrange multipliers.
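
    The two-stage scheme above can be sketched with standard tools: solve each objective's LP alone to locate the ideal point, then pick the feasible point closest to it. The abstract's scalar stage uses Lagrange multipliers; the sketch below substitutes the common min-max (Chebyshev distance) linear program, which is my assumption rather than the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ideal_point_solution(c1, c2, A_ub, b_ub):
    """Two-objective LP: minimize c1.x and c2.x s.t. A_ub x <= b_ub, x >= 0.
    Step 1: solve each objective alone to get the ideal point (z1, z2).
    Step 2: minimize t with ci.x - t <= zi, i.e. find the feasible x whose
    objective vector is closest to the ideal point in the Chebyshev norm."""
    z1 = linprog(c1, A_ub=A_ub, b_ub=b_ub).fun
    z2 = linprog(c2, A_ub=A_ub, b_ub=b_ub).fun
    n = len(c1)
    c = np.r_[np.zeros(n), 1.0]                    # variables are (x, t)
    A = np.block([[A_ub, np.zeros((A_ub.shape[0], 1))],
                  [np.r_[c1, -1.0][None, :]],
                  [np.r_[c2, -1.0][None, :]]])
    b = np.r_[b_ub, z1, z2]
    res = linprog(c, A_ub=A, b_ub=b)
    return res.x[:n], (z1, z2)
```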

  4. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
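
    For contrast with the specialized algorithm above, a general-purpose Delaunay triangulation of a 2-D point set takes a few lines with SciPy; the paper's point is precisely to beat this kind of generic approach by exploiting the Poisson-disk radius with a regular grid. Random points stand in for a Poisson-disk sample here.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((1000, 2))        # stand-in for a Poisson-disk sampled set
tri = Delaunay(pts)                # generic baseline triangulation
print(tri.simplices.shape)         # (n_triangles, 3) vertex indices into pts
```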

  5. Evaluation of the point-centred-quarter method of sampling ...

    African Journals Online (AJOL)

    … point-centred-quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...

  6. Results For The Third Quarter Calendar Year 2016 Tank 50H Salt Solution Sample

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-10-13

    In this memorandum, the chemical and radionuclide contaminant results from the Third Quarter Calendar Year 2016 (CY16) sample of Tank 50H salt solution are presented in tabulated form. The Third Quarter CY16 Tank 50H samples [a 200 mL sample obtained 6” below the surface (HTF-50-16-63) and a 1 L sample obtained 66” from the tank bottom (HTF-50-16-64)] were obtained on July 14, 2016 and received at Savannah River National Laboratory (SRNL) on the same day. Prior to obtaining the samples from Tank 50H, a single pump was run at least 4.4 hours, and the samples were pulled immediately after pump shut down. The information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering for the transfer of aqueous waste from Tank 50H to the Saltstone Production Facility, where the waste will be treated and disposed of in the Saltstone Disposal Facility. This memorandum compares results, where applicable, to Saltstone Waste Acceptance Criteria (WAC) limits and targets. Data pertaining to the regulatory limits for Resource Conservation and Recovery Act (RCRA) metals will be documented at a later time per the Task Technical and Quality Assurance Plan (TTQAP) for the Tank 50H saltstone task. The chemical and radionuclide contaminant results from the characterization of the Third Quarter CY16 sampling of Tank 50H were requested by Savannah River Remediation (SRR) personnel, and details of the testing are presented in the SRNL TTQAP.

  7. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring sampling sites. The monitoring of forest health in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the database constructed using the Oracle database program. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area, based on evaluation of forest health by tree species, diameter at breast height, crown and density in summer and fall of 200. In the case of Korea, in the first evaluation report on forest health vitality, 1000 sample points were placed in the forests using a systematic method of arranging them at regular 4 km × 4 km intervals, and 29 items in four categories (tree health, vegetation, soil, and atmosphere) were surveyed. As mentioned above, existing research has relied on monitoring of these survey sample points, and it is difficult to collect information that supports customized policies for regional sites. In the case of special forests such as urban forests and other major forests, policy and management appropriate to the forest characteristics are needed, so it is necessary to expand the set of survey points for diagnosis and evaluation of customized forest health. For this reason, we constructed a method of spatial scaling through spatial interpolation according to the characteristics of each of the 29 indices in the diagnosis and evaluation tables of the first forest health vitality report. PCA and correlation analysis are conducted to select significant indicators, weights are then assigned to each index, and forest health is evaluated through statistical grading.

  8. Cloud point extraction for trace inorganic arsenic speciation analysis in water samples by hydride generation atomic fluorescence spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li, Shan, E-mail: ls_tuzi@163.com; Wang, Mei, E-mail: wmei02@163.com; Zhong, Yizhou, E-mail: yizhz@21cn.com; Zhang, Zehua, E-mail: kazuki.0101@aliyun.com; Yang, Bingyi, E-mail: e_yby@163.com

    2015-09-01

    A new cloud point extraction technique was established and used for the determination of trace inorganic arsenic species in water samples, combined with hydride generation atomic fluorescence spectrometry (HGAFS). As(III) and As(V) were complexed with ammonium pyrrolidinedithiocarbamate and molybdate, respectively. The complexes were quantitatively extracted with the non-ionic surfactant Triton X-114 and separated by centrifugation. After addition of antifoam, the surfactant-rich phase containing As(III) was diluted with 5% HCl for HGAFS determination. For As(V) determination, 50% HCl was added to the surfactant-rich phase, and the mixture was placed in an ultrasonic bath at 70 °C for 30 min. As(V) was reduced to As(III) with thiourea–ascorbic acid solution, followed by HGAFS. Under the optimum conditions, limits of detection of 0.009 and 0.012 μg/L were obtained for As(III) and As(V), respectively. Concentration factors of 9.3 and 7.9, respectively, were obtained for a 50 mL sample. The precisions were 2.1% for As(III) and 2.3% for As(V). The proposed method was successfully used for the determination of trace As(III) and As(V) in water samples, with satisfactory recoveries. - Highlights: • Cloud point extraction was first established to determine trace inorganic arsenic (As) species in combination with HGAFS. • Separate As(III) and As(V) determinations improve the accuracy. • Ultrasonic release of complexed As(V) enables complete As(V) reduction to As(III). • Direct HGAFS analysis can be performed.

  9. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with ¹²³I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. ¹²³I-IMP (222 MBq) was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after ¹²³I-IMP administration. The whole-blood count of the one-point arterial sample was then compared with the octanol-extracted count of the continuous arterial sample; a positive correlation was found between the two values. The ratio of the continuous-sampling octanol-extracted count (OC) to the one-point-sampling whole-blood count (TC5) was compared with the whole-brain count ratio (the 5:29 ratio, Cn) obtained from 1-minute planar SPECT images centered on 5 and 29 minutes after ¹²³I-IMP administration. Correlation was found between the two values, giving the relationship OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value; the coefficient improved to r=0.94 from the r=0.87 obtained before the 5:29 ratio correction was used. For 23 of these 189 cases, additional one-point arterial samples were taken at 6, 7, 8, 9 and 10 minutes after administration of ¹²³I-IMP; the correlation coefficients for these sampling times were also improved by the 5:29 ratio correction. It was concluded that highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, can be obtained from one-point arterial whole-blood counts by applying the 5:29 ratio correction. (K.H.)
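
    The reported regression transcribes directly into code; a minimal Python version of the correction above:

```python
def corrected_octanol_count(tc5, cn):
    """Theoretical continuous-sampling octanol-extracted count (COC) from the
    one-point 5-min whole-blood count (TC5) and the whole-brain 5:29 SPECT
    count ratio (Cn), per the regression reported in the abstract:
        COC = TC5 * (0.390969 * Cn - 0.08924)"""
    return tc5 * (0.390969 * cn - 0.08924)
```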

  10. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least precise design and empirically stratified sampling the most precise; the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for the Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish species by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs
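
    For intuition, a minimal sketch of the seasonally stratified estimator on synthetic counts (Python; the two "seasons", the Poisson rates and the 30% sampling fraction are invented stand-ins, not Indian Point data):

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic daily impingement counts: higher, more variable counts
        # on "winter" days to mimic seasonality.
        days = np.arange(365)
        winter = (days < 90) | (days > 300)
        counts = rng.poisson(np.where(winter, 400, 60))

        # Seasonally stratified design: sample ~30% of days in each stratum
        # and expand each stratum's sample mean by the stratum size.
        estimate = 0.0
        for stratum in (days[winter], days[~winter]):
            sampled = rng.choice(stratum, size=int(0.3 * len(stratum)), replace=False)
            estimate += len(stratum) * counts[sampled].mean()

        print(f"stratified estimate: {estimate:.0f}, true total: {counts.sum()}")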

  11. Method and apparatus for producing and selectively directing x-rays to different points on an object

    International Nuclear Information System (INIS)

    Haimson, J.

    1981-01-01

    The invention relates to apparatus suitable for use in a computed tomography X-ray scanner. High-intensity X-rays are produced and directed towards the object of interest from any of a plurality of preselected coplanar points spaced from the object and spaced radially about a line through the object. There are no moving parts. The electron beam, which produces X-rays as a consequence of impact with the target, is directed selectively to preselected points on the stationary target. Beam-directing elements compensate for the spreading effect of space-charge forces acting on the beam, and beam-shaping elements shape the beam to a predetermined cross-sectional configuration at its point of incidence on the target. Beam aberrations, including sextupole aberrations, are corrected. (U.K.)

  12. Determination of rhodium in metallic alloy and water samples using cloud point extraction coupled with spectrophotometric technique

    Science.gov (United States)

    Kassem, Mohammed A.; Amin, Alaa S.

    2015-02-01

    A new method to estimate rhodium in different samples at trace levels has been developed. Rhodium was complexed with 5-(4′-nitro-2′,6′-dichlorophenylazo)-6-hydroxypyrimidine-2,4-dione (NDPHPD) as a complexing agent in an aqueous medium, and the complex was preconcentrated by cloud point extraction with the nonionic surfactant Triton X-114 from aqueous solutions at pH 4.75. After phase separation at 50 °C and decantation, the surfactant-rich phase was heated at 100 °C to remove water, and the remaining phase was dissolved in 0.5 mL of acetonitrile. Under optimum conditions, the calibration curve was linear over the concentration range 0.5-75 ng mL-1 and the detection limit was 0.15 ng mL-1 of the original solution. An enhancement factor of 500 was achieved for 250 mL samples containing the analyte, and relative standard deviations were ⩽1.50%. The method was found to be highly selective, fairly sensitive, simple, rapid and economical, and it was safely applied to rhodium determination in complex materials such as synthetic alloy mixtures and environmental water samples.
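
    A hedged sketch of evaluating such a calibration (Python; the absorbance readings and the blank standard deviation are invented, only the 0.5-75 ng/mL range comes from the abstract, and the 3-sigma detection-limit criterion is a common convention rather than necessarily the authors' choice):

        import numpy as np

        conc = np.array([0.5, 5, 15, 30, 50, 75])          # ng/mL (reported range)
        absorbance = np.array([0.004, 0.038, 0.112, 0.221, 0.368, 0.553])  # invented

        slope, intercept = np.polyfit(conc, absorbance, 1)  # linear calibration

        s_blank = 0.0004                                    # assumed blank SD
        lod = 3 * s_blank / slope                           # 3-sigma criterion
        print(f"slope = {slope:.5f}, LOD = {lod:.2f} ng/mL")

        # Quantify an unknown sample from its measured absorbance.
        print((0.250 - intercept) / slope)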

  13. Association Between Objectively Measured Physical Activity and Erectile Dysfunction among a Nationally Representative Sample of American Men.

    Science.gov (United States)

    Loprinzi, Paul D; Edwards, Meghan

    2015-09-01

    Emerging work suggests an inverse association between physical activity and erectile dysfunction (ED). The majority of this cross-sectional research comes from convenience samples, and all studies on this topic have employed self-report physical activity methodology. Therefore, the purpose of this brief-report, confirmatory research study was to examine the association between objectively measured physical activity and ED in a national sample of Americans. Data from the 2003-2004 National Health and Nutrition Examination Survey were used. Six hundred ninety-two adults between the ages of 50 and 85 years (representing 33.2 million adults) constituted the analytic sample. Participants wore an ActiGraph 7164 accelerometer (ActiGraph, Pensacola, FL, USA) for up to 7 days; the main outcome measure was ED, assessed via self-report. After adjustments, for every 30 min/day increase in moderate-to-vigorous physical activity, participants had a 43% reduced odds of having ED (adjusted odds ratio = 0.57; 95% confidence interval: 0.40-0.81; P = 0.004). This confirmatory study employing an objective measure of physical activity in a national sample suggests an inverse association between physical activity and ED. © 2015 International Society for Sexual Medicine.
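
    A small worked example of reading this effect size (Python; it assumes only that odds ratios compound multiplicatively on the log-odds scale, with the 0.57 figure taken from the abstract):

        import numpy as np

        or_per_30min = 0.57        # adjusted odds ratio per 30 min/day of MVPA

        # Odds ratios multiply on the log-odds scale, so a 60 min/day
        # increase corresponds to the square of the per-30-min odds ratio.
        print(or_per_30min ** 2)   # ~0.32

        beta = np.log(or_per_30min)     # change in log-odds per 30 min/day
        print(np.exp(2 * beta))         # same number, via the logit scale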

  14. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    International Nuclear Information System (INIS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-01-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems
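
    For illustration, a minimal sketch of locating stationary points by relaxation (root-finding on the gradient) from random starts and classifying them by Hessian index (Python; the toy two-spin Hamiltonian H = -cos(t1 - t2) - 0.5 cos t1 - 0.5 cos t2 and all numerical choices are invented, and this is not the paper's inversion-relaxation scheme, which relaxes from the highest-index stationary points):

        import numpy as np
        from scipy.optimize import root

        def grad(theta):
            t1, t2 = theta
            return [np.sin(t1 - t2) + 0.5 * np.sin(t1),
                    -np.sin(t1 - t2) + 0.5 * np.sin(t2)]

        def hessian(theta):
            t1, t2 = theta
            c = np.cos(t1 - t2)
            return np.array([[c + 0.5 * np.cos(t1), -c],
                             [-c, c + 0.5 * np.cos(t2)]])

        rng = np.random.default_rng(0)
        found = set()
        for _ in range(200):                   # relax from random starts
            sol = root(grad, rng.uniform(-np.pi, np.pi, 2))
            if sol.success:
                pt = tuple(np.round(np.mod(sol.x, 2 * np.pi), 4))
                # Hessian index = number of negative eigenvalues:
                # 0 is a minimum, 1 a transition state (saddle), and so on.
                index = int((np.linalg.eigvalsh(hessian(sol.x)) < 0).sum())
                found.add((pt, index))

        for pt, index in sorted(found):
            print(pt, "Hessian index:", index)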

  15. Results For The Third Quarter 2013 Tank 50 WAC Slurry Sample

    Energy Technology Data Exchange (ETDEWEB)

    Bannochie, Christopher J.

    2013-11-26

    This report details the chemical and radionuclide contaminant results for the characterization of the 2013 Third Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by DWPF & Saltstone Facility Engineering (DSFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  16. The narcissism and death of Yukio Mishima--from the object relational point of view.

    Science.gov (United States)

    Ushijima, S

    1987-12-01

    The author discussed the life and work of Yukio Mishima from the object-relational point of view. First, he described Mishima's brief life history, pointing out four major identity crises during which his fierce struggles against suicidal wishes appear to have intensified. Second, he suggested that Mishima remained in a state of part-object relationship throughout his life. Third, the important role of the body in Mishima's fantasy life and real life was discussed as a manifestation not merely of autoerotic activity but also of a disturbance of the core of identity. Finally, the fragility of the intermediate area of experience, which is thought to have eventually led him to his final act, the seppuku, was examined.

  17. Improved technical success and radiation safety of adrenal vein sampling using rapid, semi-quantitative point-of-care cortisol measurement.

    Science.gov (United States)

    Page, Michael M; Taranto, Mario; Ramsay, Duncan; van Schie, Greg; Glendenning, Paul; Gillett, Melissa J; Vasikaran, Samuel D

    2018-01-01

    Objective Primary aldosteronism is a curable cause of hypertension which can be treated surgically or medically depending on the findings of adrenal vein sampling studies. Adrenal vein sampling studies are technically demanding, with a high failure rate in many centres. The use of intraprocedural cortisol measurement could improve the success rates of adrenal vein sampling but may be impracticable due to cost and effects on procedural duration. Design Retrospective review of the results of adrenal vein sampling procedures before and after commencement of point-of-care cortisol measurement using a novel single-use semi-quantitative measuring device for cortisol, the adrenal vein sampling Accuracy Kit; the main outcome measures were the technical success rate and complications of the procedures. Routine use of the adrenal vein sampling Accuracy Kit device for intraprocedural measurement of cortisol commenced in 2016. Results The technical success rate of adrenal vein sampling increased from 63% of 99 procedures to 90% of 48 procedures (P = 0.0007) after implementation of the adrenal vein sampling Accuracy Kit. Failure of right adrenal vein cannulation was the main reason for an unsuccessful study. Radiation dose decreased from 34.2 Gy·cm2 (interquartile range, 15.8-85.9) to 15.7 Gy·cm2 (6.9-47.3) (P = 0.009). No complications were noted, and implementation costs were minimal. Conclusions Point-of-care cortisol measurement during adrenal vein sampling improved cannulation success rates and reduced radiation exposure. The use of the adrenal vein sampling Accuracy Kit is now standard practice at our centre.

  18. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation, but little consideration has been given to the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. We first formulate the task as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. Unlike conventional numerical optimization techniques, the new algorithm does not depend on well-chosen initial values and does not become stuck in local optima. The simulation results indicate the soundness of the new method.
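
    To make the design criterion concrete, a minimal sketch of choosing sampling times that minimize parameter-estimate variance via the Fisher information matrix (Python; the one-compartment decay model y(t) = A*exp(-k*t), its parameter values, and the brute-force search over candidate schedules are illustrative stand-ins for the paper's likelihood formulation and quantum-inspired evolutionary algorithm):

        import itertools
        import numpy as np

        A, k, sigma = 10.0, 0.5, 0.2       # assumed model and noise level

        def fisher(times):
            # Sensitivities of y(t) = A*exp(-k*t) w.r.t. (A, k).
            s = np.array([[np.exp(-k * t), -A * t * np.exp(-k * t)] for t in times])
            return s.T @ s / sigma**2

        # D-optimality: maximize det(FIM), i.e. minimize the volume of the
        # parameter confidence ellipsoid.
        candidates = np.arange(0.5, 12.5, 0.5)
        best = max(itertools.combinations(candidates, 3),
                   key=lambda ts: np.linalg.det(fisher(ts)))
        print("D-optimal 3-point schedule:", best)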

  19. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, classification methods that rely on the assumption of normally distributed data are not as successful or accurate, and normality violations are hard to detect in small samples. Moreover, the segmentation process produces segments that vary greatly in size, so samples can be very big or very small. This paper investigates whether the complexity within a segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers, k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in overall classification accuracy and produced more accurate classification maps when compared to the ground truth image.
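
    A minimal sketch of the in-segment multiple-sampling idea with the Kolmogorov-Smirnov similarity measure (Python; the pixel distributions, subset size and number of repetitions are invented for illustration):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(1)

        # Illustrative pixel values: one image segment and one class sample.
        segment_pixels = rng.normal(0.42, 0.15, size=4000)   # inhomogeneous segment
        class_sample = rng.normal(0.45, 0.12, size=300)      # training pixels

        # Multiple random sampling within the segment: draw several small
        # subsets, compute the KS similarity each time, and aggregate.
        p_values = [ks_2samp(rng.choice(segment_pixels, 100, replace=False),
                             class_sample).pvalue
                    for _ in range(25)]
        print("mean KS p-value:", np.mean(p_values))   # higher -> more similar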

  20. Determination of trace inorganic mercury species in water samples by cloud point extraction and UV-vis spectrophotometry.

    Science.gov (United States)

    Ulusoy, Halil Ibrahim

    2014-01-01

    A new micelle-mediated extraction method was developed for preconcentration of ultratrace Hg(II) ions prior to spectrophotometric determination. 2-(2'-Thiazolylazo)-p-cresol (TAC) and PONPE 7.5 were used as the chelating agent and nonionic surfactant, respectively. Hg(II) ions form a hydrophobic complex with TAC in a micelle medium. The main factors affecting cloud point extraction efficiency, such as pH of the medium, concentrations of TAC and PONPE 7.5, and equilibration temperature and time, were investigated in detail. An overall preconcentration factor of 33.3 was obtained upon preconcentration of a 50 mL sample. The LOD obtained under the optimal conditions was 0.86 µg/L, and the RSD for five replicate measurements of 100 µg/L Hg(II) was 3.12%. The method was successfully applied to the determination of Hg in environmental water samples.

  1. Uncertainty analysis of point-by-point sampling complex surfaces using touch probe CMMs: DOE for complex surfaces verification with CMM

    DEFF Research Database (Denmark)

    Barini, Emanuele Modesto; Tosello, Guido; De Chiffre, Leonardo

    2010-01-01

    The paper describes a study concerning point-by-point sampling of complex surfaces using tactile CMMs. A four factor, two level completely randomized factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, co...

  2. Cloud point extraction of palladium in water samples and alloy mixtures using new synthesized reagent with flame atomic absorption spectrometry (FAAS)

    International Nuclear Information System (INIS)

    Priya, B. Krishna; Subrahmanayam, P.; Suvardhan, K.; Kumar, K. Suresh; Rekha, D.; Rao, A. Venkata; Rao, G.C.; Chiranjeevi, P.

    2007-01-01

    The present paper outlines a novel, simple and sensitive method for the determination of palladium by flame atomic absorption spectrometry (FAAS) after separation and preconcentration by cloud point extraction (CPE). The cloud point methodology was successfully applied to palladium determination in water samples and alloys by using the new reagent 4-(2-naphthalenyl)thiozol-2yl azo chromotropic acid (NTACA) as the chelating agent and Triton X-114 as the nonionic surfactant. Parameters such as pH, the concentrations of the reagent and Triton X-114, the equilibration temperature and the centrifugation time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. A preconcentration factor of 50-fold was obtained for 250 ml of water sample. Under optimum conditions the detection limit was found to be 0.067 ng mL-1 for palladium in various environmental matrices. The present method was applied to the determination of palladium in various water samples and alloys, and the results show good agreement with a reported method, with recoveries in the range 96.7-99.4%

  3. Results for the Fourth Quarter Calendar Year 2015 Tank 50H Salt Solution Sample

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-01-11

    In this memorandum, the chemical and radionuclide contaminant results from the Fourth Quarter Calendar Year 2015 (CY15) sample of Tank 50H salt solution are presented in tabulated form. The Fourth Quarter CY15 Tank 50H samples were obtained on October 29, 2015 and received at Savannah River National Laboratory (SRNL) on October 30, 2015. The information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering for the transfer of aqueous waste from Tank 50H to the Salt Feed Tank in the Saltstone Production Facility, where the waste will be treated and disposed of in the Saltstone Disposal Facility. This memorandum compares results, where applicable, to Saltstone Waste Acceptance Criteria (WAC) limits and targets. Data pertaining to the regulatory limits for Resource Conservation and Recovery Act (RCRA) metals will be documented at a later time per the Task Technical and Quality Assurance Plan (TTQAP) for the Tank 50H saltstone task. The chemical and radionuclide contaminant results from the characterization of the Fourth Quarter Calendar Year 2015 (CY15) sampling of Tank 50H were requested by SRR personnel and details of the testing are presented in the SRNL Task Technical and Quality Assurance Plan.

  4. The optimal shape of an object for generating maximum gravity field at a given point in space

    OpenAIRE

    Wang, Xiao-Wei; Su, Yue

    2014-01-01

    How can we design the shape of an object, in the framework of Newtonian gravity, in order to generate maximum gravity at a given point in space? In this work we present a study on this interesting problem. We obtain compact solutions for all dimensional cases. The results are commonly characterized by a simple "physical" feature that any mass element unit on the object surface generates the same gravity strength at the considered point, in the direction along the rotational symmetry axis.

  5. The optimal shape of an object for generating maximum gravity field at a given point in space

    International Nuclear Information System (INIS)

    Wang, Xiao-Wei; Su, Yue

    2015-01-01

    How can we design the shape of an object, in the framework of Newtonian gravity, in order to generate maximum gravity at a given point in space? In this work we present a study on this interesting problem. We obtain compact solutions for all dimensional cases. The results are commonly characterized by a simple ‘physical’ feature that any mass element unit on the object surface generates the same gravity strength at the considered point, in the direction along the rotational symmetry axis. (paper)
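
    A sketch of the geometry behind this stated feature for the familiar three-dimensional case, assuming uniform density rho (LaTeX; R is a scale fixed by the prescribed total mass):

        dg_z = G\rho \, \frac{\cos\theta}{r^{2}} \, dV ,
        \qquad
        \frac{\cos\theta}{r^{2}} = \mathrm{const}
        \;\Longrightarrow\;
        r(\theta) = R\sqrt{\cos\theta},
        \quad -\tfrac{\pi}{2} \le \theta \le \tfrac{\pi}{2}.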

  6. Comparison of Single-Point and Continuous Sampling Methods for Estimating Residential Indoor Temperature and Humidity.

    Science.gov (United States)

    Johnston, James D; Magnusson, Brianna M; Eggett, Dennis; Collingwood, Scott C; Bernhardt, Scott A

    2015-01-01

    Residential temperature and humidity are associated with multiple health effects. Studies commonly use single-point measures to estimate indoor temperature and humidity exposures, but there is little evidence to support this sampling strategy. This study evaluated the relationship between single-point and continuous monitoring of air temperature, apparent temperature, relative humidity, and absolute humidity over four exposure intervals (5-min, 30-min, 24-hr, and 12-day) in 9 northern Utah homes, from March to June 2012. Three homes were sampled twice, for a total of 12 observation periods. Continuous data-logged sampling was conducted in homes for 2-3 wks, and simultaneous single-point measures (n = 114) were collected using handheld thermo-hygrometers. Time-centered single-point measures were moderately correlated with short-term (30-min) data logger mean air temperature (r = 0.76, β = 0.74), apparent temperature (r = 0.79, β = 0.79), relative humidity (r = 0.70, β = 0.63), and absolute humidity (r = 0.80, β = 0.80). Data logger 12-day means were also moderately correlated with single-point air temperature (r = 0.64, β = 0.43) and apparent temperature (r = 0.64, β = 0.44), but were weakly correlated with single-point relative humidity (r = 0.53, β = 0.35) and absolute humidity (r = 0.52, β = 0.39). Of the single-point RH measures, 59 (51.8%) deviated more than ±5%, 21 (18.4%) deviated more than ±10%, and 6 (5.3%) deviated more than ±15% from data logger 12-day means. Where continuous indoor monitoring is not feasible, single-point sampling strategies should include multiple measures collected at prescribed time points based on local conditions.
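
    A hedged sketch of the paired comparison behind these r and beta values (Python; the simulated readings are invented, and only the n = 114 pairing structure mirrors the study):

        import numpy as np
        from scipy.stats import linregress

        rng = np.random.default_rng(3)

        # Hypothetical pairs: 30-min data-logger means vs. time-centered
        # single-point spot readings (n = 114 in the study).
        logger_mean = rng.normal(22.0, 2.0, 114)             # deg C
        single_point = logger_mean + rng.normal(0, 1.3, 114)

        fit = linregress(single_point, logger_mean)
        print(f"r = {fit.rvalue:.2f}, beta = {fit.slope:.2f}")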

  7. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    Science.gov (United States)

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in their respective PSPC images, but all are easily detected. All BL Lac objects in this sample are well-fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5-2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical to X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10^18 cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and the possibility of incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331±0.060. The statistically significant difference between the values for X

  8. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    Deep penetration has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique that uses the emission point as a sampling station is presented. Its main advantage is choosing the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is also derived. The main principle is to define an importance function of the response with respect to the particle state and to ensure that the number of emitted particles sampled is proportional to this importance function. The numerical results show that the adaptive method with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the related importance sampling method yields satisfactory results as well. (author)
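
    A minimal sketch of the importance-sampling idea for deep penetration, reduced to an absorption-only slab (Python; the slab thickness, biasing parameter and history count are invented, and this simple exponential biasing is only a stand-in for the paper's adaptive emission-point scheme):

        import numpy as np

        rng = np.random.default_rng(42)
        T, n = 10.0, 200_000     # slab optical thickness (mean free paths), histories

        # Analog Monte Carlo: free paths ~ Exp(1); transmission if s > T.
        analog = (rng.exponential(1.0, n) > T).mean()

        # Exponentially biased sampling toward deep penetration: draw from
        # Exp(b) with b < 1 and carry the likelihood-ratio weight f(s)/g(s).
        b = 1.0 / T
        s = rng.exponential(1.0 / b, n)
        w = np.exp(-s) / (b * np.exp(-b * s))
        biased = np.where(s > T, w, 0.0).mean()

        print(f"analog: {analog:.2e}, biased: {biased:.2e}, exact: {np.exp(-T):.2e}")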

  9. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    Science.gov (United States)

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider how sample size relates to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
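
    For intuition, a minimal sketch of recovering a broken-stick change-point by profile least squares on simulated data (Python; all values are invented, and the grid search stands in for the paper's Bayesian MCMC analysis):

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulated broken-stick trajectory: slow decline, then accelerated
        # decline after a change-point (MMSE-like scores).
        t = np.linspace(0, 10, 60)
        cp_true = 6.0
        y = (28 - 0.1 * t - 1.5 * np.clip(t - cp_true, 0, None)
             + rng.normal(0, 0.5, t.size))

        def rss(cp):
            # Two joined linear segments with a hinge at cp, fitted by least
            # squares; minimizing RSS profiles the likelihood over cp.
            X = np.column_stack([np.ones_like(t), t, np.clip(t - cp, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return ((y - X @ beta) ** 2).sum()

        grid = np.linspace(1, 9, 161)
        print("estimated change-point:", grid[np.argmin([rss(c) for c in grid])])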

  10. Unidentified point sources in the IRAS minisurvey

    Science.gov (United States)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  11. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored, and an experimental study was performed in a 50 × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method (SOPA) proposed in this study had the smallest absolute error (0.2138); the traditional systematic sampling method had the largest estimation error, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.

  12. Advances in paper-based sample pretreatment for point-of-care testing.

    Science.gov (United States)

    Tang, Rui Hua; Yang, Hui; Choi, Jane Ru; Gong, Yan; Feng, Shang Sheng; Pingguan-Murphy, Belinda; Huang, Qing Sheng; Shi, Jun Ling; Mei, Qi Bing; Xu, Feng

    2017-06-01

    In recent years, paper-based point-of-care testing (POCT) has been widely used in medical diagnostics, food safety and environmental monitoring. However, high-cost, time-consuming and equipment-dependent sample pretreatment techniques are generally required for raw sample processing, which is impractical for low-resource and disease-endemic areas. Therefore, there is an escalating demand for a cost-effective, simple and portable pretreatment technique to be coupled with the commonly used paper-based assays (e.g. lateral flow assays) in POCT. In this review, we focus on the importance of using paper as a platform for sample pretreatment. We first discuss the beneficial uses of paper for sample pretreatment, including sample collection and storage, separation, extraction, and concentration. We then highlight the working principle and fabrication of each sample pretreatment device, the existing challenges and the future perspectives for developing paper-based sample pretreatment techniques.

  13. Interference and k-point sampling in the supercell approach to phase-coherent transport - art. no. 033401

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel

    2005-01-01

    We present a systematic study of interference and k-point sampling effects in the supercell approach to phase-coherent electron transport. We use a representative tight-binding model to show that interference between the repeated images is a small effect compared to the error introduced by using only the Gamma point for a supercell containing (3,3) sites in the transverse plane. An insufficient k-point sampling can introduce strong but unphysical features in the transmission function, which can be traced to the presence of van Hove singularities in the lead. We present a first-principles calculation of the transmission through a Pt contact which shows that the k-point sampling is also important for realistic systems.

  14. Data quality objectives for the B-Cell waste stream classification sampling

    International Nuclear Information System (INIS)

    Barnett, J.M.

    1998-01-01

    This document defines the data quality objectives (DQOs) for sampling the B-Cell racks waste stream. The sampling effort is concentrated on determining the ratios of Cs-137 to Sr-90 and of Cs-137 to transuranics (TRU). Figure 1.0 shows the logic path of the sampling effort. The flow chart begins with sample and data acquisition and progresses toward (a) statistical confidence and waste classification boundaries, (b) management decisions based on the input parameters and technical methods available, and (c) grout container volume/weight limits and radiation limits. The end result will be accurate classification of the B-Cell rack waste stream

  15. AMCO Scribe Sampling Data Points, Oakland CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This feature class contains points depicting archived sampling data for Vinyl Chloride, Trichloroethene (TCE), and Tetrachloroethene (PCE) for the R09 AMCO-OTIE...

  16. Post-Newtonian equations of motion for LEO debris objects and space-based acquisition, pointing and tracking laser systems

    Science.gov (United States)

    Gambi, J. M.; García del Pino, M. L.; Gschwindl, J.; Weinmüller, E. B.

    2017-12-01

    This paper deals with the problem of de-orbiting middle-sized low Earth orbit debris objects into the atmosphere via laser ablation. The post-Newtonian equations provided here allow (hypothetical) space-based acquisition, pointing and tracking systems endowed with very narrow laser beams to reach the pointing accuracy presently prescribed. In fact, whatever the orbital elements of these objects may be, these equations allow the operators to account for the corrections needed to balance the deviations of the line-of-sight directions due to the curvature of the paths the laser beams must travel along. To minimize the respective corrections, the systems will have to perform initial positioning manoeuvres, and the shooting point-ahead angles will have to be adapted in real time. The enclosed numerical experiments suggest that neglecting these measures will cause fatal errors, due to differences in the actual locations of the objects comparable to their size.

  17. Measuring saccade peak velocity using a low-frequency sampling rate of 50 Hz.

    Science.gov (United States)

    Wierts, Roel; Janssen, Maurice J A; Kingma, Herman

    2008-12-01

    During the last decades, small head-mounted video eye trackers have been developed in order to record eye movements. Real-time systems, with a low sampling frequency of 50/60 Hz, are used in clinical vestibular practice but are generally considered unsuitable for measuring fast eye movements. In this paper, it is shown that saccadic eye movements with an amplitude of at least 5 degrees can, to a good approximation, be considered bandwidth-limited up to a frequency of 25-30 Hz. Using the Nyquist theorem to reconstruct saccadic eye movement signals at higher temporal resolutions, it is shown that accurate values for saccade peak velocities, recorded at 50 Hz, can be obtained, but saccade peak accelerations and decelerations cannot. In conclusion, video eye trackers sampling at 50/60 Hz are appropriate for detecting the clinically relevant saccade peak velocities, in contrast to what has been stated up till now.
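
    A minimal sketch of the reconstruction step (Python; the sigmoidal saccade profile and all numbers are invented, and standard Whittaker-Shannon sinc interpolation of the 50 Hz samples plays the role of the Nyquist-theorem reconstruction):

        import numpy as np

        fs = 50.0                        # Hz, video eye tracker sampling rate
        t = np.arange(0, 0.2, 1 / fs)    # 200 ms window sampled at 50 Hz

        # Illustrative 10-degree saccade (sigmoidal position trace).
        pos = 10.0 / (1 + np.exp(-(t - 0.1) / 0.01))        # degrees

        # Whittaker-Shannon reconstruction: valid if the signal is
        # band-limited below fs/2, as argued above for saccades >= 5 deg.
        t_fine = np.arange(0, 0.2, 0.001)
        pos_fine = np.sinc((t_fine[:, None] - t[None, :]) * fs) @ pos

        peak_velocity = np.abs(np.gradient(pos_fine, t_fine)).max()
        print(f"peak velocity ~ {peak_velocity:.0f} deg/s")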

  18. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of correlations of 1, 2, and 3 concentration-time points with the AUC0-t of saroglitazar. Only models with regression coefficients (R2) > 0.90 were screened for further evaluation. The best R2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Correlation between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were both carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R2 > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R2 = 0.9323) and 0.75, 2, and 8 hours (R2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error fell within the predefined acceptance limits for the prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error within the same limits, indicating that the model predicts the exposure of the sulfoxide metabolite as well.
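
    A hedged sketch of fitting and scoring such a 3-point model (Python; the concentrations and AUC values are simulated, and only the 0.5, 2 and 8 h schedule and the error metrics mirror the abstract):

        import numpy as np

        rng = np.random.default_rng(11)
        n = 25                            # training subjects (fasting arm)

        # Hypothetical concentrations at 0.5, 2 and 8 h, and "observed" AUC0-t.
        C = rng.lognormal(mean=[2.0, 2.5, 1.5], sigma=0.3, size=(n, 3))
        auc_obs = 1.2 * C[:, 0] + 3.4 * C[:, 1] + 8.9 * C[:, 2] + rng.normal(0, 2, n)

        X = np.column_stack([np.ones(n), C])            # intercept + 3 time points
        beta, *_ = np.linalg.lstsq(X, auc_obs, rcond=None)
        auc_pred = X @ beta

        ss_res = ((auc_obs - auc_pred) ** 2).sum()
        ss_tot = ((auc_obs - auc_obs.mean()) ** 2).sum()
        pe = (auc_pred - auc_obs) / auc_obs * 100       # prediction error, %
        print(f"R2 = {1 - ss_res / ss_tot:.4f}  MPE = {pe.mean():.1f}%  "
              f"MAPE = {np.abs(pe).mean():.1f}%  RMSE = {np.sqrt(ss_res / n):.2f}")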

  19. Procedures for sampling and sample reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The objective of this experimental study on sampling was to determine the size and number of samples of biofuels required (taken at two sampling points in each case) and to compare two methods of sampling. The first objective of the sample-reduction exercise was to compare the reliability of various sampling methods, and the second objective was to measure the variations introduced as a result of reducing the sample size to form suitable test portions. The materials studied were sawdust, wood chips, wood pellets and bales of straw, and these were analysed for moisture, ash, particle size and chloride. The sampling procedures are described. The study was conducted in Scandinavia. The results of the study were presented in Leipzig in October 2004. The work was carried out as part of the UK's DTI Technology Programme: New and Renewable Energy.

  20. Evaluation of factor for one-point venous blood sampling method based on the causality model

    International Nuclear Information System (INIS)

    Matsutomo, Norikazu; Onishi, Hideo; Kobara, Kouichi; Sasaki, Fumie; Watanabe, Haruo; Nagaki, Akio; Mimura, Hiroaki

    2009-01-01

    The one-point venous blood sampling method (Mimura et al.) can evaluate the regional cerebral blood flow (rCBF) value with a high degree of accuracy. However, the method is technically complex because it requires a venous blood octanol value, and its accuracy is affected by the factors entering the input function. Therefore, we evaluated the factors used for the input function in order to improve its accuracy and to simplify the technique. Input functions were created that use the time-dependent brain counts at 5, 15, and 25 minutes after administration, along with an input function in which the objective variable is the arterial octanol value, so that the venous blood octanol value can be excluded. The correlation between these functions and the rCBF value obtained by the microsphere (MS) method was then evaluated. Creation of a high-accuracy input function and simplification of the technique proved possible. The rCBF value obtained from the input function whose factor is the time-dependent brain count at 5 minutes after administration and whose objective variable is the arterial octanol value had a high correlation with the MS method (y = 0.899x + 4.653, r = 0.842). (author)

  1. Control of a Clonal Outbreak of Multidrug-Resistant Acinetobacter baumannii in a Hospital of the Basque Country after the Introduction of Environmental Cleaning Led by the Systematic Sampling from Environmental Objects.

    Science.gov (United States)

    Delgado Naranjo, Jesús; Villate Navarro, José Ignacio; Sota Busselo, Mercedes; Martínez Ruíz, Alberto; Hernández Hernández, José María; Torres Garmendia, María Pilar; Urcelay López, María Isabel

    2013-01-01

    Background. Between July 2009 and September 2010, an outbreak of multidrug-resistant (MDR) Acinetobacter baumannii was detected in one critical care unit of a tertiary hospital in the Basque Country, involving 49 infected and 16 colonized patients. The aim was to evaluate the impact of environmental cleaning and systematic sampling from environmental objects on the risk of infection by MDR A. baumannii. Methods. After systematic sampling from environmental objects and molecular typing of all new MDR A. baumannii strains from patients and environmental isolates, we analyzed the correlation (Pearson's r) between new infected cases and positive environmental samples. The risk ratio (RR) of infection was estimated with Poisson regression. Results. The risk increased significantly with the number of positive samples in common areas (RR = 1.40; 95% CI = 0.99-1.94) and with positive samples in boxes (RR = 1.19; 95% CI = 1.01-1.40). The number of cases also correlated positively with positive samples in boxes (r = 0.50). Conclusions. Environmental cleaning, led by systematic sampling from environmental objects, provided an objective risk reduction of new cases and enabled the full control of the outbreak.
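
    A minimal sketch of the Poisson-regression step that yields such risk ratios (Python with statsmodels; the weekly counts are simulated around the reported RR of 1.19, everything else is invented):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)

        # Hypothetical surveillance data: positive environmental samples in
        # patient boxes and new MDR A. baumannii cases per week.
        pos_samples = rng.poisson(2.0, 60)
        new_cases = rng.poisson(np.exp(-0.5 + np.log(1.19) * pos_samples))

        X = sm.add_constant(pos_samples.astype(float))
        fit = sm.GLM(new_cases, X, family=sm.families.Poisson()).fit()
        print("risk ratio per positive sample:", np.exp(fit.params[1]))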

  2. Identification of the Area for Proper Integration of Three Current Storage Objects into One Complex Logistics Point

    Directory of Open Access Journals (Sweden)

    Li Chenguang

    2017-05-01

    The paper presents an approach to identifying a suitable area for locating a specific storage object in order to integrate three current storage objects into one complex logistics point. The initial chapters of the paper give an overview of theoretical terms related to storage objects: their activities, services and parameters, as well as their location and allocation. Subsequent parts outline specific methods for addressing the storage-object location problem. The main part describes the individual steps for identifying a suitable storage object location in a given area using the specified method.

  3. Tracking 3D Moving Objects Based on GPS/IMU Navigation Solution, Laser Scanner Point Cloud and GIS Data

    Directory of Open Access Journals (Sweden)

    Siavash Hosseinyalamdary

    2015-07-01

    Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting moving objects and tracking them is crucial to navigating around objects and predicting their locations and trajectories. Laser sensors provide an excellent observation of the area around vehicles, but the point cloud of objects may be noisy, occluded, and prone to different errors. Consequently, object tracking is an open problem, especially for low-quality point clouds. This paper describes a pipeline to integrate various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy, as well as decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimate for moving objects. The results show that moving objects can be correctly detected, and accurately tracked, over time, based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
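
    A minimal sketch of one filter from such a bank, with a constant-velocity motion model (Python; the matrices, noise levels and centroid measurements are invented, and the paper's non-holonomic constraint is not modelled here):

        import numpy as np

        dt = 0.1                                   # s, scan interval (assumed)
        F = np.array([[1, 0, dt, 0],               # constant-velocity model
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0],                # position-only measurements
                      [0, 1, 0, 0]], float)
        Q, R = 0.05 * np.eye(4), 0.5 * np.eye(2)   # process/measurement noise

        x, P = np.zeros(4), np.eye(4)              # state: [px, py, vx, vy]

        def kf_step(x, P, z):
            # Predict, then update with the measured object centroid z.
            x, P = F @ x, F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

        for z in np.array([[0.1, 0.0], [1.1, 0.2], [2.0, 0.35]]):
            x, P = kf_step(x, P, z)
        print("estimated position/velocity:", x.round(2))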

  4. Bayesian object classification of gold nanoparticles

    KAUST Repository

    Konomi, Bledar A.

    2013-06-01

    The properties of materials synthesized with nanoparticles (NPs) are highly correlated to the sizes and shapes of the nanoparticles. The transmission electron microscopy (TEM) imaging technique can be used to measure the morphological characteristics of NPs, which can be simple circles or more complex irregular polygons with varying degrees of scales and sizes. A major difficulty in analyzing the TEM images is the overlapping of objects, having different morphological properties with no specific information about the number of objects present. Furthermore, the objects lying along the boundary render automated image analysis much more difficult. To overcome these challenges, we propose a Bayesian method based on the marked-point process representation of the objects. We derive models, both for the marks which parameterize the morphological aspects and the points which determine the location of the objects. The proposed model is an automatic image segmentation and classification procedure, which simultaneously detects the boundaries and classifies the NPs into one of the predetermined shape families. We execute the inference by sampling the posterior distribution using Markov chain Monte Carlo (MCMC) since the posterior is doubly intractable. We apply our novel method to several TEM imaging samples of gold NPs, producing the needed statistical characterization of their morphology. © Institute of Mathematical Statistics, 2013.

  5. Bayesian object classification of gold nanoparticles

    KAUST Repository

    Konomi, Bledar A.; Dhavala, Soma S.; Huang, Jianhua Z.; Kundu, Subrata; Huitink, David; Liang, Hong; Ding, Yu; Mallick, Bani K.

    2013-01-01

    The properties of materials synthesized with nanoparticles (NPs) are highly correlated to the sizes and shapes of the nanoparticles. The transmission electron microscopy (TEM) imaging technique can be used to measure the morphological characteristics of NPs, which can be simple circles or more complex irregular polygons with varying degrees of scales and sizes. A major difficulty in analyzing the TEM images is the overlapping of objects, having different morphological properties with no specific information about the number of objects present. Furthermore, the objects lying along the boundary render automated image analysis much more difficult. To overcome these challenges, we propose a Bayesian method based on the marked-point process representation of the objects. We derive models, both for the marks which parameterize the morphological aspects and the points which determine the location of the objects. The proposed model is an automatic image segmentation and classification procedure, which simultaneously detects the boundaries and classifies the NPs into one of the predetermined shape families. We execute the inference by sampling the posterior distribution using Markov chain Monte Carlo (MCMC) since the posterior is doubly intractable. We apply our novel method to several TEM imaging samples of gold NPs, producing the needed statistical characterization of their morphology. © Institute of Mathematical Statistics, 2013.

  6. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...

  7. Object-Based Point Cloud Analysis of Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification

    Directory of Open Access Journals (Sweden)

    Norbert Pfeifer

    2008-08-01

    Airborne laser scanning (ALS) is a remote sensing technique well-suited for 3D vegetation mapping and structure characterization because the emitted laser pulses are able to penetrate small gaps in the vegetation canopy. The backscattered echoes from the foliage, woody vegetation, the terrain, and other objects are detected, leading to a cloud of points. Higher echo densities (>20 echoes/m2) and additional classification variables from full-waveform (FWF) ALS data, namely echo amplitude, echo width and information on multiple echoes from one shot, offer new possibilities in classifying the ALS point cloud. Currently FWF sensor information is hardly used for classification purposes. This contribution presents an object-based point cloud analysis (OBPA) approach, combining segmentation and classification of the 3D FWF ALS points, designed to detect tall vegetation in urban environments. The definition of tall vegetation includes trees and shrubs but excludes grassland and herbage. In the applied procedure, FWF ALS echoes are segmented by a seeded region growing procedure. All echoes, sorted in descending order of their surface roughness, are used as seed points. Segments are grown based on echo width homogeneity. Next, segment statistics (mean, standard deviation, and coefficient of variation) are calculated by aggregating echo features such as amplitude and surface roughness. For classification, a rule base is derived automatically from a training area using a statistical classification tree. To demonstrate our method we present data from three sites with around 500,000 echoes each. The accuracy of the classified vegetation segments is evaluated for two independent validation sites. In a point-wise error assessment, where the classification is compared with manually classified 3D points, completeness and correctness better than 90% are reached for the validation sites. In comparison to many other algorithms, the proposed 3D point classification works on the original point cloud.
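
    A minimal sketch of the seeded region-growing step (Python; the coordinates, echo widths, roughness values, neighbour radius and homogeneity tolerance are all invented simplifications of the procedure described above):

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(9)

        # Illustrative FWF ALS echoes: 3D coordinates plus per-echo attributes.
        xyz = rng.uniform(0, 50, size=(2000, 3))
        echo_width = rng.normal(4.0, 1.0, 2000)
        roughness = rng.normal(0.2, 0.05, 2000)

        tree = cKDTree(xyz)
        labels = np.full(2000, -1)

        # Grow from seeds ordered by descending roughness; a neighbour joins
        # if its echo width stays within a tolerance of the segment mean.
        tol, seg = 0.5, 0
        for seed in np.argsort(-roughness):
            if labels[seed] != -1:
                continue
            labels[seed], queue, members = seg, [seed], [seed]
            while queue:
                for nb in tree.query_ball_point(xyz[queue.pop()], r=2.5):
                    if labels[nb] == -1 and abs(
                            echo_width[nb] - echo_width[members].mean()) < tol:
                        labels[nb] = seg
                        queue.append(nb)
                        members.append(nb)
            seg += 1
        print("segments grown:", seg)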

  8. Object-Based Coregistration of Terrestrial Photogrammetric and ALS Point Clouds in Forested Areas

    Science.gov (United States)

    Polewski, P.; Erickson, A.; Yao, W.; Coops, N.; Krzystek, P.; Stilla, U.

    2016-06-01

    Airborne Laser Scanning (ALS) and terrestrial photogrammetry are methods applicable for mapping forested environments. While ground-based techniques provide valuable information about the forest understory, the measured point clouds are normally expressed in a local coordinate system, whose transformation into a georeferenced system requires additional effort. In contrast, ALS point clouds are usually georeferenced, yet the point density near the ground may be poor under dense overstory conditions. In this work, we propose to combine the strengths of the two data sources by co-registering the respective point clouds, thus enriching the georeferenced ALS point cloud with detailed understory information in a fully automatic manner. Due to markedly different sensor characteristics, coregistration methods which expect a high geometric similarity between keypoints are not suitable in this setting. Instead, our method focuses on the object (tree stem) level. We first calculate approximate stem positions in the terrestrial and ALS point clouds and construct, for each stem, a descriptor which quantifies the 2D and vertical distances to other stem centers (at ground height). Then, the similarities between all descriptor pairs from the two point clouds are calculated, and standard graph maximum matching techniques are employed to compute corresponding stem pairs (tiepoints). Finally, the tiepoint subset yielding the optimal rigid transformation between the terrestrial and ALS coordinate systems is determined. We test our method on simulated tree positions and a plot situated in the northern interior of the Coast Range in western Oregon, USA, using ALS data (76 x 121 m2) and a photogrammetric point cloud (33 x 35 m2) derived from terrestrial photographs taken with a handheld camera. Results on both simulated and real data show that the proposed stem descriptors are discriminative enough to derive good correspondences. Specifically, for the real plot data, 24
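
    A hedged sketch of the descriptor-matching and rigid-alignment idea (Python; the stem positions, the sorted-distance descriptor and the noise levels are invented simplifications of the paper's 2D/vertical distance descriptors, and Hungarian assignment plus a Kabsch/Procrustes fit stand in for the graph matching and transform estimation):

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(4)

        # Hypothetical 2D stem centres: terrestrial = rotated + translated
        # copy of the ALS stems, plus positioning noise.
        als = rng.uniform(0, 40, size=(12, 2))
        a_rot = np.deg2rad(25)
        Rm = np.array([[np.cos(a_rot), -np.sin(a_rot)],
                       [np.sin(a_rot), np.cos(a_rot)]])
        terr = als @ Rm.T + np.array([5.0, -3.0]) + rng.normal(0, 0.05, (12, 2))

        def descriptor(pts):
            # Rotation/translation-invariant descriptor: sorted distances
            # from each stem to all other stems.
            d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
            return np.sort(d, axis=1)[:, 1:]

        cost = np.linalg.norm(descriptor(als)[:, None]
                              - descriptor(terr)[None, :], axis=-1)
        row, col = linear_sum_assignment(cost)      # tiepoint correspondences

        # Kabsch/Procrustes fit of the rigid rotation from matched pairs
        # (reflection check via det omitted for brevity).
        a = als[row] - als[row].mean(0)
        b = terr[col] - terr[col].mean(0)
        U, _, Vt = np.linalg.svd(a.T @ b)
        R_est = (U @ Vt).T
        print("recovered rotation (deg):",
              np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))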

  9. Ultra-High-Throughput Sample Preparation System for Lymphocyte Immunophenotyping Point-of-Care Diagnostics.

    Science.gov (United States)

    Walsh, David I; Murthy, Shashi K; Russom, Aman

    2016-10-01

    Point-of-care (POC) microfluidic devices often lack the integration of common sample preparation steps, such as preconcentration, which can limit their utility in the field. In this technology brief, we describe a system that combines the necessary sample preparation methods to perform sample-to-result analysis of large-volume (20 mL) biopsy model samples with staining of captured cells. Our platform combines centrifugal-paper microfluidic filtration and an analysis system to process large, dilute biological samples. Utilizing commercialization-friendly manufacturing methods and materials, yielding a sample throughput of 20 mL/min, and allowing for on-chip staining and imaging together make for a practical yet powerful approach to microfluidic diagnostics of large, dilute samples. © 2016 Society for Laboratory Automation and Screening.

  10. Improving Multi-Objective Management of Water Quality Tipping Points: Revisiting the Classical Shallow Lake Problem

    Science.gov (United States)

    Quinn, J. D.; Reed, P. M.; Keller, K.

    2015-12-01

    Recent multi-objective extensions of the classical shallow lake problem are useful for exploring the conceptual and computational challenges that emerge when managing irreversible water quality tipping points. Building on this work, we explore a four-objective version of the lake problem in which a hypothetical town derives economic benefits from polluting a nearby lake, but at the risk of irreversibly tipping the lake into a permanently polluted state. The trophic state of the lake exhibits non-linear threshold dynamics; below some critical phosphorus (P) threshold it is healthy and oligotrophic, but above this threshold it is irreversibly eutrophic. The town must decide how much P to discharge each year, a decision complicated by uncertainty in the natural P inflow to the lake. The shallow lake problem provides a conceptually rich set of dynamics, low computational demands, and a high level of mathematical difficulty. These properties maximize its value for benchmarking the relative merits and limitations of emerging decision support frameworks, such as Direct Policy Search (DPS). Here, we explore the use of DPS as a formal means of developing robust environmental pollution control rules that effectively account for deeply uncertain system states and conflicting objectives. The DPS reformulation of the shallow lake problem shows promise in formalizing pollution control triggers and signposts, while dramatically reducing the computational complexity of the multi-objective pollution control problem. More broadly, the insights from the DPS variant of the shallow lake problem formulated in this study bridge emerging work related to socio-ecological systems management, tipping points, robust decision making, and robust control.

  11. 50 CFR 660.410 - Conservation objectives.

    Science.gov (United States)

    2010-10-01

    ... objective: except that the 35,000 natural spawner floor and the de minimis fishing provisions for Klamath... low natural spawner abundance, including the risk of Klamath Basin substocks dropping below crucial genetic thresholds; (ii) A series of low spawner abundance in recent years; (iii) The status of co-mingled...

  12. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

    In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123 I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing correction using the 5 min:29 min ratio of the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123 I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time in the interval between 5 and 20 minutes after 123 I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, and in 95% of the cases within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the 123 I-IMP microsphere model and one-point arterial blood sampling which no longer has a time limitation and does not require any octanol extraction step. (author)

  13. Results for the second quarter 2014 tank 50 WAC slurry sample chemical and radionuclide contaminants

    International Nuclear Information System (INIS)

    Bannochie, C.

    2014-01-01

    This report details the chemical and radionuclide contaminant results for the characterization of the 2014 Second Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by DWPF & Saltstone Facility Engineering (DSFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  14. Results for the Third Quarter 2014 Tank 50 WAC slurry sample: Chemical and radionuclide contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Charles L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-01-08

    This report details the chemical and radionuclide contaminant results for the characterization of the 2014 Third Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by DWPF & Saltstone Facility Engineering (DSFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  15. Results For The Fourth Quarter 2014 Tank 50 WAC Slurry Sample: Chemical And Radionuclide Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-09-30

    This report details the chemical and radionuclide contaminant results for the characterization of the Calendar Year (CY) 2014 Fourth Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by DWPF & Saltstone Facility Engineering (DSFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  16. Results For The Second Quarter 2013 Tank 50 WAC Slurry Sample: Chemical And Radionuclide Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Bannochie, Christopher J.

    2013-07-31

    This report details the chemical and radionuclide contaminant results for the characterization of the 2013 Second Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by Saltstone Facility Engineering (SFE) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System.

  17. Accuracy of micro four-point probe measurements on inhomogeneous samples: A probe spacing dependence study

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Østerberg, Frederik Westergaard

    2009-01-01

    In this paper, we discuss a probe spacing dependence study in order to estimate the accuracy of micro four-point probe measurements on inhomogeneous samples. Based on sensitivity calculations, both sheet resistance and Hall effect measurements are studied for samples (e.g. laser annealed samples) with periodic variations of sheet resistance, sheet carrier density, and carrier mobility. With a variation wavelength of λ, probe spacings from 0.001λ to 100λ have been applied to characterize the local variations. The calculations show that the measurement error is highly dependent on the probe spacing. When the probe spacing is smaller than 1/40 of the variation wavelength, micro four-point probes can provide an accurate record of local properties with less than 1% measurement error. All the calculations agree well with previous experimental results.

  18. Proposal of sampling protocols to verify possible performance objectives for Campylobacter species control in Italian broiler batches

    Directory of Open Access Journals (Sweden)

    Gerardo Manfreda

    2013-04-01

    Campylobacteriosis represents the most important food-borne illness in the EU. Broilers, as well as poultry meat, spread the majority of strains responsible for human cases. The main aims of this study were to suggest an approach for the definition of performance objectives (POs) based on prevalence and concentration of Campylobacter species (spp.) in broiler carcasses; moreover, sampling plans to determine the acceptability of broiler batches at the slaughterhouses in relation to such POs were formulated. The dataset used in this study was the Italian portion of the European Food Safety Authority baseline survey performed in the EU in 2008. A total of 393 carcasses obtained from 393 different batches collected from 48 Italian slaughterhouses were included in the analysis. Uncertainty in prevalence and concentration of Campylobacter spp. on carcasses was quantified assuming beta and log-normal distributions. Statistical analysis and distribution fitting were performed in ModelRisk v4.3 (Monte Carlo simulation with 10,000 iterations). By taking the 50th percentile of the prevalence distribution as the safety limit, sampling plans were subsequently calculated based on the binomial approach. The final number of samples to test with qualitative analysis was 4 or 5. Considering a limit of quantification of 10 colony forming units/g, a higher number of samples (i.e. 10-13) would be necessary to test using enumeration. An increase in the sensitivity of the analytical technique would be necessary to achieve realistic and useful sampling plans based on concentration data.
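
    The binomial acceptance logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the 50% prevalence limit and 5% consumer-risk level are assumptions chosen to show how the reported 4-5 sample range can arise.

    ```python
    from math import ceil, log

    def samples_needed(p_limit, consumer_risk=0.05):
        """Smallest n such that a batch at the prevalence limit p_limit is
        accepted (all n qualitative samples negative) with probability no
        greater than consumer_risk: (1 - p_limit)**n <= consumer_risk."""
        return ceil(log(consumer_risk) / log(1.0 - p_limit))

    # Illustrative: if the PO corresponds to 50% within-batch prevalence,
    # 5 qualitative samples give (0.5)**5 ~ 3% acceptance probability.
    print(samples_needed(0.50))   # -> 5
    ```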

  19. Point-of-purchase health information encourages customers to purchase vegetables: objective analysis by using a point-of-sales system.

    Science.gov (United States)

    Ogawa, Yoshiko; Tanabe, Naohito; Honda, Akiko; Azuma, Tomoko; Seki, Nao; Suzuki, Tsubasa; Suzuki, Hiroshi

    2011-07-01

    Point-of-purchase (POP) information at food stores could help promote healthy dietary habits. However, it has been difficult to evaluate the effects of such intervention on customers' behavior. We objectively evaluated the usefulness of POP health information for vegetables in the modification of customers' purchasing behavior by using the database of a point-of-sales (POS) system. Two supermarket stores belonging to the same chain were assigned as the intervention store (store I) and control store (store C). POP health information for vegetables was presented in store I for 60 days. The percent increase in daily sales of vegetables over the sales on the same date of the previous year was compared between the stores by using the database of the POS system, adjusting for the change in monthly visitors from the previous year (adjusted ∆sales). The adjusted ∆sales significantly increased during the intervention period (Spearman's ρ = 0.258, P for trend = 0.006) at store I but did not increase at store C (ρ = -0.037, P for trend = 0.728). The growth of the mean adjusted ∆sales of total vegetables from 30 days before the intervention period through the latter half of the intervention period was estimated to be greater at store I than at store C by 18.7 percentage points (95% confidence interval 1.6-35.9). Health-related POP information for vegetables in supermarkets can encourage customers to purchase and, probably, consume vegetables.

  20. Quantifying regional cerebral blood flow with N-isopropyl-p-[123I]iodoamphetamine and SPECT by one-point sampling method

    International Nuclear Information System (INIS)

    Odano, Ikuo; Takahashi, Naoya; Noguchi, Eikichi; Ohtaki, Hiro; Hatano, Masayoshi; Yamazaki, Yoshihiro; Higuchi, Takeshi; Ohkubo, Masaki.

    1994-01-01

    We developed a new non-invasive technique, the one-point sampling method, for quantitative measurement of regional cerebral blood flow (rCBF) with N-isopropyl-p-[ 123 I]iodoamphetamine and SPECT. Although continuous withdrawal of arterial blood and octanol treatment of the blood are required in the conventional microsphere method, the new technique does not require these two procedures. The total activity of 123 I-IMP obtained by continuous withdrawal of arterial blood is inferred from the activity of a one-point arterial sample of 123 I-IMP using a regression line. To determine the optimum one-point sampling time for inferring the integral input function of the continuous withdrawal, and whether octanol treatment of the sampled blood was required, we examined the correlation between the total activity of arterial blood withdrawn from 0 to 5 min after the injection and the activity of a one-point sample obtained at time t, and calculated a regression line. As a result, the minimum % error for the inference using the regression line was obtained at 6 min after the 123 I-IMP injection; moreover, the octanol treatment was not required. Then, examining the effect on the rCBF values when the sampling time deviated from 6 min, we could correct the values to within approximately 3% error when the sample was obtained at 6±1 min after the injection. The one-point sampling method provides accurate and relatively non-invasive measurement of rCBF without octanol extraction of arterial blood. (author)

  1. Ballistic electron transport in mesoscopic samples

    International Nuclear Information System (INIS)

    Diaconescu, D.

    2000-01-01

    In the framework of this thesis, the electron transport in the ballistic regime has been studied. Ballistic means that the lateral sample dimensions are smaller than the mean free path of the electrons, i.e. the electrons can travel through the whole device without being scattered. This leads to transport characteristics that differ significantly from the diffusive regime which is realised in most experiments. Making use of samples with high mean free path, features of ballistic transport have been observed on samples with sizes up to 100 μm. The basic device used in ballistic electron transport is the point contact, from which a collimated beam of ballistic electrons can be injected. Such point contacts were realised with focused ion beam (FIB) implantation and the collimating properties were analysed using a two opposite point contact configuration. The typical angular width at half maximum is around 50°, which is comparable with that of point contacts defined by other methods. (orig.)

  2. Effect of exposure history on microbial herbicide degradation in an aerobic aquifer affected by a point source

    DEFF Research Database (Denmark)

    Tuxen, Nina; de Lipthay, J.R.; Albrechtsen, Hans-Jørgen

    2002-01-01

    sampling points from within the plume, and neither BAM, bentazone, nor isoproturon was degraded in any sampling point. A linear correlation (R2 ≥ 0.83) between pre-exposure and amount of herbicide degraded within 50 days was observed for the phenoxy acids, mecoprop and dichlorprop. An improved model fit...

  3. Sterile paper points as a bacterial DNA-contamination source in microbiome profiles of clinical samples

    NARCIS (Netherlands)

    van der Horst, J.; Buijs, M.J.; Laine, M.L.; Wismeijer, D.; Loos, B.G.; Crielaard, W.; Zaura, E.

    2013-01-01

    Objectives High throughput sequencing of bacterial DNA from clinical samples provides untargeted, open-ended information on the entire microbial community. The downside of this approach is the vulnerability to DNA contamination from other sources than the clinical sample. Here we describe

  4. Cloud point extraction, preconcentration and spectrophotometric determination of nickel in water samples using dimethylglyoxime

    Directory of Open Access Journals (Sweden)

    Morteza Bahram

    2013-01-01

    A new and simple method for the preconcentration and spectrophotometric determination of trace amounts of nickel was developed by cloud point extraction (CPE). In the proposed work, dimethylglyoxime (DMG) was used as the chelating agent and Triton X-114 was selected as a non-ionic surfactant for CPE. The parameters affecting the cloud point extraction, including the pH of the sample solution, the concentrations of the chelating agent and surfactant, and the equilibration temperature and time, were optimized. Under the optimum conditions, the calibration graph was linear in the range of 10-150 ng mL-1 with a detection limit of 4 ng mL-1. The relative standard deviation for 9 replicates of 100 ng mL-1 Ni(II) was 1.04%. The interference effect of some anions and cations was studied. The method was applied to the determination of Ni(II) in water samples with satisfactory results.

  5. Cloud point extraction and spectrophotometric determination of mercury species at trace levels in environmental samples.

    Science.gov (United States)

    Ulusoy, Halil İbrahim; Gürkan, Ramazan; Ulusoy, Songül

    2012-01-15

    A new micelle-mediated separation and preconcentration method was developed for ultra-trace quantities of mercury ions prior to spectrophotometric determination. The method is based on cloud point extraction (CPE) of Hg(II) ions with polyethylene glycol tert-octylphenyl ether (Triton X-114) in the presence of chelating agents such as 1-(2-pyridylazo)-2-naphthol (PAN) and 4-(2-thiazolylazo) resorcinol (TAR). Hg(II) ions react with both PAN and TAR in a surfactant solution, yielding a hydrophobic complex at pH 9.0 and 8.0, respectively. The phase separation was accomplished by centrifugation for 5 min at 3500 rpm. The calibration graphs obtained from the Hg(II)-PAN and Hg(II)-TAR complexes were linear in the concentration ranges of 10-1000 μg L(-1) and 50-2500 μg L(-1), with detection limits of 1.65 and 14.5 μg L(-1), respectively. The relative standard deviations (RSDs) were 1.85% and 2.35% in determinations of 25 and 250 μg L(-1) Hg(II), respectively. The interference effects of several ions were studied, and ions commonly present in water samples were seen to have no significant effect on the determination of Hg(II). The developed methods were successfully applied to determine mercury concentrations in environmental water samples. The accuracy and validity of the proposed methods were tested by means of five replicate analyses of certified standard materials such as QC Metal LL3 (VWR, drinking water) and IAEA W-4 (NIST, simulated fresh water).

  6. Sample to answer visualization pipeline for low-cost point-of-care blood cell counting

    Science.gov (United States)

    Smith, Suzanne; Naidoo, Thegaran; Davies, Emlyn; Fourie, Louis; Nxumalo, Zandile; Swart, Hein; Marais, Philip; Land, Kevin; Roux, Pieter

    2015-03-01

    We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare solutions [1], and can be particularly beneficial for blood cell count tests, which are often the starting point in the process of diagnosing a patient [2]. The initial focus of this work is on total white and red blood cell counts, using a microfluidic cartridge [3] for sample processing. Analysis of the processed samples has been implemented by means of two main optical visualization systems developed in-house: 1) a fluidic operation analysis system using high speed video data to determine volumes, mixing efficiency and flow rates, and 2) a microscopy analysis system to investigate homogeneity and concentration of blood cells. Fluidic parameters were derived from the optical flow [4] as well as color-based segmentation of the different fluids using a hue-saturation-value (HSV) color space. Cell count estimates were obtained using automated microscopy analysis and were compared to a widely accepted manual method for cell counting using a hemocytometer [5]. The results using the first iteration microfluidic device [3] showed that the simplest, and thus lowest-cost, approach to microfluidic component implementation was not adequate as compared to techniques based on manual cell counting principles. An improved microfluidic design has been developed to incorporate enhanced mixing and metering components, which together with this work provides the foundation on which to successfully implement automated, rapid and low-cost blood cell counting tests.
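
    The HSV segmentation step can be sketched with a few OpenCV calls. This is our hedged illustration, not the in-house system: the hue range, saturation/value floors, and ROI convention are placeholders that would be tuned to the dyed sample's actual color.

    ```python
    import cv2
    import numpy as np

    def fluid_mask(frame_bgr, hue_lo=100, hue_hi=130):
        """Segment a colored fluid by hue range in HSV space (thresholds
        are placeholders; real values come from the dyed sample's color)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([hue_lo, 80, 80], dtype=np.uint8)
        upper = np.array([hue_hi, 255, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove speckle before measuring the filled area.
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    def filled_fraction(mask, channel_roi):
        """Approximate fill level of a channel as the fraction of ROI pixels
        covered by fluid; known channel geometry maps this to a volume."""
        x, y, w, h = channel_roi
        roi = mask[y:y + h, x:x + w]
        return float(np.count_nonzero(roi)) / roi.size
    ```

    Tracking `filled_fraction` frame by frame in high speed video gives flow rates, and comparing masks of two dyes before and after a junction gives a simple mixing-efficiency measure.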

  7. Data Quality Objectives For Selecting Waste Samples For Bench-Scale Reformer Treatability Studies

    International Nuclear Information System (INIS)

    Banning, D.L.

    2011-01-01

    This document describes the data quality objectives used to select archived samples located at the 222-S Laboratory for Bench-Scale Reforming testing. The type, quantity, and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluidized bed steam reformer. A determination of the adequacy of the fluidized bed steam reformer process to treat Hanford tank waste is required. The initial step in determining the adequacy of the fluidized bed steam reformer process is to select archived waste samples from the 222-S Laboratory that will be used in bench-scale tests. Analyses of the selected samples will be required to confirm that the samples meet the shipping requirements and for comparison to the bench-scale reformer (BSR) test sample selection requirements.

  8. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
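
    To make the density experiment concrete, here is a small sketch of the thinning-and-metric loop the study describes. The synthetic point cloud, the target densities, and the 95th-percentile height predictor are our illustrative assumptions, not the paper's data or exact regressors.

    ```python
    import numpy as np

    def thin_to_density(points, area_m2, target_density, rng):
        """Randomly thin an (N, 3) point array to target_density points/m^2,
        mimicking reduced-density LiDAR experiments."""
        n_keep = min(len(points), int(target_density * area_m2))
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    def height_metric(points, height_threshold=0.3, percentile=95):
        """Percentile of above-threshold heights, a common LiDAR predictor
        in crop height/biomass regressions; the threshold drops ground hits."""
        z = points[:, 2]
        z = z[z > height_threshold]
        return float(np.percentile(z, percentile)) if z.size else 0.0

    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(0, 10, 5000),
                           rng.uniform(0, 10, 5000),
                           rng.gamma(2.0, 0.5, 5000)])   # synthetic heights
    for d in (7.32, 4.0, 1.0):                            # points per m^2
        sub = thin_to_density(pts, area_m2=100.0, target_density=d, rng=rng)
        print(d, round(height_metric(sub), 2))
    ```

    Repeating this over many random thinnings, sampling-plot sizes, and threshold values is the kind of sensitivity sweep the study reports.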

  9. Neutron powder diffraction analysis of (Tm0.50Ca0.50)MnO3 and (Lu0.50Ca0.50)MnO3

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, A., E-mail: alberto.martinelli@spin.cnr.it [SPIN-CNR, C.so Perrone 24, 16152 Genova (Italy); Ferretti, M. [SPIN-CNR, C.so Perrone 24, 16152 Genova (Italy); Universita degli Studi di Genova, Dipartimento di Chimica e Chimica Industriale, Via Dodecaneso 31, 16146 Genova (Italy); Cimberle, M.R. [IMEM-CNR, Via Dodecaneso 33, 16146, Genova (Italy); Ritter, C. [Institute Laue-Langevin, 6 rue Jules Horowitz, 38042 Grenoble Cedex 9 (France)

    2012-12-15

    The crystal and magnetic structures of (Tm0.50Ca0.50)MnO3 and (Lu0.50Ca0.50)MnO3 have been investigated between 5 K and 300 K by means of high resolution neutron powder diffraction followed by Rietveld refinement and dc magnetic measurements. During cooling, orbital ordering at the Mn sub-lattice takes place at TOO ≈ 280 K in both compounds, inducing an orthorhombic to monoclinic phase transition. As the temperature is further decreased, an antiferromagnetic CE-type structure occurs in both compounds at TN ≈ 105 K. The comparison with other (Ln0.50Ca0.50)MnO3 compounds reveals that at room temperature the average Jahn-Teller distortion increases sharply with the decrease of the ionic radius for lanthanides heavier than Sm. The ordered magnetic moment progressively decreases as the lanthanide ionic radius decreases on account of the decreased values of the Mn-O-Mn bond angles. - Graphical abstract: Rietveld refinement plot for (Tm0.50Ca0.50)MnO3 obtained from neutron powder diffraction data collected at 5 K; the inset shows the CE-type spin ordering taking place at the Mn sub-lattice. Highlights: • The crystal and magnetic structures of (Tm0.50Ca0.50)MnO3 and (Lu0.50Ca0.50)MnO3 were analyzed by neutron powder diffraction. • Orbital ordering takes place below TOO ≈ 280 K in both compounds. • An antiferromagnetic CE-type structure occurs in both compounds below TN ≈ 105 K. • A comparison with other (Ln0.50Ca0.50)MnO3 compounds is reported.

  10. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    Science.gov (United States)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality system is the next generation technology to visualise the 3D real world intelligently. The technology is expanding at a fast pace to upgrade the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view actual dimensions of various objects that are captured by a smart phone in real time. The methodology proposed first establishes correspondence between a LiDAR point cloud, stored on a server, and the image that is captured by a mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo intensity image is generated using the LiDAR points and their intensity. The mobile image and pseudo intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate a point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and displays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
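
    The geometric core of such a pipeline is standard pinhole projection. The sketch below assumes a known interior matrix K and exterior orientation (R, t); the function names and the nearest-projected-point lookup are our illustration of the idea, not the authors' implementation.

    ```python
    import numpy as np

    def project(K, R, t, X):
        """Project world points X (N, 3) into pixel coordinates using the
        camera's interior (K) and exterior (R, t) orientation parameters."""
        Xc = (R @ X.T + t.reshape(3, 1)).T        # world -> camera frame
        uv = (K @ Xc.T).T
        return uv[:, :2] / uv[:, 2:3]             # perspective divide

    def dimension_between(picked_pixels, projected_pixels, lidar_xyz):
        """For two pixels picked on the mobile image, find the nearest
        projected LiDAR points and return the 3D distance between them."""
        endpoints = []
        for p in picked_pixels:
            d = np.linalg.norm(projected_pixels - np.asarray(p), axis=1)
            endpoints.append(lidar_xyz[int(np.argmin(d))])
        return float(np.linalg.norm(endpoints[0] - endpoints[1]))
    ```

    In the paper's pipeline the pixel-to-point association runs through the SIFT-registered pseudo intensity image rather than a raw nearest-pixel search, but the measured dimension is the same 3D distance between the matched LiDAR points.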

  11. Safety Evaluation Report related to the operation of Nine Mile Point Nuclear Station, Unit No. 2 (Docket No. 50-410)

    International Nuclear Information System (INIS)

    1987-07-01

    This report supplements the Safety Evaluation Report (NUREG-1047, February 1985) for the application filed by Niagara Mohawk Power Corporation, as applicant and co-owner, for the license to operate Nine Mile Point Nuclear Station, Unit 2 (Docket No. 50-410). It has been prepared by the Office of Nuclear Reactor Regulation of the US Nuclear Regulatory Commission. The facility is located near Oswego, New York. This report supports the issuance of the full-power license for Nine Mile Point Nuclear Station, Unit No. 2.

  12. Intraosseous blood samples for point-of-care analysis: agreement between intraosseous and arterial analyses.

    Science.gov (United States)

    Jousi, Milla; Saikko, Simo; Nurmi, Jouni

    2017-09-11

    Point-of-care (POC) testing is highly useful when treating critically ill patients. In case of difficult vascular access, the intraosseous (IO) route is commonly used, and blood is aspirated to confirm the correct position of the IO-needle. Thus, IO blood samples could be easily accessed for POC analyses in emergency situations. The aim of this study was to determine whether IO values agree sufficiently with arterial values to be used for clinical decision making. Two samples of IO blood were drawn from 31 healthy volunteers and compared with arterial samples. The samples were analysed for sodium, potassium, ionized calcium, glucose, haemoglobin, haematocrit, pH, blood gases, base excess, bicarbonate, and lactate using the i-STAT® POC device. Agreement and reliability were estimated by using the Bland-Altman method and intraclass correlation coefficient calculations. Good agreement was evident between the IO and arterial samples for pH, glucose, and lactate. Potassium levels were clearly higher in the IO samples than those from arterial blood. Base excess and bicarbonate were slightly higher, and sodium and ionised calcium values were slightly lower, in the IO samples compared with the arterial values. The blood gases in the IO samples were between arterial and venous values. Haemoglobin and haematocrit showed remarkable variation in agreement. POC diagnostics of IO blood can be a useful tool to guide treatment in critical emergency care. Seeking out the reversible causes of cardiac arrest or assessing the severity of shock are examples of situations in which obtaining vascular access and blood samples can be difficult, though information about the electrolytes, acid-base balance, and lactate could guide clinical decision making. The analysis of IO samples should though be limited to situations in which no other option is available, and the results should be interpreted with caution, because there is not yet enough scientific evidence regarding the agreement of IO and arterial values.
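
    The agreement statistic used here, the Bland-Altman method, is easy to reproduce. The sketch below is our illustration with made-up potassium values (IO typically reading higher), not the study's data.

    ```python
    import numpy as np

    def bland_altman(io_values, arterial_values):
        """Bias and 95% limits of agreement between paired IO and arterial
        measurements, per the standard Bland-Altman construction."""
        diff = np.asarray(io_values, float) - np.asarray(arterial_values, float)
        bias = diff.mean()
        sd = diff.std(ddof=1)               # sample SD of the differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical paired potassium values (mmol/L):
    bias, loa = bland_altman([4.9, 5.2, 5.0], [3.9, 4.1, 4.0])
    print(f"bias {bias:.2f}, limits of agreement {loa[0]:.2f} to {loa[1]:.2f}")
    ```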

  13. Visible Wavelength Reflectance Spectra and Taxonomies of Near-Earth Objects from Apache Point Observatory

    Science.gov (United States)

    Hammergren, Mark; Brucker, Melissa J.; Nault, Kristie A.; Gyuk, Geza; Solontoi, Michael R.

    2015-11-01

    Near-Earth Objects (NEOs) are interesting to scientists and the general public for diverse reasons: their impacts pose a threat to life and property; they present important albeit biased records of the formation and evolution of the Solar System; and their materials may provide in situ resources for future space exploration and habitation. In January 2015 we began a program of NEO astrometric follow-up and physical characterization using a 17% share of time on the Astrophysical Research Consortium (ARC) 3.5-meter telescope at Apache Point Observatory (APO). Our 500 hours of annual observing time are split into frequent, short astrometric runs (see poster by K. A. Nault et al.), and half-night runs devoted to physical characterization (see poster by M. J. Brucker et al. for preliminary rotational lightcurve results). NEO surface compositions are investigated with 0.36-1.0 μm reflectance spectroscopy using the Dual Imaging Spectrograph (DIS) instrument. As of August 25, 2015, including testing runs during the fourth quarter of 2014, we have obtained reflectance spectra of 68 unique NEOs, ranging in diameter from approximately 5 m to 8 km. In addition to investigating the compositions of individual NEOs to inform impact hazard and space resource evaluations, we may examine the distribution of taxonomic types and potential trends with other physical and orbital properties. For example, the Yarkovsky effect, which is dependent on asteroid shape, mass, rotation, and thermal characteristics, is believed to dominate other dynamical effects in driving the delivery of small NEOs from the main asteroid belt. Studies of the taxonomic distribution of a large sample of NEOs over a wide range of sizes will test this hypothesis. We present a preliminary analysis of the reflectance spectra obtained in our survey to date, including taxonomic classifications and potential trends with size. Acknowledgements: Based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which

  14. Results For The Third Quarter 2010 Tank 50 WAC Slurry Sample: Chemical And Radionuclide Contaminant Results

    International Nuclear Information System (INIS)

    Reigel, M.; Bibler, N.

    2010-01-01

    This report details the chemical and radionuclide contaminant results for the characterization of the 2010 Third Quarter sampling of Tank 50 for the Saltstone Waste Acceptance Criteria (WAC). Information from this characterization will be used by Liquid Waste Operations (LWO) to support the transfer of low-level aqueous waste from Tank 50 to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50 Waste Characterization System. The following conclusions are drawn from the analytical results provided in this report: (i) The concentrations of the reported chemical and radioactive contaminants were less than their respective WAC targets or limits unless noted in this section. (ii) The reported detection limits for 94 Nb, 247 Cm and 249 Cf are above the requested limits from Reference 4. However, they are below the limits established in Reference 3. (iii) The reported detection limit for 242m Am is greater than the requested limit from Attachment 8.4 of the WAC. (iv) The reported detection limit for Isopar L is greater than the limit from Table 3 of the WAC. (v) The reported concentration of Isopropanol is greater than the limit from Table 4 of the WAC. (vi) Isopar L and Norpar 13 have limited solubility in aqueous solutions making it difficult to obtain consistent and reliable sub-samples. The values reported in this memo are the concentrations in the sub-sample as detected by the GC/MS; however, the results may not accurately represent the concentrations of the analytes in Tank 50.

  15. Uncertainty analysis of point by point sampling complex surfaces using touch probe CMMs

    DEFF Research Database (Denmark)

    Barini, Emanuele; Tosello, Guido; De Chiffre, Leonardo

    2007-01-01

    The paper describes a study concerning point by point scanning of complex surfaces using tactile CMMs. A four factors-two level full factorial experiment was carried out, involving measurements on a complex surface configuration item comprising a sphere, a cylinder and a cone, combined in a single...

  16. Measuring the critical current in superconducting samples made of NT-50 under pulse irradiation by high-energy particles

    International Nuclear Information System (INIS)

    Vasilev, P.G.; Vladimirova, N.M.; Volkov, V.I.; Goncharov, I.N.; Zajtsev, L.N.; Zel'dich, B.D.; Ivanov, V.I.; Kleshchenko, E.D.; Khvostov, V.B.

    1981-01-01

    The results of tests of superconducting samples of an uninsulated wire of 0.5 mm diameter, containing 1045 superconducting filaments of 10 μm diameter made of NT-50 superconductor in a copper matrix, are given. The upper part of the sample (''closed'') is placed between two glass-cloth-base laminate plates of 50 mm length, and the lower part (''open'') of 45 mm length is immersed into liquid helium. The sample is located perpendicular to the magnetic field of a superconducting solenoid and is irradiated by charged particle beams at an energy of several GeV. Presented are the measurement results of permissible energy release in the sample depending on subcriticality (I/Ic, where I is the operating current through the sample and Ic is the critical current in the absence of the beam) and on the particle flux density, as well as of the maximum permissible fluence depending on subcriticality. In the case of the ''closed'' sample irradiated by short pulses (approximately 1 ms) for I/Ic...

  17. Spatiotemporal Compression Techniques for Moving Point Objects

    NARCIS (Netherlands)

    Meratnia, Nirvana; de By, R.A.; de By, R.A.; Bertino, E.

    Moving object data handling has received a fair share of attention over recent years in the spatial database community. This is understandable as positioning technology is rapidly making its way into the consumer market, not only through the already ubiquitous cell phone but soon also through small,

  18. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to the calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.
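
    For readers unfamiliar with MUSIC, the core computation is a noise-subspace projection. The sketch below is generic, not this paper's half-disc formulation: it assumes a supplied multistatic response matrix K and a greens(z) callback returning the background Green's vector for the receiver array at a test location z, both placeholders.

    ```python
    import numpy as np

    def music_image(K, greens, grid, tol=1e-8):
        """MUSIC pseudospectrum from a multistatic response matrix K.
        Scatterer locations appear as peaks where the (normalized) Green's
        vector is nearly orthogonal to the noise subspace of K."""
        U, s, _ = np.linalg.svd(K)
        rank = int(np.sum(s > tol * s[0]))    # signal-subspace dimension
        Un = U[:, rank:]                      # noise subspace
        image = []
        for z in grid:
            g = greens(z)
            g = g / np.linalg.norm(g)
            # Small projection onto the noise subspace -> large indicator.
            image.append(1.0 / (np.linalg.norm(Un.conj().T @ g) + 1e-15))
        return np.asarray(image)
    ```

    The paper's contribution sits inside `greens`: an explicit background Green's function for the substrate geometry makes evaluating it on the imaging grid cheap.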

  19. Communication: Newton homotopies for sampling stationary points of potential energy landscapes

    Energy Technology Data Exchange (ETDEWEB)

    Mehta, Dhagash, E-mail: dmehta@nd.edu [Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); University Chemical Laboratory, The University of Cambridge, Cambridge CB2 1EW (United Kingdom); Chen, Tianran, E-mail: chentia1@msu.edu [Department of Mathematics, Michigan State University, East Lansing, Michigan 48823 (United States); Hauenstein, Jonathan D., E-mail: hauenstein@nd.edu [Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Wales, David J., E-mail: dw34@cam.ac.uk [University Chemical Laboratory, The University of Cambridge, Cambridge CB2 1EW (United Kingdom)

    2014-09-28

    One of the most challenging and frequently arising problems in many areas of science is to find solutions of a system of multivariate nonlinear equations. There are several numerical methods that can find many (or all if the system is small enough) solutions but they all exhibit characteristic problems. Moreover, traditional methods can break down if the system contains singular solutions. Here, we propose an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials. We demonstrate how the procedure works by applying it to the nearest-neighbor ϕ4 model and atomic clusters.

  20. Communication: Newton homotopies for sampling stationary points of potential energy landscapes

    International Nuclear Information System (INIS)

    Mehta, Dhagash; Chen, Tianran; Hauenstein, Jonathan D.; Wales, David J.

    2014-01-01

    One of the most challenging and frequently arising problems in many areas of science is to find solutions of a system of multivariate nonlinear equations. There are several numerical methods that can find many (or all if the system is small enough) solutions but they all exhibit characteristic problems. Moreover, traditional methods can break down if the system contains singular solutions. Here, we propose an efficient implementation of Newton homotopies, which can sample a large number of the stationary points of complicated many-body potentials. We demonstrate how the procedure works by applying it to the nearest-neighbor ϕ4 model and atomic clusters.
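
    The Newton homotopy itself is simple to state: H(x, t) = F(x) - (1 - t)F(x0), so x0 is a root at t = 0 by construction and roots of F are reached at t = 1. Below is a toy fixed-step tracker, our illustration only; the authors' implementation uses adaptive path tracking and endgames.

    ```python
    import numpy as np

    def newton_homotopy(F, J, x0, steps=100, newton_iters=5):
        """Track H(x, t) = F(x) - (1 - t) * F(x0) from t = 0 to t = 1 with a
        naive stepper: at each t, run a few Newton corrections on H(., t)."""
        x = np.asarray(x0, dtype=float)
        F0 = F(x)
        for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
            for _ in range(newton_iters):
                H = F(x) - (1.0 - t) * F0
                x = x - np.linalg.solve(J(x), H)
        return x

    # Toy 2D system with a smooth homotopy path from the start point (1, 1):
    F = lambda x: np.array([x[0]**2 - 2.0, x[0] * x[1] - 1.0])
    J = lambda x: np.array([[2 * x[0], 0.0], [x[1], x[0]]])
    print(newton_homotopy(F, J, np.array([1.0, 1.0])))  # ~ [1.414, 0.707]
    ```

    For stationary-point sampling, F is the gradient of the potential and J its Hessian, and many start points x0 are used to harvest distinct solutions.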

  1. Objective assessment of subjective tinnitus through contralateral suppression of otoacoustic emissions by white noise; suggested cut-off points.

    Science.gov (United States)

    Riga, M; Komis, A; Maragkoudakis, P; Korres, G; Danielides, V

    2016-12-01

    Normative otoacoustic emission (OAE) suppression values are currently lacking, and the role of cochlear efferent innervation in tinnitus is controversial. The aim of this study was to investigate the association between tinnitus and medial olivocochlear bundle (MOCB) malfunction. Potential suppression amplitude cut-off criteria that could differentiate participants with tinnitus from those without were sought. Mean suppression amplitudes of transient evoked OAEs and distortion product OAEs by contralateral white noise (50 dBSL) were recorded. Six mean suppression amplitude criteria were validated as possible cut-off points. The population consisted of normal hearing (n = 78) or presbycusic (n = 19) adults with chronic tinnitus, or without it (n = 28 and 13, respectively); in total, n = 138 (78 females/60 males), aged 49 ± 14 years. Participants with mean suppression values lower than 0.5-1 dBSPL seem to present a high probability of reporting tinnitus (specificity 88-97%). On the other hand, participants with mean suppression values larger than 2-2.5 dBSPL seem to present a high probability of the absence of tinnitus (sensitivity 87-99%). Correlations were stronger among participants with bilateral presence or absence of tinnitus. This study seems to confirm an association between tinnitus and low suppression amplitudes (<1 dBSPL), which might evolve into an objective examination tool, supplementary to conventional audiological testing.

  2. Hidden treasures - 50 km points of interests

    Science.gov (United States)

    Lommi, Matias; Kortelainen, Jaana

    2015-04-01

    Tampere is the third largest city in Finland and a regional centre. During the 1970s, several communal mergers occurred there. Nowadays this local area has a strong and diverse identity - from wilderness and agricultural fields to high-density city living. Outside the city centre there are interesting geological points unknown to modern city settlers. There is even a local proverb, "Go abroad to Teisko!". That is the area the Hidden Treasures student project is focused on. Our school, Tammerkoski Upper Secondary School (or Gymnasium), has an emphasis on visual arts. We are going to offer our art students scientific and artistic experiences and knowledge about the hidden treasures of the Teisko area and involve the Teisko inhabitants in this project. Hidden treasures: - A Precambrian subduction zone and a volcanism belt with a dense bed of gold (Au) and arsenic (As), operating gold mines and quarries of minerals and metamorphic slates. - North of the subduction zone, a homogeneous Precambrian magma-stone area with quarries, whose products are known as Kuru Grey. - The former shores of post-glacial Lake Näsijärvi, whose sediments enabled developing agriculture and sustained settlement. Nowadays these shores have both scenery and biodiversity value. - Old cattle sheds and dairy buildings made of local granite stones, related to the cultural stone-building heritage. - The local active community of Kapee, of about 100 inhabitants. Students will discover information about these "hidden" phenomena and render this information through the Environmental Art Method. The final form of this project will be published in several artistic and informative geocaches. These caches are achieved by a GPS-based special Hidden Treasures Cycling Route and by a website guiding people to find these hidden points of interest.

  3. Quantification of regional cerebral blood flow (rCBF) measurement with one point sampling by sup 123 I-IMP SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Munaka, Masahiro [University of Occupational and Enviromental Health, Kitakyushu (Japan); Iida, Hidehiro; Murakami, Matsutaro

    1992-02-01

    A handy method of quantifying regional cerebral blood flow (rCBF) measurement by 123I-IMP SPECT was designed. A standard input function was constructed, and the sampling time to calibrate this standard input function by one-point sampling was optimized. An average standard input function was obtained from continuous arterial samplings of 12 healthy adults. The best sampling time was the one minimizing the difference between the integral of the standard input function calibrated by one-point sampling and that of the input function from continuous arterial sampling. This time was 8 minutes after an intravenous injection of 123I-IMP, and the error was estimated to be ±4.1%. The rCBF values obtained by this method were evaluated by comparing them with the rCBF values from the input function with continuous arterial sampling in 2 healthy adults and a patient with cerebral infarction. A significant correlation (r=0.764, p<0.001) was obtained between the two. (author).
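
    The calibration step common to these one-point methods can be sketched as scaling a population-average input curve through the single sample and then integrating. The curve shape and numbers below are invented for illustration; they are not the study's standard input function.

    ```python
    import numpy as np

    t_std = np.arange(0, 41, dtype=float)          # minutes after injection
    c_std = 100 * t_std * np.exp(-t_std / 6.0)     # assumed standard curve

    def calibrated_integral(t_sample, c_sample, t_end=40.0):
        """Scale the standard input function so it passes through the single
        arterial sample (taken at 8 min in this study) and integrate it;
        the integral is the denominator of the microsphere rCBF formula."""
        scale = c_sample / np.interp(t_sample, t_std, c_std)
        m = t_std <= t_end
        c, t = scale * c_std[m], t_std[m]
        return float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))  # trapezoid

    print(f"calibrated input integral: "
          f"{calibrated_integral(8.0, 95.0):.1f} (arbitrary units)")
    ```

    The reported ±4.1% error bounds how far an individual's true input integral tends to deviate from this scaled population curve at the optimized 8 min sampling time.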

  4. Novel analytical reagent for the application of cloud-point preconcentration and flame atomic absorption spectrometric determination of nickel in natural water samples

    International Nuclear Information System (INIS)

    Suvardhan, K.; Rekha, D.; Kumar, K. Suresh; Prasad, P. Reddy; Kumar, J. Dilip; Jayaraj, B.; Chiranjeevi, P.

    2007-01-01

    Cloud-point extraction was applied for the preconcentration of nickel after formation of a complex with the newly synthesized N-quino[8,7-b]azin-5-yl-2,3,5,6,8,9,11,12-octahydrobenzo[b][1,4,7,10,13]pentaoxacyclopentadecin-15-yl-methanimine, followed by determination by flame atomic absorption spectrometry (FAAS), using octyl phenoxy polyethoxy ethanol (Triton X-114) as surfactant. Nickel was complexed with the reagent in an aqueous phase and kept for 15 min in a thermostated bath at 40 °C. Separation of the two phases was accomplished by centrifugation for 15 min at 4000 rpm. The chemical variables affecting the cloud-point extraction were evaluated and optimized, and the method was successfully applied to nickel determination in various water samples. Under the optimized conditions, the preconcentration system with a 100 ml sample permitted an enhancement factor of 50-fold. A detailed study of various interferences made the method more selective. The detection limit obtained under optimal conditions was 0.042 ng ml-1. The extraction efficiency was investigated at different nickel concentrations (20-80 ng ml-1) and good recoveries (99.05-99.93%) were obtained using the present method. The proposed method has been applied successfully to the determination of nickel in various water samples and compared with a reported method in terms of Student's t-test and the variance-ratio F-test, which indicate the significance of the present method over reported spectrophotometric methods at the 95% confidence level.

  5. Normal P50 gating in children with autism, yet attenuated P50 amplitude in the Asperger subcategory

    DEFF Research Database (Denmark)

    Madsen, Gitte; Bilenberg, Niels; Jepsen, Jens Richardt Møllegaard

    2015-01-01

    Autism spectrum disorders (ASD) and schizophrenia are separate disorders, but there is evidence of conversion or comorbid overlap. The objective of this paper was to explore whether deficits in sensory gating, as seen in some schizophrenia patients, can also be found in a group of ASD children compared to neurotypically developed children. An additional aim was to investigate the possibility of subdividing our ASD sample based on these gating deficits. In a case-control design, we assessed gating of the P50 and N100 amplitude in 31 ASD children and 39 healthy matched controls (8-12 years) and screened for differences between groups and within the ASD group. We did not find disturbances in auditory P50 and N100 filtering in the group of ASD children as a whole, nor did we find abnormal P50 and N100 amplitudes. However, the P50 amplitude to the conditioning stimulus was significantly reduced...

  6. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are the inability to describe continuity features such as meandering streams or roads, and to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The method was examined across a range of land cover types, including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas, to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have previously been demonstrated to offer satisfactory results, under both homogeneous-area and heterogeneous-area situations. The studies showed that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types, from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
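
    A toy 1D version of the Direct Sampling idea shows the mechanics: a data event of nearest known cells, a random scan of the training image, and a distance threshold. It is illustrative only; the real method conditions on 2D patterns across bands, and all parameters here are arbitrary.

    ```python
    import numpy as np

    def direct_sampling_fill(signal, gap_mask, n_neighbors=8, threshold=0.05,
                             max_scan=2000, seed=0):
        """Fill gaps in a 1D signal by Direct Sampling: for each gap cell,
        scan random locations of the same signal (the training image) and
        paste the value whose neighborhood best matches the data event."""
        rng = np.random.default_rng(seed)
        x = signal.astype(float).copy()
        known = ~gap_mask.copy()
        for i in np.flatnonzero(gap_mask):
            idx = np.flatnonzero(known)
            near = idx[np.argsort(np.abs(idx - i))][:n_neighbors]
            offsets = near - i
            best_val, best_d = x[near].mean(), np.inf   # fallback: local mean
            for _ in range(max_scan):
                j = int(rng.integers(0, x.size))
                k = j + offsets
                if (k.min() < 0 or k.max() >= x.size
                        or not (known[k].all() and known[j])):
                    continue
                d = float(np.mean(np.abs(x[k] - x[near])))
                if d < best_d:
                    best_val, best_d = x[j], d
                if best_d <= threshold:       # accept the first good match
                    break
            x[i], known[i] = best_val, True
        return x
    ```

    Because the scan order and accepted matches are stochastic, re-running with different seeds yields the multiple equally plausible reconstructions mentioned above.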

  7. Evaluation of cephalogram using multi-objective frequency processing

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, Sakae; Takizawa, Tsutomu; Osako, Miho; Kaneda, Takashi; Kasai, Kazutaka [Nihon Univ., Chiba (Japan). School of Dentistry at Matsudo

    2002-12-01

    A diagnosis with cephalogram is important for orthodontic treatment. Recently, computed radiography (CR) has been applied to the cephalogram. However, evaluation of multi-objective frequency processing (MFP) for cephalograms has received little attention. The purpose of this study was to evaluate the cephalogram using MFP CR. First, 450 lateral cephalograms were made from 50 orthodontic patients, with 9 possible spatial frequency parameter combinations and a contrast scale held fixed in image processing. For each film, the clarity of the radiographic images was estimated and scored with respect to landmark identification (26 points in total: 20 points of hard tissue and 6 points of soft tissue). A specific combination of spatial frequency scales (multi-frequency balance type (MRB) F-type, multi-frequency enhancement (MRE) 8) proved adequate to achieve the optimal image quality in the cephalogram. (author)

  8. Evaluation of cephalogram using multi-objective frequency processing

    International Nuclear Information System (INIS)

    Hagiwara, Sakae; Takizawa, Tsutomu; Osako, Miho; Kaneda, Takashi; Kasai, Kazutaka

    2002-01-01

    A diagnosis with cephalogram is important for orthodontic treatment. Recently, computed radiography (CR) has been applied to the cephalogram. However, evaluation of multi-objective frequency processing (MFP) for cephalograms has received little attention. The purpose of this study was to evaluate the cephalogram using MFP CR. First, 450 lateral cephalograms were made from 50 orthodontic patients, with 9 possible spatial frequency parameter combinations and a contrast scale held fixed in image processing. For each film, the clarity of the radiographic images was estimated and scored with respect to landmark identification (26 points in total: 20 points of hard tissue and 6 points of soft tissue). A specific combination of spatial frequency scales (multi-frequency balance type (MRB) F-type, multi-frequency enhancement (MRE) 8) proved adequate to achieve the optimal image quality in the cephalogram. (author)

  9. Data Quality Objectives for Regulatory Requirements for Dangerous Waste Sampling and Analysis

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes sampling and analytical requirements needed to meet state and federal regulations for dangerous waste (DW). The River Protection Project (RPP) is assigned to the task of storage and interim treatment of hazardous waste. Any final treatment or disposal operations, as well as requirements under the land disposal restrictions (LDRs), fall in the jurisdiction of another Hanford organization and are not part of this scope. The requirements for this Data Quality Objective (DQO) Process were developed using the RPP Data Quality Objective Procedure (Banning 1996), which is based on the U.S. Environmental Protection Agency's (EPA) Guidance for the Data Quality Objectives Process (EPA 1994). Hereafter, this document is referred to as the DW DQO. Federal and state laws and regulations pertaining to waste contain requirements that are dependent upon the composition of the waste stream. These regulatory drivers require that pertinent information be obtained. For many requirements, documented process knowledge of a waste composition can be used instead of analytical data to characterize or designate a waste. When process knowledge alone is used to characterize a waste, it is a best management practice to validate the information with analytical measurements

  10. Sample Set (SE): SE50 [Metabolonote[Archive

    Lifescience Database Archive (English)

    SE50: Enhancement of oxidative and drought tolerance in Arabidopsis by overaccumulation of antioxidant flavonoids. We report that flavonoids with radical scavenging activity mitigate against oxidative stress. These data confirm the usefulness of flavonoids for enhancing both biotic and abiotic stress tolerance.

  11. Coarse point cloud registration by EGI matching of voxel clusters

    NARCIS (Netherlands)

    Wang, J.; Lindenbergh, R.C.; Shen, Y.; Menenti, M.

    2016-01-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The

  12. Investigation of the Equivalence of National Dew-Point Temperature Realizations in the -50 °C to + 20 °C Range

    Science.gov (United States)

    Heinonen, Martti; Anagnostou, Miltiadis; Bell, Stephanie; Stevens, Mark; Benyon, Robert; Bergerud, Reidun Anita; Bojkovski, Jovan; Bosma, Rien; Nielsen, Jan; Böse, Norbert; Cromwell, Plunkett; Kartal Dogan, Aliye; Aytekin, Seda; Uytun, Ali; Fernicola, Vito; Flakiewicz, Krzysztof; Blanquart, Bertrand; Hudoklin, Domen; Jacobson, Per; Kentved, Anders; Lóio, Isabel; Mamontov, George; Masarykova, Alexandra; Mitter, Helmut; Mnguni, Regina; Otych, Jan; Steiner, Anton; Szilágyi Zsófia, Nagyné; Zvizdic, Davor

    2012-09-01

    In the field of humidity quantities, the first CIPM key comparison, CCT-K6, is at its end. The corresponding European regional key comparison, EUROMET.T-K6, was completed in early 2008, about 4 years after the start of the initial measurements in the project. In total, 24 NMIs from different countries took part in the comparison. This number includes 22 EURAMET countries, plus Russia and South Africa. The comparison covered the dew-point temperature range from -50 °C to +20 °C. It was carried out in three parallel loops, each with two chilled mirror hygrometers as transfer standards. The comparison scheme was designed to ensure high quality results with an evenly spread workload for the participants. It is shown that the standard uncertainty due to long-term instability was smaller than 0.008 °C in all loops. The standard uncertainties due to links between the loops were found to be smaller than 0.025 °C at -50 °C and 0.010 °C elsewhere. Conclusions on the equivalence of the dew-point temperature standards are drawn on the basis of calculated bilateral degrees of equivalence and deviations from the EURAMET comparison reference values (ERV). Taking into account 16 different primary dew-point realizations and 8 secondary realizations, the results demonstrate the equivalence of a large number of laboratories at an uncertainty level that is better than achieved in other multilateral comparisons so far in the humidity field.

  13. Bibliography of papers, reports, and presentations related to point-sample dimensional measurement methods for machined part evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, J.M. [Sandia National Labs., Livermore, CA (United States). Integrated Manufacturing Systems

    1996-04-01

    The Dimensional Inspection Techniques Specification (DITS) Project is an ongoing effort to produce tools and guidelines for optimum sampling and data analysis of machined parts, when measured using point-sample methods of dimensional metrology. This report is a compilation of results of a literature survey, conducted in support of the DITS. Over 160 citations are included, with author abstracts where available.

  14. Speciation and Determination of Low Concentration of Iron in Beer Samples by Cloud Point Extraction

    Science.gov (United States)

    Khalafi, Lida; Doolittle, Pamela; Wright, John

    2018-01-01

    A laboratory experiment is described in which students determine the concentration and speciation of iron in beer samples using cloud point extraction and absorbance spectroscopy. The basis of determination is the complexation between iron and 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (5-Br-PADAP) as a colorimetric reagent in an aqueous…

  15. A 14-bit 50 MS/s sample-and-hold circuit for pipelined ADC

    International Nuclear Information System (INIS)

    Yue Sen; Zhao Yiqiang; Pang Ruilong; Sheng Yun

    2014-01-01

    A high performance sample-and-hold (S/H) circuit used in a pipelined analog-to-digital converter (ADC) is presented. Capacitor flip-around architecture is used in this S/H circuit with a novel gain-boosted differential folded cascode operational transconductance amplifier. A double-bootstrapped switch is designed to improve the performance of the circuit. The circuit is implemented using a 0.18 μm 1P6M CMOS process. Measurement results show that the effective number of bits is 14.03 bits, the spurious free dynamic range is 94.62 dB, the signal to noise and distortion ratio is 86.28 dB, and the total harmonic distortion is −91.84 dB for a 5 MHz input signal with 50 MS/s sampling rate. A pipeline ADC with the designed S/H circuit has been implemented. (semiconductor integrated circuits)
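
    As a quick consistency check, the reported effective number of bits follows from the SINAD via the standard relation ENOB = (SINAD - 1.76)/6.02:

    ```python
    def enob(sinad_db):
        """Effective number of bits from SINAD (dB), standard relation."""
        return (sinad_db - 1.76) / 6.02

    print(round(enob(86.28), 2))  # -> 14.04, matching the reported 14.03 bits
    ```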

  16. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
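
    The variance inflation can also be checked numerically. Below is a rough Monte Carlo sketch for a 1D integrand under Gaussian placement errors; it is our illustration (the paper's analysis is exact and asymptotic), and the clipping at the domain edge is a simplification.

    ```python
    import numpy as np

    def mc_variance(f, period, jitter_sd, n_grids=20000, seed=1):
        """Monte Carlo variance of the systematic-sampling estimator
        period * sum f(x_k) for the integral of f over [0, 1], where the
        sample points x_k = u + k*period + e_k carry placement errors e_k."""
        rng = np.random.default_rng(seed)
        n = int(1.0 / period)
        estimates = np.empty(n_grids)
        for g in range(n_grids):
            u = rng.uniform(0.0, period)                  # random grid phase
            x = u + period * np.arange(n) + rng.normal(0.0, jitter_sd, n)
            estimates[g] = period * np.sum(f(np.clip(x, 0.0, 1.0)))
        return estimates.var()

    f = lambda x: np.sin(2 * np.pi * 3 * x) + 2.0         # test integrand
    for sd in (0.0, 0.01, 0.05):                          # error levels
        print(sd, mc_variance(f, period=0.1, jitter_sd=sd))
    ```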

  17. 10 CFR Appendix I to Part 50 - Numerical Guides for Design Objectives and Limiting Conditions for Operation To Meet the...

    Science.gov (United States)

    2010-01-01

    ... pathways and individual receptors as actually exist at the time the plant is licensed. 2. The... Reactor Effluents SECTION I. Introduction. Section 50.34a provides that an application for a construction... guides on design objectives set forth in this section may be used by an applicant for a construction...

  18. 76 FR 40945 - Entergy Nuclear Indian Point 2, LLC, Entergy Nuclear Indian Point 3, LLC, Entergy Nuclear...

    Science.gov (United States)

    2011-07-12

    ... NUCLEAR REGULATORY COMMISSION [NRC-2011-0150; Docket Nos. 50-003, 50-247, and 50-286; License Nos. DPR-5, DPR-26, and DPR-64] Entergy Nuclear Indian Point 2, LLC, Entergy Nuclear Indian Point 3, LLC, Entergy Nuclear Operations, Inc.; Receipt of Request for Action Notice is hereby given that by petition...

  19. 75 FR 16201 - FPL Energy Point Beach, LLC; Point Beach Nuclear Plant, Units 1 and 2; Exemption

    Science.gov (United States)

    2010-03-31

    ... NUCLEAR REGULATORY COMMISSION [Docket Nos. 50-266 and 50-301; NRC-2010-0123] FPL Energy Point Beach, LLC; Point Beach Nuclear Plant, Units 1 and 2; Exemption 1.0 Background FPL Energy Point Beach.... Borchardt (NRC) to M. S. Fertel (Nuclear Energy Institute) dated June 4, 2009. The licensee's request for an...

  20. Object Classification Using Airborne Multispectral LiDAR Data

    Directory of Open Access Journals (Sweden)

    PAN Suoyan

    2018-02-01

    Airborne multispectral LiDAR systems, which obtain surface geometry and spectral data of objects simultaneously, have become a fast, effective, large-scale spatial data acquisition method. Multispectral LiDAR data are characterized by completeness and consistency of spectral and spatial geometric information. The support vector machine (SVM), a machine learning method, is capable of classifying objects from small training samples. Therefore, by means of SVM, this paper performs land cover classification using multispectral LiDAR data. First, all independent point clouds with different wavelengths are merged into a single point cloud, where each point carries the three-wavelength spectral information. Next, the merged point cloud is converted into range and intensity images. Finally, land-cover classification is performed by means of SVM. All experiments were conducted on Optech Titan multispectral LiDAR data, containing three individual point clouds collected by 532 nm, 1064 nm, and 1550 nm laser beams. Experimental results demonstrate that (1) compared to traditional single-wavelength LiDAR data, multispectral LiDAR data provide a promising solution to land use and land cover applications; (2) SVM is a feasible method for land cover classification of multispectral LiDAR data.
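
    A minimal sketch of the classification step described above, assuming per-point features built from the three wavelength intensities; the arrays, class count, and SVM parameters are placeholders, not the authors' setup (scikit-learn):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Stand-in features: per-point intensities at 532, 1064 and 1550 nm,
      # plus normalized height; labels are four hypothetical land-cover classes.
      X = rng.random((1000, 4))
      y = rng.integers(0, 4, size=1000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF SVM, as in the paper
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))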

  1. Benthic faunal sampling adjacent to the Barbers Point ocean outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment grain size...

  2. An analytic solution to the alibi query in the space-time prisms model for moving object data

    OpenAIRE

    GRIMSON, Rafael; KUIJPERS, Bart; OTHMAN, Walied

    2010-01-01

    Moving objects produce trajectories, which are stored in databases by means of finite samples of time-stamped locations. When also speed limitations in these sample points are known, space-time prisms (also called beads) (Egenhofer 2003, Miller 2005, Pfoser and Jensen 1999) can be used to model the uncertainty about an object’s location in between sample points. In this setting, a query of particular interest, that has been studied in the literature of geographic information systems (GIS), is...

  3. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by B-spline curve fitting based on the knot addition method (KAM). In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately required. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with a densely uniform knot vector to represent the features of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method reduces the number of knots ultimately required. (paper)
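
    A toy rendering of the selection idea on discrete sample points, assuming a chord-length term plus a turning-angle proxy for curvature and knots placed at equal increments of the cumulative feature integral; the weights and parameterization are assumptions, not the paper's formulation:

      import numpy as np

      def initial_knots(points, n_knots, w_len=1.0, w_curv=1.0):
          d = np.diff(points, axis=0)
          seg = np.linalg.norm(d, axis=1)                  # chord lengths
          heading = np.arctan2(d[:, 1], d[:, 0])
          turn = np.concatenate([[0.0], np.abs(np.diff(heading))])  # bending proxy
          feature = w_len * seg + w_curv * turn            # per-segment measure
          F = np.concatenate([[0.0], np.cumsum(feature)])  # monotone feature integral
          targets = np.linspace(0.0, F[-1], n_knots)       # constant increments
          u = np.linspace(0.0, 1.0, len(points))           # curve parameterization
          return np.interp(targets, F, u)                  # knot locations in [0, 1]

      pts = np.column_stack([np.linspace(0, 2 * np.pi, 200),
                             np.sin(np.linspace(0, 2 * np.pi, 200))])
      print(initial_knots(pts, 8))   # knots cluster where the curve bends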

  4. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    Science.gov (United States)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptors of those voxel clusters are constructed using the significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sample consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
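
    The dimensionality step can be sketched as a per-voxel PCA with eigenvalue-ratio rules; the voxel grouping below is simplified and the thresholds are assumptions (the EGI construction and RANSAC pruning are omitted):

      import numpy as np

      def voxel_dimensionality(points, voxel=0.5):
          keys = np.floor(points / voxel).astype(int)       # voxel index per point
          labels = {}
          for key in np.unique(keys, axis=0):
              pts = points[np.all(keys == key, axis=1)]
              if len(pts) < 3:
                  continue
              cov = np.cov((pts - pts.mean(axis=0)).T)
              ev = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
              lin = (ev[0] - ev[1]) / ev[0]                 # linearity
              pla = (ev[1] - ev[2]) / ev[0]                 # planarity
              labels[tuple(key)] = ("linear" if lin > 0.6 else
                                    "planar" if pla > 0.6 else "volumetric")
          return labels

      rng = np.random.default_rng(0)
      cloud = rng.random((2000, 3)) * [10.0, 10.0, 0.05]    # a roughly planar patch
      print(set(voxel_dimensionality(cloud).values()))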

  5. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    OpenAIRE

    Gupta, S.; Lohani, B.

    2014-01-01

    Mobile augmented reality system is the next generation technology to visualise 3D real world intelligently. The technology is expanding at a fast pace to upgrade the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view actual dimensions of various objects that are captured by a smart phone in real time. The methodology proposed first establishes correspondence between LiDAR point cloud, that are stored in a ser...

  6. Results for the First, Second, and Third Quarter Calendar Year 2015 Tank 50H WAC slurry samples chemical and radionuclide contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-02-18

    This report details the chemical and radionuclide contaminant results for the characterization of the Calendar Year (CY) 2015 First, Second, and Third Quarter sampling of Tank 50H for the Saltstone Waste Acceptance Criteria (WAC) in effect at that time. Information from this characterization will be used by Defense Waste Processing Facility (DWPF) & Saltstone Facility Engineering (D&S-FE) to support the transfer of low-level aqueous waste from Tank 50H to the Salt Feed Tank in the Saltstone Facility in Z-Area, where the waste will be immobilized. This information is also used to update the Tank 50H Waste Characterization System. Previous memoranda documenting the WAC analyses results have been issued for these three samples.

  7. Point prevalence of vertigo and dizziness in a sample of 2672 subjects and correlation with headaches.

    Science.gov (United States)

    Teggi, R; Manfrin, M; Balzanelli, C; Gatti, O; Mura, F; Quaglieri, S; Pilolli, F; Redaelli de Zinis, L O; Benazzo, M; Bussi, M

    2016-06-01

    Vertigo and dizziness are common symptoms in the general population, with an estimated prevalence between 20% and 56%. The aim of our work was to assess the point prevalence of these symptoms in a population of 2672 subjects. Patients were asked to answer a questionnaire: in the first part they were asked about demographic data and previous vertigo and/or dizziness; in the second part they were asked about the characteristics of vertigo (age of first episode, rotational vertigo, relapsing episodes, positional exacerbation, presence of cochlear symptoms) and the lifetime presence of moderate to severe headache and its clinical features (hemicranial, pulsatile, associated with phono- and photophobia, worse on effort). Mean age of the sample was 48.3 ± 15 years, and 46.7% were males. A total of 1077 (40.3%) subjects reported vertigo/dizziness during their lifetime, and the mean age of the first vertigo attack was 39.2 ± 15.4 years. An age and sex effect was demonstrated, with symptoms 4.4 times more frequent in females and 1.8 times more frequent in people over 50 years. In the total sample of 2672 responders, 13.7% reported a sensation of spinning, 26.3% relapsing episodes, 12.9% positional exacerbation and 4.8% cochlear symptoms; 34.8% reported headache during their lifetime. Subjects suffering from headache presented an increased rate of relapsing episodes, positional exacerbation and cochlear symptoms, and a lower age of occurrence of the first vertigo/dizziness episode. In the discussion, our data are compared with those of previous studies, and we underline the relationship between vertigo/dizziness on one side and headache with migrainous features on the other. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  8. Data Quality Objectives For Selecting Waste Samples To Test The Fluid Bed Steam Reformer Test

    International Nuclear Information System (INIS)

    Banning, D.L.

    2010-01-01

    This document describes the data quality objectives to select archived samples located at the 222-S Laboratory for Fluid Bed Steam Reformer testing. The type, quantity and quality of the data required to select the samples for Fluid Bed Steam Reformer testing are discussed. In order to maximize the efficiency and minimize the time to treat Hanford tank waste in the Waste Treatment and Immobilization Plant, additional treatment processes may be required. One of the potential treatment processes is the fluid bed steam reformer (FBSR). A determination of the adequacy of the FBSR process to treat Hanford tank waste is required. The initial step in determining the adequacy of the FBSR process is to select archived waste samples from the 222-S Laboratory that will be used to test the FBSR process. Analyses of the selected samples will be required to confirm the samples meet the testing criteria.

  9. A comparison of Landsat point and rectangular field training sets for land-use classification

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    Rectangular training fields of homogeneous spectroreflectance are commonly used in supervised pattern recognition efforts. Trial image classification with manually selected training sets gives irregular and misleading results due to statistical bias. A self-verifying, grid-sampled training point approach is proposed as a more statistically valid feature extraction technique. A systematic pixel sampling network of every ninth row and ninth column efficiently replaced the full image scene with smaller statistical vectors which preserved the necessary characteristics for classification. The composite second- and third-order average classification accuracy of 50.1 percent for 331,776 pixels in the full image substantially agreed with the 51 percent value predicted by the grid-sampled, 4,100-point training set.
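
    The grid sampling itself reduces to one slicing expression; a toy array stands in for the scene (the 81 × 81 size is illustrative, not the study's image dimensions):

      import numpy as np

      scene = np.arange(81 * 81).reshape(81, 81)   # placeholder for a Landsat band
      training = scene[::9, ::9]                   # every 9th row and 9th column
      print(training.shape)                        # (9, 9): ~1/81 of the pixels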

  10. Correlation between Body Mass Index, Gender, and Skeletal Muscle Mass Cut off Point in Bandung

    OpenAIRE

    Richi Hendrik Wattimena; Vitriana; Irma Ruslina Defi

    2017-01-01

    Objective: To determine the average skeletal muscle mass (SMM) value in young adults as a reference population; to analyze the correlation of gender, and body mass index to the cut off point; and to determine skeletal muscle mass cut off points of population in Bandung, Indonesia. Methods: This was a cross-sectional study involving 199 participants, 122 females and 77 males. The sampling technique used was the multistage random sampling. The participants were those who lived in four ma...

  11. Development of a cloud-point extraction method for copper and nickel determination in food samples

    International Nuclear Information System (INIS)

    Azevedo Lemos, Valfredo; Selis Santos, Moacy; Teixeira David, Graciete; Vasconcelos Maciel, Mardson; Almeida Bezerra, Marcos de

    2008-01-01

    A new, simple and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of copper and nickel. The metals in the initial aqueous solution were complexed with 2-(2'-benzothiazolylazo)-5-(N,N-diethyl)aminophenol (BDAP) and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified methanol was performed after phase separation, and the copper and nickel contents were measured by flame atomic absorption spectrometry. The variables affecting the cloud-point extraction were optimized using a Box-Behnken design. Under the optimum experimental conditions, enrichment factors of 29 and 25 were achieved for copper and nickel, respectively. The accuracy of the method was evaluated and confirmed by analysis of the following certified reference materials: Apple Leaves, Spinach Leaves and Tomato Leaves. The limits of detection expressed for solid sample analysis were 0.1 μg g-1 (Cu) and 0.4 μg g-1 (Ni). The precision for 10 replicate measurements of 75 μg L-1 Cu or Ni was 6.4 and 1.0, respectively. The method has been successfully applied to the analysis of food samples

  12. Distance of Sample Measurement Points to Prototype Catalog Curve

    DEFF Research Database (Denmark)

    Hjorth, Poul G.; Karamehmedovic, Mirza; Perram, John

    2006-01-01

    We discuss strategies for comparing discrete data points to a catalog (reference) curve by means of the Euclidean distance from each point to the curve in a pump's head H vs. flow Q diagram. In particular we find that a method currently in use is inaccurate. We propose several alternatives...
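
    A brute-force version of the point-to-curve distance, with a hypothetical catalog curve; note that H and Q carry different units, so in practice both axes should first be scaled to comparable ranges (an assumption left implicit below):

      import numpy as np

      Q = np.linspace(0.0, 10.0, 500)
      H = 30.0 - 0.2 * Q**2                          # hypothetical catalog curve
      curve = np.column_stack([Q, H])                # densely sampled reference

      samples = np.array([[2.0, 29.5], [5.0, 24.0], [8.0, 17.5]])  # measured (Q, H)
      d = np.linalg.norm(curve[None, :, :] - samples[:, None, :], axis=2)
      print(d.min(axis=1))                           # Euclidean distance per point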

  13. Variogram-based and Multiple-Point Statistical simulation of shallow aquifer structures in the Upper Salzach valley, Austria

    Science.gov (United States)

    Jandrisevits, Carmen; Marschallinger, Robert

    2014-05-01

    Quaternary sediments in overdeepened alpine valleys and basins in the Eastern Alps bear substantial groundwater resources. The associated aquifer systems are generally geometrically complex with highly variable hydraulic properties. 3D geological models provide predictions of both the geometry and the properties of the subsurface required for subsequent modelling of groundwater flow and transport. In hydrology, geostatistical Kriging and Kriging-based conditional simulations are widely used to predict the spatial distribution of hydrofacies. In the course of investigating the shallow aquifer structures in the Zell basin in the Upper Salzach valley (Salzburg, Austria), a benchmark of available geostatistical modelling and simulation methods was performed: traditional variogram-based geostatistical methods, i.e. Indicator Kriging, Sequential Indicator Simulation and Sequential Indicator Co-Simulation, were used as well as Multiple-Point Statistics. The ~6 km² investigation area is sampled by 56 drillings with depths of 5 to 50 m; in addition, there are 2 geophysical sections with lengths of 2 km and depths of 50 m. Due to clustered drilling sites, Indicator Kriging models failed to consistently model the spatial variability of hydrofacies. Using classical variogram-based geostatistical simulation (SIS), equally probable realizations were generated, with the differences among the realizations providing an uncertainty measure. The yielded models are unstructured from a geological point of view - they do not portray the shapes and lateral extensions of the associated sedimentary units. Since variograms consider only two-point spatial correlations, they are unable to capture the spatial variability of complex geological structures. The Multiple-Point Statistics approach overcomes these limitations of two-point statistics as it uses a training image instead of variograms. The 3D training image can be seen as a reference facies model where geological knowledge about depositional

  14. Data Quality Objectives for Regulatory Requirements for Dangerous Waste Sampling and Analysis; FINAL

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes sampling and analytical requirements needed to meet state and federal regulations for dangerous waste (DW). The River Protection Project (RPP) is assigned to the task of storage and interim treatment of hazardous waste. Any final treatment or disposal operations, as well as requirements under the land disposal restrictions (LDRs), fall in the jurisdiction of another Hanford organization and are not part of this scope. The requirements for this Data Quality Objective (DQO) Process were developed using the RPP Data Quality Objective Procedure (Banning 1996), which is based on the U.S. Environmental Protection Agency's (EPA) Guidance for the Data Quality Objectives Process (EPA 1994). Hereafter, this document is referred to as the DW DQO. Federal and state laws and regulations pertaining to waste contain requirements that are dependent upon the composition of the waste stream. These regulatory drivers require that pertinent information be obtained. For many requirements, documented process knowledge of a waste composition can be used instead of analytical data to characterize or designate a waste. When process knowledge alone is used to characterize a waste, it is a best management practice to validate the information with analytical measurements

  15. Large-scale urban point cloud labeling and reconstruction

    Science.gov (United States)

    Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu

    2018-04-01

    The large number of object categories and many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges in point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLU) rather than the traditional sigmoid are taken as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of neurons by dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, and forms a discriminative feature representation which is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then they are reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
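
    A minimal sketch of a network in the spirit of the ReLu-NN: fully connected layers with ReLU activations and dropout, written in PyTorch. The layer sizes, descriptor dimension and class count are assumptions, not the paper's architecture:

      import torch
      import torch.nn as nn

      class ReLuNN(nn.Module):
          def __init__(self, n_features=33, n_classes=5, hidden=128, p_drop=0.5):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
                  nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
                  nn.Linear(hidden, n_classes))          # per-point class logits

          def forward(self, x):
              return self.net(x)

      model = ReLuNN()
      logits = model(torch.randn(4, 33))   # four points, 33-dim descriptors
      print(logits.shape)                  # torch.Size([4, 5])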

  16. Point cloud processing for smart systems

    Directory of Open Access Journals (Sweden)

    Jaromír Landa

    2013-01-01

    Full Text Available High population as well as the economical tension emphasises the necessity of effective city management – from land use planning to urban green maintenance. The management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data pieces the state of the roads, buildings, trees and other objects important for this decision-making process can be obtained. Generally, they can support the idea of “smart” or at least “smarter” cities.Unfortunately the point clouds do not provide this type of information automatically. It has to be extracted. This extraction is done by expert personnel or by object recognition software. As the point clouds can represent large areas (streets or even cities, usage of expert personnel to identify the required objects can be very time-consuming, therefore cost ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically.The first part of the article reviews and analyses the state of current art point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed. Therefore, the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can be later directly used in various GIS systems for further analyses.

  17. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
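
    The abstract combines two standard ingredients, sketched below under simple assumptions (objective values as tuples, fixed inverse temperatures). The swap test is the usual replica-exchange Metropolis criterion; the dominance logic is generic min-min Pareto filtering, not the authors' exact selection scheme:

      import math, random

      def dominates(a, b):
          # min-min scenario: a dominates b if it is no worse in every
          # objective and strictly better in at least one
          return (all(x <= y for x, y in zip(a, b)) and
                  any(x < y for x, y in zip(a, b)))

      def pareto_front(scores):
          return [s for s in scores
                  if not any(dominates(t, s) for t in scores if t != s)]

      def swap_accepted(E_i, E_j, beta_i, beta_j):
          # Metropolis acceptance for exchanging replicas at inverse
          # temperatures beta_i and beta_j with energies E_i and E_j
          delta = (beta_i - beta_j) * (E_i - E_j)
          return delta >= 0 or random.random() < math.exp(delta)

      scores = [(-10.2, 3.1), (-9.8, 2.7), (-11.0, 3.5)]   # two toy objectives
      print(pareto_front(scores))
      print(swap_accepted(-120.0, -95.0, 1.0, 0.5))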

  18. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

    Parresol, Bernard, R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  19. Future orientation and suicide ideation and attempts in depressed adults ages 50 and over.

    Science.gov (United States)

    Hirsch, Jameson K; Duberstein, Paul R; Conner, Kenneth R; Heisel, Marnin J; Beckman, Anthony; Franus, Nathan; Conwell, Yeates

    2006-09-01

    The objective of this study was to test the hypothesis that future orientation is associated with lower levels of suicide ideation and lower likelihood of suicide attempt in a sample of patients in treatment for major depression. Two hundred two participants (116 female, 57%) ages 50-88 years were recruited from inpatient and outpatient settings. All were diagnosed with major depression using a structured diagnostic interview. Suicide ideation was assessed with the Scale for Suicide Ideation (both current and worst point ratings), and a measure of future orientation was created to assess future expectancies. The authors predicted that greater future orientation would be associated with less current and worst point suicide ideation, and would distinguish current and lifetime suicide attempters from nonattempters. Hypotheses were tested using multivariate logistic regression and linear regression analyses that accounted for age, gender, hopelessness, and depression. As hypothesized, higher future orientation scores were associated with lower current suicidal ideation, less intense suicidal ideation at its worst point, and lower probability of a history of attempted suicide after accounting for covariates. Future orientation was not associated with current attempt status. Future orientation holds promise as a cognitive variable associated with decreased suicide risk; a better understanding of its putative protective role is needed. Treatments designed to enhance future orientation might decrease suicide risk.

  20. A scalable and multi-purpose point cloud server (PCS) for easier and faster point cloud data management and processing

    Science.gov (United States)

    Cura, Rémi; Perret, Julien; Paparoditis, Nicolas

    2017-05-01

    In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (like sensor position) and can conjointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ~50 million pts/h per process range, transparent-for-user compression at ratios greater than 2:1 to 4:1, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per process range, along with classical processing methods, such as object detection.

  1. Object Recognition and Localization: The Role of Tactile Sensors

    Directory of Open Access Journals (Sweden)

    Achint Aggarwal

    2014-02-01

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach, called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF), is based on an innovative combination of particle filters, the Iterative Closest Point algorithm, and a feature-based Random Sample Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge-following exploration strategy is developed that receives feedback from the current state of recognition. A recognition-by-parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments.

  2. New neural-networks-based 3D object recognition system

    Science.gov (United States)

    Abolmaesumi, Purang; Jahed, M.

    1997-09-01

    Three-dimensional object recognition has always been one of the challenging fields in computer vision. In recent years, Ullman and Basri (1991) have proposed that this task can be done by using a database of 2-D views of the objects. The main problem in their proposed system is that the corresponding points must be known to interpolate the views. On the other hand, their system requires a supervisor to decide which class the represented view belongs to. In this paper, we propose a new momentum-Fourier descriptor that is invariant to scale, translation, and rotation. This descriptor provides the input feature vectors to our proposed system. By using the Dystal network, we show that the objects can be classified with over 95% precision. We have used this system to classify objects such as the cube, cone, sphere, torus, and cylinder. Because of the nature of the Dystal network, this system reaches its stable point after a single presentation of a view to the system. This system can also group similar views into a single class (e.g., for the cube, the system generated 9 different classes for 50 different input views), which can be used to select an optimum database of training views. The system is also very robust to noise and deformed views.
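
    The invariances mentioned can be illustrated with the classical Fourier descriptor construction (the paper's momentum-Fourier variant is not reproduced here): dropping the DC term, taking magnitudes, and normalizing by the first harmonic yield translation, rotation/starting-point, and scale invariance, respectively:

      import numpy as np

      def fourier_descriptor(contour, k=16):
          z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
          Z = np.fft.fft(z)
          Z[0] = 0.0                               # drop DC term: translation invariant
          mag = np.abs(Z)                          # magnitudes: rotation invariant
          mag = mag / mag[1]                       # divide by first harmonic: scale invariant
          return mag[2:2 + k]                      # compact invariant feature vector

      theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
      circle = np.column_stack([np.cos(theta), np.sin(theta)])
      print(fourier_descriptor(circle)[:4])        # near zero for a perfect circle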

  3. Modelling magnetic polarisation J50 by different methods

    International Nuclear Information System (INIS)

    Yonamine, Taeko; Campos, Marcos F. de; Castro, Nicolau A.; Landgraf, Fernando J.G.

    2006-01-01

    Two different methods for modelling the angular behaviour of the magnetic polarisation at 5000 A/m (J50) of electrical steels were evaluated and compared. Both methods are based upon crystallographic texture data. The texture of non-oriented electrical steels with silicon content ranging from 0.11 to 3% Si was determined by X-ray diffraction. In the first method, J50 was correlated to the calculated value of the average anisotropy energy in each direction, using texture data. In the second method, the first three coefficients of the spherical harmonic series of the ODF and two experimental points were used to estimate the angular variation of J50. The first method allows the estimation of J50 for samples with different textures and Si contents using only the texture data, with no need for magnetic measurements; this is advantageous because texture data can be acquired with less than 2 g of material. The second method may give a better fit in some situations, but besides the texture data it requires magnetic measurements in at least two directions, for example the rolling and transverse directions

  4. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (~15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours, up to a full day. However, for bright synchrotron sources, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that currently results in typical exposure times of ~15 min. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  5. ICECAP: an integrated, general-purpose, automation-assisted IC50/EC50 assay platform.

    Science.gov (United States)

    Li, Ming; Chou, Judy; King, Kristopher W; Jing, Jing; Wei, Dong; Yang, Liyu

    2015-02-01

    IC50 and EC50 values are commonly used to evaluate drug potency. Mass spectrometry (MS)-centric bioanalytical and biomarker labs are now conducting IC50/EC50 assays, which, if done manually, are tedious and error-prone. Existing bioanalytical sample preparation automation systems cannot meet IC50/EC50 assay throughput demand. A general-purpose, automation-assisted IC50/EC50 assay platform was developed to automate the calculations of spiking solutions and the matrix solutions preparation scheme, the actual spiking and matrix solutions preparations, as well as the flexible sample extraction procedures after incubation. In addition, the platform also automates the data extraction, nonlinear regression curve fitting, computation of IC50/EC50 values, graphing, and reporting. The automation-assisted IC50/EC50 assay platform can process the whole class of assays of varying assay conditions. In each run, the system can handle up to 32 compounds and up to 10 concentration levels per compound, and it greatly improves IC50/EC50 assay experimental productivity and data processing efficiency. © 2014 Society for Laboratory Automation and Screening.
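
    The curve-fitting step such a platform automates is commonly a four-parameter logistic (4PL) regression; the abstract does not name the model, so the 4PL below is an assumption, shown with scipy on synthetic data:

      import numpy as np
      from scipy.optimize import curve_fit

      def four_pl(c, bottom, top, ic50, hill):
          # four-parameter logistic dose-response model
          return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

      conc = np.logspace(-3, 2, 10)                        # concentrations (toy units)
      resp = four_pl(conc, 2.0, 100.0, 0.5, 1.2)           # true IC50 = 0.5
      resp += np.random.default_rng(1).normal(0, 1.5, conc.size)   # assay noise

      popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 100.0, 1.0, 1.0])
      print("fitted IC50 ~ %.3g" % popt[2])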

  6. Objective Voice Parameters in Colombian School Workers with Healthy Voices

    Directory of Open Access Journals (Sweden)

    Lady Catherine Cantor Cutiva

    2015-09-01

    Objectives: To characterize the objective voice parameters among school workers, and to identify associated factors of three objective voice parameters, namely fundamental frequency, sound pressure level and maximum phonation time. Materials and methods: We conducted a cross-sectional study among 116 Colombian teachers and 20 Colombian non-teachers. After signing the informed consent form, participants filled out a questionnaire. Then, a voice sample was recorded and evaluated perceptually by a speech therapist and by objective voice analysis with Praat software. Short-term environmental measurements of sound level, temperature, humidity, and reverberation time were conducted during visits at the workplaces, such as classrooms and offices. Linear regression analysis was used to determine associations between individual and work-related factors and objective voice parameters. Results: Compared with men, women had a higher fundamental frequency (201 Hz for teachers and 209 Hz for non-teachers vs. 120 Hz for teachers and 127 Hz for non-teachers), a higher sound pressure level (82 dB vs. 80 dB), and a shorter maximum phonation time (around 14 seconds vs. around 16 seconds). Female teachers younger than 50 years of age evidenced a significant tendency to speak with a lower fundamental frequency and a shorter maximum phonation time compared with female teachers older than 50 years of age. Female teachers had a significantly higher fundamental frequency (by 66 Hz), higher sound pressure level (by 2 dB) and shorter maximum phonation time (by 2 seconds) than male teachers. Conclusion: Female teachers younger than 50 years of age had a significantly lower fundamental frequency and shorter maximum phonation time compared with those older than 50 years of age. The multivariate analysis showed that gender was a much more important determinant of variations in fundamental frequency, sound pressure level and maximum phonation time than age and teaching occupation. Objectively measured temperature also contributed to the changes in sound pressure level among school workers.

  7. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
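
    The gain reported for non-equidistant sampling can be illustrated with a toy comparison, assuming that a constant pixel step in image space corresponds to uniform steps in inverse depth under perspective projection (the cross-ratio derivation itself is in the paper):

      import numpy as np

      d_min, d_max, n = 2.0, 50.0, 8
      linear = np.linspace(d_min, d_max, n)             # equidistant in depth
      inverse = 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)   # equidistant in 1/depth

      print(np.round(linear, 1))    # uniform spacing everywhere
      print(np.round(inverse, 1))   # dense near the camera, sparse far away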

  8. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  9. Obstacles of Scientific Research with Faculty of University of Jadara from Their Point of View

    Science.gov (United States)

    Hatamleh, Habes Moh'd

    2016-01-01

    This study aimed to estimate the degree of obstacles to scientific research facing faculty members at Jadara University, from their point of view. The number of members that responded to the study reached 100, accounting for 80% of the study population. To achieve the objectives of the study, the researcher…

  10. Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.

    Science.gov (United States)

    Piastra, Marco

    2013-05-01

    Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
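
    The CHL rule the abstract builds on is compact enough to sketch: for each input sample, connect the nearest and second-nearest reference units with an edge. The sketch keeps unit positions fixed and omits SOAM's growth and phase-transition machinery:

      import numpy as np

      def chl_edges(units, samples):
          edges = set()
          for x in samples:
              d = np.linalg.norm(units - x, axis=1)
              i, j = np.argsort(d)[:2]          # winner and second winner
              edges.add((min(i, j), max(i, j))) # undirected edge, canonical order
          return edges

      rng = np.random.default_rng(0)
      units = rng.random((20, 2))               # fixed reference units
      samples = rng.random((500, 2))            # input point samples
      print(len(chl_edges(units, samples)), "edges estimated")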

  11. Latin Hypercube Sampling (LHS) at variable resolutions for enhanced watershed scale Soil Sampling and Digital Soil Mapping.

    Science.gov (United States)

    Hamalainen, Sampsa; Geng, Xiaoyuan; He, Juanxia

    2017-04-01

    The Latin Hypercube Sampling (LHS) approach to assist with Digital Soil Mapping has been developed for some time now; the purpose of this work was to complement LHS with the use of multiple spatial resolutions of covariate datasets and variability in the range of sampling points produced. This allowed specific sets of LHS points to be produced to fulfil the needs of various partners from multiple projects working in the Ontario and Prince Edward Island provinces of Canada. Secondary soil and environmental attributes are critical inputs that are required in the development of sampling points by LHS. These include a required Digital Elevation Model (DEM) and subsequent covariate datasets produced as a result of a digital terrain analysis performed on the DEM. These additional covariates often include, but are not limited to, the Topographic Wetness Index (TWI), the Length-Slope (LS) Factor, and Slope, which are continuous data. The number of points created by LHS ranged from 50 to 200, depending on the size of the watershed and, more importantly, the number of soil types found within it. The spatial resolution of the covariates included in the work ranged from 5 to 30 m. The iterations within the LHS sampling were run at an optimal level so that the LHS model provided a good spatial representation of the environmental attributes within the watershed. Additional covariates that are categorical in nature, such as external surficial geology data, were also included in the Latin Hypercube Sampling approach. Initial results of the work include using 1000 iterations within the LHS model; 1000 iterations was consistently a reasonable value used to produce sampling points that provided a good spatial representation of the environmental
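
    For readers unfamiliar with the construction the abstract relies on, a minimal Latin hypercube sampler over the unit cube: one stratum per sample in each dimension, a random position inside each stratum, and independent shuffles per axis. Mapping the columns to actual covariate ranges (e.g. TWI, slope) is omitted:

      import numpy as np

      def latin_hypercube(n, dim, seed=None):
          rng = np.random.default_rng(seed)
          # one point per stratum, jittered within each of the n strata
          u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
          for d in range(dim):
              rng.shuffle(u[:, d])              # decouple the strata across axes
          return u

      pts = latin_hypercube(50, 3, seed=0)      # e.g. 50 points over 3 covariates
      print(pts.min(), pts.max())               # all points inside the unit cube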

  12. Sample and injection manifolds used for in-place testing of nuclear air-cleaning systems

    International Nuclear Information System (INIS)

    Qiu Dangui; Li Xinzhi; Hou Jianrong; Qiao Taifei; Wu Tao; Zhang Jirong; Han Lihong

    2012-01-01

    Objective: According to nuclear safety rules and related standards, in-place tests of nuclear air-cleaning systems should be carried out before and during operation of nuclear facilities to ensure they are in good condition. In some special conditions, the use of sample and injection manifolds is required to mix the test tracer fully with the air in the ventilating duct, so as to obtain a representative on-site sample. Methods: This paper introduces the technology and application of sample and injection manifolds in nuclear air-cleaning systems. Results: Multi-point injection and multi-point sampling technology, as an effective experimental method, has been used in a number of domestic and international nuclear facilities. Conclusion: The technology solves the problem of uniformity of on-site injection and sampling, which plays an important role in objectively evaluating the function of nuclear air-cleaning systems. (authors)

  13. Sample sufficiency of Chinese pink grown in different substrates

    Directory of Open Access Journals (Sweden)

    Sidinei José lopes

    2016-04-01

    Chinese pink is an excellent plant for gardens due to its early and abundant flowering and great performance in spring and autumn. The objective was to estimate the sample size for Chinese pink plants grown on different substrates, and to check the variability of the sample size across growth and production parameters and substrates. Seven treatments (substrates) were used: S1 = 50% soil + 50% rice husk ash; S2 = 80% soil + 20% earthworm castings; S3 = 80% rice husk ash + 20% earthworm castings; S4 = 40% soil + 40% rice husk ash + 20% earthworm castings; S5 = 100% peat; S6 = 100% commercial substrate Mecplant®; S7 = 50% peat + 50% rice husk ash, with 56 repetitions each, totaling 392 Chinese pink plants, which were evaluated on 17 growth and production parameters. The methodology used bootstrap resampling, with replacement, for each character within each substrate, with predetermined errors of 5, 10, 20 and 40% of the mean (D%). For a 95% confidence interval with D = 20%, the substrate with 50% soil + 50% rice husk ash had the largest sample size for 11 characters; comparing the characters, the number of flower buds had the largest sample size, on average 113 plants. Samples of 44 Chinese pink plants meet precisions of 20% or better for all variables on the commercial substrate Mecplant®. The sample size varies with the substrate used and the variable evaluated in Chinese pink plants.

  14. Critical analysis of consecutive unilateral cleft lip repairs: determining ideal sample size.

    Science.gov (United States)

    Power, Stephanie M; Matic, Damir B

    2013-03-01

    Objective: Cleft surgeons often show 10 consecutive lip repairs to reduce presentation bias; however, the validity of this practice remains unknown. The purpose of this study is to determine the number of consecutive cases that represent average outcomes. Secondary objectives are to determine if outcomes correlate with cleft severity and to calculate interrater reliability. Design: Consecutive preoperative and 2-year postoperative photographs of the unilateral cleft lip-nose complex were randomized and evaluated by cleft surgeons. Parametric analysis was performed according to chronologic, consecutive order. The mean standard deviation over all raters enabled calculation of expected 95% confidence intervals around a mean tested for various sample sizes. Setting: Meeting of the American Cleft Palate-Craniofacial Association in 2009. Patients, Participants: Ten senior cleft surgeons evaluated 39 consecutive lip repairs. Main Outcome Measures: Preoperative severity and postoperative outcomes were evaluated using descriptive and quantitative scales. Results: Intraclass correlation coefficients for cleft severity and postoperative evaluations were 0.65 and 0.21, respectively. Outcomes did not correlate with cleft severity (P = .28). Calculations for 10 consecutive cases demonstrated wide 95% confidence intervals, spanning two points on both postoperative grading scales. Ninety-five percent confidence intervals narrowed to within one qualitative grade (±0.30) and one point (±0.50) on the 10-point scale for 27 consecutive cases. Conclusions: Larger numbers of consecutive cases (n > 27) are increasingly representative of average results, but less practical in presentation format. Ten consecutive cases lack statistical support. Cleft surgeons showed low interrater reliability for postoperative assessments, which may reflect personal bias when evaluating another surgeon's results.
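
    The jump from 10 to 27 cases follows from how a confidence interval narrows with sample size: the 95% half-width of a mean scales as 1.96·σ/√n. Taking the rater standard deviation on the 10-point scale as roughly 1.3 points (an assumption inferred from the quoted ±0.50 at n = 27, not a figure reported in the abstract):

      import math

      sigma = 1.3                       # assumed rater SD on the 10-point scale
      for n in (10, 27):
          half = 1.96 * sigma / math.sqrt(n)
          print(f"n={n:2d}: mean +/- {half:.2f} points")
      # n=10: +/- 0.81 (interval spans ~2 points); n=27: +/- 0.49 (within +/- 0.5)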

  15. 75 FR 14206 - FPL Energy Point Beach, LLC; Point Beach Nuclear Plant, Units 1 and 2; Environmental Assessment...

    Science.gov (United States)

    2010-03-24

    ... NUCLEAR REGULATORY COMMISSION [Docket Nos. 50-266 And 50-301; NRC-2010-0123 FPL Energy Point Beach, LLC; Point Beach Nuclear Plant, Units 1 and 2; Environmental Assessment and Finding of No Significant Impact The U.S. Nuclear Regulatory Commission (NRC) is considering issuance of an Exemption, pursuant to...

  16. Development of septum-free injector for gas chromatography and its application to the samples with a high boiling point.

    Science.gov (United States)

    Ito, Hiroshi; Hayakawa, Kazuichi; Yamamoto, Atsushi; Murase, Atsushi; Hayakawa, Kazumi; Kuno, Minoru; Inoue, Yoshinori

    2006-11-03

    A novel apparatus with a simple structure has been developed for introducing samples into the vaporizing chamber of a gas chromatograph. It requires no septum due to the gas sealing structure over the carrier gas supply line. The septum-free injector made it possible to use injection port temperatures as high as 450 degrees C. Repetitive injection of samples with boiling points below 300 degrees C resulted in peak areas with relative standard deviations between 1.25 and 3.28% (n = 5) and good linearity (r² > 0.9942) for the calibration curve. In the analysis of polycyclic aromatic hydrocarbons and a base oil, the peak areas of components with high boiling points increased as the injection port temperature was increased to 450 degrees C.

  17. Scanning electron microscope autoradiography of critical point dried biological samples

    International Nuclear Information System (INIS)

    Weiss, R.L.

    1980-01-01

    A technique has been developed for the localization of isotopes in the scanning electron microscope. Autoradiographic studies have been performed using a model system and a unicellular biflagellate alga. One requirement of this technique is that all manipulations be carried out on samples that are maintained in a liquid state. Observations of a source of radiation (125I-ferritin) show that the nuclear emulsion used to detect radiation is active under these conditions. Efficiency measurements performed using 125I-ferritin indicate that 125I SEM autoradiography is an efficient process that exhibits a dose-dependent response. Two types of labeling methods were used with cells: surface labeling with 125I and internal labeling with 3H. Silver grains appeared on labeled cells after autoradiography, removal of residual gelatin and critical point drying. The location of grains was examined on a flagellated green alga (Chlamydomonas reinhardi) capable of undergoing cell fusion. Fusion experiments using labeled and unlabeled cells indicate that (1) labeling is specific for incorporated radioactivity; (2) cell surface structure is preserved in SEM autoradiographs; and (3) the technique appears to produce reliable autoradiographs. Thus scanning electron microscope autoradiography should provide a new and useful experimental approach.

  18. [A 20-year follow-up study of a sample of 50 pairs of twins with neurotic-psychosomatic disorders].

    Science.gov (United States)

    Muhs, A; Schepank, H; Manz, R

    1990-01-01

    As part of a research project, a sample of 50 pairs of twins (21 pairs of identical twins, 16 pairs of non-identical twins of the same sex, and 13 pairs of male-female twins; n = 100 test persons) was examined between 1963 and 1969 and again recently after a period of 20 years. The index twins were drawn from among the patients who made use of the services of an out-patient psychotherapeutic clinic, and they were determined to be either psychoneurotic, character neurotic, or psychosomatically ill. The question examined was again one of nature vs. nurture. Identical twins showed a significantly higher similarity with regard to the seriousness of their neuroses and the manifestation of neurotic symptoms than did non-identical twins. Noticeable similarities existed in cases of depressive disturbances, disturbances of oral and aggressive behavior, and disturbances of interpersonal contact. With regard to the influence of environmental variables, we examined the effect of factors in early childhood on neurotic development. Lack of a reference person, a negative attitude on the part of the parents toward the child, and frustration within and outside the family have an effect on the manifestation of neuroses and on the course of their development. The influence of early childhood factors on the degree of neurotic disorder is still to be noted in the current point prevalence.

  19. Foreign Object Detection by Sub-Terahertz Quasi-Bessel Beam Imaging

    Directory of Open Access Journals (Sweden)

    Hyang Sook Chun

    2012-12-01

    Food quality monitoring, particularly foreign object detection, has recently become a critical issue for the food industry. In contrast to X-ray imaging, terahertz imaging can provide a safe, ionizing-radiation-free, nondestructive inspection method for foreign object sensing. In this work, a quasi-Bessel beam (QBB), known to be nondiffracting, was generated by a conical dielectric lens to detect foreign objects in food samples. Using numerical evaluation via the finite-difference time-domain (FDTD) method, the beam profiles of a QBB were evaluated and compared with the results obtained via analytical calculation and experimental characterization (knife-edge method, point-scanning method). The FDTD method enables a more precise estimation of the beam profile. Foreign objects in food samples, namely crickets, were then detected with the QBB, which had a deep focus and a high spatial resolution at 210 GHz. Transmitted images obtained with the QBB generated by an axicon were compared experimentally in the sub-terahertz range with those obtained using a Gaussian beam from a conventional lens.

  20. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  1. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    The main purpose of this paper is to explore the principal components of the Shanghai Stock Exchange 50 Index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which is introduced here for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai Stock Exchange 50 Index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that it is rational to regard the samples as random functions. Compared with ordinary principal component analysis, FPCA can handle samples of differing dimensions and is a convenient approach for extracting the main variance factors.
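    In discretized form, FPCA reduces to an eigen-decomposition of the sample covariance of the curves. A minimal Python sketch of that step, using synthetic volatility curves as a stand-in for the SSE50 data (which are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 60 monthly volatility curves, each observed on 24 grid points.
    n_curves, n_grid = 60, 24
    t = np.linspace(0, 1, n_grid)
    curves = (0.2 + 0.05 * rng.standard_normal((n_curves, 1)) * np.sin(2 * np.pi * t)
              + 0.02 * rng.standard_normal((n_curves, n_grid)))

    # Discretized FPCA: center the curves, eigen-decompose the sample covariance.
    mean_curve = curves.mean(axis=0)
    centered = curves - mean_curve
    cov = centered.T @ centered / (n_curves - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order

    # Fraction of variance explained by the first few functional components.
    explained = eigvals / eigvals.sum()
    print("variance explained by first 3 components:", explained[:3].round(3))

    # Scores: projections of each curve onto the principal component functions.
    scores = centered @ eigvecs[:, :3]
    ```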

  2. The multi-objective optimization of the horizontal-axis marine current turbine based on NSGA-II algorithm

    International Nuclear Information System (INIS)

    Zhu, G J; Guo, P C; Luo, X Q; Feng, J J

    2012-01-01

    The present paper describes a hydrodynamic optimization technique for a horizontal-axis marine current turbine. The pitch angle distribution is important to the marine current turbine. In this paper, the pitch angle distribution curve is parameterized by four control points using the Bezier curve method. The coordinates of the four control points are chosen as optimization variables, and the sample space is structured according to the Box-Behnken experimental design method (BBD). The power capture coefficient and axial thrust coefficient at the design tip-speed ratio are then obtained for all the elements in the sample space by CFD numerical simulation. The power capture coefficient and axial thrust are chosen as objective functions, and quadratic polynomial regression equations are constructed to fit the relationship between the optimization variables and each objective function according to a response surface model. With the obtained quadratic polynomial regression equations as the performance prediction model, the marine current turbine is optimized using the NSGA-II multi-objective genetic algorithm, which finally yields an improved marine current turbine.
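    As a flavor of the parameterization step, the sketch below evaluates a cubic Bezier pitch-angle curve from four control points; the control-point values are hypothetical, not taken from the study:

    ```python
    import numpy as np

    def bezier_pitch_angle(control_points, n=50):
        """Evaluate a cubic Bezier curve defined by four control points.

        control_points: array of shape (4, 2) holding (radial position, pitch angle)
        pairs; the coordinates of these points are the optimization variables.
        """
        p = np.asarray(control_points, dtype=float)
        t = np.linspace(0.0, 1.0, n)[:, None]
        # Cubic Bernstein basis functions.
        b = np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])
        return b @ p  # shape (n, 2): sampled (radius, pitch angle) curve

    # Illustrative control points (hypothetical values, not from the paper).
    curve = bezier_pitch_angle([[0.2, 20.0], [0.4, 15.0], [0.7, 8.0], [1.0, 3.0]])
    print(curve[:3])
    ```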

  3. Development of cloud point extraction - UV-visible spectrophotometric method for vanadium (V) determination in hydrogeochemical samples

    International Nuclear Information System (INIS)

    Durani, Smeer; Mathur, Neerja; Chowdary, G.S.

    2007-01-01

    The cloud point extraction (CPE) behavior of vanadium (V) using 5,7-dibromo-8-hydroxyquinoline (DBHQ) and Triton X-100 was investigated. Vanadium (V) was extracted with 4 ml of 0.5 mg/ml DBHQ and 6 ml of 8% (v/v) Triton X-100 at pH 3.7. A few hydrogeochemical samples were analysed for vanadium using the above method. (author)

  4. (LMRG): Microscope Resolution, Objective Quality, Spectral Accuracy and Spectral Un-mixing

    Science.gov (United States)

    Bayles, Carol J.; Cole, Richard W.; Eason, Brady; Girard, Anne-Marie; Jinadasa, Tushare; Martin, Karen; McNamara, George; Opansky, Cynthia; Schulz, Katherine; Thibault, Marc; Brown, Claire M.

    2012-01-01

    The second study by the LMRG focuses on measuring confocal laser scanning microscope (CLSM) resolution, objective lens quality, spectral imaging accuracy and spectral un-mixing. Affordable test samples for each aspect of the study were designed, prepared and sent to 116 labs from 23 countries across the globe. Detailed protocols were designed for the three tests and customized for most of the major confocal instruments being used by the study participants. One protocol developed for measuring resolution and objective quality was recently published in Nature Protocols (Cole, R. W., T. Jinadasa, et al. (2011). Nature Protocols 6(12): 1929-1941). The first test involved 3D imaging of sub-resolution fluorescent microspheres to determine the microscope point spread function. Results of the resolution studies as well as point spread function quality (i.e. objective lens quality) from 140 different objective lenses will be presented. The second test, on spectral accuracy, looked at the reflection of the laser excitation lines into the spectral detector in order to determine how accurately these systems report the laser emission wavelengths. Results will be presented from 42 different spectral confocal systems. Finally, samples with double orange beads (orange core and orange coating) were imaged spectrally, and the imaging software was used to un-mix the fluorescence signals from the two orange dyes. Results from 26 different confocal systems will be summarized. Time will be left to discuss possibilities for the next LMRG study.

  5. Safety Evaluation Report related to the operation of Nine Mile Point Nuclear Station, Unit No. 2 (Docket No. 50-410)

    International Nuclear Information System (INIS)

    1985-02-01

    The Safety Evaluation Report for the application filed by the Niagara Mohawk Power Corporation, as applicant and co-owner, for a license to operate the Nine Mile Point Nuclear Station, Unit 2 (Docket No. 50-410), has been prepared by the Office of Nuclear Reactor Regulation of the US Nuclear Regulatory Commission. The facility is located near Oswego, New York. Subject to favorable resolution of the items discussed in this report, the NRC staff concludes that the facility can be operated by the applicant without endangering the health and safety of the public

  6. Active AirCore Sampling: Constraining Point Sources of Methane and Other Gases with Fixed Wing Unmanned Aerial Systems

    Science.gov (United States)

    Bent, J. D.; Sweeney, C.; Tans, P. P.; Newberger, T.; Higgs, J. A.; Wolter, S.

    2017-12-01

    Accurate estimates of point source gas emissions are essential for reconciling top-down and bottom-up greenhouse gas measurements, but sampling such sources is challenging. Remote sensing methods are limited by resolution and cloud cover; aircraft methods are limited by air traffic control clearances and the need to properly determine boundary layer height. A new sampling approach leverages the ability of unmanned aerial systems (UAS) to measure all the way to the surface near the source of emissions, improving sample resolution and reducing the need to characterize a wide downstream swath or measure to the full height of the planetary boundary layer (PBL). The "Active-AirCore" sampler, currently under development, will fly on a fixed-wing UAS in Class G airspace, spiraling from the surface to 1200 ft AGL around point sources such as leaking oil wells to measure methane, carbon dioxide and carbon monoxide. The sampler collects a 100-meter-long "core" of air in a 1/8" passivated stainless steel tube. This core is run on a high-precision instrument shortly after the UAS is recovered. Sample values are mapped to specific geographic locations by cross-referencing GPS and flow/pressure metadata, and fluxes are quantified by applying Gauss's theorem to the data, mapped onto the spatial cylinder circumscribed by the UAS. The Active-AirCore builds off the sampling ability and analytical approach of the related AirCore sampler, which profiles the atmosphere passively using a balloon launch platform, but adds the active pumping capability needed for near-surface horizontal sampling applications. Here, we show design elements and laboratory and field test results for methane, describe the overall goals of the mission, and discuss how the platform can be adapted, with minimal effort, to measure other gas species.
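    The flux bookkeeping described above can be illustrated as a discrete surface integral of the concentration enhancement times the normal wind over the sampling cylinder; the grid, wind, and concentration values below are hypothetical stand-ins, not mission data:

    ```python
    import numpy as np

    def cylinder_flux(conc_enh, wind_u, wind_v, theta, z_levels, radius):
        """Point-source flux estimate: integrate c' * (v . n) over the wall of a
        sampling cylinder, a discrete application of Gauss's theorem.

        conc_enh: concentration enhancement above background, shape (n_z, n_theta)
        wind_u, wind_v: horizontal wind components on the same grid (m/s)
        theta: azimuth angles of the grid columns (radians), shape (n_theta,)
        z_levels: heights of the grid rows (m), shape (n_z,)
        radius: cylinder radius (m)
        """
        n_x, n_y = np.cos(theta), np.sin(theta)       # outward unit normal per azimuth
        v_dot_n = wind_u * n_x + wind_v * n_y         # (n_z, n_theta)
        dtheta = 2 * np.pi / len(theta)
        dz = np.gradient(z_levels)[:, None]
        # Surface element on the cylinder wall is radius * dtheta * dz.
        return np.sum(conc_enh * v_dot_n * radius * dtheta * dz)

    # Hypothetical 10 x 36 sampling grid around a leaking well, westerly wind.
    z = np.linspace(5, 365, 10)
    th = np.linspace(0, 2 * np.pi, 36, endpoint=False)
    u, v = np.full((10, 36), 3.0), np.zeros((10, 36))
    c = 1e-6 * (np.cos(th) > 0)[None, :] * np.ones((10, 1))  # plume exits downwind only
    print(cylinder_flux(c, u, v, th, z, radius=200.0))
    ```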

  7. The goal of ape pointing.

    Science.gov (United States)

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  8. Small target detection using objectness and saliency

    Science.gov (United States)

    Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao

    2017-10-01

    We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm with high localization quality at acceptable computational cost. Firstly, we obtain the objectness map as in BING[1] and use NMS to keep the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and the center points of the K classes are set as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations is proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
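    A minimal sketch of the seed-point step, clustering the top-N objectness points into K location classes with k-means (random points stand in for real objectness output, and scikit-learn is assumed to be available):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    # Stand-in for the objectness step: (x, y) locations of the top-N points
    # kept after NMS on a 1000 x 1000 image (random here for illustration).
    top_n_points = rng.uniform(0, 1000, size=(200, 2))

    # Cluster the N points into K location classes; the class centers become
    # the seed points around which object potential regions are extracted.
    K = 8
    kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(top_n_points)
    seed_points = kmeans.cluster_centers_
    print(seed_points.round(1))
    ```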

  9. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
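    Although spsann itself is an R package, the annealing loop it relies on is easy to sketch. The toy Python version below optimizes the MSSD criterion with a schedule like the one described above (linearly shrinking perturbation distance, decaying acceptance); all tuning values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def mssd(sample, grid):
        """Mean squared shortest distance from every grid node to the sample."""
        d2 = ((grid[:, None, :] - sample[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean()

    # Candidate region: a 50 x 50 grid of prediction nodes in the unit square.
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    sample = rng.uniform(0, 1, size=(20, 2))      # initial sample pattern
    energy = mssd(sample, grid)
    temp = 1e-3
    n_iter = 2000

    for it in range(n_iter):
        max_shift = 0.2 * (1 - it / n_iter)        # linearly shrinking perturbation
        cand = sample.copy()
        i = rng.integers(len(cand))
        cand[i] = np.clip(cand[i] + rng.uniform(-max_shift, max_shift, 2), 0, 1)
        e_new = mssd(cand, grid)
        # Accept improvements always; accept worse states with Metropolis probability.
        if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
            sample, energy = cand, e_new
        temp *= 0.999                              # exponential cooling

    print("final MSSD:", energy)
    ```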

  10. IRAS observations of the exciting stars of Herbig-Haro objects. II. The Reipurth and Graham sample and low-resolution spectra

    International Nuclear Information System (INIS)

    Cohen, M.

    1990-01-01

    Using IRAS COADDed images, candidates are suggested for the exciting stars of Herbig-Haro objects from the Reipurth and Graham sample. The IRAS low-resolution spectrometer provides spectra for 20 of the 46 candidate stars so far identified as exciting young, unevolved H-H systems. These reveal 10-micron silicate absorption features, or are too red to show detectable flux near 10 microns. The histogram of bolometric luminosities for 46 young Herbig-Haro exciting stars has a median of 13 solar luminosities and a mode between 16 and 32 solar luminosities. Although the enlarged sample of known exciting stars has more of the higher luminosity objects than an earlier sample, the histogram still represents a generally low-luminosity distribution. 27 refs

  11. Predictive value of routine point-of-care cardiac troponin T measurement for prehospital diagnosis and risk-stratification in patients with suspected acute myocardial infarction

    DEFF Research Database (Denmark)

    Rasmussen, Martin B; Stengaard, Carsten; Sørensen, Jacob T

    2017-01-01

    OBJECTIVE: The purpose of this study was to determine the predictive value of routine prehospital point-of-care cardiac troponin T measurement for diagnosis and risk stratification of patients with suspected acute myocardial infarction. METHODS AND RESULTS: All prehospital emergency medical service … point-of-care cardiac troponin T measurements (11.0%) had a value ≥50 ng/l, including 966 with acute myocardial infarction (sensitivity: 44.2%, specificity: 92.8%). Patients presenting with a prehospital point-of-care cardiac troponin T value ≥50 ng/l had a one-year mortality of 24% compared with 4.8% in those with values … analysis: point-of-care cardiac troponin T ≥50 ng/l (hazard ratio 2.10, 95% confidence interval: 1.90-2.33), congestive heart failure (hazard ratio 1.93, 95% confidence interval: 1…

  12. Instrument air dew point requirements -- 108-P, L, K

    International Nuclear Information System (INIS)

    Fairchild, P.N.

    1994-01-01

    The 108 Building dew point analyzers measure dew point at atmospheric pressure. Existing 108 Roundsheets state that the maximum dew point temperature shall be less than -50 F. After repeatedly failing to maintain a -50 F dew point temperature, Reactor Engineering researched the basis for the existing limit. This report documents the results of the study and provides technical justification for a new maximum dew point temperature of -35 F at atmospheric pressure, as read by the 108 Building dew point analyzers.

  13. Objections to Gorleben from the geomorphological point of view

    International Nuclear Information System (INIS)

    Semmel, A.

    1979-01-01

    Grimmel's objections concerning the geological suitability of the Gorleben salt plug do not seem to be completely unjustified. They should, however, be checked by field studies. It remains to be seen which arguments will be presented by Venzlaff, who in his article did not deal with these problems in detail. (orig.) [de

  14. Two-point vs multipoint sample collection for the analysis of energy expenditure by use of the doubly labeled water method

    International Nuclear Information System (INIS)

    Welle, S.

    1990-01-01

    Energy expenditure over a 2-wk period was determined by the doubly labeled water (2H2(18)O) method in nine adults. When daily samples were analyzed, energy expenditure was 2859 +/- 453 kcal/d (mean +/- SD); when only the first and last time points were considered, the mean calculated energy expenditure was not significantly different (2947 +/- 430 kcal/d). An analysis of theoretical cases in which isotope flux is not constant indicates that the multipoint method can cause errors in the calculation of average isotope fluxes, but these are generally small. Simulations of the effect of analytical error indicate that increasing the number of replicates on two points reduces the impact of technical errors more effectively than performing single analyses on multiple samples. It appears that there is generally no advantage to collecting frequent samples when the 2H2(18)O method is used to estimate energy expenditure in adult humans
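    The two calculation styles reduce to estimating an isotope elimination rate either from the first and last enrichments alone or from a log-linear fit over all samples. A small sketch with synthetic enrichment data (all values hypothetical):

    ```python
    import numpy as np

    def elimination_rate_two_point(c1, c2, t1, t2):
        """Elimination rate from first and last enrichments only, assuming a
        mono-exponential decline (the two-point method)."""
        return np.log(c1 / c2) / (t2 - t1)

    def elimination_rate_multipoint(conc, t):
        """Negative slope of ln(enrichment) vs time over all samples."""
        slope, _ = np.polyfit(t, np.log(conc), 1)
        return -slope

    t = np.arange(0, 15)  # days
    noise = 1 + 0.02 * np.random.default_rng(3).standard_normal(len(t))
    conc = 10.0 * np.exp(-0.12 * t) * noise  # synthetic enrichment curve
    print("two-point :", elimination_rate_two_point(conc[0], conc[-1], t[0], t[-1]))
    print("multipoint:", elimination_rate_multipoint(conc, t))
    ```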

  15. Freezing point osmometry of milk to determine the additional water content – an issue in general quality control and German food regulation

    Directory of Open Access Journals (Sweden)

    Holz Birger

    2008-03-01

    Background: The determination of the osmolality of aqueous samples using a freezing point osmometer is a well-established, routine laboratory method. In addition to their use in clinical and pharmaceutical laboratories, freezing point osmometers are also employed in food testing laboratories. One application is the determination of the osmolality of milk. Although cow's milk is a natural product whose water content is approximately 87%, the osmolality of milk is a significant value when the milk is collected from a larger population of animals. This value is used in milk processing to control the water content, based on the German Food Control Regulations for Milk.

    Results: Measurement of the freezing point and osmolality of milk samples was performed with a Knauer Semi-Micro Freezing Point Osmometer. Osmolality was measured for the untreated milk samples and following their dilution (by volume) with 10% and 50% water. The measurements were made after 1, 4 and 7 days to evaluate changes over time. All measurement values for the undiluted milk were spread over a small interval with an average of 271 mOsmol/kg. After mixing the milk samples with 10% water, the average decreased to 242 mOsmol/kg, while mixing with 50% water resulted in an average osmolality of 129 mOsmol/kg. There was no significant change in osmolality within the 7 days (measurements from days 1, 4 and 7).

    Conclusion: The results demonstrate clearly that the additional water content of milk can be determined easily using a freezing point osmometer. Milk samples that contain additional water have a significantly decreased osmolality, corresponding to an increased freezing point. An effect of ageing the milk samples on osmolality could not be detected in this study's time-dependent measurements.
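    As a rough plausibility check (ignoring the small distinction between per-mass osmolality and per-volume dilution), adding a water fraction f should scale the osmolality by about (1 - f); the abstract's numbers come close to that:

    ```python
    # Simple mass-balance check of the dilution results reported above.
    base = 271  # mOsmol/kg, undiluted milk
    for f, measured in [(0.10, 242), (0.50, 129)]:
        expected = base * (1 - f)
        print(f"{int(f * 100)}% added water: expected ~{expected:.0f}, measured {measured}")
    ```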

  16. Fully 3D printed integrated reactor array for point-of-care molecular diagnostics.

    Science.gov (United States)

    Kadimisetty, Karteek; Song, Jinzhao; Doto, Aoife M; Hwang, Young; Peng, Jing; Mauk, Michael G; Bushman, Frederic D; Gross, Robert; Jarvis, Joseph N; Liu, Changchun

    2018-06-30

    Molecular diagnostics that involve nucleic acid amplification tests (NAATs) are crucial for prevention and treatment of infectious diseases. In this study, we developed a simple, inexpensive, disposable, fully 3D printed microfluidic reactor array that is capable of carrying out extraction, concentration and isothermal amplification of nucleic acids in a variety of body fluids. The method allows rapid molecular diagnostic tests for infectious diseases at the point of care. A simple leak-proof polymerization strategy was developed to integrate flow-through nucleic acid isolation membranes into microfluidic devices, yielding a multifunctional diagnostic platform. Static coating technology was adopted to improve the biocompatibility of our 3D printed device. We demonstrated the suitability of our device for both end-point colorimetric qualitative detection and real-time fluorescence quantitative detection. We applied our diagnostic device to the detection of Plasmodium falciparum in plasma samples and Neisseria meningitidis in cerebrospinal fluid (CSF) samples by loop-mediated isothermal amplification (LAMP) within 50 min. The detection limits were 100 fg for P. falciparum and 50 colony-forming units (CFU) for N. meningitidis per reaction, comparable to those of benchtop instruments. This rapid and inexpensive 3D printed device has great potential for point-of-care molecular diagnosis of infectious disease in resource-limited settings.

  17. 3d object segmentation of point clouds using profiling techniques

    African Journals Online (AJOL)

    Administrator

    … optimization attempts to physically store the point cloud so that storage, retrieval and visualisation … Ideally three stacks should be sufficient, but in practice four or five are used. … The authors would like to acknowledge that this paper is based on a paper presented at … Theory, Processing and Application, 5 pages.

  18. Cloud point extraction-flame atomic absorption spectrometry for pre-concentration and determination of trace amounts of silver ions in water samples.

    Science.gov (United States)

    Yang, Xiupei; Jia, Zhihui; Yang, Xiaocui; Li, Gu; Liao, Xiangjun

    2017-03-01

    A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomena of the non-ionic surfactant triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature increases above its cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including pH of the aqueous solution, concentration of the DDTC, amount of the surfactant, incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL−1 Ag+ in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL−1 for both Zn2+ and Cu2+, 80 μg·mL−1 for Pb2+, 1000 μg·mL−1 for Mn2+, and 100 μg·mL−1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1–500 ng·mL−1 with a limit of detection (LOD) of 0.3 ng·mL−1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.

  19. I See Your Point: Infants under 12 Months Understand that Pointing Is Communicative

    Science.gov (United States)

    Krehm, Madelaine; Onishi, Kristine H.; Vouloumanos, Athena

    2014-01-01

    Do young infants understand that pointing gestures allow the pointer to change the information state of a recipient? We used a third-party experimental scenario to examine whether 9- and 11-month-olds understand that a pointer's pointing gesture can inform a recipient about a target object. When the pointer pointed to a target, infants…

  20. Dependence between LD50 for Rodents and LC50 for Adult Fish and Fish Embryos.

    Science.gov (United States)

    Zolotarev, K V; Belyaeva, N F; Mikhailov, A N; Mikhailova, M V

    2017-02-01

    We revealed empirical dependences between the common logarithm of the ratio of rat oral LD50 to LCa50 for adult fish and lgP for 50 different chemicals, and between the common logarithm of the ratio of oral LD50 in rodents to LCe50 for fish embryos and lgP for 30 different chemicals. The dependences were obtained by constructing a trend line through the experimental points and calculating Pearson's R correlation coefficient as a measure of regression significance. These dependences can show the influence of substance lipophilicity on its toxicity for aquatic organisms compared to mammals.
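    The analysis described - a trend line plus Pearson's R for significance - is straightforward to reproduce; the sketch below uses randomly generated stand-in data, not the paper's 50 chemicals:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Hypothetical stand-in for 50 chemicals: lgP vs lg(LD50_rodent / LC50_fish).
    lgP = rng.uniform(-2, 6, 50)
    lg_ratio = 0.4 * lgP - 0.5 + 0.3 * rng.standard_normal(50)

    slope, intercept = np.polyfit(lgP, lg_ratio, 1)   # trend line
    r, p_value = stats.pearsonr(lgP, lg_ratio)        # regression significance
    print(f"lg(LD50/LC50) ~ {slope:.2f}*lgP + {intercept:.2f}, "
          f"R = {r:.2f}, p = {p_value:.1e}")
    ```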

  1. Section-Based Tree Species Identification Using Airborne LIDAR Point Cloud

    Science.gov (United States)

    Yao, C.; Zhang, X.; Liu, H.

    2017-09-01

    The application of LiDAR data in forestry initially focused on mapping forest communities, primarily intended for large-scale forest management and planning. With smaller-footprint and higher-sampling-density LiDAR data available, detecting individual overstory trees, estimating crown parameters and identifying tree species have been demonstrated to be practicable. This paper proposes a section-based protocol for tree species identification, taking palm trees as an example. The section-based method detects objects through profiles along different directions, basically along the X-axis or Y-axis, which improves the utilization of spatial information and generates accurate results. Firstly, tree points are separated from manmade-object points by decision-tree-based rules, and a Crown Height Model (CHM) is created by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM). Then key points are calculated and extracted to locate individual trees, and specific tree parameters related to species information, such as crown height, crown radius, and cross point, are estimated. Finally, with these parameters certain tree species can be identified. Compared to species information measured on the ground, the proportion of correctly identified trees across all plots reached up to 90.65%. The identification results in this research demonstrate the ability to distinguish palm trees using LiDAR point clouds. Furthermore, with more prior knowledge, the section-based method enables trees to be classified into different classes.
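    The CHM step is a simple raster difference; a tiny sketch with hypothetical grids standing in for gridded LiDAR rasters:

    ```python
    import numpy as np

    # Hypothetical 5 x 5 rasters in metres; real data would come from gridded returns.
    rng = np.random.default_rng(5)
    dsm = 12.0 + rng.uniform(0, 3, (5, 5))  # Digital Surface Model (top of canopy)
    dtm = np.full((5, 5), 10.0)             # Digital Terrain Model (bare ground)

    chm = dsm - dtm          # Crown Height Model: surface minus terrain
    tree_mask = chm > 2.0    # e.g., keep cells taller than 2 m as tree candidates
    print(chm.round(2))
    ```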

  2. Luminance gradient at object borders communicates object location to the human oculomotor system.

    Science.gov (United States)

    Kilpeläinen, Markku; Georgeson, Mark A

    2018-01-25

    The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, at both the neural and perceptual levels, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square's edges.

  3. Association between Caregiving, Meaning in Life, and Life Satisfaction beyond 50 in an Asian Sample: Age as a Moderator

    Science.gov (United States)

    Ang, Rebecca P.; O, Jiaqing

    2012-01-01

    The association between caregiving, meaning in life, and life satisfaction was examined in a sample of 519 older Asian adults beyond 50 years of age. Two hierarchical multiple regression analyses were conducted to examine age as a moderator of the associations between caregiving, meaning in life, and life satisfaction. Age moderated the association…

  4. Sediment Monitoring and Benthic Faunal Sampling Adjacent to the Barbers Point Ocean Outfall, Oahu, Hawaii, 1986-2010 (NODC Accession 9900098)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Benthic fauna and sediment in the vicinity of the Barbers Point (Honouliuli) ocean outfall were sampled from 1986-2010. To assess the environmental quality, sediment...

  5. An Instantaneous Low-Cost Point-of-Care Anemia Detection Device

    Directory of Open Access Journals (Sweden)

    Jaime Punter-Villagrasa

    2015-02-01

    We present a small, compact and portable device for instantaneous point-of-care early detection of anemia. The method is based on direct hematocrit measurement from whole blood samples by means of impedance analysis. The device consists of custom electronic instrumentation and a plug-and-play disposable sensor. The electronics follow straightforward low-power design standards, resulting in a robust, low-consumption device that is completely mobile with a long battery life. Alternatively, the system could be powered by indoor solar cells or other energy-harvesting solutions, removing the need for batteries. The sensing system is based on a disposable, low-cost, label-free, three-gold-electrode commercial sensor for 50 µL blood samples. The device's capability for anemia detection has been validated with 24 blood samples obtained from four hospitalized patients at Hospital Clínic. As a result, the response, effectiveness and robustness of the portable point-of-care device for detecting anemia have been demonstrated, with an accuracy error of 2.83% and a mean coefficient of variation of 2.57%, without any particular case above 5%.

  6. Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters

    OpenAIRE

    Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal

    2013-01-01

    The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50...

  7. Objective identification of sexual risk behavior among blood donors in Croatia: is it reality?

    Science.gov (United States)

    Miskulin, Maja; Puntaric, Dinko; Bozikov, Jadranka; Miskulin, Ivan; Ruzman, Natasa

    2012-01-01

    The objective of this study is to determine the prevalence of blood donors positive for herpes simplex virus type 2 (HSV-2), to identify the patterns of sexual risk behavior responsible for HSV-2 positivity and to assess the reliability of HSV-2 positivity as a marker of sexual risk behavior in the study population. This cross-sectional study included 423 blood donors of both sexes from eastern Croatia. Their blood samples were tested by ELISA IgG test kit for HSV-2 IgG and Western blot. Data on sexual risk behavior were collected by use of an anonymous questionnaire. Western blot testing showed HSV-2 IgG antibodies in 14 of 423 (3.3%) donor blood samples. The most common patterns of sexual risk behavior potentially associated with test positivity were irregular condom use during sexual intercourse with new partners (294/423; 69.5%) and > or = 5 sexual partners during lifetime (213/423; 50.4%). The population of blood donors from eastern Croatia included subgroups of subjects characterized by sexual risk behavior. Study results pointed to a relationship between various forms of sexual risk behavior and HSV-2 positivity, which could therefore serve as a reliable marker of sexual risk behavior in the study population.

  8. New developments in analytical calculation of first order scattering for 3D complex objects

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Berthier, Jerome

    2007-01-01

    The principle of the analytical calculation of first order scattering used in our simulation code, named VXI (Virtual X-ray Imaging), is based on a double ray-tracing. The first step consists in ray-tracing from the X-ray source point to each point of the object (an elementary volume in practice), including the attenuation effect in the primary beam. This calculation gives the number of photons arriving at each voxel and their direction. A voxel acts as a secondary source whose properties accord with the physics of X-ray scattering (Compton and Rayleigh). The second step of the ray-tracing is then done from each voxel of the object in the direction of each pixel of the detector, taking into account the attenuation along the scattering path. To simulate a 3D complex object, the first problem consists in realizing an automatic 3D sampling of the object. This is done using an octree-based method optimized for deterministic scattering computation. The basic octree method consists in recursively dividing the volume of the object into voxels of decreasing size until each of them is completely included under the surface of the sample. The object volume is then always under-evaluated. This is a problem because the scattering phenomenon strongly depends on the real volume of the object. The second problem is that artefacts due to sampling effects can occur in synthesized images. These two particular aspects are taken into account in our simulation code, and an optimized octree-based method has been specially developed for this application. To address the first problem, our 3D sampling algorithm may accept voxels on the surface of the sample under conditions defined by the user. The second problem is treated by generating a random sampling instead of a regular one. The algorithm developed for 3D sampling is easily configurable, fast (a few seconds at most), robust and applicable to all object shapes (thin, massive). The sampling time depends on the number of
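    A simplified sketch of the basic octree sampling described above, which keeps only voxels fully inside the object and therefore under-evaluates the volume; the unit sphere stands in for a real part geometry:

    ```python
    import numpy as np

    def subdivide(box, classify, max_depth, depth=0):
        """Recursive octree sampling: keep boxes fully inside the object, discard
        boxes outside, and split boundary boxes until max_depth is reached."""
        state = classify(box)
        if state == "in":
            return [box]
        if state == "out" or depth == max_depth:
            return []  # boundary boxes at max depth are dropped: volume under-evaluated
        x0, y0, z0, x1, y1, z1 = box
        xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        voxels = []
        for ax, bx in ((x0, xm), (xm, x1)):
            for ay, by in ((y0, ym), (ym, y1)):
                for az, bz in ((z0, zm), (zm, z1)):
                    voxels += subdivide((ax, ay, az, bx, by, bz),
                                        classify, max_depth, depth + 1)
        return voxels

    def sphere_state(box):
        """Classify a box against the unit sphere via its nearest/farthest points."""
        lo, hi = np.array(box[:3], float), np.array(box[3:], float)
        nearest = np.clip(0.0, lo, hi)                      # box point closest to origin
        farthest = np.where(np.abs(lo) > np.abs(hi), lo, hi)
        if (nearest ** 2).sum() > 1.0:
            return "out"
        if (farthest ** 2).sum() <= 1.0:
            return "in"
        return "boundary"

    voxels = subdivide((-1, -1, -1, 1, 1, 1), sphere_state, max_depth=4)
    vol = sum((b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2]) for b in voxels)
    print(len(voxels), "voxels; volume ~", round(vol, 3), "(true 4*pi/3 ~ 4.189)")
    ```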

  9. Development of Spatial Scaling Technique of Forest Health Sample Point Information

    Science.gov (United States)

    Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten forest health, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries toward forest management. The existing forest ecosystem survey method is based on monitoring at sampling points, which makes it difficult to use for forest management, because Korea surveys only a small part of its forest area, which occupies 63.7% of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and inverse distance weighting (IDW), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, kriging was the most accurate method.

  10. DEVELOPMENT OF SPATIAL SCALING TECHNIQUE OF FOREST HEALTH SAMPLE POINT INFORMATION

    Directory of Open Access Journals (Sweden)

    J. H. Lee

    2018-04-01

    Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten forest health, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries toward forest management. The existing forest ecosystem survey method is based on monitoring at sampling points, which makes it difficult to use for forest management, because Korea surveys only a small part of its forest area, which occupies 63.7% of the country (Ministry of Land, Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forest Science, 2015) were used for spatial interpolation. Two widely used interpolation methods, kriging and inverse distance weighting (IDW), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, kriging was the most accurate method.
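    Of the two interpolators compared, IDW is the simpler to sketch; the coordinates and index values below are hypothetical stand-ins for the plot data:

    ```python
    import numpy as np

    def idw(sample_xy, sample_values, query_xy, power=2.0):
        """Inverse-distance-weighted interpolation of point observations
        (e.g., a plot-level Shannon index) onto arbitrary query locations."""
        d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # guard against zero distance
        w = 1.0 / d ** power
        return (w @ sample_values) / w.sum(axis=1)

    rng = np.random.default_rng(6)
    pts = rng.uniform(0, 100, size=(30, 2))   # hypothetical sample-point coordinates
    vals = rng.uniform(0.5, 2.5, size=30)     # hypothetical Shannon's index values
    gx, gy = np.meshgrid(np.arange(0.0, 100, 10), np.arange(0.0, 100, 10))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(idw(pts, vals, grid).round(2)[:10])
    ```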

  11. Spatial Sampling of Weather Data for Regional Crop Yield Simulations

    Science.gov (United States)

    Van Bussel, Lenny G. J.; Ewert, Frank; Zhao, Gang; Hoffmann, Holger; Enders, Andreas; Wallach, Daniel; Asseng, Senthold; Baigorria, Guillermo A.; Basso, Bruno; Biernath, Christian; hide

    2016-01-01

    Field-scale crop models are increasingly applied at spatio-temporal scales that range from regions to the globe and from decades up to 100 years. Sufficiently detailed data to capture the prevailing spatio-temporal heterogeneity in weather, soil, and management conditions as needed by crop models are rarely available. Effective sampling may overcome the problem of missing data but has rarely been investigated. In this study the effect of sampling weather data has been evaluated for simulating yields of winter wheat in a region in Germany over a 30-year period (1982-2011) using 12 process-based crop models. A stratified sampling was applied to compare the effect of different sizes of spatially sampled weather data (10, 30, 50, 100, 500, 1000 and full coverage of 34,078 sampling points) on simulated wheat yields. Stratified sampling was further compared with random sampling. Possible interactions between sample size and crop model were evaluated. The results showed differences in simulated yields among crop models but all models reproduced well the pattern of the stratification. Importantly, the regional mean of simulated yields based on full coverage could already be reproduced by a small sample of 10 points. This was also true for reproducing the temporal variability in simulated yields but more sampling points (about 100) were required to accurately reproduce spatial yield variability. The number of sampling points can be smaller when a stratified sampling is applied as compared to a random sampling. However, differences between crop models were observed including some interaction between the effect of sampling on simulated yields and the model used. We concluded that stratified sampling can considerably reduce the number of required simulations. But, differences between crop models must be considered as the choice for a specific model can have larger effects on simulated yields than the sampling strategy. Assessing the impact of sampling soil and crop management
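    A minimal sketch of the proportional stratified draw contrasted above with random sampling; the stratum labels are randomly generated stand-ins for the weather strata:

    ```python
    import numpy as np

    def stratified_sample(strata_labels, n_total, rng):
        """Draw a sample allocated proportionally across strata, as opposed to
        simple random sampling over all grid points."""
        labels = np.asarray(strata_labels)
        strata, counts = np.unique(labels, return_counts=True)
        alloc = np.maximum(1, np.round(n_total * counts / counts.sum())).astype(int)
        chosen = []
        for s, k in zip(strata, alloc):
            idx = np.flatnonzero(labels == s)
            chosen.extend(rng.choice(idx, size=min(k, len(idx)), replace=False))
        return np.array(chosen)

    rng = np.random.default_rng(7)
    # Hypothetical stand-in: 34,078 grid points falling into 5 weather strata.
    labels = rng.integers(0, 5, size=34078)
    sample_idx = stratified_sample(labels, n_total=10, rng=rng)
    print(len(sample_idx), "points; per-stratum counts:", np.bincount(labels[sample_idx]))
    ```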

  12. Sub-OBB based object recognition and localization algorithm using range images

    International Nuclear Information System (INIS)

    Hoang, Dinh-Cuong; Chen, Liang-Chia; Nguyen, Thanh-Hung

    2017-01-01

    This paper presents a novel approach to recognize and estimate the pose of 3D objects in cluttered range images. The key technical breakthrough of the developed approach is enabling robust object recognition and localization under undesirable conditions such as environmental illumination variation and optical occlusion when the object is only partially visible. First, the acquired point clouds are segmented into individual object point clouds based on the developed 3D object segmentation for randomly stacked objects. Second, an efficient shape-matching algorithm called Sub-OBB based object recognition, using the proposed oriented bounding box (OBB) regional area-based descriptor, is performed to reliably recognize the object. Then, the 3D position and orientation of the object can be roughly estimated by aligning the OBB of the segmented object point cloud with the OBB of the matched point cloud in a database generated from a CAD model and a 3D virtual camera. To refine the pose of the object, the iterative closest point (ICP) algorithm is used to match the object model with the segmented point clouds. From feasibility tests of several scenarios, the developed approach is verified to be feasible for object pose recognition and localization. (paper)
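    One common way to build the OBB such a descriptor relies on is from the principal axes of the segmented cloud; a small sketch of that generic construction (not necessarily the paper's exact procedure):

    ```python
    import numpy as np

    def oriented_bounding_box(points):
        """Oriented bounding box of a point cloud from the principal axes of its
        covariance, a standard way to orient an OBB on a segmented object."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        axes = vt                      # rows: principal axes of the cloud
        local = centered @ axes.T      # points expressed in the OBB frame
        mins, maxs = local.min(axis=0), local.max(axis=0)
        return centroid, axes, maxs - mins   # center, orientation, extents

    rng = np.random.default_rng(8)
    cloud = rng.standard_normal((500, 3)) * np.array([5.0, 2.0, 0.5])
    center, axes, extents = oriented_bounding_box(cloud)
    print("OBB extents:", extents.round(2))
    ```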

  13. Dose planning objectives in anal canal cancer IMRT: the TROG ANROTAT experience

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Elizabeth, E-mail: elizabeth@mebrown.net [Princess Alexandra Hospital, Brisbane, Queensland (Australia); Cray, Alison [Peter MacCallum Cancer Centre, Box Hill, Victoria (Australia); Haworth, Annette [Peter MacCallum Cancer Centre, Box Hill, Victoria (Australia); University of Melbourne, Melbourne, Victoria (Australia); Chander, Sarat [Peter MacCallum Cancer Centre, Box Hill, Victoria (Australia); Lin, Robert [Medica Oncology, Hurstville, New South Wales (Australia); Subramanian, Brindha; Ng, Michael [Radiation Oncology Victoria, Melbourne, Victoria (Australia); Princess Alexandra Hospital, Brisbane, Queensland (Australia)

    2015-06-15

    Intensity modulated radiotherapy (IMRT) is ideal for anal canal cancer (ACC), delivering high doses to irregular tumour volumes whilst minimising dose to surrounding normal tissues. Establishing achievable dose objectives is a challenge. The purpose of this paper was to utilise data collected in the Assessment of New Radiation Oncology Treatments and Technologies (ANROTAT) project to evaluate the feasibility of ACC IMRT dose planning objectives employed in the Australian situation. Ten Australian centres were randomly allocated three data sets from 15 non-identifiable computed tomography data sets representing a range of disease stages and gender. Each data set was planned by two different centres, producing 30 plans. All tumour and organ at risk (OAR) contours, prescription and dose constraint details were provided. Dose–volume histograms (DVHs) for each plan were analysed to evaluate the feasibility of dose planning objectives provided. All dose planning objectives for the bone marrow (BM) and femoral heads were achieved. Median planned doses exceeded one or more objectives for bowel, external genitalia and bladder. This reached statistical significance for bowel V30 (P = 0.04), V45 (P < 0.001), V50 (P < 0.001), external genitalia V20 (P < 0.001) and bladder V35 (P < 0.001), V40 (P = 0.01). Gender was found to be the only significant factor in the likelihood of achieving the bowel V50 (P = 0.03) and BM V30 constraints (P = 0.04). The dose planning objectives used in the ANROTAT project provide a good starting point for ACC IMRT planning. To facilitate clinical implementation, it is important to prioritise OAR objectives and recognise factors that affect the achievability of these objectives.

  14. Hierarchical extraction of urban objects from mobile laser scanning data

    Science.gov (United States)

    Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia

    2015-01-01

    Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.

  15. A redshift survey of IRAS galaxies. I. Sample selection

    International Nuclear Information System (INIS)

    Strauss, M.A.; Davis, M.; Yahil, A.; Huchra, J.P.

    1990-01-01

    A complete all-sky sample of objects, flux-limited at 60 microns, has been extracted from the data base of the IRAS. The sample consists of 5014 objects, of which 2649 are galaxies and 13 are not yet identified. In order to study large-scale structure with this sample, it must be free of systematic biases. Corrections are applied for a major systematic effect in the flux densities listed in the IRAS Point Source Catalog: sources resolved by the IRAS beam have flux densities systematically underestimated. In addition, accurate flux densities are obtained for sources flagged as variable, or of moderate flux quality at 60 microns. The IRAS detectors suffered radiation-induced responsivity enhancement (hysteresis) due to crossings of the satellite scans across the Galactic plane; this effect is measured and is shown to be negligible. 53 refs

  16. Implementation of antimicrobial peptides for sample preparation prior to nucleic acid amplification in point-of-care settings.

    Science.gov (United States)

    Krõlov, Katrin; Uusna, Julia; Grellier, Tiia; Andresen, Liis; Jevtuševskaja, Jekaterina; Tulp, Indrek; Langel, Ülo

    2017-12-01

    A variety of sample preparation techniques are used prior to nucleic acid amplification. However, their efficiency is not always sufficient, and nucleic acid purification remains the preferred method for template preparation. Purification is difficult and costly to apply in point-of-care (POC) settings, and there is a strong need for more robust, rapid, and efficient biological sample preparation techniques in molecular diagnostics. Here, the authors applied antimicrobial peptides (AMPs) for urine sample preparation prior to isothermal loop-mediated amplification (LAMP). AMPs bind to many microorganisms such as bacteria, fungi, protozoa and viruses, disrupting their membrane integrity and facilitating nucleic acid release. The authors show that incubation of E. coli with the antimicrobial peptide cecropin P1 for 5 min had a significant effect on the availability of template DNA compared with untreated or even heat-treated samples, resulting in up to a six-fold increase in amplification efficiency. These results show that AMP treatment is a very efficient sample preparation technique suitable for application prior to nucleic acid amplification directly within biological samples. Furthermore, the entire AMP treatment was performed at room temperature in 5 min, making it a good candidate for use in POC applications.

  17. Comparison of T-Square, Point Centered Quarter, and N-Tree Sampling Methods in Pittosporum undulatum Invaded Woodlands

    Directory of Open Access Journals (Sweden)

    Lurdes Borges Silva

    2017-01-01

    Tree density is an important parameter affecting ecosystem functions and management decisions, while tree distribution patterns affect sampling design. Pittosporum undulatum stands in the Azores are being targeted with a biomass valorization program, for which efficient tree density estimators are required. We compared T-Square sampling, the Point Centered Quarter Method (PCQM), and N-tree sampling with benchmark quadrat (QD) sampling in six 900 m2 plots established in P. undulatum stands on São Miguel Island. A total of 15 estimators were tested using a data resampling approach. The estimated density range (344–5056 trees/ha) was found to agree with previous studies using PCQM only. Although with a tendency to underestimate tree density (in comparison with QD), overall, T-Square sampling appeared to be the most accurate and precise method, followed by PCQM. The tree distribution pattern was found to be slightly aggregated in 4 of the 6 stands. Considering (1) the low level of bias and high precision, (2) the consistency among three estimators, (3) the possibility of use with aggregated patterns, and (4) the possibility of obtaining a larger number of independent tree parameter estimates, we recommend the use of T-Square sampling in P. undulatum stands within the framework of a biomass valorization program.
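    For a flavor of what such estimators compute, the classic Cottam-Curtis point-centred quarter estimate takes density as the reciprocal of the squared mean point-to-tree distance; the distances below are hypothetical, not from the study:

    ```python
    import numpy as np

    def pcqm_density(quarter_distances_m):
        """Cottam-Curtis point-centred quarter estimate: density (trees per unit
        area) is the reciprocal of the squared mean point-to-tree distance."""
        d = np.asarray(quarter_distances_m, dtype=float).ravel()
        return 1.0 / d.mean() ** 2        # trees per square metre

    # Hypothetical distances (m) at 5 sample points x 4 quarters.
    dists = np.array([[1.8, 2.2, 1.5, 2.0],
                      [2.5, 1.9, 2.1, 1.7],
                      [1.6, 2.4, 2.0, 1.8],
                      [2.2, 2.1, 1.9, 2.3],
                      [1.7, 2.0, 2.2, 1.9]])
    print(f"{pcqm_density(dists) * 10_000:.0f} trees/ha")
    ```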

  18. Cloud point extraction-flame atomic absorption spectrometry for pre-concentration and determination of trace amounts of silver ions in water samples

    Directory of Open Access Journals (Sweden)

    Xiupei Yang

    2017-03-01

    A cloud point extraction (CPE) method was used as a pre-concentration strategy prior to the determination of trace levels of silver in water by flame atomic absorption spectrometry (FAAS). The pre-concentration is based on the clouding phenomena of the non-ionic surfactant triton X-114 with Ag(I)/diethyldithiocarbamate (DDTC) complexes, in which the latter are soluble in a micellar phase composed of the former. When the temperature increases above its cloud point, the Ag(I)/DDTC complexes are extracted into the surfactant-rich phase. The factors affecting the extraction efficiency, including pH of the aqueous solution, concentration of the DDTC, amount of the surfactant, incubation temperature and time, were investigated and optimized. Under the optimal experimental conditions, no interference was observed for the determination of 100 ng·mL−1 Ag+ in the presence of various cations below their maximum concentrations allowed in this method, for instance, 50 μg·mL−1 for both Zn2+ and Cu2+, 80 μg·mL−1 for Pb2+, 1000 μg·mL−1 for Mn2+, and 100 μg·mL−1 for both Cd2+ and Ni2+. The calibration curve was linear in the range of 1–500 ng·mL−1 with a limit of detection (LOD) of 0.3 ng·mL−1. The developed method was successfully applied to the determination of trace levels of silver in water samples such as river water and tap water.

  19. Normal P50 Gating in Children with Autism, Yet Attenuated P50 Amplitude in the Asperger Subcategory.

    Science.gov (United States)

    Madsen, Gitte Falcher; Bilenberg, Niels; Jepsen, Jens Richardt; Glenthøj, Birte; Cantio, Cathriona; Oranje, Bob

    2015-08-01

    Autism spectrum disorders (ASD) and schizophrenia are separate disorders, but there is evidence of conversion or comorbid overlap. The objective of this paper was to explore whether deficits in sensory gating, as seen in some schizophrenia patients, can also be found in a group of ASD children compared to neurotypically developed children. An additional aim was to investigate the possibility of subdividing our ASD sample based on these gating deficits. In a case-control design, we assessed gating of the P50 and N100 amplitude in 31 ASD children and 39 healthy matched controls (8-12 years) and screened for differences between groups and within the ASD group. We did not find disturbances in auditory P50 and N100 filtering in the group of ASD children as a whole, nor did we find abnormal P50 and N100 amplitudes. However, the P50 amplitude to the conditioning stimulus was significantly reduced in the Asperger subgroup compared to healthy controls. In contrast to what is usually reported for patients with schizophrenia, we found no evidence for sensory gating deficits in our group of ASD children taken as a whole. However, reduced P50 amplitude to conditioning stimuli was found in the Asperger group, which is similar to what has been described in some studies in schizophrenia patients. There was a positive correlation between the P50 amplitude of the conditioning stimuli and anxiety score in the pervasive developmental disorder not otherwise specified group, which indicates a relation between anxiety and sensory registration in this group.

  20. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of what became known as "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of Calculus. However, characterizing fixed points x⋆, i.e., providing answers to the question of finding both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable. It is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools. One of these might be the "quadratic envelope characterization of a zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several feedbacks on the preliminary report and on parts of the project
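    The Intermediate Value Theorem argument mentioned above is constructive enough to code: bisect g(x) = f(x) − x on an interval where g changes sign. A minimal sketch:

    ```python
    import math

    def fixed_point_bisection(f, a, b, tol=1e-10):
        """Locate a fixed point of a continuous f on [a, b] by bisection on
        g(x) = f(x) - x, which changes sign when f(a) >= a and f(b) <= b."""
        g = lambda x: f(x) - x
        assert g(a) * g(b) <= 0, "g must change sign on [a, b]"
        while b - a > tol:
            m = (a + b) / 2
            if g(a) * g(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2

    x_star = fixed_point_bisection(math.cos, 0.0, 1.0)
    print(x_star, math.cos(x_star))   # ~0.739085: cos has a unique fixed point here
    ```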

  1. INEL Sample Management Office

    International Nuclear Information System (INIS)

    Watkins, C.

    1994-01-01

    The Idaho National Engineering Laboratory (INEL) Sample Management Office (SMO) was formed as part of the EG&G Idaho Environmental Restoration Program (ERP) in June 1990. Since then, the SMO has been recognized and sought out by other prime contractors and programs at the INEL. Since December 1991, the DOE-ID Division Directors for the Environmental Restoration Division and Waste Management Division have supported the expansion of the INEL ERP SMO into the INEL site-wide SMO. The INEL SMO serves as a point of contact for multiple environmental analytical chemistry and laboratory issues (e.g., capacity, capability). The SMO chemists work with project managers during planning to help develop data quality objectives, select appropriate analytical methods, identify special analytical services needs, identify a source for the services, and ensure that requirements for sampling and analysis (e.g., preservation, sample volumes) are clear and technically accurate. The SMO chemists also prepare work scope statements for the laboratories performing the analyses.

  2. Relative hardness measurement of soft objects by a new fiber optic sensor

    Science.gov (United States)

    Ahmadi, Roozbeh; Ashtaputre, Pranav; Abou Ziki, Jana; Dargahi, Javad; Packirisamy, Muthukumaran

    2010-06-01

    The measurement of the relative hardness of soft objects enables replication of human finger tactile perception capabilities. This ability has many applications, not only in the automation and robotics industries but also in many other areas such as aerospace and robotic surgery, where a robotic tool interacts with a soft contact object. One practical example of interaction between a solid robotic instrument and a soft contact object occurs during robotically-assisted minimally invasive surgery. Measuring the relative hardness of bio-tissue while it is in contact with the robotic instrument helps surgeons perform this type of surgery more reliably. In the present work, a new optical sensor is proposed to measure the relative hardness of contact objects. To measure the hardness of a contact object, the tactile sensor must, like a human finger, apply a small force/deformation to the object. The applied force and the resulting deformation are then recorded at certain points to enable the relative hardness measurement. In this work, force/deformation data for a contact object are recorded at certain points by the proposed optical sensor. The recorded data are used to measure the relative hardness of soft objects. Based on the proposed design, an experimental setup was developed and experimental tests were performed to measure the relative hardness of elastomeric materials. Experimental results verify the ability of the proposed optical sensor to measure the relative hardness of elastomeric samples.
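
    In post-processing, relative hardness comparisons of this kind usually come down to comparing stiffness, i.e., the slope of the recorded force-deformation curve: the harder sample shows the steeper slope. A minimal sketch of that step (the data arrays are illustrative placeholders, not measurements from the paper):

        import numpy as np

        def relative_stiffness(deformation_mm, force_n):
            """Estimate stiffness as the slope of a least-squares line
            through the force/deformation points recorded by the sensor."""
            slope, _intercept = np.polyfit(deformation_mm, force_n, deg=1)
            return slope

        # Illustrative force/deformation records for two elastomer samples.
        d = np.array([0.0, 0.5, 1.0, 1.5])
        soft = relative_stiffness(d, np.array([0.0, 0.1, 0.2, 0.3]))
        hard = relative_stiffness(d, np.array([0.0, 0.4, 0.8, 1.2]))
        print(f"relative hardness (hard/soft): {hard / soft:.1f}")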

  3. Comparing a Fischer-Tropsch Alternate Fuel to JP-8 and Their 50-50 Blend: Flow and Flame Visualization Results

    Science.gov (United States)

    Hicks, Yolanda R.; Tacina, M.

    2013-01-01

    Combustion performance of a Fischer-Tropsch (FT) jet fuel manufactured by Sasol was compared to JP-8 and a 50-50 blend of the two fuels, using the NASA/Woodward 9-point Lean Direct Injector (LDI) in its baseline configuration. The baseline LDI configuration uses 60° axial air-swirlers, whose vanes generate clockwise swirl in the streamwise sense. For all cases, the fuel-air equivalence ratio was 0.455, and the combustor inlet pressure and pressure drop were 10 bar and 4 percent. The three inlet temperatures used were 828, 728, and 617 K. The objectives of this experiment were to visually compare JP-8 flames with FT flames for gross features. Specifically, we sought to ascertain in a simple way the visible luminosity, sooting, and primary flame length of the FT fuel compared to a standard JP-grade fuel. We used color video imaging and high-speed imaging to achieve these goals. The flame color provided a way to qualitatively compare soot formation. The length of the luminous signal measured using the high-speed camera allowed an assessment of primary flame length. It was determined that the shortest flames resulted from the FT fuel.

  4. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double-spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the adequate number of samples for analyzing the attributes under study was established, which ranged from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishment of the number of samples and spatial variability for the physical properties of soils may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
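
    A common way to set the adequate number of samples from descriptive statistics, as done in studies of this kind, is n = (t · CV / E)², where CV is the coefficient of variation and E the acceptable error around the mean, in percent. A sketch under that assumption (the abstract does not state the authors' exact formula):

        import math
        from scipy import stats

        def samples_needed(cv_percent, error_percent, n_pilot, alpha=0.05):
            """Number of samples so the mean is within +/- error_percent of the
            true mean at confidence 1 - alpha, given a pilot survey's CV."""
            t = stats.t.ppf(1 - alpha / 2, df=n_pilot - 1)
            return math.ceil((t * cv_percent / error_percent) ** 2)

        # Example: CV of 12% from the 65-point pilot grid, 10% tolerable error.
        print(samples_needed(cv_percent=12.0, error_percent=10.0, n_pilot=65))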

  5. Online phase measuring profilometry for rectilinear moving object by image correction

    Science.gov (United States)

    Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin

    2015-11-01

    In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object is moving rectilinearly online, the size and pixel-position differences of the object in different captured deformed patterns do not meet the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
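
    The reprojection from an oblique view to an aerial (fronto-parallel) view is a planar homography, which OpenCV computes from four point correspondences. A hedged sketch of that step (corner coordinates and image sizes are placeholders, not values from the paper):

        import cv2
        import numpy as np

        # Four reference-plane corners seen obliquely (placeholders), and where
        # they should land in the rectified, aerial-view image.
        src = np.float32([[105, 80], [520, 95], [560, 430], [70, 410]])
        dst = np.float32([[0, 0], [500, 0], [500, 400], [0, 400]])
        H = cv2.getPerspectiveTransform(src, dst)

        # Stand-in for a captured deformed pattern; in practice, a CCD frame.
        deformed = np.zeros((480, 640), np.uint8)
        aerial = cv2.warpPerspective(deformed, H, (500, 400))
        # A subsequent translation, estimated from tracked feature points of the
        # moving object, aligns all frames so reconstruction is point-to-point.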

  6. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
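
    For the purely probabilistic baseline, X%/Y% clearance statements connect to a standard acceptance-sampling count: if n randomly placed samples are all clean, then with confidence X at least a fraction Y of the area is below detection when n >= ln(1 - X)/ln(Y). A sketch of that baseline (the Bayesian CJR combination itself is more involved and not reproduced here):

        import math

        def clearance_sample_size(confidence, coverage):
            """Smallest all-clean sample count n supporting the statement
            'confidence*100% confident that at least coverage*100% of the area
            has no detectable contamination' under a simple binomial model."""
            return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

        # Example: a 95%/99% clearance statement needs 299 clean random samples.
        print(clearance_sample_size(0.95, 0.99))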

  7. Sampling and chemical analysis in environmental samples around Nuclear Power Plants and some environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yong Woo; Han, Man Jung; Cho, Seong Won; Cho, Hong Jun; Oh, Hyeon Kyun; Lee, Jeong Min; Chang, Jae Sook [KORTIC, Taejon (Korea, Republic of)

    2002-12-15

    Twelve kinds of environmental samples, such as soil, seawater and underground water, were collected around Nuclear Power Plants (NPPs). Tritium analysis was performed on samples of rain water, pine needles, air, seawater, underground water, Chinese cabbage, rice grains and milk collected around the NPPs, and on surface seawater and rain water sampled across the country. Strontium was analyzed in soil sampled at 60 points across Korea. Tritium was also analyzed in 21 samples of surface seawater around the Korean peninsula supplied by KFRDI (National Fisheries Research and Development Institute). Sampling and chemical analysis of environmental samples around the Kori, Woolsung, Youngkwang and Wooljin NPPs and the Taeduk science town for tritium and strontium analysis were managed according to plan. All results were handed over to KINS after the sample analyses were completed.

  8. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults.

    Directory of Open Access Journals (Sweden)

    Jorgen A Wullems

    Full Text Available Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and by showing acceptable performance over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.

  9. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    Science.gov (United States)

    Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this; however, machine learning seems a highly promising future alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female) energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust to all individuals' physiological and non-physiological characteristics and by showing acceptable performance over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839
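
    As a sketch of the machine-learning arm of such a comparison: per-epoch features from the thigh-worn accelerometer are fed to a Random Forest, with a grouped split so no participant appears in both training and test data. All names and features below are illustrative stand-ins, not the paper's exact pipeline:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import balanced_accuracy_score
        from sklearn.model_selection import GroupShuffleSplit

        rng = np.random.default_rng(0)
        # Stand-ins: epoch feature vectors, intensity labels (0 = sedentary,
        # 1 = light, 2 = MVPA) and participant IDs for grouped validation.
        X = rng.normal(size=(4000, 12))
        y = rng.integers(0, 3, size=4000)
        groups = rng.integers(0, 40, size=4000)

        split = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
        train, test = next(split.split(X, y, groups))
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X[train], y[train])
        print(balanced_accuracy_score(y[test], clf.predict(X[test])))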

  10. Calculation of dose for β point and sphere sources in soft tissue

    International Nuclear Information System (INIS)

    Sun Fuyin; Yuan Shuyu; Tan Jian

    1999-01-01

    Objective: To compare the results of the distribution of dose rate calculated by three typical methods for point and sphere sources of β nuclides. Methods: Calculating and comparing the distributions of dose rate from 32P β point and sphere sources in soft tissue by the three methods published in references [1], [2] and [3], respectively. Results: For the point source of 3.7 × 10⁷ Bq (1 mCi), the variations among the calculation results of the three formulas are within 10% if r ≤ 0.35 g/cm², r being the distance from the source, and larger than 10% if r > 0.35 g/cm². For the sphere source whose volume is 50 μl and activity is 3.7 × 10⁷ Bq (1 mCi), the variations are within 10% if z ≤ 0.15 g/cm², z being the distance from the surface of the sphere source to a point outside the sphere. Conclusion: The agreement of the dose rate distributions calculated by the three methods for point and sphere β sources is good if the distances from the point source or the sphere surface to the observed points are small, and poor if they are large.
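
    The link between the point-source and sphere-source cases is that the sphere-source dose rate is the point-source kernel integrated over the source volume. A hedged numerical sketch with a generic stand-in kernel (the actual kernels of references [1]-[3] are not reproduced here):

        import numpy as np

        def sphere_dose_rate(point_kernel, R_cm, z_cm, n=200000, seed=1):
            """Monte Carlo integration of a point-source dose kernel over a
            uniform sphere of radius R, observed at distance z from its surface;
            point_kernel(r) is the dose rate per unit activity at distance r."""
            rng = np.random.default_rng(seed)
            pts = rng.uniform(-R_cm, R_cm, size=(int(n * 2.5), 3))
            pts = pts[np.einsum("ij,ij->i", pts, pts) <= R_cm**2][:n]  # keep inside sphere
            obs = np.array([0.0, 0.0, R_cm + z_cm])    # observation point
            r = np.linalg.norm(pts - obs, axis=1)
            return point_kernel(r).mean()              # dose rate per unit activity

        # Generic stand-in kernel: inverse square with exponential attenuation.
        kernel = lambda r: np.exp(-10.0 * r) / np.maximum(r, 1e-6) ** 2
        print(sphere_dose_rate(kernel, R_cm=0.23, z_cm=0.1))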

  11. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
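
    The sampling-based core that RANSAC-model methods share can be sketched compactly: repeatedly sample candidate keypoint correspondences, estimate a rigid transform (here by the SVD/Kabsch method), and keep the transform with the most inliers. A generic illustration, not the authors' modified 4PCS:

        import numpy as np

        def rigid_from_pairs(P, Q):
            """Least-squares R, t with R @ p + t ~ q (Kabsch method)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:        # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cQ - R @ cP

        def ransac_align(src_kp, dst_kp, iters=500, thresh=0.1, seed=0):
            """Coarse registration from putative keypoint pairs (row i <-> row i)."""
            rng = np.random.default_rng(seed)
            best, best_inliers = (np.eye(3), np.zeros(3)), -1
            for _ in range(iters):
                idx = rng.choice(len(src_kp), size=3, replace=False)
                R, t = rigid_from_pairs(src_kp[idx], dst_kp[idx])
                resid = np.linalg.norm(src_kp @ R.T + t - dst_kp, axis=1)
                inliers = int((resid < thresh).sum())
                if inliers > best_inliers:
                    best, best_inliers = (R, t), inliers
            return best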

  12. Octopuses use a human-like strategy to control precise point-to-point arm movements.

    Science.gov (United States)

    Sumbre, Germán; Fiorito, Graziano; Flash, Tamar; Hochner, Binyamin

    2006-04-18

    One of the key problems in motor control is mastering or reducing the number of degrees of freedom (DOFs) through coordination. This problem is especially prominent with hyper-redundant limbs such as the extremely flexible arm of the octopus. Several strategies for simplifying these control problems have been suggested for human point-to-point arm movements. Despite the evolutionary gap and morphological differences, humans and octopuses evolved similar strategies when fetching food to the mouth. To achieve this precise point-to-point task, octopus arms generate a quasi-articulated structure based on three dynamic joints. A rotational movement around these joints brings the object to the mouth. Here, we describe a peripheral neural mechanism: two waves of muscle activation propagate toward each other, and their collision point sets the medial-joint location. This is a remarkably simple mechanism for adjusting the length of the segments according to where the object is grasped. Furthermore, similar to certain human arm movements, kinematic invariants were observed at the joint level rather than at the end-effector level, suggesting intrinsic control coordination. The evolutionary convergence to similar geometrical and kinematic features suggests that a kinematically constrained articulated limb controlled at the level of joint space is the optimal solution for precise point-to-point movements.

  13. Bi-objective branch-and-cut algorithms

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Ehrgott, Matthias; Nielsen, Lars Relund

    Most real-world optimization problems are of a multi-objective nature, involving objectives which are conflicting and incomparable. Solving a multi-objective optimization problem requires a method which can generate the set of rational compromises between the objectives. In this paper, we propose...... are strengthened by cutting planes. In addition, we suggest an extension of the branching strategy "Pareto branching''. Extensive computational results obtained for the bi-objective single source capacitated facility location problem prove the effectiveness of the algorithms....... and compares it to an upper bound set. The implicit bound set based algorithm, on the other hand, fathoms branching nodes by generating a single point on the lower bound set for each local nadir point. We outline several approaches for fathoming branching nodes and we propose an updating scheme for the lower...

  14. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    A laser point cloud contains only intensity information, and visual interpretation requires color information obtained from another sensor. Cameras can provide texture, color, and other information about the corresponding object. Points with the color information of corresponding pixels in digital images can be used to generate a color point-cloud, which benefits the visualization, classification and modeling of point-clouds. Different types of digital cameras are used in different Mobile Measurement Systems (MMSs); the principles and processes for generating color point-clouds differ between systems. The most prominent feature of panoramic images is their 360° view angle in the horizontal direction, which captures as much image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and derive the equation of the correspondence between points in panoramic images and laser point-clouds. The fusion of the panoramic image and the laser point-cloud is based on the collinearity of three points (the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point). The experimental results show that the proposed algorithm and formulae in this paper are correct
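
    For a spherical panorama, the pixel correspondence is short to write down: a laser point expressed in the panoramic camera frame maps to a pixel through its longitude and latitude (equirectangular model). A sketch under that assumed camera model:

        import numpy as np

        def panorama_pixel(xyz_cam, width, height):
            """Map 3D points in the panoramic-camera frame to equirectangular
            pixels, following the collinearity of camera centre, sphere point
            and object point."""
            x, y, z = xyz_cam[:, 0], xyz_cam[:, 1], xyz_cam[:, 2]
            lon = np.arctan2(y, x)                                  # [-pi, pi]
            lat = np.arcsin(z / np.linalg.norm(xyz_cam, axis=1))
            u = (lon + np.pi) / (2 * np.pi) * width
            v = (np.pi / 2 - lat) / np.pi * height
            return np.column_stack([u, v])

        # Colors read from the panorama at (u, v) are attached to each laser
        # point, yielding the colored point cloud.
        pts = np.array([[5.0, 1.0, 0.5], [2.0, -3.0, 1.0]])
        print(panorama_pixel(pts, width=8192, height=4096))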

  15. Preconcentration and determination of iron and copper in spice samples by cloud point extraction and flow injection flame atomic absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Cigdem Arpa, E-mail: carpa@hacettepe.edu.tr [Hacettepe University, Chemistry Department, 06800 Beytepe, Ankara (Turkey); Tokgoez, Ilknur; Bektas, Sema [Hacettepe University, Chemistry Department, 06800 Beytepe, Ankara (Turkey)

    2010-09-15

    A flow injection (FI) cloud point extraction (CPE) method for the determination of iron and copper by flame atomic absorption spectrometry (FAAS) has been improved. The analytes were complexed with 3-amino-7-dimethylamino-2-methylphenazine (Neutral Red, NR), and octylphenoxypolyethoxyethanol (Triton X-114) was added as a surfactant. The micellar solution was heated above 50 °C and loaded through a column packed with cotton for phase separation. The surfactant-rich phase was then eluted using 0.05 mol L⁻¹ H₂SO₄, and the analytes were determined by FAAS. Chemical and flow variables influencing the instrumental and extraction conditions were optimized. Under optimized conditions for 25 mL of preconcentrated solution, the enrichment factors were 98 and 69, the limits of detection (3s) were 0.7 and 0.3 ng mL⁻¹, and the limits of quantification (10s) were 2.2 and 1.0 ng mL⁻¹ for iron and copper, respectively. The relative standard deviations (RSD) for ten replicate measurements of 10 ng mL⁻¹ iron and copper were 2.1% and 1.8%, respectively. The proposed method was successfully applied to the determination of iron and copper in spice samples.
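
    The reported figures of merit follow the usual definitions: LOD = 3s/m and LOQ = 10s/m, with s the standard deviation of replicate blanks and m the calibration slope after preconcentration, while the enrichment factor is the ratio of slopes with and without preconcentration. A small sketch of those calculations (all numbers are placeholders):

        import numpy as np

        def figures_of_merit(blank_signals, slope_cpe, slope_direct):
            """LOD (3s), LOQ (10s) and enrichment factor from replicate blank
            signals and calibration slopes with/without preconcentration."""
            s = np.std(blank_signals, ddof=1)
            return 3 * s / slope_cpe, 10 * s / slope_cpe, slope_cpe / slope_direct

        blanks = [0.0021, 0.0018, 0.0024, 0.0019, 0.0022, 0.0020, 0.0023]
        lod, loq, ef = figures_of_merit(blanks, slope_cpe=9.5e-4, slope_direct=9.7e-6)
        print(f"LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL, EF = {ef:.0f}")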

  16. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    Science.gov (United States)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
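
    The prioritised scheme used as the comparator can be sketched generically: optimise the highest-priority objective first, then re-optimise each subsequent objective while constraining the previous ones near their attained values. A toy two-objective illustration with scipy, not Erasmus-iCycle's actual implementation:

        import numpy as np
        from scipy.optimize import NonlinearConstraint, minimize

        f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2    # priority 1
        f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2    # priority 2

        # Phase 1: optimise f1 alone.
        r1 = minimize(f1, x0=np.zeros(2))

        # Phase 2: optimise f2, allowing f1 to degrade by at most 0.1.
        cons = [NonlinearConstraint(f1, -np.inf, r1.fun + 0.1)]
        r2 = minimize(f2, x0=r1.x, constraints=cons)
        print(r2.x, f1(r2.x), f2(r2.x))

    The LRPM replaces this sequence of optimisations with a single run, which is where the reported factor-22 speed-up comes from.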

  17. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    Science.gov (United States)

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. When testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
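
    The reported SD shrinkage under convenience sampling is easy to reproduce in a toy simulation: preferentially measuring mid-sized nuclei barely moves the mean but compresses the SD. A hedged numpy illustration (all parameters are invented):

        import numpy as np

        rng = np.random.default_rng(42)
        areas = rng.lognormal(mean=4.0, sigma=0.4, size=5000)   # nuclear areas, a.u.

        # Systematic random sampling: every k-th nucleus in scan order.
        srs = areas[:: len(areas) // 50][:50]

        # 'At convenience': unconsciously skipping very small and very large nuclei.
        lo, hi = np.quantile(areas, [0.15, 0.85])
        acs = rng.choice(areas[(areas > lo) & (areas < hi)], size=50, replace=False)

        print(f"SRS: MNA = {srs.mean():6.1f}, SDNA = {srs.std(ddof=1):5.1f}")
        print(f"ACS: MNA = {acs.mean():6.1f}, SDNA = {acs.std(ddof=1):5.1f}")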

  18. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    Science.gov (United States)

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm2 and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm2 (focal activation), 1.05 ± 0.32 points/cm2 (macro-re-entry) and 1.23 ± 0.26 points/cm2 (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density in silico (focal activation 1.09 ± 0.14 points/cm2; re-entry 1.44 ± 0.49 points/cm2; spiral-wave 1.50 ± 0.34 points/cm2), and the geometrically simple RA required a lower optimal sampling density than the complex LA (0.61 ± 0.22 points/cm2 vs. 1.0 ± 0.34 points/cm2, P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm2.
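
    The in silico arm of such a study follows a reproducible recipe: simulate an activation field, subsample at a given density, re-interpolate, and track the error as density grows. A hedged scipy sketch (the wavefront below is a stand-in, not the authors' monolayer model):

        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(0)
        # Stand-in LAT field on a 4 x 4 cm sheet: planar wave plus a slow region.
        lat = lambda x, y: 10 * x + 8 * np.exp(-((x - 2) ** 2 + (y - 2) ** 2))
        gx, gy = np.meshgrid(np.linspace(0, 4, 81), np.linspace(0, 4, 81))
        truth = lat(gx, gy)

        for density in [0.25, 0.5, 1.0, 2.0, 4.0]:     # points per cm^2
            n = int(density * 16)                      # the sheet is 16 cm^2
            pts = rng.uniform(0, 4, size=(n, 2))
            interp = griddata(pts, lat(pts[:, 0], pts[:, 1]), (gx, gy), method="linear")
            err = np.nanmean(np.abs(interp - truth))   # ignore nodes outside the hull
            print(f"{density:4.2f} points/cm^2 -> mean |error| = {err:5.2f} ms")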

  19. The significance of sampling time in therapeutic drug monitoring of clozapine

    DEFF Research Database (Denmark)

    Jakobsen, M I; Larsen, J R; Svensson, C K

    2017-01-01

    OBJECTIVE: Therapeutic drug monitoring (TDM) of clozapine is standardized to 12-h postdose samplings. In clinical settings, sampling time often deviates from this time point, although the importance of the deviation is unknown. To this end, serum concentrations (s-) of clozapine and its metabolite...... N-desmethyl-clozapine (norclozapine) were measured at 12 ± 1 and 12 ± 2 h postdose. METHOD: Forty-six patients with a diagnosis of schizophrenia, and on stable clozapine treatment, were enrolled for hourly, venous blood sampling at 10-14 h postdose. RESULTS: Minor changes in median percentage values were

  20. Development of a control system for a heavy object handling manipulator. Application to a remote maintenance system for ITER blanket module

    International Nuclear Information System (INIS)

    Yoshimi, Takashi; Tsuji, Kouichi; Miyagawa, Shinichi; Kubo, Tomomi; Kakudate, Satoshi; Tada, Eisuke

    2001-01-01

    This paper describes a control system for a heavy object handling manipulator. It has been developed for the blanket module remote maintenance system of ITER (International Thermonuclear Experimental Reactor). A rail-mounted vehicle-type manipulator is proposed for the precise handling of a blanket module, which is about 4 tons in weight. Basically, this manipulator is controlled by the teaching-playback technique. When grasping or releasing the module, the manipulator sags and the position of the end-effector changes by about 50 mm. Applying only the usual teaching-playback control makes smooth setting/removing of modules to/from the vacuum vessel wall difficult because of this position change. To solve this problem, which is inherent to heavy object handling manipulators, we have developed a system which uses motion patterns generated from two kinds of teaching points. These motion patterns for setting/removing heavy objects are generated by combining teaching points recorded for the manipulator with and without the object grasped. When these motion patterns are applied, the manipulator can transfer the object's weight smoothly at the setting/removing point. The developed system has been applied to the real-scale mock-up of the vehicle manipulator, and through actual module setting/removing experiments we have verified its effectiveness and realized smooth maintenance operation. (author)
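
    The two-kinds-of-teaching-points idea can be sketched as follows: store the critical waypoint twice, taught with and without the module grasped, and ramp between the two across the weight-transfer leg so the roughly 50 mm sag is absorbed smoothly. A hypothetical illustration, not the actual ITER controller:

        import numpy as np

        def setting_motion(loaded_pts, unloaded_pts, steps=20):
            """Motion pattern for releasing a heavy module: follow the loaded
            teaching points, then ramp toward the unloaded teaching point over
            the weight-transfer leg so the end-effector tracks sag recovery."""
            path = list(loaded_pts[:-1])
            for s in np.linspace(0.0, 1.0, steps):
                path.append((1 - s) * loaded_pts[-1] + s * unloaded_pts[-1])
            return np.array(path)

        # Hypothetical end-effector waypoints (x, y, z in mm); the final waypoint
        # differs by ~50 mm in z between the loaded and unloaded teaching runs.
        loaded = np.array([[0, 0, 0], [400, 0, -30], [800, 0, -50]], float)
        unloaded = np.array([[0, 0, 0], [400, 0, 0], [800, 0, 0]], float)
        print(setting_motion(loaded, unloaded)[-3:])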

  1. How two word-trained dogs integrate pointing and naming

    NARCIS (Netherlands)

    Grassmann, Susanne; Kaminski, Juliane; Tomasello, Michael

    Two word-trained dogs were presented with acts of reference in which a human pointed, named objects, or simultaneously did both. The question was whether these dogs would assume co-reference of pointing and naming and thus pick the pointed-to object. Results show that the dogs did indeed assume

  2. Presence of pesticide residues in water, sediment and biological samples taken from aquatic environments in Honduras

    International Nuclear Information System (INIS)

    Meyer, D.E.

    1999-01-01

    The objective of this study was to detect the presence of persistent pesticides in water, sediment and biological samples taken from aquatic environments in Honduras during the period 1995-98. Additionally, the LC50 of 2 fungicides and 2 insecticides for post-larval Penaeus vannamei was determined in static water bioassays. A total of 80 water samples, 16 sediment samples and 7 biological samples (fish muscle tissue) were analyzed for organochlorine and organophosphate pesticide residues. The results of the sample analyses indicate a widespread contamination of Honduran continental and coastal waters with organochlorine pesticides. Most detections were of low concentrations. The insecticides had lower LC50 values and were therefore found to be much more toxic to the post-larval shrimp than the fungicides tridemorph and propiconazole. (author)

  3. Cloud point extraction and flame atomic absorption spectrometric determination of cadmium and nickel in drinking and wastewater samples.

    Science.gov (United States)

    Naeemullah; Kazi, Tasneem G; Shah, Faheem; Afridi, Hassan I; Baig, Jameel Ahmed; Soomro, Abdul Sattar

    2013-01-01

    A simple method for the preconcentration of cadmium (Cd) and nickel (Ni) in drinking and wastewater samples was developed. Cloud point extraction has been used for the preconcentration of both metals, after formation of complexes with 8-hydroxyquinoline (8-HQ) and extraction with the surfactant octylphenoxypolyethoxyethanol (Triton X-114). Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the Cd and Ni contents were measured by flame atomic absorption spectrometry. The experimental variables, such as pH, amounts of reagents (8-HQ and Triton X-114), temperature, incubation time, and sample volume, were optimized. After optimization of the complexation and extraction conditions, enhancement factors of 80 and 61, with LOD values of 0.22 and 0.52 μg/L, were obtained for Cd and Ni, respectively. The proposed method was applied satisfactorily for the determination of both elements in drinking and wastewater samples.

  4. Quarterly sampling of the wetlands along the old F Area effluent ditch: August 1994

    International Nuclear Information System (INIS)

    Cummins, C.L.; Dixon, K.L.

    1994-08-01

    In August 1994, well point water and near-surface water samples were collected to characterize tritium and volatile organic compounds (VOC) in the wetlands along the old F-Area effluent ditch south of 643-E (old burial ground). The August sampling event was the third in a series of eight events. Groundwater flow paths suggest that compounds detected in water table wells around 643-E migrate towards the old F-Area effluent ditch and Fourmile Branch. Recent analytical results from well point and near-surface water sampling in the wetlands that comprise the old F-Area effluent ditch have shown that tritium and small quantities of VOCs are outcropping in the area. For this study, seven locations along the old F-Area effluent ditch were selected to be sampled. Well point samples were collected from all seven locations and near-surface water samples were collected at four locations. A secondary objective of this project was to compare VOC concentrations between the well points installed to depths of 6 to 8 ft and the near-surface water sampling buckets installed to depths of 1 to 2 ft. Based on differences in tritium concentrations at each location, it was determined that the sampling devices intercepted different groundwater flow paths. This negated direct comparison of analytical results between devices. However, when VOC concentrations measured at each well point and bucket location were normalized, based on the percent differences observed in tritium concentrations at that location, the resulting well point and bucket VOC concentrations were comparable in most cases. These results are consistent with the results from the three previous sampling events, and suggest that volatilization losses of VOCs from the buckets may be negligible. Since the results from the two sampling methodologies are not directly comparable, further sampling of the buckets is not planned.

  5. Tracking an oil slick from multiple natural sources, Coal Oil Point, California

    International Nuclear Information System (INIS)

    Leifer, Ira; Luyendyk, Bruce; Broderick, Kris

    2006-01-01

    Oil slicks on the ocean surface emitted from natural marine hydrocarbon seeps offshore from Coal Oil Point in the Santa Barbara Channel, California were tracked and sampled over a 2-h period. The objectives were to characterize the seep oil and to track its composition over time using a new sampling device, a catamaran drum sampler (CATDRUMS). The sampler was designed and developed at UCSB. Chromatograms showed that oil originating from an informally named, very active seep area, Shane Seep, primarily evolved during the first hour due to mixing with oil originating from a convergence zone slick surrounding Shane Seep. (author)

  6. Tracking an oil slick from multiple natural sources, Coal Oil Point, California

    Energy Technology Data Exchange (ETDEWEB)

    Leifer, Ira [Marine Sciences Institute, University of California, Santa Barbara, CA 93106 (United States); Luyendyk, Bruce [Department of Geological Sciences, University of California, Santa Barbara, CA 93106 (United States); Broderick, Kris [Exxon/Mobil Exploration Company, 13401 N. Freeway, Houston, TX 77060 (United States)

    2006-06-15

    Oil slicks on the ocean surface emitted from natural marine hydrocarbon seeps offshore from Coal Oil Point in the Santa Barbara Channel, California were tracked and sampled over a 2-h period. The objectives were to characterize the seep oil and to track its composition over time using a new sampling device, a catamaran drum sampler (CATDRUMS). The sampler was designed and developed at UCSB. Chromatograms showed that oil originating from an informally named, very active seep area, Shane Seep, primarily evolved during the first hour due to mixing with oil originating from a convergence zone slick surrounding Shane Seep. (author)

  7. Neutron activation analysis of limestone objects

    International Nuclear Information System (INIS)

    Meyers, P.; Van Zelst, L.

    1977-01-01

    The elemental composition of samples from limestone objects were determined by neutron activation analysis to investigate whether this technique can be used to distinguish between objects made of limestone from different sources. Samples weighing between 0.2-2 grams were obtained by drilling from a series of ancient Egyptian and medieval Spanish objects. Analysis was performed on aliquots varying in weight from 40-100 milligrams. The following elements were determined quantitatively: Na, K, Rb, Cs, Ba, Sc, La, Ce, Sm, Eu, Hf, Th, Ta, Cr, Mn, Fe, Co and Zn. The data on Egyptian limestones indicate that, because of the inhomogeneous nature of the stone, 0.2-2 gram samples may not be representative of an entire object. Nevertheless, multivariate statistical methods produced a clear distinction between objects originating from the Luxor area (ancient Thebes) and objects found north of Luxor. The Spanish limestone studied appeared to be more homogeneous. Samples from stylistically related objects have similar elemental compositions while relative large differences were observed between objects having no relationship other than the common provenance of medieval Spain. (orig.)

  8. Edge Artifacts in Point Spread Function-based PET Reconstruction in Relation to Object Size and Reconstruction Parameters

    Directory of Open Access Journals (Sweden)

    Yuji Tsutsui

    2017-06-01

    Full Text Available Objective(s): We evaluated edge artifacts in relation to phantom diameter and reconstruction parameters in point spread function (PSF)-based positron emission tomography (PET) image reconstruction. Methods: PET data were acquired from an original cone-shaped phantom filled with 18F solution (21.9 kBq/mL) for 10 min using a Biograph mCT scanner. The images were reconstructed using the baseline ordered subsets expectation maximization (OSEM) algorithm and OSEM with the PSF correction model. The reconstruction parameters included a pixel size of 1.0, 2.0, or 3.0 mm, 1-12 iterations, 24 subsets, and a full width at half maximum (FWHM) of the post-filter Gaussian of 1.0, 2.0, or 3.0 mm. We compared both the maximum recovery coefficient (RCmax) and the mean recovery coefficient (RCmean) in the phantom at different diameters. Results: The OSEM images had no edge artifacts, but the OSEM with PSF images had a dense edge delineating the hot phantom at diameters of 10 mm or more and a dense spot at the center at diameters of 8 mm or less. The dense edge was clearly observed on images with a small pixel size, a Gaussian filter with a small FWHM, and a high number of iterations. At a phantom diameter of 6-7 mm, the RCmax for the OSEM and OSEM with PSF images was 60% and 140%, respectively (pixel size: 1.0 mm; FWHM of the Gaussian filter: 2.0 mm; iterations: 2). The RCmean of the OSEM with PSF images did not exceed 100%. Conclusion: PSF-based image reconstruction resulted in edge artifacts, the degree of which depends on the pixel size, number of iterations, FWHM of the Gaussian filter, and object size.

  9. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples for specific regions increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as previous information for the conception of additional sampling schemes, was very promising to determine the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
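
    Spatial simulated annealing itself is a small loop: perturb one sampling location, recompute the objective (here, the mean distance from prediction nodes to the nearest sample, a cheap proxy for the mean kriging variance), and accept worsening moves with a shrinking probability. A simplified, hedged sketch:

        import numpy as np

        def objective(samples, grid):
            """Mean distance from prediction nodes to their nearest sample --
            a cheap stand-in for the mean kriging variance minimized in SSA."""
            d = np.linalg.norm(grid[:, None, :] - samples[None, :, :], axis=2)
            return d.min(axis=1).mean()

        def ssa(samples, grid, extent, iters=2000, T0=1.0, seed=0):
            """Simplified spatial simulated annealing: perturb one point at a
            time, accept worsening moves with a shrinking probability."""
            rng = np.random.default_rng(seed)
            f = objective(samples, grid)
            best, f_best = samples.copy(), f
            for k in range(iters):
                T = T0 * (1.0 - k / iters) + 1e-9      # linear cooling schedule
                cand = samples.copy()
                i = rng.integers(len(cand))
                cand[i] = np.clip(cand[i] + rng.normal(0.0, 5.0, 2), 0.0, extent)
                fc = objective(cand, grid)
                if fc < f or rng.random() < np.exp(-(fc - f) / T):
                    samples, f = cand, fc
                    if fc < f_best:
                        best, f_best = cand.copy(), fc
            return best

        # A field discretized to prediction nodes, with 10 extra samples to place.
        grid = np.stack(np.meshgrid(np.arange(0.0, 107.0, 5.0),
                                    np.arange(0.0, 95.7, 5.0)), -1).reshape(-1, 2)
        extent = np.array([107.0, 95.7])
        start = np.random.default_rng(1).uniform(0.0, 1.0, (10, 2)) * extent
        print(ssa(start, grid, extent).round(1))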

  10. Monitoring of uranium concentrations in water samples collected near potentially hazardous objects in North-West Tajikistan.

    Science.gov (United States)

    Zoriy, P; Schläger, M; Murtazaev, K; Pillath, J; Zoriy, M; Heuel-Fabianek, B

    2018-01-01

    The water contamination near ecologically problematic objects was investigated between 2009 and 2014 in North-West Tajikistan as part of a joint project between Forschungszentrum Jülich and Khujand State University. The main part of this work was the determination of uranium in water samples collected near the Degmay tailings dump, the Taboshar pit lake and the Syr Darya river. More than 130 water samples were collected and analyzed to monitor the uranium concentration near the investigated areas. Two different mass spectrometers and an ion chromatograph were used for element concentration measurements. Based on the results obtained, no uranium influence of the Degmay tailings on the rivers Khoja-Bakyrgan-Say and Syr Darya or on surrounding waters was found. The uranium concentration in water samples was monitored over a lengthy period at seven locations. No great differences in the uranium concentration were observed between the waters collected in 2010, 2011, 2012 and 2013 at any location. Drinking water samples from the region of North-West Tajikistan were analyzed and compared with the World Health Organization's guidelines. Seven out of nine drinking water samples near Taboshar exceeded the WHO guideline value for uranium concentration (30 μg/L). The average uranium concentration of water samples from the Syr Darya for the period from 2009 to 2014 was determined to be 20.1 (±5.2) μg/L. The uranium contamination of the Syr Darya was determined from the western border to the eastern border, and the results are shown in this paper. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The issue of gamma spectral system sourceless object calibration software using in radioactive environment measurement

    International Nuclear Information System (INIS)

    Shen Ming; Zhu Yuelong; Zhao Yanzi

    2009-01-01

    The paper introduces the characteristics and basic method of the HPGe-detector LabSOCS (Laboratory Sourceless Object Calibration Software). Measured efficiencies were compared with LabSOCS-calculated efficiencies for different point sources; the tolerance is about 6% in the middle and high energy ranges. For cylindrical samples of dirt, animal ash and plant ash, the verification results agree within 7%-10%. (authors)

  12. Controllable resonant tunnelling through single-point potentials: A point triode

    International Nuclear Information System (INIS)

    Zolotaryuk, A.V.; Zolotaryuk, Yaroslav

    2015-01-01

    A zero-thickness limit of three-layer heterostructures under two bias voltages applied externally, where one of which is supposed to be a gate parameter, is studied. As a result, an effect of controllable resonant tunnelling of electrons through single-point potentials is shown to exist. Therefore the limiting structure may be termed a “point triode” and considered in the theory of point interactions as a new object. The simple limiting analytical expressions adequately describe the resonant behaviour in the transistor with realistic parameter values and thus one can conclude that the zero-range limit of multi-layer structures may be used in fabricating nanodevices. The difference between the resonant tunnelling across single-point potentials and the Fabry–Pérot interference effect is also emphasized. - Highlights: • The zero-thickness limit of three-layer heterostructures is described in terms of point interactions. • The effect of resonant tunnelling through these single-point potentials is established. • The resonant tunnelling is shown to be controlled by a gate voltage

  13. Digital Invasions: from Point Clouds to Historical Building Object Modeling H-Bom of a Unesco Whl Site

    Science.gov (United States)

    Chiabrando, F.; Lo Turco, M.; Santagati, C.

    2017-02-01

    The paper here presented shows the outcomes of a research/didactic activity carried out within a workshop titled "Digital Invasions. From point cloud to Heritage Building Information Modeling" held at Politecnico di Torino (29th September-5th October 2016). The term digital invasions refers to an Italian bottom-up project born in 2013 with the aim of promoting innovative digital ways for the enhancement of Cultural Heritage through the co-creation of cultural contents and their sharing through social media platforms. In this regard, we worked with students of the Architectural Master of Science degree, training them with a multidisciplinary teaching team (Architectural Representation, History of Architecture, Restoration, Digital Communication and Geomatics). The aim was also to test whether our students could be involved in a sort of niche crowdsourcing for the creation of a library of H-BOMs (Historical-Building Object Modeling) of architectural elements.

  14. DIGITAL INVASIONS: FROM POINT CLOUDS TO HISTORICAL BUILDING OBJECT MODELING (H-BOM) OF A UNESCO WHL SITE

    Directory of Open Access Journals (Sweden)

    F. Chiabrando

    2017-02-01

    Full Text Available The paper here presented shows the outcomes of a research/didactic activity carried out within a workshop titled "Digital Invasions. From point cloud to Heritage Building Information Modeling" held at Politecnico di Torino (29th September–5th October 2016). The term digital invasions refers to an Italian bottom-up project born in 2013 with the aim of promoting innovative digital ways for the enhancement of Cultural Heritage through the co-creation of cultural contents and their sharing through social media platforms. In this regard, we worked with students of the Architectural Master of Science degree, training them with a multidisciplinary teaching team (Architectural Representation, History of Architecture, Restoration, Digital Communication and Geomatics). The aim was also to test whether our students could be involved in a sort of niche crowdsourcing for the creation of a library of H-BOMs (Historical-Building Object Modeling) of architectural elements.

  15. Programs as Data Objects

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the Second Symposium on Programs as Data Objects, PADO 2001, held in Aarhus, Denmark, in May 2001. The 14 revised full papers presented were carefully reviewed and selected from 30 submissions. Various aspects of looking at programs as data objects...... are covered from the point of view of program analysis, program transformation, computational complexity, etc....

  16. Micro- and nano-volume samples by electrothermal, near-torch vaporization sample introduction using removable, interchangeable and portable rhenium coiled-filament assemblies and axially-viewed inductively coupled plasma-atomic emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Badiei, Hamid R.; Lai, Bryant; Karanassios, Vassili

    2012-11-15

    -copper rule of the Environmental Protection Agency. It is also shown that interchangeable assemblies with volume-capacities in the ranges mentioned above can be used interchangeably in the same calibration curve, thus in some cases and for on-site applications, volume-of-sample can be substituted for sample-dilution. - Highlights: • Absolute detection limits from 4 pg (Pb) to 0.3 fg (~5 million atoms) for Ca. • Relative detection limits (100 μL) from 50 pg/mL for Pb to 3 fg/mL for Ca. • Pb was determined at the 100's of pg/mL level using 100 μL of diluted CRM. • Zn in individual vesicles was determined using 80 nL-volume samples. • Analytes pipetted on coils on-site and concentration determined in the lab.

  17. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies have analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
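
    For reference, the non-robust method-of-moments estimator named above is the Matheron semivariance per distance bin, gamma(h) = (1 / 2N(h)) * sum over pairs at lag h of (z_i - z_j)^2. A compact numpy sketch on stand-in data:

        import numpy as np

        def empirical_variogram(coords, values, n_bins=12, max_dist=None):
            """Matheron method-of-moments semivariogram from point data."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
            iu = np.triu_indices(len(coords), k=1)
            dists = d[iu]
            semis = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
            max_dist = max_dist or dists.max() / 2     # usual half-extent rule
            edges = np.linspace(0, max_dist, n_bins + 1)
            which = np.digitize(dists, edges) - 1
            gamma = np.array([semis[which == b].mean() if np.any(which == b)
                              else np.nan for b in range(n_bins)])
            return 0.5 * (edges[:-1] + edges[1:]), gamma

        rng = np.random.default_rng(3)
        xy = rng.uniform(0, 50, size=(150, 2))         # a 50 m plot, n = 150
        z = np.sin(xy[:, 0] / 10) + rng.normal(0, 0.3, 150)  # stand-in throughfall
        print(empirical_variogram(xy, z)[1].round(3))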

  18. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
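
    For a first-order GMRF on a lattice, the coding-set idea reduces to a checkerboard: all "black" sites are conditionally independent given the "white" sites, so each color can be updated simultaneously. A hedged sketch of one truncated-Gaussian Gibbs sweep (precision parameters and the rejection step are illustrative, not the paper's scheme):

        import numpy as np

        def gibbs_sweep_gmrf(x, rng, beta=0.9, lower=0.0):
            """One checkerboard Gibbs sweep for a first-order GMRF: conditional
            mean beta * (4-neighbour average), unit conditional variance,
            truncated below at `lower` via a crude rejection step."""
            n, m = x.shape
            ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
            for color in (0, 1):                       # the two coding sets
                mask = (ii + jj) % 2 == color
                nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                      np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0  # torus boundary
                mu = beta * nb
                draw = mu + rng.standard_normal(x.shape)
                bad = mask & (draw < lower)            # redraw truncated sites
                while bad.any():
                    draw[bad] = mu[bad] + rng.standard_normal(int(bad.sum()))
                    bad = mask & (draw < lower)
                x[mask] = draw[mask]
            return x

        rng = np.random.default_rng(7)
        x = np.abs(rng.standard_normal((64, 64)))      # start above the threshold
        for _ in range(200):
            x = gibbs_sweep_gmrf(x, rng)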

  19. Determination of carcinogenic threshold limit values using the tumorigenic dose rate 50% (TD50)

    International Nuclear Information System (INIS)

    Bonvalot, Y.; Oudiz, A.; Hubert, P.; Abenhaim, L.

    1989-01-01

    The objective of the present study is to propose a simple procedure for the determination of Occupational Limit Values (OLVs) based on the TD50 (Tumorigenic Dose Rate 50%) concept. The TD50 concept was introduced by Peto et al. to help classify chemical substances according to their carcinogenic potency. The TD50 is the dose rate (in mg/kg body weight/day) which, if administered chronically for the standard lifespan of the species, will halve the probability of remaining tumorless throughout that period. Using TD50 values available for 776 substances, the procedure presented here allows one to determine OLVs corresponding to a fixed excess risk. It is based on a mathematical high-to-low dose extrapolation of the TD50. OLVs obtained with this procedure are compared with currently available TLVs and other occupational guidelines. (author)

  20. Spatial distribution sampling and Monte Carlo simulation of radioactive isotopes

    CERN Document Server

    Krainer, Alexander Michael

    2015-01-01

    This work focuses on the implementation of a program for random sampling of uniformly spatially distributed isotopes for Monte Carlo particle simulations, specifically FLUKA. With FLUKA it is possible to calculate the radionuclide production in high energy fields. The decay of these nuclides, and therefore the resulting radiation field, however, can only be simulated in the same geometry. This work provides the tool to simulate the decay of the produced nuclides in other geometries. With that, the radiation field from an irradiated object can be simulated in arbitrary environments. The sampling of isotope mixtures was tested by simulating a 50/50 mixture of Cs-137 and Co-60. These isotopes are both well known and therefore provide a first reliable benchmark in that respect. The sampling of uniformly distributed coordinates was tested using the histogram test for various spatial distributions. The advantages and disadvantages of the program compared to standard methods are demonstrated in the real life ca...
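
    The two sampled quantities described here, a uniform position inside a volume and an isotope drawn from a mixture, take only a few lines; a hedged sketch of the idea (a box volume and a 50/50 Cs-137/Co-60 source term, independent of FLUKA's actual user-routine interface):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_decay_events(n, box_min, box_max, isotopes, fractions):
            """Draw n decay events: uniform positions inside an axis-aligned box
            and isotope labels according to the mixture fractions."""
            pos = rng.uniform(box_min, box_max, size=(n, 3))
            iso = rng.choice(isotopes, size=n, p=fractions)
            return pos, iso

        pos, iso = sample_decay_events(
            100000, box_min=[0, 0, 0], box_max=[10, 10, 10],
            isotopes=["Cs-137", "Co-60"], fractions=[0.5, 0.5])

        # Histogram test: each coordinate should be flat across bins.
        counts, _ = np.histogram(pos[:, 0], bins=20, range=(0, 10))
        print(counts.min(), counts.max())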

  1. Hysteresis critical point of nitrogen in porous glass: occurrence of sample spanning transition in capillary condensation.

    Science.gov (United States)

    Morishige, Kunimitsu

    2009-06-02

    To examine the mechanisms for capillary condensation and for capillary evaporation in porous glass, we measured the hysteresis critical points and desorption scanning curves of nitrogen in four kinds of porous glasses with different pore sizes (Vycor, CPG75A, CPG120A, and CPG170A). The shapes of the hysteresis loop in the adsorption isotherm of nitrogen for the Vycor and the CPG75A changed with temperature, whereas those for the CPG120A and the CPG170A remained almost unchanged with temperature. The hysteresis critical points for the Vycor and the CPG75A fell on the common line observed previously for ordered mesoporous silicas. On the other hand, the hysteresis critical points for the CPG120A and the CPG170A deviated appreciably from the common line. This strongly suggests that capillary evaporation of nitrogen in the interconnected and disordered pores of both the Vycor and the CPG75A follows a cavitation process, at least in the vicinity of their hysteresis critical temperatures, in the same way as that in the cagelike pores of the ordered silicas, whereas the hysteresis critical points in the CPG120A and the CPG170A have an origin different from that in the cagelike pores. The desorption scanning curves for the CPG75A indicated the nonindependence of the porous domains. On the other hand, for both the CPG120A and the CPG170A, we obtained scanning curves that are expected from the independent domain theory. All these results suggest that sample spanning transitions in capillary condensation and evaporation take place inside the interconnected pores of both the CPG120A and the CPG170A.

  2. Objective assessment of the aesthetic outcomes of breast cancer treatment: toward automatic localization of fiducial points on digital photographs

    Science.gov (United States)

    Udpa, Nitin; Sampat, Mehul P.; Kim, Min Soon; Reece, Gregory P.; Markey, Mia K.

    2007-03-01

    The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have been developed. They are computed from the distances between manually identified fiducial points on digital photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations determined from the image database is used to reduce the number of false positive detections from the matched filter operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on the calculation of the Breast Retraction Assessment was also evaluated.
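
    A sketch of the matched-filter stage described above, using normalized cross-correlation against a small bank of Gaussian and Laplacian-of-Gaussian templates and weighting the response by a prior probability map; the function names, template size and sigma values are illustrative assumptions, not the paper's parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter, gaussian_laplace
        from skimage.feature import match_template

        def kernel(size, sigma, use_log=False):
            # Gaussian or LoG template built by filtering a unit impulse
            impulse = np.zeros((size, size))
            impulse[size // 2, size // 2] = 1.0
            return gaussian_laplace(impulse, sigma) if use_log else gaussian_filter(impulse, sigma)

        def detect_nipple(image, prior_map, sigmas=(4, 6, 8), size=33):
            score = np.full(image.shape, -np.inf)
            for s in sigmas:
                for use_log in (False, True):
                    r = match_template(image, kernel(size, s, use_log), pad_input=True)
                    score = np.maximum(score, r)
            score = score * prior_map   # suppress false positives outside likely locations
            return np.unravel_index(np.argmax(score), score.shape)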

  3. Melting point of yttria

    International Nuclear Information System (INIS)

    Skaggs, S.R.

    1977-06-01

    Fourteen samples of 99.999 percent Y2O3 were melted near the focus of a 250-W CO2 laser. The average value of the observed melting point along the solid-liquid interface was 2462 ± 19 °C. Several of these same samples were then melted in ultrahigh-purity oxygen, nitrogen, helium, or argon and in water vapor. No change in the observed temperature was detected, with the exception of a 20 °C increase in temperature from air to helium gas. Post-test examination of the sample characteristics, clarity, sphericity, and density is presented, along with composition. It is suggested that yttria is superior to alumina as a secondary melting-point standard.

  4. Evaluation of point mutations in dystrophin gene in Iranian Duchenne and Becker muscular dystrophy patients: introducing three novel variants.

    Science.gov (United States)

    Haghshenas, Maryam; Akbari, Mohammad Taghi; Karizi, Shohreh Zare; Deilamani, Faravareh Khordadpoor; Nafissi, Shahriar; Salehi, Zivar

    2016-06-01

    Duchenne and Becker muscular dystrophies (DMD and BMD) are X-linked neuromuscular diseases characterized by progressive muscular weakness and degeneration of skeletal muscles. Approximately two-thirds of the patients have large deletions or duplications in the dystrophin gene and the remaining one-third have point mutations. This study was performed to evaluate point mutations in Iranian DMD/BMD male patients. A total of 29 DNA samples from patients who did not show any large deletion/duplication mutations following multiplex polymerase chain reaction (PCR) and multiplex ligation-dependent probe amplification (MLPA) screening were sequenced for detection of point mutations in exons 50-79. Exon 44 was also sequenced in one sample in which a false-positive deletion had been detected by the MLPA method. Cycle sequencing revealed four nonsense, one frameshift and two splice site mutations as well as two missense variants.

  5. Learning objects and interactive whiteboards: an evaluation proposal of learning objects for mathematics teaching

    Directory of Open Access Journals (Sweden)

    Silvio Henrique Fiscarelli

    2016-05-01

    Current classroom conditions tend toward a one-way learning process based on teacher exposition, which makes learning a mechanical rather than a meaningful activity. One way to improve the quality of teaching is to innovate methodologies and vary the forms of presenting information to students, such as using technology in the teaching process. The interactive whiteboard (IWB) is one of the technologies being implemented in Brazilian schools, and "learning objects" (LOs) are a promising way to add value to IWB use in the classroom. However, one problem is that LOs are often not fully suited to the dynamics of the IWB, whether from a functional or a pedagogical point of view. The objective of this study is to analyze and propose a set of indicators for evaluating learning objects for use in conjunction with interactive whiteboards. The selection and definition of the evaluation indicators was carried out from a literature review on the subject and based on experiences of IWB use in a municipal elementary school. After defining the set of indicators, an evaluation of a sample of 30 LOs used for teaching mathematics in the 3rd grade of elementary school was conducted. The results of the evaluation indicate that the proposed indicators are suitable for a pre-analysis of LOs and assist in the process of selecting them.

  6. Preliminary studies on DNA retardation by MutS applied to the detection of point mutations in clinical samples

    International Nuclear Information System (INIS)

    Stanislawska-Sachadyn, Anna; Paszko, Zygmunt; Kluska, Anna; Skasko, Elzibieta; Sromek, Maria; Balabas, Aneta; Janiec-Jankowska, Aneta; Wisniewska, Alicja; Kur, Jozef; Sachadyn, Pawel

    2005-01-01

    MutS ability to bind DNA mismatches was applied to the detection of point mutations in PCR products. MutS recognized mismatches of one to five nucleotides and retarded the electrophoretic migration of mismatched DNA. The electrophoretic detection of insertions/deletions above three nucleotides is also possible without MutS, thanks to the DNA mobility shift caused by the presence of large insertion/deletion loops in the heteroduplex DNA. Thus, the method enables the search for a broad range of mutations, from single up to several nucleotides. The mobility shift assays were carried out in polyacrylamide gels stained with SYBR-Gold. One assay required 50-200 ng of PCR product and 1-3 μg of Thermus thermophilus his6-MutS protein. The advantages of this approach are the small amounts of DNA required for the examination, simple and fast staining, no demand for PCR product purification, and no labelling or radioisotopes required. The method was tested in the detection of cancer-predisposing mutations in the RET, hMSH2, hMLH1, BRCA1, BRCA2 and NBS1 genes. The approach appears to be promising in screening for unknown point mutations

  7. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
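
    The estimator in the first sentence is simple enough to state in two lines; the basal area factor and the tallied critical heights below are illustrative numbers only.

        # critical height sampling: per-point volume estimate equals the HPS
        # basal area factor times the sum of the tallied critical heights
        F = 2.0                                      # basal area factor, m^2/ha (assumed)
        critical_heights = [11.4, 8.9, 14.2, 10.1]   # metres, illustrative tally
        volume_m3_per_ha = F * sum(critical_heights)
        print(volume_m3_per_ha)                      # 89.2 m^3/ha at this point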

  8. SU-E-T-72: A Retrospective Correlation Analysis On Dose-Volume Control Points and Treatment Outcomes

    Energy Technology Data Exchange (ETDEWEB)

    Roy, A; Nohadani, O [Northwestern University, Evanston, IL (United States); Refaat, T; Bacchus, I; Cutright, D; Sathiaseelan, V; Mittal, B [Northwestern University, Chicago, IL (United States)

    2015-06-15

    Purpose: To quantify correlation between dose-volume control points and treatment outcomes. Specifically, two outcomes are analyzed: occurrence of radiation-induced dysphagia and target complications. The results inform the treatment planning process when competing dose-volume criteria require relaxation. Methods: 32 patients, treated with whole-field sequential intensity modulated radiation therapy during the 2009–2010 period, are considered for this study. Acute dysphagia, categorized into 3 grades, is observed in all patients: 3 patients in grade 1, 17 in grade 2, and 12 in grade 3. Ordinal logistic regression is employed to establish correlations between grades of dysphagia and dose to the cervico-thoracic esophagus. In particular, minimum (Dmin), mean (Dmean), and maximum (Dmax) dose control points are analyzed. Additionally, target complication, which includes local-regional recurrence and/or distant metastasis, is observed in 4 patients. Binary logistic regression is used to quantify correlation between target complication and four dose control points, namely the ICRU-recommended dose control points D2, D50, D95, and D98. Results: For correlation with dysphagia, Dmin to the cervico-thoracic esophagus is statistically significant (p-value = 0.005). Additionally, Dmean to the cervico-thoracic esophagus is also significantly associated with dysphagia (p-value = 0.012). However, no correlation was observed between Dmax and dysphagia (p-value = 0.263). For target complications, D50 on the target is a statistically significant dose control point (p-value = 0.032). No correlations were observed between treatment complications and D2 (p-value = 0.866), D95 (p-value = 0.750), or D98 (p-value = 0.710) on the target. Conclusion: Significant correlations are observed between radiation-induced dysphagia and Dmean (and Dmin) to the cervico-thoracic esophagus. Additionally, correlation between target complications and median dose to target
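
    The binary part of the analysis amounts to a one-covariate logistic regression; a minimal sketch with hypothetical, illustrative data (not the study's values) is shown below.

        import numpy as np
        import statsmodels.api as sm

        # hypothetical per-patient D50 to the target (Gy) and complication flags
        d50 = np.array([60.1, 62.3, 58.7, 61.0, 59.5, 63.2, 60.8, 57.9])
        complication = np.array([0, 1, 0, 0, 1, 1, 1, 0])

        X = sm.add_constant(d50)
        fit = sm.Logit(complication, X).fit(disp=0)
        print(fit.pvalues[1])   # p-value for the D50 dose control point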

  9. SU-E-T-72: A Retrospective Correlation Analysis On Dose-Volume Control Points and Treatment Outcomes

    International Nuclear Information System (INIS)

    Roy, A; Nohadani, O; Refaat, T; Bacchus, I; Cutright, D; Sathiaseelan, V; Mittal, B

    2015-01-01

    Purpose: To quantify correlation between dose-volume control points and treatment outcomes. Specifically, two outcomes are analyzed: occurrence of radiation-induced dysphagia and target complications. The results inform the treatment planning process when competing dose-volume criteria require relaxation. Methods: 32 patients, treated with whole-field sequential intensity modulated radiation therapy during the 2009–2010 period, are considered for this study. Acute dysphagia, categorized into 3 grades, is observed in all patients: 3 patients in grade 1, 17 in grade 2, and 12 in grade 3. Ordinal logistic regression is employed to establish correlations between grades of dysphagia and dose to the cervico-thoracic esophagus. In particular, minimum (Dmin), mean (Dmean), and maximum (Dmax) dose control points are analyzed. Additionally, target complication, which includes local-regional recurrence and/or distant metastasis, is observed in 4 patients. Binary logistic regression is used to quantify correlation between target complication and four dose control points, namely the ICRU-recommended dose control points D2, D50, D95, and D98. Results: For correlation with dysphagia, Dmin to the cervico-thoracic esophagus is statistically significant (p-value = 0.005). Additionally, Dmean to the cervico-thoracic esophagus is also significantly associated with dysphagia (p-value = 0.012). However, no correlation was observed between Dmax and dysphagia (p-value = 0.263). For target complications, D50 on the target is a statistically significant dose control point (p-value = 0.032). No correlations were observed between treatment complications and D2 (p-value = 0.866), D95 (p-value = 0.750), or D98 (p-value = 0.710) on the target. Conclusion: Significant correlations are observed between radiation-induced dysphagia and Dmean (and Dmin) to the cervico-thoracic esophagus. Additionally, correlation between target complications and median dose to target

  10. Lead preconcentration in synthetic samples with triton x-114 in the cloud point extraction and analysis by atomic absorption (EAAF)

    International Nuclear Information System (INIS)

    Zegarra Pisconti, Marixa; Cjuno Huanca, Jesus

    2015-01-01

    A methodology was developed for the preconcentration of lead in water samples: dithizone, previously dissolved in the nonionic surfactant Triton X-114, was added as a complexing agent until the critical micelle concentration was formed and the cloud point temperature was reached. The centrifuged system gave a precipitate with high concentrations of Pb(II) that was measured by flame atomic absorption spectroscopy (EAAF). The method proved feasible to implement for the preconcentration and analysis of Pb in aqueous samples with concentrations below 1 ppm. Several parameters were evaluated, yielding a recovery of 89.8%. (author)

  11. Direct protein quantification in complex sample solutions by surface-engineered nanorod probes

    KAUST Repository

    Schrittwieser, Stefan

    2017-06-30

    Detecting biomarkers from complex sample solutions is the key objective of molecular diagnostics. Being able to do so in a simple approach that does not require laborious sample preparation, sophisticated equipment and trained staff is vital for point-of-care applications. Here, we report on the specific detection of the breast cancer biomarker sHER2 directly from serum and saliva samples by a nanorod-based homogeneous biosensing approach, which is easy to operate as it only requires mixing of the samples with the nanorod probes. By careful nanorod surface engineering and homogeneous assay design, we demonstrate that the formation of a protein corona around the nanoparticles does not limit the applicability of our detection method, but on the contrary enables us to conduct in-situ reference measurements, thus further strengthening the point-of-care applicability of our method. Making use of sandwich assays on top of the nanorods, we obtain a limit of detection of 110 pM and 470 pM in 10-fold diluted spiked saliva and serum samples, respectively. In conclusion, our results open up numerous applications in direct protein biomarker quantification, specifically in point-of-care settings where resources are limited and ease-of-use is of the essence.

  12. Direct protein quantification in complex sample solutions by surface-engineered nanorod probes

    KAUST Repository

    Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Schotter, Joerg

    2017-01-01

    Detecting biomarkers from complex sample solutions is the key objective of molecular diagnostics. Being able to do so in a simple approach that does not require laborious sample preparation, sophisticated equipment and trained staff is vital for point-of-care applications. Here, we report on the specific detection of the breast cancer biomarker sHER2 directly from serum and saliva samples by a nanorod-based homogeneous biosensing approach, which is easy to operate as it only requires mixing of the samples with the nanorod probes. By careful nanorod surface engineering and homogeneous assay design, we demonstrate that the formation of a protein corona around the nanoparticles does not limit the applicability of our detection method, but on the contrary enables us to conduct in-situ reference measurements, thus further strengthening the point-of-care applicability of our method. Making use of sandwich assays on top of the nanorods, we obtain a limit of detection of 110 pM and 470 pM in 10-fold diluted spiked saliva and serum samples, respectively. In conclusion, our results open up numerous applications in direct protein biomarker quantification, specifically in point-of-care settings where resources are limited and ease-of-use is of the essence.

  13. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A point-array structured light pattern is projected sequentially onto the reference plane and onto the object surface under test, forming a pair of point-array images. A mathematical model is established to formalize the imaging process with the pair of point arrays. This formulation describes the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point in the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable to 3D imaging of object surfaces with complex topology or large height discontinuities.
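
    As a toy version of the displacement-to-range relationship, the sketch below maps a measured lateral shift to a height with a small-height triangulation approximation; the geometry, symbols and numbers are assumptions for illustration, not the paper's imaging model.

        def height_from_shift(shift_px, pixel_size_mm, L_mm, B_mm):
            # h ~ L*d/(B + d): d is the lateral shift of a projected point
            # between the reference plane and the object surface
            d = shift_px * pixel_size_mm
            return L_mm * d / (B_mm + d)

        # a point shifted 3.2 px with 0.01 mm pixels, camera at 300 mm,
        # projector-camera baseline 100 mm (all illustrative numbers)
        print(height_from_shift(3.2, 0.01, 300.0, 100.0))   # ~0.096 mm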

  14. Evidence of magnetic dipolar interaction in micrometric powders of the Fe{sub 50}Mn{sub 10}Al{sub 40} system: Melted alloys

    Energy Technology Data Exchange (ETDEWEB)

    Perez Alcazar, G.A., E-mail: gpgeperez@gmail.com [Departamento de Fisica, Universidad del Valle, A. A. 25360, Cali (Colombia); Unidad Asociada ICMM-IMA, Apdo. 155, 28230 Las Rozas, Madrid (Spain); Zamora, L.E. [Departamento de Fisica, Universidad del Valle, A. A. 25360, Cali (Colombia); Unidad Asociada ICMM-IMA, Apdo. 155, 28230 Las Rozas, Madrid (Spain); Tabares, J.A.; Piamba, J.F. [Departamento de Fisica, Universidad del Valle, A. A. 25360, Cali (Colombia); Gonzalez, J.M. [Unidad Asociada ICMM-IMA, Apdo. 155, 28230 Las Rozas, Madrid (Spain); Greneche, J.M. [LUNAM, Universite du Maine, Institut des Molecules et Materiaux du Mans, UMR CNRS 6283, 72085 Le Mans Cedex 9 (France); Martinez, A. [Instituto de Magnetismo Aplicado, P.O. Box 155, 28230 Las Rozas (Spain); Romero, J.J. [Instituto de Ceramica y Vidrio, CSIC, C/Kelsen 5, 28049, Madrid (Spain); Marco, J.F. [Instituto de Quimica Fisica Rocasolano, CSIC, C/Serrano 119, 28006 Madrid (Spain)

    2013-02-15

    Powders of a melted disordered Fe50Mn10Al40 alloy were separated into different mean particle sizes and characterized magnetically and structurally. All the samples are BCC and show the same nanostructure. Particles larger than 250 μm showed a lamellar shape, whereas smaller particles exhibited a more regular form. All the samples are ferromagnetic at room temperature and showed reentrant spin-glass (RSG) and superparamagnetic (SP)-like behaviors between 30 and 60 K and between 265 and >280 K, respectively, as a function of frequency and particle size. The freezing temperature increases with increasing particle size, while the blocking temperature decreases with particle size. The origin of these magnetic phenomena lies in the internal disordered character of the samples and the competitive interaction of Fe and Mn atoms. The increase of the critical freezing temperature with increasing mean particle size is due to the increase of the magnetic dipolar interaction between the magnetic moment of each particle and the field produced by the magnetic moments of the surrounding particles. - Highlights: • The effect of particle size in microsized powders of the Fe50Mn10Al40 melted disordered alloy is studied. • Dipolar magnetic interaction between particles exists and changes with the particle size. • For all particle sizes the reentrant spin-glass and superparamagnetic-like phases exist. • RSG and SP critical temperatures increase with increasing dipolar magnetic interaction (mean particle size).

  15. Adobe Boxes: Locating Object Proposals Using Object Adobes.

    Science.gov (United States)

    Fang, Zhiwen; Cao, Zhiguo; Xiao, Yang; Zhu, Lei; Yuan, Junsong

    2016-09-01

    Despite previous efforts on object proposals, the detection rates of existing approaches are still not satisfactory. To address this, we propose Adobe Boxes to efficiently locate potential objects with fewer proposals, by searching for object adobes, the salient object parts that are easy to perceive. Because of the visual difference between an object and its surroundings, an object adobe obtained from a local region has a high probability of being part of an object and is capable of depicting the locative information of the proto-object. Our approach comprises three main procedures. First, coarse object proposals are acquired by employing randomly sampled windows. Then, based on local-contrast analysis, the object adobes are identified within the enlarged bounding boxes that correspond to the coarse proposals. The final object proposals are obtained by converging the bounding boxes to tightly surround the object adobes. Meanwhile, our object adobes can also refine the detection rate of most state-of-the-art methods as a refinement approach. Extensive experiments on four challenging datasets (PASCAL VOC2007, VOC2010, VOC2012, and ILSVRC2014) demonstrate that the detection rate of our approach generally outperforms the state-of-the-art methods, especially with a relatively small number of proposals. The average time consumed on one image is about 48 ms, which nearly meets the real-time requirement.

  16. Core drilling of deep drillhole OL-KR50 at Olkiluoto in Eurajoki 2008

    International Nuclear Information System (INIS)

    Toropainen, V.

    2009-02-01

    As a part of the confirming site investigations at Olkiluoto, Suomen Malmi Oy (Smoy) core drilled 939.33 m and 45.44 m deep drillholes with a diameter of 75.7 mm at Olkiluoto in September - November 2008. The identification numbers of the drillholes are OL-KR50 and OL-KR50B, respectively. A set of monitoring measurements and samplings from the drilling and returning water was carried out during the drilling. Both the volume and the electric conductivity of the returning and drilling water were recorded. The drill rig was computer controlled and the computer recorded drilling parameters during drilling. The objective of the measurements was to obtain more information about bedrock and groundwater properties. Sodium fluorescein was used as a label agent in the drilling water. The total volumes of the used drilling and washing water were 1135 m³ and 20 m³ in the drillholes OL-KR50 and OL-KR50B, respectively. The measured volume of the returning water in the drillhole OL-KR50 was 954 m³. The deviation of the drillholes was measured with the deviation measuring instruments EMS and Maxibor II. Uniaxial compressive strength, Young's Modulus and Poisson's ratio were measured from the core samples. The average uniaxial compressive strength was 129.7 MPa, the average Young's Modulus was 45.8 GPa and the average Poisson's ratio was 0.15. The main rock types were veined and diatexitic gneisses, pegmatitic granite and tonalitic-granodioritic-granitic gneiss. The average fracture frequency is 2.0 pcs/m in drillhole OL-KR50 and 3.6 pcs/m in the drillhole OL-KR50B. The average RQD values are 96.1 % and 94.3 %, respectively. 39 fractured zones were penetrated by drillhole OL-KR50 and four by drillhole OL-KR50B. (orig.)

  17. Evaluation of mixing downstream of tees in duct systems with respect to single point representative air sampling.

    Science.gov (United States)

    Kim, Taehong; O'Neal, Dennis L; Ortiz, Carlos

    2006-09-01

    Air duct systems in nuclear facilities must be monitored with continuous sampling in case of an accidental release of airborne radionuclides. The purpose of this work is to identify the air sampling locations where the coefficients of variation of velocity and contaminant concentration fall below the 20% required by the American National Standards Institute/Health Physics Society N13.1-1999. Experiments on velocity and tracer gas concentration were conducted on a generic "T" mixing system which included combinations of three sub ducts, one main duct, and air velocities from 0.5 to 2 m/s (100 to 400 fpm). The experimental results suggest that turbulent mixing brings the velocity coefficients of variation within the accepted range after 6 hydraulic diameters downstream of the T-junction. About 95% of the cases achieved coefficients of variation below 10% by 6 hydraulic diameters. Moreover, above a velocity ratio (velocity in the sub duct/velocity in the main duct) of 2, velocity profiles became uniform within a shorter distance downstream of the T-junction as the velocity ratio increased. For the tracer gas concentration, the distance needed for the coefficients of variation to drop below 20% decreased with increasing velocity ratio due to the sub duct airflow momentum. The results may apply to other duct systems with similar geometries and, ultimately, be a basis for selecting a proper sampling location under the requirements of single point representative sampling.
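
    The acceptance criterion itself is a one-line statistic; a tiny sketch with made-up traverse values:

        import numpy as np

        def cov_percent(samples):
            # coefficient of variation across traverse points (the ANSI/HPS
            # N13.1-1999 acceptance criterion requires <= 20%)
            samples = np.asarray(samples, dtype=float)
            return 100.0 * samples.std(ddof=1) / samples.mean()

        # illustrative tracer-gas concentrations across the duct cross-section
        print(cov_percent([4.8, 5.1, 5.0, 4.9, 5.3, 4.7]))   # ~4.4%, acceptable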

  18. [Determination of biphenyl ether herbicides in water using HPLC with cloud-point extraction].

    Science.gov (United States)

    He, Cheng-Yan; Li, Yuan-Qian; Wang, Shen-Jiao; Ouyang, Hua-Xue; Zheng, Bo

    2010-01-01

    To determine residues of multiple biphenyl ether herbicides simultaneously in water using high performance liquid chromatography (HPLC) with cloud-point extraction. The residues of eight biphenyl ether herbicides (bentazone, fomesafen, acifluorfen, aclonifen, bifenox, fluoroglycofenethy, nitrofen and oxyfluorfen) in water samples were extracted by cloud-point extraction with Triton X-114. The analytes were separated and determined using reverse phase HPLC with an ultraviolet detector at 300 nm. Optimized conditions for the pretreatment of the water samples and for the chromatographic separation were applied. There was a good linear correlation between the concentration and the peak area of the analytes in the range of 0.05-2.00 mg/L (r = 0.9991-0.9998). Except for bentazone, the spiked recoveries of the biphenyl ether herbicides in the water samples ranged from 80.1% to 100.9%, with relative standard deviations ranging from 2.70% to 6.40%. The detection limit of the method ranged from 0.10 μg/L to 0.50 μg/L. The proposed method is simple, rapid and sensitive, and can meet the requirements for the simultaneous determination of multiple biphenyl ether herbicides in natural waters.

  19. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because their optimization loops involve three phases and rely on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which of the infill sampling criterion and the boundary sampling criterion is dominant. Also, the method guarantees the accuracy of the surrogate model because, unlike super-EGO, sample points are not clustered within extremely small regions. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
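
    A rough sketch of such a criterion, assuming Gaussian process surrogates for the objective f and a constraint g <= 0 and a random candidate search; the scoring rule below is a simplified stand-in for the paper's united criterion, and all names and parameters are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(2)

        def next_sample(X, y, g, bounds, n_cand=2000):
            gp_f = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            gp_g = GaussianProcessRegressor(normalize_y=True).fit(X, g)
            cand = rng.uniform(bounds[0], bounds[1], size=(n_cand, X.shape[1]))
            mu_f, sd_f = gp_f.predict(cand, return_std=True)
            mu_g, sd_g = gp_g.predict(cand, return_std=True)
            feas = mu_g <= 0.0
            # feasible candidates: low predicted objective plus high uncertainty;
            # infeasible ones: proximity to the constraint boundary
            score = np.where(feas, -mu_f + sd_f, -np.abs(mu_g) / (sd_g + 1e-12))
            return cand[np.argmax(score)]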

  20. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values of the split point could have been considered ... while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.

  1. Topology of random points. Yogeshwaran, D.

    Indian Academy of Sciences (India)

    Balls grow at unit rate centred at the points of the point cloud/process. ... The idea of persistence: keep track of the births and deaths of topological features. ... holes, Betti numbers, etc.; one will be more interested in the distribution of such objects on ...

  2. Comparison of prevalence estimation of Mycobacterium avium subsp. paratuberculosis infection by sampling slaughtered cattle with macroscopic lesions vs. systematic sampling.

    Science.gov (United States)

    Elze, J; Liebler-Tenorio, E; Ziller, M; Köhler, H

    2013-07-01

    The objective of this study was to identify the most reliable approach for prevalence estimation of Mycobacterium avium ssp. paratuberculosis (MAP) infection in clinically healthy slaughtered cattle. Sampling of macroscopically suspect tissue was compared to systematic sampling. Specimens of ileum, jejunum, mesenteric and caecal lymph nodes were examined for MAP infection using bacterial microscopy, culture, histopathology and immunohistochemistry. MAP was found most frequently in caecal lymph nodes, but sampling more tissues optimized the detection rate. Examination by culture was most efficient, while combination with histopathology increased the detection rate slightly. MAP was detected in 49/50 animals with macroscopic lesions, representing 1.35% of the slaughtered cattle examined. Of 150 systematically sampled macroscopically non-suspect cows, 28.7% were infected with MAP. This indicates that the majority of MAP-positive cattle are slaughtered without evidence of macroscopic lesions and before clinical signs occur. For reliable prevalence estimation of MAP infection in slaughtered cattle, systematic random sampling is essential.

  3. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming

    2014-08-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.
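
    A bare-bones version of the FPO relaxation in the unit square, with the true farthest-point computation (done via the Delaunay triangulation in the original 2D method) replaced by a dense random candidate search; sizes and iteration counts are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(3)

        def fpo(points, iters=20, n_cand=4000):
            pts = points.copy()
            for _ in range(iters):
                for i in range(len(pts)):
                    rest = np.delete(pts, i, axis=0)   # remove one point ...
                    tree = cKDTree(rest)
                    cand = rng.uniform(0.0, 1.0, size=(n_cand, 2))
                    d, _ = tree.query(cand)            # distance to nearest remaining point
                    pts[i] = cand[np.argmax(d)]        # ... reinsert at the emptiest spot
            return pts

        pts = fpo(rng.uniform(size=(100, 2)))   # 100 points with blue-noise spacing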

  4. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter

    2014-01-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  5. Sucralfate and Lidocaine: Antacid 50:50 solution in Post Esophageal Variceal Band Ligation Pain.

    Science.gov (United States)

    Hafeez, Muhammad; Kadir, Ehsan; Aijaz, Anjum

    2016-01-01

    To compare the effectiveness for pain relief of sucralfate and of lidocaine:antacid 50:50 solution in post esophageal variceal band ligation pain. All patients who had undergone esophageal variceal band ligation (EVBL) were included in the study. Patients unwilling to be included in the study or those who did not have post-EVBL pain were excluded. Patients with post-EVBL pain were divided into two groups: one group was given sucralfate and the other was given lidocaine:antacid 50:50 solution. Both were asked about the duration of pain relief after the medication. The results were analyzed in SPSS 23. An independent-samples t-test was performed to find out whether the duration of pain relief differed significantly between the two groups. Out of 110 patients who had EVBL, 66 (60.00%) had pain and 44 (40.00%) were pain free. In the pain group, 46 (69.7%) were given sucralfate and 20 (30.3%) were given lidocaine:antacid 50:50 solution. The mean duration of pain relief in the two groups was 2.78 days (SD ± 2.096) and 2.5 days (SD ± 0.76), respectively. The independent-samples t-test revealed no statistically significant difference in the duration of pain relief between these two groups, with a p-value of 0.426. Both sucralfate and lidocaine:antacid 50:50 solution are effective in relieving post-EVBL pain; however, no statistically significant difference in the duration of pain relief was detected between the two treatment groups.

  6. Duality and calculus of convex objects (theory and applications)

    International Nuclear Information System (INIS)

    Brinkhuis, Ya; Tikhomirov, V M

    2007-01-01

    A new approach to convex calculus is presented, which allows one to treat duality and calculus for various convex objects from a single point of view. This approach is based on the possibility of associating with each convex object (a convex set or a convex function) a certain convex cone, without loss of information about the object. From the duality theorem for cones, duality theorems for other convex objects are deduced as consequences. The theme 'Duality formulae and the calculus of convex objects' is exhausted (from a certain precisely formulated point of view). Bibliography: 5 titles.

  7. Consistency of the national realization of dew-point temperature using NIS standard humidity generators

    Directory of Open Access Journals (Sweden)

    El-Galil Doaa Abd

    2017-01-01

    A comparison of two standard humidity generators, a two-temperature (2-T) and a one-temperature (1-T) generator developed by the National Institute for Standards (NIS), has been performed using a transfer standard chilled-mirror hygrometer and measurement procedures to realize dew-point temperature Td in the range from −50 °C to 0 °C. The main objective of this comparison was to compare the realizations of dew-point temperature and to establish the level of consistency between the two generators. The level of consistency between two measurements is expressed by the difference between the measured values, m1 − m2, and the expanded pair uncertainty of this difference, Up [1]. The comparison measurements revealed dew-point temperature differences of 0.02 °C and 0.07 °C with expanded pair uncertainties of ±0.09 °C and ±0.15 °C.

  8. Design and verification of the miniature optical system for small object surface profile fast scanning

    Science.gov (United States)

    Chi, Sheng; Lee, Shu-Sheng; Huang, Jen-Yu; Lai, Ti-Yu; Jan, Chia-Ming; Hu, Po-Chi

    2016-04-01

    With the progress of optical technologies, a variety of commercial 3D surface contour scanners are now on the market. Most of them are used for reconstructing the surface profiles of molds or mechanical objects larger than 50 mm×50 mm×50 mm, with a scanning system size of about 300 mm×300 mm×100 mm. Few optical systems are commercialized for fast surface-profile scanning of small objects of less than 10 mm×10 mm×10 mm. Therefore, a miniature optical system has been designed and developed in this work for this purpose. Since the most common scanning method for such systems is line-scan technology, we have developed a pseudo-phase-shifting digital projection technique that combines projected fringes with a phase reconstruction method. A projector was used to project digital fringe patterns onto the object, and the fringe intensity images of the reference plane and of the sample object were recorded by a CMOS camera. The phase difference between the plane and the object can be calculated from the fringe images, and the surface profile of the object was reconstructed using these phase differences. The traditional phase-shifting method is accomplished with a PZT actuator or a precisely controlled motor that adjusts the light source or grating, and this is one of the limitations for high-speed scanning. Compared with the traditional optical setup, we utilized a micro projector to project the digital fringe patterns onto the sample. This reduced the phase-shifting processing time, and the controlled phase differences between the shifted patterns became more precise. In addition, the optical path was designed for a portable scanning system, to minimize the size and reduce the number of system components. A screwdriver section of about 7 mm×5 mm×5 mm was scanned and its surface profile successfully restored. The experimental results showed that the measurement area of our system can be smaller than 10 mm×10 mm, and the precision reached to
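
    The phase-reconstruction step described above is commonly done with the standard four-step formula; a minimal sketch, assuming shifts of 0°, 90°, 180° and 270° and a linear phase-to-height calibration (both assumptions, since the paper's exact model is not given):

        import numpy as np

        def phase_map(I1, I2, I3, I4):
            # wrapped phase from four fringe images shifted by 90 degrees each
            return np.arctan2(I4 - I2, I1 - I3)

        def height_from_phase(phi_obj, phi_ref, k_mm_per_rad):
            # height proportional to the object/reference phase difference;
            # k_mm_per_rad is a calibration constant (assumed linear model)
            dphi = np.angle(np.exp(1j * (phi_obj - phi_ref)))   # wrap to (-pi, pi]
            return k_mm_per_rad * dphi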

  9. Random vs. systematic sampling from administrative databases involving human subjects.

    Science.gov (United States)

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-square tests and unpaired t-tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strength of the agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
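
    The two designs differ only in how indices are drawn; a minimal sketch against a synthetic stand-in for the membership frame (all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(4)
        N, n = 3000, 200                    # frame and sample sizes (illustrative)
        ages = rng.normal(45, 12, size=N)   # stand-in for the ordered database

        srs = rng.choice(ages, size=n, replace=False)   # simple random sample
        step = N // n
        start = rng.integers(step)                      # random start
        ss = ages[start::step][:n]                      # systematic sample

        # both point estimates should fall within the 7-unit tolerance above
        print(ages.mean(), srs.mean(), ss.mean())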

  10. Mortality 1950-1964 and disease and survivorship 1958-1964 among sample members aged 50 years or older, October 1, 1950

    Energy Technology Data Exchange (ETDEWEB)

    Ciocco, A

    1965-01-01

    Persons who were 50 years or older in 1950, or 45 years or older at the time of the atomic bomb (ATB), constitute that portion of the Life Span Study sample subject to the highest disability and mortality risks, from malignancies as well as from other chronic disease conditions. Furthermore, this age class is rapidly approaching the modal age of death. Hence, whatever late effects of exposure to the bomb in 1945 are to occur, they should be perceptible by this time. With this in view, mortality and the occurrence of selected diseases subsequent to 1950 have been compared among designated exposure groups for the following purposes: to specify the size and trend of differences among the exposure groups, and to point up some of the issues which must be met in planning future statistical-epidemiologic studies at ABCC. The three exposure groups compared were: persons within 1400 m of the hypocenter (0 to 1399 m); those beyond 1400 m (1400 to 9999 m); and persons not in the city ATB. Each group has been examined for: cumulative mortality patterns from 1 October 1950 to 30 September 1964, for all causes of death and for deaths from tuberculosis, lung cancer, stomach cancer, and leukemia; occupation characteristics and their relation to mortality; selective factors related to inclusion in the ABCC-JNIH Adult Health Study, and prevalence and incidence of, and survivorship from, tuberculosis among participants in the Adult Health Study, 1958 to 1964; and selective factors related to frequency of autopsy, 1961 to 1964. 11 references, 2 figures, 22 tables.

  11. Very Luminous X-ray Point Sources in Starburst Galaxies

    Science.gov (United States)

    Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.

    Extranuclear X-ray point sources with luminosities above 10³⁹ erg/s are quite common in external galaxies, occurring in elliptical, disk and dwarf galaxies with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10³⁷-10³⁹ erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.

  12. The study of microplasticity mechanism in Ti-50 wt.%Nb alloy with high hydrogen content

    International Nuclear Information System (INIS)

    Golovin, I.S.; Kollerov, M.U.; Schinaeva, E.V.

    1996-01-01

    An upper yield point (∼700 MPa) appears in the compression test curves (ε = 0.024 s⁻¹) of b.c.c. Nb-50 wt.% Ti as the hydrogen content increases from 0 to 0.2 wt.% and beyond, and leads to a non-monotonic increase of the compressive lower yield stress from 400 to 550 MPa. Taking into account the close connection between macro- and microplasticity of metallic materials, the low-frequency (∼2 Hz) amplitude-dependent internal friction (ADIF) spectra (γ = 1-60×10⁻⁵) of hydrogenized Nb-50 wt.% Ti and Nb samples are studied. The ADIF investigation of the closed 'loading-unloading' hysteresis loop shows that its width depends on the hydrogen content, which evidences dislocation unpinning from hydrogen atmospheres during the loading half-cycle. The study of the ADIF spectrum for samples with different hydrogen content before and after torsion deformation (γ ∼ 2%) shows a sharp increase of the IF level at γ = 1-10×10⁻⁵ after ∼1 hour of natural ageing. At that time the ADIF curves change shape from Γ-shaped to U-shaped. The amplitude range of the IF increase depends on the hydrogen content. It is the interaction of hydrogen atoms with dislocations that causes the above-mentioned effect, which has not been observed in hydrogen-free samples. The time estimate for the formation of thermodynamically stable hydrogen atmospheres on dislocations shows that the atmospheres could not follow the dislocations during the compressive tests, which leads to the appearance of the upper yield point. (orig.)

  13. Statistical data evaluation in mobile gamma spectrometry. An optimisation of on-line search strategies in the scenario of lost point sources

    International Nuclear Information System (INIS)

    Hjerpe, T.; Samuelsson, C.

    1999-01-01

    There is a potential risk that hazardous radioactive sources could enter the environment, e.g. via satellite debris, smuggled radioactive goods or lost metal scrap. From a radiation protection point of view there is a need for rapid and reliable methods for locating and identifying such sources. Car-borne and air-borne detector systems are suitable for the task. The premise in this work is a situation where the missing radionuclide is known, which is not an unlikely scenario. The probability that the source is located near a road can be high, thus motivating a car-borne spectrometer system. The main objective is to optimise on-line statistical methods in order to achieve a high probability of locating point sources, or hot spots, while keeping reasonably few false alarms from variations in the natural background radiation. Data were obtained from a car-borne 3-litre NaI(Tl) detector and two point sources located at various distances from the road. The nuclides used were 137Cs and 131I. Spectra were measured stationary on the road, and from these measured spectra we reconstructed spectra applicable to different speeds and sampling times; a sampling time of 3 seconds and a speed of 50 km/h are used in this work. The maximum distance at which a source can be located from the road and still be detected is estimated with four different statistical analysis methods. This distance is called the detection distance, DD. The methods are applied to gross counts in the full-energy peak window. For each method, alarm thresholds have been calculated from background data obtained in Scania (Skaane), in the south of Sweden. The results show a 30-50% difference in DDs. With this semi-theoretical approach, the two sources could be detected from 250 m (137Cs, 6 GBq) and 200 m (131I, 4 GBq). (au)
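
    One common way to set such an alarm threshold from background data is a Gaussian approximation to the Poisson background at a chosen false-alarm rate; the sketch below shows one generic choice, not necessarily one of the four methods compared in the paper.

        import numpy as np
        from scipy.stats import norm

        def alarm_threshold(bg_counts, false_alarm_rate=1e-4):
            mu = np.mean(bg_counts)
            k = norm.isf(false_alarm_rate)   # one-sided Gaussian quantile
            return mu + k * np.sqrt(mu)      # Poisson: variance equals mean

        # e.g. background window counts per 3 s sample at 50 km/h (illustrative)
        bg = np.random.default_rng(5).poisson(420, size=10000)
        print(alarm_threshold(bg))           # alarm if a sample exceeds this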

  14. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    Science.gov (United States)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
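
    A stripped-down version of the idea, using an off-the-shelf smoothed RBF interpolator as the out-of-sample map; the paper's method additionally makes the regularization direction-dependent and iterates jointly with the estimated test labels, none of which is reproduced here (data shapes are placeholders).

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        X_train = np.random.rand(200, 64)   # high-dimensional training samples
        Y_train = np.random.rand(200, 2)    # their supervised embedding coordinates

        f = RBFInterpolator(X_train, Y_train,
                            kernel="thin_plate_spline", smoothing=1.0)
        X_test = np.random.rand(50, 64)
        Y_test = f(X_test)   # embeddings of novel points; classify by nearest
                             # class manifold in the embedded domain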

  15. Development of a simple, sensitive and inexpensive ion-pairing cloud point extraction approach for the determination of trace inorganic arsenic species in spring water, beverage and rice samples by UV-Vis spectrophotometry.

    Science.gov (United States)

    Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail

    2015-08-01

    The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and are found at high concentrations in such samples. This communication describes a new cloud-point extraction (CPE) method for the determination of low quantities of arsenic species, by UV-Visible spectrophotometry (UV-Vis), in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH⁺), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S_blank/m) of 1.14 μg L⁻¹ were obtained from the calibration curve constructed in the range of 4-450 μg L⁻¹, with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Site-Wide Integrated Water Monitoring - Defining and Implementing Sampling Objectives to Support Site Closure - 13060

    International Nuclear Information System (INIS)

    Wilborn, Bill; Knapp, Kathryn; Farnham, Irene; Marutzky, Sam

    2013-01-01

    The Underground Test Area (UGTA) activity is responsible for assessing and evaluating the effects of the underground nuclear weapons tests on groundwater at the Nevada National Security Site (NNSS), formerly the Nevada Test Site (NTS), and implementing a corrective action closure strategy. The UGTA strategy is based on a combination of characterization, modeling studies, monitoring, and institutional controls (i.e., monitored natural attenuation). The closure strategy verifies through appropriate monitoring activities that contaminants of concern do not exceed the SDWA at the regulatory boundary and that adequate institutional controls are established and administered to ensure protection of the public. Other programs conducted at the NNSS supporting the environmental mission include the Routine Radiological Environmental Monitoring Program (RREMP), Waste Management, and the Infrastructure Program. Given the current programmatic and operational demands for various water-monitoring activities at the same locations, and the ever-increasing resource challenges, cooperative and collaborative approaches to conducting the work are necessary. For this reason, an integrated sampling plan is being developed by the UGTA activity to define sampling and analysis objectives, reduce duplication, eliminate unnecessary activities, and minimize costs. The sampling plan will ensure the right data sets are developed to support closure and efficient transition to long-term monitoring. The plan will include an integrated reporting mechanism for communicating results and integrating process improvements within the UGTA activity as well as between other U.S. Department of Energy (DOE) Programs. (authors)

  17. Experimental study and modelling of the well-mixing length. Application to the representativeness of sampling points in duct

    International Nuclear Information System (INIS)

    Alengry, Jonathan

    2014-01-01

    Monitoring of gaseous releases from nuclear installations into the environment, and air cleaning efficiency measurement, are based on regular measurements of contaminant concentrations in outlet chimneys and ventilation systems. The concentration distribution may be heterogeneous at the measuring point if the distance allowed for mixing is insufficient. The question concerns the placement of the sampling point in the duct, and the error relative to the homogeneous concentration when this distance is not respected. This study defines the so-called 'well-mixing length' from laboratory experiments. The bench designed for these tests made it possible to reproduce flows in long circular and rectangular ducts, each including a bend. An optical measurement technique was developed, calibrated and used to measure the concentration distribution of a tracer injected into the flow. The experimental results in the cylindrical duct validated an analytical model based on the convection-diffusion equation for a tracer, and led to proposed models for the well-mixing length and the representativeness of sampling points. In the rectangular duct, the acquired measurements constitute a first database on the evolution of the homogenization of a tracer, with a view to numerical simulations exploring more realistic conditions for in situ measurements. (author) [fr]

  18. Point-of-care hemoglobin testing for postmortem diagnosis of anemia.

    Science.gov (United States)

    Na, Joo-Young; Park, Ji Hye; Choi, Byung Ha; Kim, Hyung-Seok; Park, Jong-Tae

    2018-03-01

    An autopsy involves examination of a body using invasive methods such as dissection, and includes various tests using samples procured during dissection. During medicolegal autopsies, the blood carboxyhemoglobin concentration is commonly measured using the AVOXimeter® 4000 as a point-of-care test. When evaluating a body following hypovolemic shock, characteristics such as reduced livor mortis or an anemic appearance of the viscera can be identified, but these observations are quite subjective. Thus, a more objective test is required for the postmortem diagnosis of anemia. In the present study, the AVOXimeter® 4000 was used to investigate the utility of point-of-care hemoglobin testing. Hemoglobin tests were performed in 93 autopsy cases. The AVOXimeter® 4000 and the BC-2800 Auto Hematology Analyzer were used to test identical samples in 29 of these cases. The results of hemoglobin tests performed with these two devices were statistically similar (r = 0.969). The results of hemoglobin tests using postmortem blood were compared with antemortem test results from medical records in 31 cases, and these results were similar. In 13 of 17 cases of death from internal hemorrhage, hemoglobin levels were lower in the cardiac blood than in blood from the affected body cavity, likely due to compensatory changes induced by antemortem hemorrhage. It is concluded that blood hemoglobin testing may be useful as a point-of-care test for diagnosing postmortem anemia.

  19. Auricular Point Acupressure for Chronic Low Back Pain: A Feasibility Study for 1-Week Treatment

    Directory of Open Access Journals (Sweden)

    Chao-Hsing Yeh

    2012-01-01

    Full Text Available Objectives. The objective of this one-group, repeated-measures study was to explore the acceptance of auricular point acupressure (APA) to reduce chronic low back pain (CLBP) and to estimate minimum clinically important differences (MCIDs) for pain intensity change. Methods. Subjects received a 7-day APA treatment. After appropriate acupoints were identified, vaccaria seeds were carefully taped onto each selected auricular point for 7 days. The Brief Pain Inventory Short Form (BPI) was used to collect outcome data. Results. A total of 74 subjects participated in the study. Ten subjects dropped out, for a retention rate of 87%. Subjects reported a 46% reduction in BPI worst pain and over 50% reductions in BPI average pain, overall pain severity and pain interference by the end of the study, and 62.5% of subjects also reported less pain medication use. The MCIDs for the BPI subscales ranged from 0.70 to 1.86 points, corresponding to a 14.5-24.9% improvement from baseline. Discussion. APA appears to be highly acceptable to patients with CLBP. A sham group is needed in order to differentiate the true effects of APA from the possible psychological effects of more frequent visits by the auricular therapist and patients' expectation of the APA treatment.

  20. Analysis of fingerprint samples, testing various conditions, for forensic DNA identification.

    Science.gov (United States)

    Ostojic, Lana; Wurmbach, Elisa

    2017-01-01

    Fingerprints can be of tremendous value for forensic biology, since they can be collected from a wide variety of evidence types, such as handles of weapons, tools collected in criminal cases, and objects with no apparent staining. DNA obtained from fingerprints varies greatly in quality and quantity, which ultimately affects the quality of the resulting STR profiles. Additional difficulties can arise when fingerprint samples show mixed STR profiles due to handling by multiple persons. After applying a tested protocol for sample collection (swabbing with 5% Triton X-100), DNA extraction (using an enzyme that works at elevated temperatures), and PCR amplification (AmpFlSTR® Identifiler® using 31 cycles), extensive analysis was performed to better understand the challenges inherent to fingerprint samples, with the ultimate goal of developing valuable profiles (≥50% complete). The impact of time on deposited fingerprints was investigated, revealing that while the quality of profiles deteriorated, full STR profiles could still be obtained from samples after 40 days of storage at room temperature. By comparing the STR profiles from fingerprints of the dominant versus the non-dominant hand, we found a slightly better quality from the non-dominant hand, although the difference was not always significant. Substrates seem to have greater effects on fingerprints. Tests on glass, plastic, paper and metal (US Quarter dollar, made of Cu and Ni), common substrates in offices and homes, showed best results for glass, followed by plastic and paper, while almost no profiles were obtained from a Quarter dollar. Important for forensic casework, we also assessed three-person mixtures of touched fingerprint samples. Unlike routinely used approaches for sampling evidence, the surface of an object (a bottle) was sectioned into six equal parts and separate samples were taken from each section. The samples were processed separately for DNA extraction and STR amplification. The results included a few single

  1. Phase contrast STEM for thin samples: Integrated differential phase contrast.

    Science.gov (United States)

    Lazić, Ivan; Bosch, Eric G T; Lazar, Sorin

    2016-01-01

    It has been known since the 1970s that the movement of the center of mass (COM) of a convergent beam electron diffraction (CBED) pattern is linearly related to the (projected) electrical field in the sample. We re-derive a contrast transfer function (CTF) for a scanning transmission electron microscopy (STEM) imaging technique based on this movement from the point of view of image formation, and continue by performing a two-dimensional integration on the two images based on the two components of the COM movement. The resulting integrated COM (iCOM) STEM technique yields a scalar image that is linear in the phase shift caused by the sample and therefore also in the local (projected) electrostatic potential field of a thin sample. We confirm that the differential phase contrast (DPC) STEM technique using a segmented detector with 4 quadrants (4Q) yields a good approximation of the COM movement. Performing a two-dimensional integration, just as for the COM, we obtain an integrated DPC (iDPC) image which is approximately linear in the phase of the sample. Besides deriving the CTFs of iCOM and iDPC, we clearly point out the objects of the two corresponding imaging techniques, and highlight the differences from the objects corresponding to COM-, DPC-, and (HA)ADF-STEM. The theory is validated with simulations, and we present first experimental results of the iDPC-STEM technique showing its capability for imaging both light and heavy elements with atomic resolution and a good signal-to-noise ratio (SNR). Copyright © 2015 Elsevier B.V. All rights reserved.
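
    The integration step named above can be sketched compactly. The following is a minimal sketch (not the authors' implementation) of one standard way to turn a measured two-component COM/DPC field into a scalar image: solve for the potential whose gradient matches the field, in Fourier space, assuming periodic boundaries. All array names and the synthetic test data are hypothetical.

```python
import numpy as np

def integrate_com(com_x, com_y):
    """Fourier-space integration of a 2D vector field assumed to be a
    gradient: if (com_x, com_y) = grad(phi), then
    FT(phi) = (kx*FT(com_x) + ky*FT(com_y)) / (i * (kx^2 + ky^2))."""
    ny, nx = com_x.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)            # KX varies along columns (x)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # avoid 0/0 at the DC term
    phi_hat = (KX * np.fft.fft2(com_x) + KY * np.fft.fft2(com_y)) / (1j * k2)
    phi_hat[0, 0] = 0.0                     # the constant offset is arbitrary
    return np.real(np.fft.ifft2(phi_hat))

# Synthetic check: recover a Gaussian "phase" from its numerical gradients.
y, x = np.mgrid[-64:64, -64:64]
phi_true = np.exp(-(x**2 + y**2) / 200.0)
gy, gx = np.gradient(phi_true)              # d/dy (axis 0), d/dx (axis 1)
phi_rec = integrate_com(gx, gy)
err = (phi_rec - phi_rec.mean()) - (phi_true - phi_true.mean())
print(np.abs(err).max())                    # small residual
```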

  2. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
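
    As an illustration of the SRS principle RandomSpot implements (this sketch is not RandomSpot's code; the ROI bounds and spacing are hypothetical), an equidistant grid with a single uniformly random offset gives every location the same inclusion probability while keeping observations evenly spread:

```python
import random

def srs_points(x0, y0, width, height, spacing):
    """Systematic random sampling: an equidistant grid with one uniformly
    random offset, so every location has equal inclusion probability
    while the observations stay evenly spread over the region."""
    ox = random.uniform(0, spacing)   # one random offset for the whole grid
    oy = random.uniform(0, spacing)
    pts = []
    y = y0 + oy
    while y < y0 + height:
        x = x0 + ox
        while x < x0 + width:
            pts.append((x, y))
            x += spacing
        y += spacing
    return pts

# Hypothetical ROI on a virtual slide, coordinates in micrometres.
points = srs_points(0, 0, 2000, 1500, spacing=250)
print(len(points), points[:3])
```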

  3. A multi-object spectral imaging instrument

    International Nuclear Information System (INIS)

    Gibson, G M; Dienerowitz, M; Kelleher, P A; Harvey, A R; Padgett, M J

    2013-01-01

    We have developed a snapshot spectral imaging system which fits onto the side camera port of a commercial inverted microscope. The system provides spectra, in real time, from multiple points randomly selected on the microscope image. Light from the selected points in the sample is directed from the side port imaging arm using a digital micromirror device to a spectrometer arm based on a dispersing prism and CCD camera. A multi-line laser source is used to calibrate the pixel positions on the CCD for wavelength. A CMOS camera on the front port of the microscope allows the full image of the sample to be displayed and can also be used for particle tracking, providing spectra of multiple particles moving in the sample. We demonstrate the system by recording the spectra of multiple fluorescent beads in aqueous solution and from multiple points along a microscope sample channel containing a mixture of red and blue dye. (paper)

  4. The effects of spatial sampling choices on MR temperature measurements.

    Science.gov (United States)

    Todd, Nick; Vyas, Urvi; de Bever, Josh; Payne, Allison; Parker, Dennis L

    2011-02-01

    The purpose of this article is to quantify the effects that spatial sampling parameters have on the accuracy of magnetic resonance temperature measurements during high intensity focused ultrasound treatments. Spatial resolution and position of the sampling grid were considered using experimental and simulated data for two different types of high intensity focused ultrasound heating trajectories (a single point and a 4-mm circle), with maximum measured temperature and thermal dose volume as the metrics. It is demonstrated that measurement accuracy is related to the curvature of the temperature distribution, where regions with larger spatial second derivatives require higher resolution. The location of the sampling grid relative to the temperature distribution has a significant effect on the measured values. When imaging at 1.0 × 1.0 × 3.0 mm³ resolution, the measured values for maximum temperature and volume dosed to 240 cumulative equivalent minutes (CEM) or greater varied by 17% and 33%, respectively, for the single-point heating case, and by 5% and 18%, respectively, for the 4-mm circle heating case. Accurate measurement of the maximum temperature required imaging at 1.0 × 1.0 × 3.0 mm³ resolution for the single-point heating case and 2.0 × 2.0 × 5.0 mm³ resolution for the 4-mm circle heating case. Copyright © 2010 Wiley-Liss, Inc.
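
    The curvature argument can be illustrated with a toy calculation (not from the paper): boxcar-averaging a Gaussian hot spot over voxels of increasing size underestimates the peak most where the spatial second derivative is largest. The 4 mm full width at half maximum below is a hypothetical choice.

```python
import numpy as np

def peak_after_voxel_average(sigma_mm, voxel_mm, n=4001):
    """Peak of a 1D Gaussian temperature rise after boxcar (voxel) averaging."""
    x = np.linspace(-20, 20, n)
    dx = x[1] - x[0]
    temp = np.exp(-x**2 / (2 * sigma_mm**2))          # normalized rise, peak = 1
    k = max(1, int(round(voxel_mm / dx)))
    kernel = np.ones(k) / k                           # boxcar = in-plane voxel average
    return np.convolve(temp, kernel, mode="same").max()

sigma = 4.0 / 2.355                                   # hypothetical 4 mm FWHM hot spot
for vox in (1.0, 2.0, 3.0):
    print(f"{vox:.1f} mm voxel -> measured peak {peak_after_voxel_average(sigma, vox):.3f}")
```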

  5. 7 CFR 1209.50 - Budget and expenses.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Budget and expenses. 1209.50 Section 1209.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING... subpart. Each such budget shall include: (i) A statement of objectives and strategy for each program, plan...

  6. 7 CFR 1216.50 - Budget and expenses.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Budget and expenses. 1216.50 Section 1216.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING... statement of objectives and strategy for each program, plan, or project; (2) A summary of anticipated...

  7. 7 CFR 1212.50 - Budget and expenses.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Budget and expenses. 1212.50 Section 1212.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING... objectives and strategy for each program, plan, or project; (2) A summary of anticipated revenue, with...

  8. 7 CFR 1218.50 - Budget and expenses.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Budget and expenses. 1218.50 Section 1218.50 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING... budget shall include: (1) A statement of objectives and strategy for each program, plan, or project; (2...

  9. Determination of cadmium in real water samples by flame atomic absorption spectrometry after cloud point extraction

    International Nuclear Information System (INIS)

    Naeemullah, A.; Kazi, T.G.

    2011-01-01

    Water pollution is a global threat and a leading worldwide cause of death and disease. Awareness of the potential danger posed by heavy metals to ecosystems, and in particular to human health, has grown tremendously in the past decades. Separation and preconcentration procedures are of great importance in analytical and environmental chemistry. Cloud point extraction is one of the most reliable and sophisticated separation methods for the determination of trace quantities of heavy metals. Cloud point methodology was successfully employed for preconcentration of trace quantities of cadmium prior to determination by flame atomic absorption spectrometry (FAAS). The metal ions react with 8-hydroxyquinoline in a surfactant (Triton X-114) medium. Parameters such as pH, concentration of the reagent and Triton X-114, equilibration temperature and centrifugation time were evaluated and optimized to enhance the sensitivity and extraction efficiency of the proposed method. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cadmium content was measured by FAAS. Validation of the procedure was carried out by spike addition. The method was applied to the determination of Cd in water samples from different ecosystems (lake and river). (author)

  10. Synchrotron-based FTIR microspectroscopy for the mapping of photo-oxidation and additives in acrylonitrile-butadiene-styrene model samples and historical objects.

    Science.gov (United States)

    Saviello, Daniela; Pouyet, Emeline; Toniolo, Lucia; Cotte, Marine; Nevin, Austin

    2014-09-16

    Synchrotron-based Fourier transform infrared micro-spectroscopy (SR-μFTIR) was used to map photo-oxidative degradation of acrylonitrile-butadiene-styrene (ABS) and to investigate the presence and migration of additives in historical samples from important Italian design objects. High resolution (3×3 μm²) molecular maps were obtained by FTIR microspectroscopy in transmission mode, using a new method for the preparation of polymer thin sections. The depth of photo-oxidation in the samples was evaluated; oxidation was accompanied by the formation of ketones, aldehydes, esters, and unsaturated carbonyl compounds. This study demonstrates selective surface oxidation and a probable passivation of the material against further degradation. In polymer fragments from design objects made of ABS from the 1960s, UV-stabilizers were detected and mapped, and microscopic inclusions of proteinaceous material were identified and mapped for the first time. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Finger image quality based on singular point localization

    DEFF Research Database (Denmark)

    Wang, Jinghua; Olsen, Martin A.; Busch, Christoph

    2014-01-01

    Singular points are important global features of fingerprints and singular point localization is a crucial step in biometric recognition. Moreover the presence and position of the core point in a captured fingerprint sample can reflect whether the finger is placed properly on the sensor. Therefore...... and analyze the importance of singular points on biometric accuracy. The experiment is based on large scale databases and conducted by relating the measured quality of a fingerprint sample, given by the positions of core points, to the biometric performance. The experimental results show the positions of core...

  12. Simultaneous preconcentration of copper, zinc, cadmium, and nickel in water samples by cloud point extraction using 4-(2-pyridylazo)-resorcinol and their determination by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Silva, Edson Luiz; Santos Roldan, Paulo dos; Gine, Maria Fernanda

    2009-01-01

    A procedure for the simultaneous separation/preconcentration of copper, zinc, cadmium, and nickel in water samples, based on cloud point extraction (CPE) as a prior step to their determination by inductively coupled plasma optical emission spectrometry (ICP-OES), has been developed. The analytes reacted with 4-(2-pyridylazo)-resorcinol (PAR) at pH 5 to form hydrophobic chelates, which were separated and preconcentrated in a surfactant-rich phase of octylphenoxypolyethoxyethanol (Triton X-114). The parameters affecting the extraction efficiency of the proposed method, such as sample pH, complexing agent concentration, buffer amount, surfactant concentration, temperature, kinetics of the complexation reaction, and incubation time, were optimized; their respective values were 5, 0.6 mmol L-1, 0.3 mL, 0.15% (w/v), 50 °C, 40 min, and 10 min for 15 mL of preconcentrated solution. The method presented precision (R.S.D.) between 1.3% and 2.6% (n = 9). The concentration factors with and without dilution of the surfactant-rich phase ranged from 9.4 to 10.1 and from 94.0 to 100.1, respectively. The limits of detection (L.O.D.) obtained for copper, zinc, cadmium, and nickel were 1.2, 1.1, 1.0, and 6.3 μg L-1, respectively. The accuracy of the procedure was evaluated through recovery experiments on aqueous samples.

  13. Simultaneous reconstruction of multiple depth images without off-focus points in integral imaging using a graphics processing unit.

    Science.gov (United States)

    Yi, Faliu; Lee, Jieun; Moon, Inkyu

    2014-05-01

    The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
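
    A minimal CPU sketch of the variance test described above (the GPU kernels and the shift lookup table are omitted; the array shapes, data and threshold are hypothetical):

```python
import numpy as np

def classify_focus(samples, threshold):
    """samples: array of shape (n_elemental_images, H, W) holding, for each
    reconstructed pixel, the intensities back-projected from the elemental
    images at one candidate depth. Low variance across elemental images
    suggests a true (focused) surface point; high variance suggests an
    off-focus free-space point."""
    var = samples.var(axis=0)
    return var < threshold               # boolean focus mask

# Hypothetical data: 25 elemental images, 64x64 reconstruction plane.
rng = np.random.default_rng(0)
samples = rng.normal(loc=100, scale=5, size=(25, 64, 64))        # inconsistent
samples[:, 20:40, 20:40] = 100 + rng.normal(scale=0.5, size=(25, 20, 20))
mask = classify_focus(samples, threshold=4.0)
print(mask[30, 30], mask[5, 5])          # True (focus), False (off-focus)
```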

  14. Three-dimensional cinematography with control object of unknown shape.

    Science.gov (United States)

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for the reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.

  15. Sampling system for fast single pulses; Realisation d'un dispositif d'echantillonnage d'un signal bref unique

    Energy Technology Data Exchange (ETDEWEB)

    Zenatti, D [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1969-07-01

    Development of a device extending the domain of application of classical oscilloscopes to the observation of fast single pulses by applying the sampling principle. Its principal characteristics are: bandwidth of 700 MHz; maximum sensitivity of 50 mV; maximum input-signal amplitude of ±1 V; 16 samples; sample spacing of 0.2 ns. (author)

  16. Sample summary report for ARG 1 pressure tube sample

    International Nuclear Information System (INIS)

    Belinco, C.

    2006-01-01

    The ARG 1 sample is made from an un-irradiated Zr-2.5% Nb pressure tube. The sample has a 103.4 mm ID, a 112 mm OD and approximately 500 mm length. A punch mark was made very close to one end of the sample; it indicates the 12 o'clock position and also identifies the face of the tube for making all the measurements. ARG 1 contains flaws on the ID and OD surfaces; there were no intentional flaws within the wall of the pressure tube. Once the flaws were machined, the pressure tube was covered from the outside to hide the OD flaws. Approximately 50 mm of tube length was left open at both ends to facilitate holding the sample in the fixtures for inspection; no flaws were machined in these 50 mm end zones. A total of 20 flaws were machined in ARG 1: 16 on the OD surface and the remaining 4 on the ID surface. The flaws were characterized into various groups, such as axial flaws, circumferential flaws, etc.

  17. Sorting points into neighborhoods (SPIN): data analysis and visualization by ordering distance matrices.

    Science.gov (United States)

    Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E

    2005-05-15

    We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects, since they are often associated with a set of hidden variables underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n³) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data, we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, from homogeneous tumor samples to samples with gradually increasing amounts of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
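
    A toy sketch in the spirit of such an ordering search (not the authors' exact algorithm): repeatedly re-sort points by a position-weighted mean of their distances to all other points until the permutation stabilizes. The data here are synthetic.

```python
import numpy as np

def spin_side_to_side(D, n_iter=50):
    """Toy SPIN-like ordering: sort rows of the distance matrix D by a
    weighted mean of distances, where weights grow linearly with the
    current position, pushing similar points next to each other."""
    n = D.shape[0]
    perm = np.arange(n)
    w = np.linspace(-1.0, 1.0, n)              # position weights
    for _ in range(n_iter):
        scores = D[:, perm] @ w                # weighted distance per point
        new_perm = np.argsort(scores)
        if np.array_equal(new_perm, perm):
            break
        perm = new_perm
    return perm

# A hidden 1D structure, shuffled: the ordering should recover it.
rng = np.random.default_rng(1)
x = rng.permutation(np.linspace(0, 1, 30))
D = np.abs(x[:, None] - x[None, :])
order = spin_side_to_side(D)
print(np.round(x[order], 2))   # approximately monotone (possibly reversed)
```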

  18. A toolbox and sample object perception data for equalization of natural images

    Directory of Open Access Journals (Sweden)

    Wilma A. Bainbridge

    2015-12-01

    Full Text Available For psychologists and neuroscientists, careful selection of their stimuli is essential, so that low-level visual features such as color or spatial frequency do not serve as confounds between conditions of interest. Here, we detail the Natural Image Statistical Toolbox, which allows scientists to measure, visualize, and control stimulus sets along a set of low-level visual properties. Additionally, we provide a set of object images varying along several perceptual object properties, including physical size and interaction envelope size (i.e., the space around an object traversed during an interaction), serving as a test-bed for the Natural Image Statistical Toolbox. This stimulus set is also a highly characterized set useful for psychology and neuroscience studies on object perception.

  19. Experimental Object-Oriented Modelling

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius

    This thesis examines object-oriented modelling in experimental system development. Object-oriented modelling aims at representing concepts and phenomena of a problem domain in terms of classes and objects. Experimental system development seeks active experimentation in a system development project through, e.g., technical prototyping and active user involvement. We introduce and examine “experimental object-oriented modelling” as the intersection of these practices. The contributions of this thesis are expected to be within three perspectives on models and modelling in experimental system development. Grounding: we develop an empirically based conceptualization of modelling and use of models in system development projects characterized by a high degree of uncertainty in requirements, and point to implications for tools and techniques for modelling in such a setting. Techniques: we introduce...

  20. Final report on CCT-K6: Comparison of local realisations of dew-point temperature scales in the range -50 °C to +20 °C

    Science.gov (United States)

    Bell, S.; Stevens, M.; Abe, H.; Benyon, R.; Bosma, R.; Fernicola, V.; Heinonen, M.; Huang, P.; Kitano, H.; Li, Z.; Nielsen, J.; Ochi, N.; Podmurnaya, O. A.; Scace, G.; Smorgon, D.; Vicente, T.; Vinge, A. F.; Wang, L.; Yi, H.

    2015-01-01

    A key comparison in dew-point temperature was carried out among the national standards held by NPL (pilot), NMIJ, INTA, VSL, INRIM, MIKES, NIST, NIM, VNIIFTRI-ESB and NMC. A pair of condensation-principle dew-point hygrometers was circulated and used to compare the local realisations of dew point for participant humidity generators in the range -50 °C to +20 °C. The duration of the comparison was prolonged by numerous problems with the hygrometers, requiring some repairs, and several additional check measurements by the pilot. Despite the problems and the extended timescale, the comparison was effective in providing evidence of equivalence. Agreement with the key comparison reference value was achieved in the majority of cases, and bilateral degrees of equivalence are also reported. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  1. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). Thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values: the maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows the detection system position to be optimized in the scope of in-situ PGAA

  2. Dual-mode nested search method for categorical uncertain multi-objective optimization

    Science.gov (United States)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  3. Determination of carcinogenic herbicides in milk samples using green non-ionic silicone surfactant of cloud point extraction and spectrophotometry.

    Science.gov (United States)

    Mohd, N I; Zain, N N M; Raoov, M; Mohamad, S

    2018-04-01

    A new cloud point methodology was successfully used for the extraction of carcinogenic pesticides in milk samples as a prior step to their determination by spectrophotometry. In this work, a non-ionic silicone surfactant, also known as 3-(3-hydroxypropyl-heptatrimethylxyloxane), was chosen as a green extraction solvent because of its structure and properties. The effect of different parameters, such as the type of surfactant, concentration and volume of surfactant, pH, salt, temperature, incubation time and water content, on the cloud point extraction of carcinogenic pesticides such as atrazine and propazine was studied in detail, and a set of optimum conditions was established. A good correlation coefficient (R²) in the range of 0.991-0.997 was obtained for all calibration curves. The limit of detection was 1.06 µg l-1 (atrazine) and 1.22 µg l-1 (propazine), and the limit of quantitation was 3.54 µg l-1 (atrazine) and 4.07 µg l-1 (propazine). Satisfactory recoveries in the range of 81-108% were obtained in milk samples spiked at 5 and 1000 µg l-1, with low relative standard deviations (n = 3) of 0.301-7.45% in milk matrices. The proposed method is very convenient, rapid, cost-effective and environmentally friendly for food analysis.

  4. Generally objective measurement of human temperature and reading ability: some corollaries.

    Science.gov (United States)

    Stenner, A Jackson; Stone, Mark

    2010-01-01

    We argue that a goal of measurement is general objectivity: point estimates of a person's measure (height, temperature, and reader ability) should be independent of the instrument and independent of the sample in which the person happens to find herself. In contrast, Rasch's concept of specific objectivity requires only differences (i.e., comparisons) between person measures to be independent of the instrument. We present a canonical case in which there is no overlap between instruments and persons: each person is measured by a unique instrument. We then show what is required to estimate measures in this degenerate case. The canonical case encourages a simplification and reconceptualization of validity and reliability. Not surprisingly, this reconceptualization looks a lot like the way physicists and chemometricians think about validity and measurement error. We animate this presentation with a technology that blurs the distinction between instruction, assessment, and generally objective measurement of reader ability. We encourage adaptation of this model to health outcomes measurement.

  5. Interfacial dynamics of dissolving objects in fluid flow

    Science.gov (United States)

    Rycroft, Chris; Bazant, Martin

    2013-11-01

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow will be presented. By taking advantage of conformal invariance of the model, a numerical method will be introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of several dissolving objects will be shown, all of which show collapse to a single point in finite time. The simulations reveal a surprising connection between the position of the collapse point and the initial Laurent coefficients, which was subsequently derived analytically using residue calculus.

  6. DEVELOPING VALUES FOR SECONDARY SCHOOL STUDENTS THROUGH THE STUDY OF ART OBJECTS

    Directory of Open Access Journals (Sweden)

    Maria Eliza Dulamă

    2011-11-01

    Full Text Available The paper begins with some issues related to aesthetics, aesthetic education, art and axiological education. The empirical research starts from the general assumption that secondary school students and youth have difficulties in selecting values. The objective of the research was threefold: to design, organize and carry out learning activities through which students would acquire educational values by studying art objects. The exploratory research was conducted on a sample of 50 students (25 in the experimental group and 25 in the control group). The content sample included fairy-tales and short stories (Beauty and the Beast; The money earned by Alexandru Mitru) and artistic topics on several well-known art objects (The Endless Column, Table of Silence, The Gate of Kiss, Peleş castle, Voroneţ monastery, and St. Michael’s Cathedral from Cluj-Napoca). The tested hypothesis stated that if secondary school students are involved in learning contexts where they perceive, analyze and explain artistic objects, then they develop aesthetic and ethical values. The learning context students were exposed to represents the independent variable, and the outputs (the educational values themselves) represent the dependent variable. To test the hypothesis, we planned a formative didactic experiment using a pre-test/post-test design.

  7. Objectively determined habitual physical activity in South African adolescents: the PAHL study

    Science.gov (United States)

    2014-01-01

    Background There is limited data on objectively determined habitual physical activity (PA) in 16-year-old South African adolescents. The purpose of this study was to objectively determine the habitual PA of adolescents from the North West Province of South Africa by race and gender. Methods Adolescents (137 girls, 89 boys) from the ongoing Physical Activity and Health Longitudinal Study (PAHL study) participated in the present study. Habitual PA was objectively recorded by means of the Actiheart® over a period of 7 days. Time spent in moderate-to-vigorous intensity physical activity (MVPA) was assessed. Results Average MVPA for the study sample was 50.9 ± 40.3 minutes/day. Girls were significantly more active than boys, spending more time in MVPA (61.13 ± 52.2 minutes/day). Conclusions Physical activity varies by both gender and race in adolescents from the North West Province of South Africa. Objectively determined data from our study indicate that girls habitually spend more time in MVPA per day than boys, and that white adolescents habitually engage in more MVPA than black adolescents. Since the average MVPA per day for the entire study sample falls below the recommended daily average of 60 minutes/day, adolescents should be the foremost targets of interventions aimed at enhancing habitual PA. PMID:24885503

  8. Fresnel zone-plate based X-ray microscopy in Zernike phase contrast with sub-50 nm resolution at NSRL

    International Nuclear Information System (INIS)

    Chen Jie; Li Wenjie; Tian Jinping; Liu Longhua; Xiong Ying; Liu Gang; Wu Ziyu; Tian Yangchao; Liu Yijin; Yue Zhengbo; Yu Hanqing; Wang Chunru

    2009-01-01

    A transmission X-ray microscope using Fresnel zone-plates (FZPs) has been installed at U7A beamline of National Synchrotron Radiation Laboratory (NSRL). The objective FZP with 45 nm outermost zone width delivers a sub-50 nm resolution. A gold phase ring with 2.5 μm thickness and 4 μm width was placed at the focal plane of the objective FZP at 8 keV to produce a negative Zernike phase contrast. A series of samples were used to test the performance of the Zernike phase contrast X-ray microscopy.

  9. Fresnel zone-plate based X-ray microscopy in Zernike phase contrast with sub-50 nm resolution at NSRL

    Energy Technology Data Exchange (ETDEWEB)

    Chen Jie; Li Wenjie; Tian Jinping; Liu Longhua; Xiong Ying; Liu Gang; Wu Ziyu; Tian Yangchao [National Synchrotron Radiation Laboratory (China); Liu Yijin [School of Physics (China); Yue Zhengbo; Yu Hanqing [Laboratory of Environmental Engineering, School of Chemistry, University of Science and Technology of China, Hefei Anhui 230029 (China); Wang Chunru, E-mail: ychtian@ustc.edu.c [Institute of Chemistry, Chinese Academy of Sciences, Beijing 10060 (China)

    2009-09-01

    A transmission X-ray microscope using Fresnel zone-plates (FZPs) has been installed at U7A beamline of National Synchrotron Radiation Laboratory (NSRL). The objective FZP with 45 nm outermost zone width delivers a sub-50 nm resolution. A gold phase ring with 2.5 μm thickness and 4 μm width was placed at the focal plane of the objective FZP at 8 keV to produce a negative Zernike phase contrast. A series of samples were used to test the performance of the Zernike phase contrast X-ray microscopy.

  10. Data quality objectives lessons learned for tank waste characterization

    International Nuclear Information System (INIS)

    Eberlein, S.J.; Banning, D.L.

    1996-01-01

    The tank waste characterization process is an integral part of the overall effort to control the hazards associated with radioactive wastes stored in underground tanks at the Hanford Reservation. The programs involved in the characterization of the waste are employing the Data Quality Objective (DQO) process in all information and data collection activities. The DQO process is used by the programs to address an issue or problem rather than a specific sampling event. Practical limits (e.g., a limited number and location of sampling points) do not always allow for precise characterization of a tank or the full implementation of the DQO process. Because of its flexibility, the DQO process can be used as a planning tool for sampling and analysis of the underground waste storage tanks. The iterative nature of the DQO process allows it to be used as additional information is obtained or "lessons are learned" concerning an issue or problem requiring sampling and analysis of tank waste. In addition, the application of the DQO process forces alternative actions to be considered when precise characterization of a tank or the full implementation of the DQO process is not practical

  11. Catch me if you can: Comparing ballast water sampling skids to traditional net sampling

    Science.gov (United States)

    Bradie, Johanna; Gianoli, Claudio; Linley, Robert Dallas; Schillak, Lothar; Schneider, Gerd; Stehouwer, Peter; Bailey, Sarah

    2018-03-01

    With the recent ratification of the International Convention for the Control and Management of Ships' Ballast Water and Sediments, 2004, it will soon be necessary to assess ships for compliance with ballast water discharge standards. Sampling skids that allow the efficient collection of ballast water samples in a compact space have been developed for this purpose. We ran 22 trials on board the RV Meteor from June 4-15, 2015 to evaluate the performance of three ballast water sampling devices (traditional plankton net, Triton sampling skid, SGS sampling skid) for three organism size classes: ≥ 50 μm, ≥ 10 μm to < 50 μm, and < 10 μm. Natural sea water was run through the ballast water system and untreated samples were collected using paired sampling devices. Collected samples were analyzed in parallel by multiple analysts using several different analytic methods to quantify organism concentrations. To determine whether there were differences in the number of viable organisms collected across sampling devices, results were standardized and statistically treated to filter out other sources of variability, resulting in an outcome variable representing the mean difference in measurements that can be attributed to sampling devices. These results were tested for significance using pairwise Tukey contrasts. Differences in organism concentrations were found in 50% of comparisons between sampling skids and the plankton net for ≥ 50 μm, and ≥ 10 μm to < 50 μm size classes, with net samples containing either higher or lower densities. There were no differences for < 10 μm organisms. Future work will be required to explicitly examine the potential effects of flow velocity, sampling duration, sampled volume, and organism concentrations on sampling device performance.

  12. Comparison of chlorzoxazone one-sample methods to estimate CYP2E1 activity in humans

    DEFF Research Database (Denmark)

    Kramer, Iza; Dalhoff, Kim; Clemmesen, Jens O

    2003-01-01

    OBJECTIVE: Comparison of a one-sample with a multi-sample method (the metabolic fractional clearance) to estimate CYP2E1 activity in humans. METHODS: Healthy, male Caucasians (n=19) were included. The multi-sample fractional clearance (Cl(fe)) of chlorzoxazone was compared with one-time-point clearance estimation (Cl(est)) at 3, 4, 5 and 6 h. Furthermore, the metabolite/drug ratios (MRs) estimated from one-time-point samples at 1, 2, 3, 4, 5 and 6 h were compared with Cl(fe). RESULTS: The concordance between Cl(est) and Cl(fe) was highest at 6 h. The minimal mean prediction error (MPE) of Cl... estimates, Cl(est) at 3 h or 6 h, and MR at 3 h, can serve as reliable markers of CYP2E1 activity. The one-sample clearance method is an accurate, renal function-independent measure of the intrinsic activity; it is simple to use and easily applicable to humans.

  13. Tests of a High Temperature Sample Conditioner for the Waste Treatment Plant LV-S2, LV-S3, HV-S3A and HV-S3B Exhaust Systems

    Energy Technology Data Exchange (ETDEWEB)

    Flaherty, Julia E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Glissmeyer, John A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-03-18

    Tests were performed to evaluate a sample conditioning unit for stack monitoring at Hanford Tank Waste Treatment and Immobilization Plant (WTP) exhaust stacks with elevated air temperatures. The LV-S2, LV-S3, HV-S3A and HV-S3B exhaust stacks are expected to have elevated air temperature and dew point. At these emission points, exhaust temperatures are too high to deliver the air sample directly to the required stack monitoring equipment. As a result, a sample conditioning system is considered to cool and dry the air prior to its delivery to the stack monitoring system. The method proposed for the sample conditioning is a dilution system that will introduce cooler, dry air to the air sample stream. This method of sample conditioning is meant to reduce the sample temperature while avoiding condensation of moisture in the sample stream. An additional constraint is that the ANSI/HPS N13.1-1999 standard states that at least 50% of the 10 μm aerodynamic diameter (AD) particles present in the stack free stream must be delivered to the sample collector. In other words, depositional loss of particles should be limited to 50% in the sampling, transport, and conditioning systems. Based on estimates of particle penetration through the LV-S3 sampling system, the diluter should perform with about 80% penetration or better to ensure that the total sampling system passes the 50% or greater penetration criterion.
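
    As a rough illustration of the penetration budget implied by the 50% criterion and the approximately 80% diluter figure cited above (all other component values below are hypothetical), series penetrations multiply:

```python
# Penetration of 10-um AD particles multiplies across series components.
criterion = 0.50          # ANSI/HPS N13.1-1999 minimum overall penetration
diluter = 0.80            # approximate diluter performance cited above

line_required = criterion / diluter
print(f"Transport line must achieve >= {line_required:.1%} penetration")

# Checking a hypothetical full chain: probe, transport line, diluter.
probe, line = 0.95, 0.70  # hypothetical component penetrations
overall = probe * line * diluter
print(f"Overall penetration {overall:.1%} ->",
      "PASS" if overall >= criterion else "FAIL")
```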

  14. Dual-Layer Density Estimation for Multiple Object Instance Detection

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2016-01-01

    Full Text Available This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient, which is identified experimentally. Adaptive threshold-based grid voting is applied to find all candidate object instances. Detection errors are eliminated using final geometric verification in accordance with Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for inventory management applications.
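
    A minimal sketch of the matching-plus-geometric-verification backbone using OpenCV (the paper's density estimation layers, adaptive thresholding and grid voting are omitted; the file names and the ratio-test threshold are hypothetical):

```python
import cv2
import numpy as np

template = cv2.imread("product_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("shelf_scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(scene, None)

# Lowe ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Geometric verification with RANSAC, as in the paper's final step.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(inliers.sum())} inliers of {len(good)} matches")
```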

  15. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

    A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....

  16. University of Otago 1998 field school excavations at Shag Point, North Otago

    International Nuclear Information System (INIS)

    Weisler, M.I.

    1998-01-01

    During late 1997, discussions with Gerard O'Regan, representing Te Runanga O Moeraki, led to combining the needs of cultural resource management, teaching and research at the Shag Point archaeological site, situated about an hour's drive north of the University of Otago. In late 1996, a car park (ca. 20 by 40 m) was bulldozed on Department of Conservation land just inland from Shag Point. Unfortunately, the north-east portion of the site, as well as cultural deposits along newly graded access roads, was destroyed. Te Runanga O Moeraki requested that I conduct archaeological survey and excavations at Shag Point to determine the extent of site destruction and to recover a sample of cultural material in the bulldozer spoil dirt. In addition to these cultural resource management objectives, it was necessary, from a research point of view, to determine: (1) the site area; (2) dates of site use; (3) the nature of occupation, subsistence practices and stone-tool technology; (4) evidence of interaction through analysis of imported artefacts; and (5) the relationship of the Shag Point site to the major prehistoric village at Shag River mouth, located less than 1 km south. (author). 2 refs., 2 figs

  17. Context-aware pattern discovery for moving object trajectories

    Science.gov (United States)

    Sharif, Mohammad; Asghar Alesheikh, Ali; Kaffash Charandabi, Neda

    2018-05-01

    Movement of point objects is highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns while accounting for contextual information helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering moving object patterns is analyzing the similarities of their trajectories. This article therefore contextualizes the similarity measure of trajectories using not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories, and the results of the similarity searches are then utilized in discovering the relative movement patterns of the moving point objects. Several experiments were conducted on real datasets obtained from commercial airplanes and the weather information during the flights. The results demonstrated the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information, since contexts can both enable and constrain movement.
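
    A compact dynamic-programming sketch of plain DTW on 1D series (the paper's multi-dimensional, context-augmented variant would replace the per-step cost with a norm over position plus context dimensions; the sample series are hypothetical):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])      # per-step cost; swap in a
            D[i, j] = cost + min(D[i - 1, j],    # vector norm for the multi-
                                 D[i, j - 1],    # dimensional (position +
                                 D[i - 1, j - 1])  # context) case
    return D[n, m]

# Two similar profiles sampled at different rates warp onto each other.
t1 = np.sin(np.linspace(0, 3, 40))
t2 = np.sin(np.linspace(0, 3, 55))
print(dtw(t1, t2))    # small value -> similar trajectories
```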

  18. Development of a Cloud-Point Extraction Method for Cobalt Determination in Natural Water Samples

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Jamali

    2013-01-01

    Full Text Available A new, simple, and versatile cloud-point extraction (CPE) methodology has been developed for the separation and preconcentration of cobalt. The cobalt ions in the initial aqueous solution were complexed with 4-benzylpiperidinedithiocarbamate, and Triton X-114 was added as surfactant. Dilution of the surfactant-rich phase with acidified ethanol was performed after phase separation, and the cobalt content was measured by flame atomic absorption spectrometry. The main factors affecting the CPE procedure, such as pH, concentration of ligand, amount of Triton X-114, equilibrium temperature, and incubation time, were investigated and optimized. Under the optimal conditions, the limit of detection (LOD) for cobalt was 0.5 μg L-1, with a sensitivity enhancement factor (EF) of 67. The calibration curve was linear in the range of 2-150 μg L-1, and the relative standard deviation was 3.2% (c = 100 μg L-1; n = 10). The proposed method was applied to the determination of trace cobalt in real water samples with satisfactory analytical results.

  19. A logistic regression estimating function for spatial Gibbs point processes

    DEFF Research Database (Denmark)

    Baddeley, Adrian; Coeurjolly, Jean-François; Rubak, Ege

    We propose a computationally efficient logistic regression estimating function for spatial Gibbs point processes. The sample points for the logistic regression consist of the observed point pattern together with a random pattern of dummy points. The estimating function is closely related to the p...
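
    A minimal sketch of the dummy-point logistic regression idea in its simplest instance, an inhomogeneous Poisson process with log-linear intensity lambda(x, y) = exp(b0 + b1*x) on the unit square (a Gibbs process without interaction terms); the dummy intensity rho enters as a fixed offset. Data are synthetic, statsmodels is assumed, and the Gibbs interaction statistics are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
b0, b1 = 4.0, 2.0                       # true log-intensity parameters

# Simulate an inhomogeneous Poisson pattern on [0,1]^2 by thinning.
lam_max = np.exp(b0 + b1)
cand = rng.uniform(size=(rng.poisson(lam_max), 2))
keep = rng.uniform(size=len(cand)) < np.exp(b0 + b1 * cand[:, 0]) / lam_max
data = cand[keep]

# Dummy points: a uniform Poisson pattern with intensity rho.
rho = 500.0
dummy = rng.uniform(size=(rng.poisson(rho), 2))

# Logistic regression: P(data | point) = lambda / (lambda + rho), so the
# logit is b0 + b1*x - log(rho); -log(rho) enters as a fixed offset.
pts = np.vstack([data, dummy])
y = np.r_[np.ones(len(data)), np.zeros(len(dummy))]   # 1 = data, 0 = dummy
X = sm.add_constant(pts[:, 0])                        # covariate: x-coordinate
offset = -np.log(rho) * np.ones(len(pts))

fit = sm.GLM(y, X, family=sm.families.Binomial(), offset=offset).fit()
print(fit.params)                                     # approx. [b0, b1]
```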

  20. Comparative Study by MS and XRD of Fe50Al50 Alloys Produced by Mechanical Alloying, Using Different Ball Mills

    International Nuclear Information System (INIS)

    Rojas Martinez, Y.; Perez Alcazar, G. A.; Bustos Rodriguez, H.; Oyola Lozano, D.

    2005-01-01

    In this work we report a comparative study of the magnetic and structural properties of Fe50Al50 alloys produced by mechanical alloying using two different planetary ball mills with the same ball-to-powder mass ratio. The Fe50Al50 sample milled for 48 h using the Fritsch planetary ball mill Pulverisette 5 and 20 mm balls presents only a bcc alloy phase with a majority of paramagnetic sites, whereas the sample milled for the same time using the Fritsch planetary ball mill Pulverisette 7 with 15 mm balls presents a bcc alloy phase with a paramagnetic site (doublet) and a majority of ferromagnetic sites, including pure Fe. However, after 72 h of milling this sample presents a bcc paramagnetic phase very similar to that prepared with the first system in 48 h. These results show that the conditions used in the first ball mill make the milling process more efficient.

  1. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser.

    Science.gov (United States)

    Hamilton, Matthew T; Finger, John W; Winzeler, Megan E; Tuberville, Tracey D

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to delay sample analysis and the ability to use banked samples. In this study, we examined fresh whole blood, fresh plasma and frozen plasma (sample type) pH, partial pressure of carbon dioxide (PCO2), bicarbonate (HCO3-), total carbon dioxide (TCO2), base excess (BE), partial pressure of oxygen (PO2), oxygen saturation (sO2) and lactate concentrations in 23 juvenile American alligators (Alligator mississippiensis) using an i-STAT CG4+ cartridge. Our results indicate that sample type had no effect on lactate concentration values (F2,65 = 0.37, P = 0.963), suggesting that the i-STAT analyser can be used reliably to quantify lactate concentrations in fresh and frozen plasma samples. In contrast, the other seven blood parameters measured by the CG4+ cartridge were significantly affected by sample type. Lastly, we were able to collect blood samples from all alligators within 2 min of capture to establish preliminary reference ranges for juvenile alligators based on values obtained using fresh whole blood.

  2. IDMS analysis of blank swipe samples for uranium quantity and isotopic composition

    International Nuclear Information System (INIS)

    Ryjinski, M.; Donohue, D.

    2001-01-01

    Since 1996 the IAEA has carried out routine implementation of environmental sampling. During the last 5 years, more than 1700 swipe samples were collected and analyzed in the Network of Analytical Laboratories (NWAL). One sensitive point in analyzing environmental samples is evidence of the presence of enriched uranium. The U content on swipes is extremely low, and there is therefore a relatively high probability of a false positive, e.g. from small contamination or a measurement bias. In order to avoid and/or control this, the IAEA systematically sends blind blank QC samples to the laboratories; in particular, more than 50 blank samples were analyzed during the last two years. A preliminary analysis of blank swipes showed that the swipe material itself contains up to 10 ng of natural uranium (NU) per swipe. However, about 50% of the blind blank swipes analyzed show the presence of enriched uranium. The source of this bias has to be clarified and excluded. This paper presents the results of modeling IDMS analysis for the quantity and isotopic composition of uranium, in order to identify the possible contribution of different factors to the final measurement uncertainty. The modeling was carried out on the basis of IAEA Clean Laboratory measurement data and simulation techniques

  3. An Object-Oriented Approach to C++ Compiler Technology

    NARCIS (Netherlands)

    Sminchisescu, Cristian; Telea, Alexandru

    1999-01-01

    This paper focuses on the use of object-oriented approaches to syntactical and semantical analysis for complex object-oriented languages like C++. We are interested in these issues both from a design and implementation point of view. We implement a semantic analyzer in an object-oriented manner,

  4. Subjective and objective outcomes from new BiCROS technology in a veteran sample.

    Science.gov (United States)

    Williams, Victoria A; McArdle, Rachel A; Chisolm, Theresa H

    2012-01-01

    Overall, the objective (WIN) and subjective (SSQ, MarkeTrak, and open-ended questions) measures indicated that the new BiCROS provided better outcomes than the previous BiCROS system. In addition, an overlap of favorable results was seen across measures. Of the 39 participants, 95% reported improvements with the new BiCROS and chose to utilize the device regularly. The favorable objective and subjective outcomes indicate that the new BiCROS system is as good as, or better than, what was previously utilized by our sample of veterans. American Academy of Audiology.

  5. Object-Oriented Software Development Environments

    DEFF Research Database (Denmark)

    The book "Object-Oriented Environments - The Mjølner Approach" presents the collective results of the Mjølner Project. The project was set up to work on the widely recognized problems of developing, maintaining and understanding large software systems. The starting point was to use object...... and realizations User interfaces for environments and realizations Grammar-based software architectures Structure-based editing Language implementation, runtime organization, garbage collection Incremental compilation techniques...

  6. Forensic Comparison of Soil Samples Using Nondestructive Elemental Analysis.

    Science.gov (United States)

    Uitdehaag, Stefan; Wiarda, Wim; Donders, Timme; Kuiper, Irene

    2017-07-01

    Soil can play an important role in forensic cases by linking suspects or objects to a crime scene through comparison of samples from the crime scene with samples derived from items. This study uses an adapted ED-XRF analysis (sieving instead of grinding, to prevent destruction of microfossils) to produce elemental composition data for 20 elements. Different data processing techniques and statistical distances were evaluated using data from 50 samples and the log-LR cost (Cllr). The best performing combination (Canberra distance, relative data, and square root values) is used to construct a discriminative model. Examples of the spatial resolution of the method at crime scenes are shown for three locations, and sampling strategy is discussed. Twelve test cases were analyzed, and the results showed that the method is applicable. The study shows how the combination of an analysis technique, a database, and a discriminative model can be used to compare multiple soil samples quickly. © 2016 American Academy of Forensic Sciences.
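    To make the winning recipe concrete, here is a minimal sketch of the distance computation as the abstract describes it (relative concentrations, square-root transform, Canberra distance). The element profiles are invented; the real model additionally calibrates such distances into likelihood ratios against the 50-sample database.

    ```python
    import numpy as np

    def soil_distance(a, b):
        """Canberra distance between two elemental profiles after the
        preprocessing reported to perform best: normalize each profile
        to relative concentrations, then take square roots."""
        a = np.sqrt(a / a.sum())
        b = np.sqrt(b / b.sum())
        num = np.abs(a - b)
        den = a + b
        ok = den > 0  # Canberra convention: 0/0 terms contribute nothing
        return (num[ok] / den[ok]).sum()

    # Hypothetical 20-element ED-XRF profiles (arbitrary units)
    scene = np.array([12.0, 3.1, 0.4, 55.0, 7.7] + [1.0] * 15)
    item = np.array([11.5, 3.0, 0.5, 54.0, 8.0] + [1.1] * 15)
    print(f"Canberra distance: {soil_distance(scene, item):.4f}")
    ```

    The Canberra distance normalizes each element's difference by its magnitude, so trace elements carry as much weight as major ones, which is plausibly why it paired well with relative, square-rooted data.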

  7. The effects of armodafinil on objective sleepiness and performance in a shift work disorder sample unselected for objective sleepiness.

    Science.gov (United States)

    Howard, Ryan; Roth, Thomas; Drake, Christopher L

    2014-06-01

    Armodafinil is a medication used to treat excessive sleepiness in individuals with shift work disorder (SWD). In the present study, we investigate whether armodafinil can normalize nocturnal sleepiness in a group of typical SWD patients. Participants were 12 night workers (aged 33.8 ± 8.57 years, 7 female subjects) with excessive sleepiness (≥10 on the Epworth Sleepiness Scale; mean, 14.8 ± 3.16), meeting the International Classification of Sleep Disorders, Second Edition criteria for SWD, with no other sleep or medical disorders verified by polysomnogram. The multiple sleep latency test (MSLT) was not used as an entry criterion. Armodafinil was administered at 10:30 pm in a randomized, double-blind, placebo-controlled, crossover design with experimental nights separated by 1 week. The primary end point was the MSLT, with naps at 1:30, 3:30, 5:30, and 7:30 am. Other study measures included a sleepiness-alertness visual analog scale administered before each nap, and 2 computer-based performance tests evaluating attention and memory. Subjects with SWD had a mean MSLT of 5.3 ± 3.25 minutes, indicating pathological sleepiness on average. Armodafinil significantly improved MSLT score to 11.1 ± 4.79 minutes (P = 0.006). Subjective levels of alertness on the visual analog scale also improved (P = 0.008). For performance, reaction time to central (P = 0.006) and peripheral (P = 0.003) stimuli and free recall memory (P = 0.05) were also improved. Armodafinil 150 mg administered at the beginning of a night shift normalizes nocturnal sleepiness in individuals with SWD unselected for objective sleepiness. Subjective measures of sleepiness and cognitive performance are also improved. This suggests that armodafinil can improve levels of nocturnal alertness to within normal daytime levels in the majority of patients with SWD.

  8. Objective measurement of tissue tension in myofascial trigger point areas before and during the administration of anesthesia with complete blocking of neuromuscular transmission.

    Science.gov (United States)

    Buchmann, Johannes; Neustadt, Beate; Buchmann-Barthel, Katharina; Rudolph, Soeren; Klauer, Thomas; Reis, Olaf; Smolenski, Ulrich; Buchmann, Hella; Wagner, Klaus F; Haessler, Frank

    2014-03-01

    Myofascial trigger points (MTPs) are extremely frequent in the human musculoskeletal system. Despite this, little is known about their etiology. Increased muscular tension in the trigger point area could be a major factor in the development of MTPs. To investigate the impact of muscular tension in the taut band with an MTP, and thereby the spinal excitability of associated segmental neurons, we objectively measured the tissue tension in MTPs before and during the administration of anesthesia using a transducer. Three target muscles (m. temporalis, upper part of m. trapezius, and m. extensor carpi radialis longus) with an MTP and 1 control muscle without an MTP were examined in 62 patients scheduled for an operation. We found significant 2-way interactions (ANOVA, P < 0.05), with tissue tension in the MTP areas decreasing under anesthesia, consistent with a contribution of spinal segmental excitability. In line with this, we assume a predominant, but not unique, impact of increased spinal excitability, resulting in an augmented tension of segmentally associated muscle fibers, in the etiology of MTPs. Consequently, postisometric relaxation might be a promising therapeutic option for MTPs.

  9. Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search

    Directory of Open Access Journals (Sweden)

    Thi Rein Myo

    2008-11-01

    Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at each intermediate point along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm and Pattern Search into a Generalized Pattern Search (GPS) is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories have been carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that GPS gives excellent tracking performance.
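    To make the objective concrete, the sketch below implements the cost (sum of end-effector position errors over the via-points) for a hypothetical 3-DOF planar arm with unit link lengths, plus a bare-bones pattern-search poll-and-shrink loop. The waypoints and step schedule are invented, and the genetic-algorithm seeding used in the paper's hybrid GPS is omitted.

    ```python
    import numpy as np

    def fk(theta):
        """Forward kinematics of a 3-DOF planar arm with unit link lengths."""
        angles = np.cumsum(theta)
        return np.array([np.cos(angles).sum(), np.sin(angles).sum()])

    waypoints = [np.array([2.0, 1.0]), np.array([1.5, 1.8]), np.array([0.5, 2.3])]

    def cost(q):
        """Sum of end-effector position errors; q stacks one joint triple
        per waypoint."""
        q = q.reshape(len(waypoints), 3)
        return sum(np.linalg.norm(fk(qi) - w) for qi, w in zip(q, waypoints))

    def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=10_000):
        """Poll +/- step along each coordinate; shrink the mesh when no
        polled point improves the incumbent."""
        x, fx = x0.copy(), f(x0)
        for _ in range(max_iter):
            improved = False
            for i in range(len(x)):
                for s in (step, -step):
                    y = x.copy()
                    y[i] += s
                    fy = f(y)
                    if fy < fx:
                        x, fx, improved = y, fy, True
            if not improved:
                step *= 0.5
                if step < tol:
                    break
        return x, fx

    q_opt, err = pattern_search(cost, np.zeros(3 * len(waypoints)))
    print(f"total tracking error: {err:.6f}")
    ```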

  10. Safety evaluation report related to the operation of Nine Mile Point Nuclear Station, Unit No. 2 (Docket No. 50-410)

    International Nuclear Information System (INIS)

    1986-07-01

    This report supplements the Safety Evaluation Report (NUREG-1047, February 1985) for the application filed by Niagara Mohawk Power Corporation, as applicant and co-owner, for a license to operate the Nine Mile Point Nuclear Station, Unit No. 2 (Docket No. 50-410). It has been prepared by the Office of Nuclear Reactor Regulation of the US Nuclear Regulatory Commission. The facility is located near Oswego, New York. Supplement 1 to the Safety Evaluation Report was published in June 1985 and contained the report from the Advisory Committee on Reactor Safeguards as well as the resolution of a number of outstanding issues from the Safety Evaluation Report. Supplement 2 was published in November 1985 and contained the resolution of a number of outstanding and confirmatory issues. Subject to favorable resolution of the issues discussed in this report, the NRC staff concludes that the facility can be operated by the applicant without endangering the health and safety of the public.

  11. Export pricing objectives and factors influencing them

    OpenAIRE

    Snieškienė, Gabrielė; Pridotkienė, Jūratė

    2010-01-01

    Pricing is recognized as one of the most important tools to achieve a successful export operation. The starting point in every pricing effort is the process of creating pricing objectives. Pricing objectives are the strategic and economic goals desired by management in pricing the product. Pricing objectives constitute the basis on which pricing methods and policies are formulated. Therefore, a better understanding of the pricing objectives should direct the company’s overall pricing process....

  12. Testing aggregation hypotheses among Neotropical trees and shrubs: results from a 50-ha plot over 20 years of sampling.

    Science.gov (United States)

    Myster, Randall W; Malahy, Michael P

    2012-09-01

    Spatial patterns of tropical trees and shrubs are important to understanding their interactions and the resultant structure of tropical rainforests. To assess this issue, we took advantage of previously collected data on Neotropical tree and shrub stems, identified to species and mapped for spatial coordinates in a 50 ha plot, censused every five years over a 20 year period. These stem data were first placed into four groups, regardless of species, depending on their location in the vertical strata of the rainforest (shrubs, understory trees, mid-sized trees, tall trees) and then used to generate aggregation patterns for each sampling year. We found that shrubs and understory trees clumped at small spatial scales of a few meters for several of the years sampled. Alternatively, mid-sized trees and tall trees did not clump, nor did they show uniform (regular) patterns, during any sampling period. In general, (1) groups found higher in the canopy did not show aggregation on the ground, and (2) the spatial patterns of all four groups showed similarity among different sampling years, thereby supporting a "shifting mosaic" view of plant communities over large areas. Spatial analyses such as this one are critical to understanding and predicting tree spacing, tree-tree replacements and the resulting Neotropical forest patterns, such as biodiversity and those needed for sustainability efforts.

  13. Study of Interwinner 5.0, software used in gamma spectrometry: application to the determination of radioelements and their dose rates in samples

    International Nuclear Information System (INIS)

    OURA, Kouame Joel

    2010-05-01

    Interwinner 5.0 is one of the software packages most used by researchers to measure and analyse energy spectra. This software has been studied and is explained here in order to master its use. For the measurements of samples collected in our environment, we connected our computer (containing the software) to a High Purity Germanium (HPGe) detector and to an amplification system. As a sample, we measured gas waste. After analysis, we noticed that the activities of our samples are well below international standards, so the gas waste was judged not dangerous for human beings. In short, this work is a study of the electronic devices and software used in spectroscopy, which are not always well mastered. The aim of our study, which was to obtain better results while measuring samples collected in the environment, has been reached. Finally, this work contributed to the reduction of mistakes due to the misuse of electronic devices [fr]

  14. Detecting Near-Earth Objects Using Cross-Correlation with a Point Spread Function

    Science.gov (United States)

    2009-03-01

    impact in the Yucatan Peninsula caused the extinction of the dinosaurs in the Cretaceous Period [Fix, 1995]. Even the Moon is pockmarked by many... the atmosphere that the light traverses. For this reason, it is typically better to be at higher elevations to decrease the amount of atmosphere the... detection on average for the Rayleigh sampling with cross-correlation of a PSF than for the Rayleigh sampling without cross-correlation. For this reason
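    The title's method (cross-correlating frames with the point spread function, i.e. matched filtering) can be illustrated generically; the snippet below is not the report's pipeline. A synthetic source too faint to be the brightest raw pixel becomes the clear global maximum after cross-correlation with the (symmetric) PSF. Frame size, noise level and amplitude are all made up.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_psf(size=11, sigma=1.5):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    rng = np.random.default_rng(0)
    frame = rng.normal(0.0, 1.0, (256, 256))  # pure background noise
    psf = gaussian_psf()
    frame[100:111, 60:71] += 50.0 * psf       # faint source centred at (105, 65)

    # In the raw frame the source peak (~3.5 sigma) is usually beaten by
    # the brightest of 65536 noise pixels (~4.3 sigma on average).
    raw_peak = np.unravel_index(np.argmax(frame), frame.shape)

    # Cross-correlation with a symmetric kernel equals convolution, so
    # fftconvolve acts as the matched filter and boosts the source SNR.
    score = fftconvolve(frame, psf, mode="same")
    mf_peak = np.unravel_index(np.argmax(score), score.shape)
    print("raw argmax:", raw_peak, " matched-filter argmax:", mf_peak)
    ```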

  15. Technical evaluation report on the Third 10-year Interval Inservice Inspection Program Plan: Florida Power and Light Company, Turkey Point Nuclear Power Plant, Units 3 and 4 (Docket Numbers 50-250 and 50-251)

    International Nuclear Information System (INIS)

    Brown, B.W.; Feige, E.J.; Galbraith, S.G.; Porter, A.M.

    1995-02-01

    This report presents the results of the evaluation of the Turkey Point Nuclear Power Plant, Units 3 and 4, Third 10-Year Interval Inservice Inspection Program Plan, Revision 0, submitted September 9, 1993, including the requests for relief from the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code, Section XI, requirements that the licensee has determined to be impractical. The Turkey Point Nuclear Power Plant, Units 3 and 4, Third 10-Year Interval Inservice Inspection Program Plan is evaluated in Section 2 of this report. The inservice inspection (ISI) program plan is evaluated for (a) compliance with the appropriate edition/addenda of Section XI, (b) acceptability of the examination sample, (c) correctness of the application of system or component examination exclusion criteria, and (d) compliance with ISI-related commitments identified during previous Nuclear Regulatory Commission (NRC) reviews. The requests for relief are evaluated in Section 3 of this report.

  16. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To provide this operation, we have to find corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi image of the point cloud is generated. The SIFT algorithm is then applied to the quasi image and the real image; it allows finding corresponding points. The exterior orientation parameters of the image are calculated from the corresponding points. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
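    A minimal sketch of the first step is given below, under several assumptions: OpenCV's SIFT is used for the matching, 'photo.jpg' and 'quasi.jpg' stand in for the digital image and the image rendered from the point cloud, and 'quasi_xyz.npy' is a hypothetical per-pixel buffer of 3D coordinates saved when the quasi image was generated. The camera matrix is likewise invented; the paper's photogrammetric orientation model may differ in detail.

    ```python
    import cv2
    import numpy as np

    photo = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # real image
    quasi = cv2.imread("quasi.jpg", cv2.IMREAD_GRAYSCALE)   # rendered point cloud
    xyz = np.load("quasi_xyz.npy")  # (H, W, 3) object coords per quasi pixel

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(photo, None)
    kp2, des2 = sift.detectAndCompute(quasi, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # Each matched quasi-image keypoint indexes a 3D point; solve the
    # exterior orientation of the photo from the 2D-3D pairs.
    obj_pts = np.array([xyz[int(kp2[m.trainIdx].pt[1]),
                            int(kp2[m.trainIdx].pt[0])] for m in good],
                       dtype=np.float32)
    img_pts = np.array([kp1[m.queryIdx].pt for m in good], dtype=np.float32)
    h, w = photo.shape
    K = np.array([[3000, 0, w / 2], [0, 3000, h / 2], [0, 0, 1]], np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    print("orientation found:", ok,
          "inliers:", 0 if inliers is None else len(inliers))
    ```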

  17. Many-objective thermodynamic optimization of Stirling heat engine

    International Nuclear Information System (INIS)

    Patel, Vivek; Savsani, Vimal; Mudgal, Anurag

    2017-01-01

    This paper presents a rigorous investigation of many-objective (four-objective) thermodynamic optimization of a Stirling heat engine. The many-objective optimization problem is formed by considering maximization of thermal efficiency, power output, ecological function and exergy efficiency. A multi-objective heat transfer search (MOHTS) algorithm is proposed and applied to obtain a set of Pareto-optimal points. The many-objective results form a solution set in a four-dimensional objective space; for visualization they are projected onto two-dimensional objective spaces. Thus, the results of the four-objective optimization are represented by six Pareto fronts in two-dimensional objective space. These six Pareto fronts are compared with their corresponding two-objective Pareto fronts. Quantitative assessment of the obtained Pareto solutions is reported in terms of spread and spacing measures. Different decision-making approaches, such as LINMAP, TOPSIS and fuzzy methods, are used to select a final optimal solution from the Pareto-optimal set of the many-objective optimization. Finally, to reveal the level of conflict between these objectives, the distribution of each decision variable in its allowable range is also shown in two-dimensional objective spaces. - Highlights: • Many-objective (i.e. four-objective) optimization of a Stirling engine is investigated. • The MOHTS algorithm is introduced and applied to obtain a set of Pareto points. • Comparative results of many-objective and multi-objective optimization are presented. • Relationships of design variables in many-objective optimization are obtained. • The optimum solution is selected using decision-making approaches.
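    For readers unfamiliar with the Pareto machinery the abstract leans on, the snippet below shows the basic non-dominated filter for a maximization problem with four objectives. The candidate evaluations are random stand-ins, not Stirling engine results.

    ```python
    import numpy as np

    def pareto_mask(F):
        """Boolean mask of non-dominated rows of F (all objectives
        maximized). Row i is dropped if some row is >= everywhere and
        strictly > somewhere."""
        n = len(F)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            dominators = (F >= F[i]).all(axis=1) & (F > F[i]).any(axis=1)
            if dominators.any():
                keep[i] = False
        return keep

    # Hypothetical evaluations of (efficiency, power, ecological
    # function, exergy efficiency) for 200 candidate designs.
    F = np.random.default_rng(1).uniform(size=(200, 4))
    front = F[pareto_mask(F)]
    print(f"{len(front)} non-dominated designs out of {len(F)}")
    ```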

  18. The Effect of Price, Location, and Lifestyle on Customer Loyalty at Starbucks Focal Point Medan

    OpenAIRE

    Purba, Emir Syahfuad

    2016-01-01

    The objective of this research is to determine the effect of price, place, and lifestyle on customer loyalty at Starbucks Focal Point Medan. This is associative explanatory research. Data were collected through a questionnaire and a literature review, and analyzed by multiple linear regression using SPSS 20.0 for Windows with 90 respondents selected by purposive sampling. The result of the multiple linear regression analysis shows that the variable of ...

  19. Validation and Clinical Utility of the hERG IC50:Cmax Ratio to Determine the Risk of Drug-Induced Torsades de Pointes: A Meta-Analysis.

    Science.gov (United States)

    Lehmann, David F; Eggleston, William D; Wang, Dongliang

    2018-03-01

    Use of the QT interval corrected for heart rate (QTc) on the electrocardiogram (ECG) to predict torsades de pointes (TdP) risk from culprit drugs is neither sensitive nor specific. The ratio of the half-maximum inhibitory concentration of the hERG channel (hERG IC50) to the peak serum concentration of unbound drug (Cmax) is used during drug development to screen out chemical entities likely to cause TdP. Our objective was to validate the use of the hERG IC50:Cmax ratio to predict TdP risk from a culprit drug by its correlation with TdP incidence. Medline (between 1966 and March 2017) was accessed for hERG IC50 and Cmax values from the antihistamine, fluoroquinolone, and antipsychotic classes and to identify cases of drug-induced TdP. Exposure to a culprit drug was estimated from annual revenues reported by the manufacturer. Inclusion criteria for TdP cases were provision of an ECG tracing that demonstrated QTc prolongation with TdP and normal serum values of potassium, calcium, and magnesium. Cases reported in patients with a prior rhythm disturbance and those involving a drug interaction were excluded. The Meta-Analysis of Observational Studies in Epidemiology checklist was used for epidemiological data extraction by two authors. Negligible-risk drugs were defined by an hERG IC50:Cmax ratio that correlated with less than a 5% chance of one TdP event for every 100 million exposures (relative risk [RR] 1.0). The hERG IC50:Cmax ratio correlated with TdP risk (0.312; 95% confidence interval 0.205-0.476, P < 0.001), with negligible risk (RR 1.0) corresponding to a ratio of 80. The RR from olanzapine is on par with loratadine; ziprasidone is comparable with ciprofloxacin. Drugs with an RR greater than 50 include astemizole, risperidone, haloperidol, and thioridazine. The hERG IC50:Cmax ratio was correlated with TdP incidence for culprit drugs. This validation provides support for the potential use of the hERG IC50:Cmax ratio for clinical decision making in instances of drug selection where TdP risk is a concern. © 2018

  20. SEMANTIC SEGMENTATION OF BUILDING ELEMENTS USING POINT CLOUD HASHING

    Directory of Open Access Journals (Sweden)

    M. Chizhova

    2018-05-01

    For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, in which point cloud projections are transformed into binary pixel representations. The segmentation approach, illustrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design), allowing clear semantic modelling.
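    One plausible reading of the hashing step, sketched below with invented details: project the point cloud onto its three axis-aligned planes, binarize each projection into a small occupancy image, and hash the packed bits so that structurally identical elements collide. A real system would more likely use a tolerant (locality-sensitive) hash, and the paper's projection and resolution choices may differ.

    ```python
    import numpy as np

    def binary_projection(points, axis=2, res=64):
        """Project a point cloud along one axis onto a res x res binary
        occupancy image -- the pixel representation to be hashed."""
        dims = [d for d in range(3) if d != axis]
        xy = points[:, dims]
        xy = (xy - xy.min(axis=0)) / np.ptp(xy, axis=0).max()  # uniform scale
        ij = np.minimum((xy * (res - 1)).astype(int), res - 1)
        img = np.zeros((res, res), dtype=np.uint8)
        img[ij[:, 1], ij[:, 0]] = 1
        return img

    def projection_hash(points):
        """Pack the three axis projections into one hashable signature."""
        bits = np.concatenate([binary_projection(points, a).ravel()
                               for a in range(3)])
        return hash(np.packbits(bits).tobytes())

    # Hypothetical use: equal hashes flag candidate matches against a
    # library of labelled structural elements (dome, nave, transept, ...).
    cloud = np.random.default_rng(0).uniform(size=(5000, 3))
    print(projection_hash(cloud))
    ```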

  1. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser

    OpenAIRE

    Hamilton, Matthew T.; Finger, John W.; Winzeler, Megan E.; Tuberville, Tracey D.

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to d...

  2. Quality of university education – starting points and objectives

    Directory of Open Access Journals (Sweden)

    Floreková Ľubica

    2002-12-01

    Quality of university education, as a service for clients (students, potential employers, and society), is presently a very important goal for universities and their faculties. International agreements (the Bologna appeal of Ministries of Education from 1998), international institutions (OECD - a list of internationally validated universities and study branches), and Slovak legislation (Act No 131/2002 on universities, Act No 132/2002 on science and technology) must be implemented in the context of the self-evaluation process of educational institutions and of the EFQM Excellence Model of the European Foundation for Quality Management. These documents allow any university to carry out an internal analysis oriented to the process and consumer approach and to the objectives, forms, content and organization of university education. The quality of education is a subsystem of the quality of the educational institution; this quality determines the competitive status of the institution on the market of postsecondary and part-time education. The quality of university education is, however, connected not only with material and information sources, but also, and especially, with the human factor. Ethos, pathos and logos, i.e. the soft factors of universities as providers of education, are a necessary part of every Alma Mater.

  3. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  4. 40 CFR 407.50 - Applicability; description of the dehydrated potato products subcategory.

    Science.gov (United States)

    2010-07-01

    ... dehydrated potato products subcategory. 407.50 Section 407.50 Protection of Environment ENVIRONMENTAL... PROCESSING POINT SOURCE CATEGORY Dehydrated Potato Products Subcategory § 407.50 Applicability; description of the dehydrated potato products subcategory. The provisions of this subpart are applicable to...

  5. Ionised Jets Associated With Massive Young Stellar Objects

    Science.gov (United States)

    Purser, Simon John Derek

    2017-09-01

    This thesis focuses on the phenomena of ionised jets associated with massive young stellar objects (MYSOs). Firstly, a study was conducted with the aim of establishing a statistical sample of such objects. Radio observations towards a sample of 49 MYSOs resulted in the detection of 28 objects classified as ionised jets. The jets' radio luminosities scaled with their MYSOs' bolometric luminosities in the same way as for low-mass examples. This suggests that the jet launching and collimation mechanisms of high-mass jets are very similar to those of their low-mass counterparts and that they are ejected for the last ≤65000 yr of the MYSO phase. Interestingly, non-thermal emission was regularly detected towards spatially distinct radio lobes (associated with ~50% of the jets), suggesting the presence of synchrotron emission and therefore magnetic fields. With an average spectral index of ᾱ = −0.55 (indicative of the first-order Fermi acceleration mechanism), it is concluded that these lobes are the result of shocks in the jets' streams. My second science chapter is a study of radio variability, precession and proper motions towards a subset of objects from the first chapter. Over a two-year time period, no significant variability was found and only one example of proper motion (1800 ± 600 km s⁻¹) was detected. Precession was found to be commonplace, however, and if it arises as the result of binary interactions, we infer orbital radii between 30 and 1800 au for the binary companions. Lastly, high-resolution VLA observations at C and Q bands were analysed to extend the known sample of MYSOs harbouring ionised jets into the northern hemisphere. Only 3 radio sources were detected possessing jet-like characteristics towards the work's sub-sample of 8 IRDCs containing 44 mm-cores (in our field of view), highlighting the radio-quiet (≳30 μJy) nature of this early phase in massive star formation. Towards the RMS survey derived sample of 48 MYSOs, a total of 38 radio sources with jet

  6. Breaking object correspondence across saccadic eye movements deteriorates object recognition

    Directory of Open Access Journals (Sweden)

    Christian H. Poth

    2015-12-01

    Visual perception is based on information processing during periods of eye fixations that are interrupted by fast saccadic eye movements. The ability to sample and relate information on task-relevant objects across fixations implies that correspondence between presaccadic and postsaccadic objects is established. Postsaccadic object information usually updates and overwrites information on the corresponding presaccadic object. The presaccadic object representation is then lost. In contrast, the presaccadic object is conserved when object correspondence is broken. This helps transsaccadic memory, but it may impose attentional costs on object recognition. Therefore, we investigated how breaking object correspondence across the saccade affects postsaccadic object recognition. In Experiment 1, object correspondence was broken by a brief postsaccadic blank screen. Observers made a saccade to a peripheral object which was displaced during the saccade. This object reappeared either immediately after the saccade or after the blank screen. Within the postsaccadic object, a letter was briefly presented (terminated by a mask). Observers reported displacement direction and letter identity in different blocks. Breaking object correspondence by blanking improved displacement identification but deteriorated postsaccadic letter recognition. In Experiment 2, object correspondence was broken by changing the object's contrast-polarity. There were no object displacements and observers only reported letter identity. Again, breaking object correspondence deteriorated postsaccadic letter recognition. These findings identify transsaccadic object correspondence as a key determinant of object recognition across the saccade. This is in line with the recent hypothesis that breaking object correspondence results in separate representations of presaccadic and postsaccadic objects which then compete for limited attentional processing resources (Schneider, 2013). Postsaccadic

  7. Detection of Xenotropic Murine Leukemia Virus-Related Virus in Prostate Biopsy Samples

    International Nuclear Information System (INIS)

    Baig, F. A.; Mirza, T.; Khanani, R.; Khan, S.

    2014-01-01

    Objective: To determine the association of Xenotropic murine leukemia virus related virus (XMRV) infection with prostate cancer and compare it with benign prostate hyperplasia. Study Design: Case control study. Place and Duration of Study: Department of Histopathology and Molecular Pathology, Dow University of Health Sciences, Karachi, from January 2009 to December 2012. Methodology: XMRV was screened in 50 prostate cancer and 50 benign prostatic hyperplasia biopsies using conventional end-point PCR. Other studied variables were family history of prostate cancer, patients age and Gleason score. Results: XMRV was detected in 4 (8%) of the 50 prostate cancer biopsy specimens compared to none in biopsies with benign prostatic hyperplasia. However, there was no significant statistical association of XMRV infection with the other variables. Conclusion: A low frequency of XMRV infection was found in this case-control study. Men, who harbor XMRV infection, may be at increased risk of prostate cancer but this needs to be investigated further at a larger scale. (author)

  8. Biosafety: degree of importance in the point of view of undergraduate dental students from Univille

    OpenAIRE

    Maria Dalva de S. SCHROEDER; Constanza MARIN; Fabio MIRI

    2010-01-01

    Introduction and objective: This study evaluated the degree of importance attributed to biosafety from the point of view of undergraduate dental students from the University of the Region of Joinville - Univille. Material and methods: The sample was composed of 142 undergraduate dental students from the first to fifth year, who were asked to sign the term of free and clarified assent in order to answer a questionnaire with 13 closed-ended and 2 open-ended questions regarding dental clinical practice and conce...

  9. Determination of ultra trace arsenic species in water samples by hydride generation atomic absorption spectrometry after cloud point extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ulusoy, Halil Ibrahim, E-mail: hiulusoy@yahoo.com [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey); Akcay, Mehmet; Ulusoy, Songuel; Guerkan, Ramazan [University of Cumhuriyet, Faculty of Science, Department of Chemistry, TR-58140, Sivas (Turkey)

    2011-10-10

    Graphical abstract: The possible complex formation mechanism for ultra-trace As determination. Highlights: → A CPE/HGAAS system for arsenic determination and speciation in real samples is applied here for the first time. → The proposed method has the lowest detection limit when compared with those of similar CPE studies in the literature. → The linear range of the method is very wide and suitable for application to real samples. - Abstract: Cloud point extraction (CPE) methodology has been successfully employed for the preconcentration of ultra-trace arsenic species in aqueous samples prior to hydride generation atomic absorption spectrometry (HGAAS). As(III) formed an ion-pairing complex with Pyronine B in the presence of sodium dodecyl sulfate (SDS) at pH 10.0 and was extracted into the non-ionic surfactant polyethylene glycol tert-octylphenyl ether (Triton X-114). After phase separation, the surfactant-rich phase was diluted with 2 mL of 1 M HCl and 0.5 mL of 3.0% (w/v) Antifoam A. Under the optimized conditions, a preconcentration factor of 60 and a detection limit of 0.008 μg L⁻¹ with a correlation coefficient of 0.9918 were obtained with a calibration curve in the range of 0.03-4.00 μg L⁻¹. The proposed preconcentration procedure was successfully applied to the determination of As(III) ions in certified standard water samples (TMDA-53.3 and NIST 1643e, a low-level fortified standard for trace elements) and some real samples, including natural drinking water and tap water samples.

  10. The Investigation of Accuracy of 3 Dimensional Models Generated From Point Clouds with Terrestrial Laser Scanning

    Science.gov (United States)

    Gumus, Kutalmis; Erkaya, Halil

    2013-04-01

    In terrestrial laser scanning (TLS) applications, it is necessary to take into consideration the conditions that affect the scanning process, especially the general characteristics of the laser scanner, the geometric properties of the scanned object (shape, size, etc.), and its spatial location in the environment. Three-dimensional models obtained with TLS allow determining the geometric features and relevant magnitudes of the scanned object in an indirect way. In order to assess the spatial location and geometric accuracy of a 3D model created by terrestrial laser scanning, it is necessary to use measurement tools that give more precise results than TLS. Geometric comparisons are performed by analyzing the differences between the distances, the angles between surfaces and the measured values taken from cross-sections, comparing the data from the 3D model created with TLS against the values measured by other measurement devices. The performance of the scanners and the size and shape of the scanned objects are tested using reference objects whose sizes are determined with high precision. In this study, the important points to consider when choosing reference objects were highlighted. The steps from processing the point clouds collected by scanning, through regularizing these points, to modeling in 3 dimensions were presented visually. In order to test the geometric correctness of the models obtained by terrestrial laser scanners, sample objects with simple geometric shapes, such as cubes, rectangular prisms and cylinders made of concrete, were used as reference models. Three-dimensional models were generated by scanning these reference models with a Trimble Mensi GS 100. The dimensions of the 3D models created from point clouds were compared with the precisely measured dimensions of the reference objects. For this purpose, horizontal and vertical cross-sections were taken from the reference objects and the generated 3D models, and the proximity of

  11. From learning objects to learning activities

    DEFF Research Database (Denmark)

    Dalsgaard, Christian

    2005-01-01

    This paper discusses and questions the current metadata standards for learning objects from a pedagogical point of view. From a social constructivist approach, the paper discusses how learning objects can support problem based, self-governed learning activities. In order to support this approach......, it is argued that it is necessary to focus on learning activities rather than on learning objects. Further, it is argued that descriptions of learning objectives and learning activities should be separated from learning objects. The paper presents a new conception of learning objects which supports problem...... based, self-governed activities. Further, a new way of thinking pedagogy into learning objects is introduced. It is argued that a lack of pedagogical thinking in learning objects is not solved through pedagogical metadata. Instead, the paper suggests the concept of references as an alternative...

  12. Image Sampling with Quasicrystals

    Directory of Open Access Journals (Sweden)

    Mark Grundland

    2009-07-01

    We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring that the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
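    As a pocket illustration of the cut-and-project construction the overview covers, the snippet below builds its one-dimensional golden-ratio analogue, the Fibonacci chain: lattice points of Z^2 whose "internal" coordinate falls inside a window are projected onto the "physical" line, yielding a non-periodic point set with exactly two spacings in ratio tau. The 2D quasicrystals used for image sampling come from the same recipe in higher dimension; the constants below are just the standard ones for this toy case.

    ```python
    import numpy as np

    TAU = (1 + np.sqrt(5)) / 2  # golden ratio

    def fibonacci_chain(n_max=30):
        """1D cut-and-project set: keep Z^2 lattice points whose internal
        coordinate lies in the acceptance window, project the survivors
        onto the physical direction."""
        norm = np.hypot(1.0, TAU)
        pts = []
        for n in range(n_max):
            for m in range(int(TAU * n) - 2, int(TAU * n) + 1):
                perp = (TAU * n - m) / norm           # internal coordinate
                if 0 <= perp < (1 + TAU) / norm:      # acceptance window
                    pts.append((TAU * m + n) / norm)  # physical coordinate
        return np.sort(np.array(pts))

    x = fibonacci_chain()
    interior = x[(x >= 0) & (x <= 40)]  # avoid truncation at the edges
    print(sorted(set(np.round(np.diff(interior), 3))))
    # -> two gap lengths whose ratio is the golden ratio
    ```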

  13. DMR50 – the first digital terrain model of Slovakia in the GCCA SR sector

    Directory of Open Access Journals (Sweden)

    Matej Klobušiak

    2005-06-01

    The digital terrain model (DTM) is a complex object of the Primary Database for the Geographic Information System (PD GIS). PD GIS is a component of the Automated Information System of Geodesy, Cartography and Cadastre. The EC initiative INSPIRE defines the DTM as one basic element of the National Spatial Data Infrastructure (NSDI). The creation of the NSDI is a task of the Action Plan of the Strategy of the Slovak Information Society. The range of the DTM vertical accuracy is described through metadata, which describe the product in a complex way. The GCCA SR will offer the metadata and the DTM product itself through its organization, the Geodetic and Cartographic Institute in Bratislava (GCI), via the Internet. For this purpose the GCI is building a web map service, the GCCA SR Geoportal, which is closely related to the NSDI concept as well as to the projects of the EuroGeographics association. The paper describes the creation of DMR50, the DTM of Slovakia with a 50 x 50 meter grid. DMR50 was created by processing the contour-line model of the Basic Map of the Slovak Republic 1:50 000. The vertical accuracy of DMR50 was tested using the set of geodetic points of the State Levelling Network. DMR50 is a suitable contribution of Slovakia to the creation of EuroGeographics or INSPIRE coordinated pan-European products.

  14. Data Quality Objectives for Regulatory Requirements for Hazardous and Radioactive Air Emissions Sampling and Analysis

    International Nuclear Information System (INIS)

    MULKEY, C.H.

    1999-01-01

    This document describes the results of the data quality objective (DQO) process undertaken to define data needs for state and federal requirements associated with toxic, hazardous, and/or radiological air emissions under the jurisdiction of the River Protection Project (RPP). Hereafter, this document is referred to as the Air DQO. The primary drivers for characterization under this DQO are the regulatory requirements, pursuant to Washington State regulations, that may require sampling and analysis. The federal regulations concerning air emissions are incorporated into the Washington State regulations. Data needs exist for nonradioactive and radioactive waste constituents and characteristics, as identified through the DQO process described in this document. The purpose is to identify current data needs for complying with regulatory drivers for the measurement of air emissions from RPP facilities in support of air permitting. These drivers include best management practices; similar analyses may have more than one regulatory driver. This document should not be used for determining overall compliance with regulations, because the regulations are in constant change and this document may not reflect the latest regulatory requirements. Regulatory requirements are also expected to change as various permits are issued. The data needs require sampling of both radionuclide and nonradionuclide analytes in air emissions from tanks and stored waste containers. The collection of data is to support environmental permitting and compliance, not health and safety issues.

  15. Properties of BL Lac objects

    International Nuclear Information System (INIS)

    Wolfe, A.M.; Pittsburgh, University, Pittsburgh, Pa.)

    1980-01-01

    The properties of BL Lacertae objects are examined in light of their recently realized similarities to quasars and associations with galactic radiation. The criteria typically used to define BL Lac objects are analyzed, with attention given to radio spectra, optical continua, radio and optical variability, and optical polarization and emission lines, and evidence that BL Lac objects and optically violent variables represent the most compact and strongly variable sources among the general class of quasars is discussed. Connections between BL Lac objects and the galaxies in which they have been observed to be embedded are discussed, and it is pointed out that no low-luminosity quasars have been found to be associated with first-ranked giant ellipticals. Future observations which may clarify the properties and relationships of BL Lac objects are indicated.

  16. RR Lyrae star distance scale and kinematics from inner bulge to 50 kpc

    Directory of Open Access Journals (Sweden)

    Dambis Andrei

    2017-01-01

    We use the currently most complete sample of ~3500 type ab RR Lyraes in our Galaxy with available radial-velocity and [Fe/H] measurements to perform a statistical-parallax analysis for a subsample of ~600 type ab RR Lyraes located within 5 kpc of the Sun, in order to refine the parameters of the optical and WISE W1-band period-metallicity-luminosity relations and adjust our preliminary distances. The new zero point implies rescaled estimates for the solar Galactocentric distance (RG = 7.99 ± 0.37 kpc) and the LMC distance modulus (DMLMC = 18.39 ± 0.09). We use the kinematic data for the entire sample to explore the dependence of the halo and thick-disk RR Lyrae velocity ellipsoids on Galactocentric distance, from the inner bulge out to R ~ 50 kpc.

  17. Hardening in AlN induced by point defects

    International Nuclear Information System (INIS)

    Suematsu, H.; Mitchell, T.E.; Iseki, T.; Yano, T.

    1991-01-01

    Pressureless-sintered AlN was neutron irradiated and the hardness change was examined by Vickers indentation. The hardness was increased by irradiation. When the samples were annealed at high temperature, the hardness gradually decreased. Length was also found to increase and to change in the same way as the hardness. A considerable density of dislocation loops still remained, even after the hardness completely recovered to the value of the unirradiated sample. Thus, it is concluded that the hardening in AlN is caused by isolated point defects and small clusters of point defects, rather than by dislocation loops. Hardness was found to increase in proportion to the length change. If the length change is assumed to be proportional to the point defect density, then the curve can be fitted qualitatively to that predicted by models of solution hardening in metals. Furthermore, the curves for three samples irradiated at different temperatures and fluences are identical. There should be different kinds of defect clusters in samples irradiated under different conditions, e.g., the fraction of single point defects is highest in the sample irradiated at the lowest temperature. Thus, the hardening is insensitive to the kind of defects remaining in the sample and is influenced only by those which contribute to the length change

  18. 42 CFR 50.603 - Definitions.

    Science.gov (United States)

    2010-10-01

    ... Responsibility of Applicants for Promoting Objectivity in Research for Which PHS Funding Is Sought § 50.603... knowledge relating broadly to public health, including behavioral and social-sciences research. The term... children over the next twelve months, are not expected to exceed $10,000. Small Business Innovation...

  19. Tagging the didactic functionality of learning objects

    DEFF Research Database (Denmark)

    Hansen, Per Skafte; Brostroem, Stig

    2002-01-01

    From a components-in-a-network point of view, the most important issues are: a didactically based typing of the learning objects themselves; the entire design superstructure, into which the learning objects must be fitted; and the symmetry of the interfaces, as seen by each pair of the triad...

  20. Health disadvantage in US adults aged 50 to 74 years: A comparison of the health of rich and poor Americans with that of Europeans

    NARCIS (Netherlands)

    M. Avendano Pabon (Mauricio); M.M. Glymour (Maria); J. Banks (James); J.P. Mackenbach (Johan)

    2009-01-01

    Objectives. We compared the health of older US, English, and other European adults, stratified by wealth. Methods. Representative samples of adults aged 50 to 74 years were interviewed in 2004 in 10 European countries (n=17481), England (n=6527), and the United States (n=9940). We

  1. Analysis of bioethanol samples through Inductively Coupled Plasma Mass Spectrometry with a total sample consumption system

    Science.gov (United States)

    Sánchez, Carlos; Lienemann, Charles-Philippe; Todolí, Jose-Luis

    2016-10-01

    Real bioethanol samples have been directly analyzed by ICP-MS by means of the so-called High Temperature Torch Integrated Sample Introduction System (hTISIS). Because bioethanol samples may contain water, experiments were carried out to determine the effect of the ethanol concentration on the ICP-MS response. The ethanol content studied went from 0 to 50%, because higher alcohol concentrations led to carbon deposits on the ICP-MS interface. The spectrometer's default spray chamber (double pass), equipped with a glass concentric pneumatic micronebulizer, was taken as the reference system. Two flow regimes were evaluated: continuous sample aspiration at 25 μL min⁻¹ and 5 μL air-segmented sample injection. The hTISIS temperature proved to be critical; in fact, ICP-MS sensitivity increased with this variable up to 100-200 °C, depending on the solution tested. Higher chamber temperatures led to either a drop in signal or a plateau. Compared with the reference system, the hTISIS improved the sensitivities by a factor within the 4 to 8 range, while average detection limits were 6 times lower for the latter device. Regarding the influence of the ethanol concentration on sensitivity, it was observed that an increase in temperature was not enough to eliminate the interferences. It was also necessary to modify the torch position with respect to the ICP-MS interface to overcome them. This fact was likely due to the different extent of ion plasma radial diffusion encountered as a function of the matrix when working at high chamber temperatures. When the torch was moved 1 mm down the plasma axis, ethanolic and aqueous solutions provided statistically equal sensitivities. A preconcentration procedure was applied in order to validate the methodology. It was found that, under conditions that are optimal from the point of view of matrix effects, recoveries for spiked samples were close to 100%. Furthermore, analytical concentrations for real

  2. Fashion Objects

    DEFF Research Database (Denmark)

    Andersen, Bjørn Schiermer

    2009-01-01

    This article attempts to create a framework for understanding modern fashion phenomena on the basis of Durkheim's sociology of religion. It focuses on Durkheim's conception of the relation between the cult and the sacred object, on his notion of 'exteriorisation', and on his theory of the social symbol, in an attempt to describe the peculiar attraction of the fashion object and its social constitution. However, Durkheim's notions of cult and ritual must undergo profound changes if they are to be used in an analysis of fashion. The article tries to expand the Durkheimian cult, radically enlarging...... -- an outline which at the same time indicates the need for transformations of the Durkheimian model on decisive points. Thus, thirdly, it returns to Durkheim and undertakes to develop his concepts in a direction suitable for a sociological theory of fashion. Finally, it discusses the theoretical implications......

  3. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that the required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
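    The simplest special case reduces to a textbook Beta-binomial calculation, sketched below: one group, a uniform prior on the unacceptable fraction, and n randomly sampled items that all turned out acceptable. The paper's full model layers the judgmental high-risk group and a finite population on top of this; the numbers here are only illustrative.

    ```python
    from scipy import stats

    a, b = 1.0, 1.0   # Beta prior on the unacceptable fraction (assumption)
    n = 59            # items sampled, all found acceptable
    p_star = 0.95     # required acceptable fraction

    # n acceptable observations update Beta(a, b) to Beta(a, b + n) on the
    # unacceptable fraction; P(acceptable fraction >= p_star) is its CDF
    # evaluated at 1 - p_star.
    posterior = stats.beta(a, b + n)
    print(f"P(at least {p_star:.0%} acceptable) = "
          f"{posterior.cdf(1 - p_star):.3f}")   # ~0.954 for these inputs
    ```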

  4. What predicts inattention in adolescents? An experience-sampling study comparing chronotype, subjective, and objective sleep parameters.

    Science.gov (United States)

    Hennig, Timo; Krkovic, Katarina; Lincoln, Tania M

    2017-10-01

    Many adolescents sleep insufficiently, which may negatively affect their functioning during the day. To improve sleep interventions, we need a better understanding of the specific sleep-related parameters that predict poor functioning. We investigated to which extent subjective and objective parameters of sleep in the preceding night (state parameters) and the trait variable chronotype predict daytime inattention as an indicator of poor functioning. We conducted an experience-sampling study over one week with 61 adolescents (30 girls, 31 boys; mean age = 15.5 years, standard deviation = 1.1 years). Participants rated their inattention two times each day (morning, afternoon) on a smartphone. Subjective sleep parameters (feeling rested, positive affect upon awakening) were assessed each morning on the smartphone. Objective sleep parameters (total sleep time, sleep efficiency, wake after sleep onset) were assessed with a permanently worn actigraph. Chronotype was assessed with a self-rated questionnaire at baseline. We tested the effect of subjective and objective state parameters of sleep on daytime inattention, using multilevel multiple regressions. Then, we tested whether the putative effect of the trait parameter chronotype on inattention is mediated through state sleep parameters, again using multilevel regressions. We found that short sleep time, but no other state sleep parameter, predicted inattention to a small effect. As expected, the trait parameter chronotype also predicted inattention: morningness was associated with less inattention. However, this association was not mediated by state sleep parameters. Our results indicate that short sleep time causes inattention in adolescents. Extended sleep time might thus alleviate inattention to some extent. However, it cannot alleviate the effect of being an 'owl'. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Using Item Analysis to Assess Objectively the Quality of the Calgary-Cambridge OSCE Checklist

    Directory of Open Access Journals (Sweden)

    Tyrone Donnon

    2011-06-01

    Full Text Available Background:  The purpose of this study was to investigate the use of item analysis to assess objectively the quality of items on the Calgary-Cambridge Communications OSCE checklist. Methods:  A total of 150 first year medical students were provided with extensive teaching on the use of the Calgary-Cambridge Guidelines for interviewing patients and participated in a final year end 20 minute communication OSCE station.  Grouped into either the upper half (50% or lower half (50% communication skills performance groups, discrimination, difficulty and point biserial values were calculated for each checklist item. Results:  The mean score on the 33 item communication checklist was 24.09 (SD = 4.46 and the internal reliability coefficient was ? = 0.77. Although most of the items were found to have moderate (k = 12, 36% or excellent (k = 10, 30% discrimination values, there were 6 (18% identified as ‘fair’ and 3 (9% as ‘poor’. A post-examination review focused on item analysis findings resulted in an increase in checklist reliability (? = 0.80. Conclusions:  Item analysis has been used with MCQ exams extensively. In this study, it was also found to be an objective and practical approach to use in evaluating the quality of a standardized OSCE checklist.

  6. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim

    2017-01-01

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
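    The production tooling for this model class (R-INLA with an SPDE field) lives in R; purely to illustrate the generative side of a log-Gaussian Cox process, the sketch below simulates one on a grid in Python, with smoothed white noise standing in for the SPDE field and an invented negative sea-surface-temperature effect. None of the parameters come from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 64

    # Fake SST covariate increasing west to east, and a smooth latent field
    # built by separably convolving white noise with a Gaussian kernel.
    sst = np.linspace(18, 30, n)[None, :] * np.ones((n, 1))
    kern = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
    kern /= kern.sum()
    z = rng.normal(size=(n, n))
    gp = np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 0, z)
    gp = np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 1, gp)

    beta_sst = -0.15  # colder water -> higher intensity, as the paper found
    log_lam = 1.0 + beta_sst * (sst - sst.mean()) + 3.0 * gp
    counts = rng.poisson(np.exp(log_lam))  # points per cell; thinning by a
                                           # detection probability would
                                           # multiply the intensity
    print(counts.sum(), "simulated sightings")
    ```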

  7. The role of the interface on the magnetic behaviour of granular Fe50Ag50 film

    International Nuclear Information System (INIS)

    Fdez-Gubieda, M.L.; Sarmiento, G.; Fernandez Barquin, L.; Orue, I.

    2007-01-01

    The magnetic behaviour of a Fe50Ag50 granular thin film has been studied by means of AC and DC magnetic measurements. Exchange coupling between magnetic nanoparticles appears at T ≤ 200 K, decreasing the coercive field of the sample. Additionally, an exchange bias is observed at low temperature, related to the existence of a spin-disordered interface around the nanoparticles

  8. Cloud Point Extraction and Determination of Silver Ion in Real Samples using Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane

    Directory of Open Access Journals (Sweden)

    Farshid Ahmadi

    2011-01-01

    Bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane (BHIS) was used as a complexing agent in cloud point extraction for the first time and applied for the selective pre-concentration of trace amounts of silver. The method is based on the extraction of silver at pH 8.0 using the non-ionic surfactant Triton X-114 and bis((1H-benzo[d]imidazol-2-yl)methyl)sulfane as a chelating agent. The concentrations of BHIS, Triton X-114 and HNO3, the bath temperature, and the centrifuge rate and time were optimized. A detection limit (3SDb/m) of 1.7 and an enrichment factor of 39 were achieved for silver ion. The high efficiency of cloud point extraction for the determination of analytes in complex matrices was demonstrated. The proposed method was successfully applied to the ultra-trace determination of silver in real samples.

  9. The effects of training based on BASNEF model and acupressure at GB21 point on the infants’ physical growth indicators

    Directory of Open Access Journals (Sweden)

    Marzieh Akbarzadeh

    2014-08-01

    Objective: Educational models are used to study behavior and to plan for changing and determining the factors that affect individuals’ decision making to perform a behavior. This study aimed to compare the effects of an educational program based on the BASNEF model and acupressure at the GB21 point on infants’ physical growth indicators. Methods: This clinical trial was conducted on 150 (50 per group) pregnant women in 2011-2012. The interventions included an educational program based on the BASNEF model and the application of acupressure at the GB21 point. The infants’ physical indicators were compared to the control group one and three months after birth. The study data were analyzed using the repeated measures test, paired-sample t-test, one-way ANOVA, and Tukey’s test. Findings: The results showed a significant difference between the intervention and control groups regarding the infants’ weight and height one and three months after birth (p<0.05). Also, no significant difference was observed among the three groups concerning the infants’ head and arm circumference (P>0.05). Conclusion: The BASNEF model improved the infants’ height and weight. Application of acupressure also improved the infants’ height, weight, and head and arm circumference compared to the control group. Hence, learning and application of these techniques and models by the medical team are highly essential.

  10. 50 CFR 600.305 - General.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false General. 600.305 Section 600.305 Wildlife..., Councils balance biological constraints with human needs, reconcile present and future costs and benefits, and integrate the diversity of public and private interests. If objectives are in conflict, priorities...

  11. The MUSIC algorithm for sparse objects: a compressed sensing analysis

    International Nuclear Information System (INIS)

    Fannjiang, Albert C

    2011-01-01

    The multiple signal classification (MUSIC) algorithm, and its extension for imaging sparse extended objects, with noisy data is analyzed by compressed sensing (CS) techniques. A thresholding rule is developed to augment the standard MUSIC algorithm. The notion of restricted isometry property (RIP) and an upper bound on the restricted isometry constant (RIC) are employed to establish sufficient conditions for exact localization by MUSIC with or without noise. In the noiseless case, the sufficient condition gives an upper bound on the numbers of random sampling and incident directions necessary for exact localization. In the noisy case, the sufficient condition additionally assumes an upper bound for the noise-to-object ratio in terms of the RIC and the dynamic range of objects. This bound points to the super-resolution capability of the MUSIC algorithm. A rigorous comparison of performance between MUSIC and the CS minimization principle, basis pursuit denoising (BPDN), is given. In general, the MUSIC algorithm guarantees to recover, with high probability, s scatterers with n = O(s²) random sampling and incident directions and sufficiently high frequency. For the favorable imaging geometry where the scatterers are distributed on a transverse plane, MUSIC guarantees to recover, with high probability, s scatterers with a median frequency and n = O(s) random sampling/incident directions. Moreover, for the problems of spectral estimation and source localization, both BPDN and MUSIC guarantee, with high probability, to identify exactly the frequencies of random signals with n = O(s) sampling times. However, in the absence of abundant realizations of signals, BPDN is the preferred method for spectral estimation. Indeed, BPDN can identify the frequencies approximately with just one realization of signals, with the recovery error at worst linearly proportional to the noise level. Numerical results confirm that BPDN outperforms MUSIC in the well-resolved case while
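
    For the spectral-estimation setting mentioned above (identifying the frequencies of random signals from n sampling times), the following Python sketch builds a sample covariance from repeated realizations, extracts the noise subspace, and reads the frequencies off the MUSIC pseudospectrum; the frequencies, noise level and number of realizations are all illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      n, s = 64, 2                            # sampling times, number of frequencies
      freqs = np.array([0.12, 0.31])          # true normalized frequencies
      t = np.arange(n)

      # Sample covariance from repeated realizations with random amplitudes
      R = np.zeros((n, n), dtype=complex)
      for _ in range(200):
          amps = rng.normal(size=s) + 1j * rng.normal(size=s)
          x = (np.exp(2j * np.pi * np.outer(t, freqs)) @ amps
               + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n)))
          R += np.outer(x, x.conj())
      R /= 200

      # Noise subspace: eigenvectors orthogonal to the s-dimensional signal subspace
      _, V = np.linalg.eigh(R)                # eigenvalues in ascending order
      En = V[:, :-s]

      # Pseudospectrum peaks where steering vectors are orthogonal to noise subspace
      grid = np.linspace(0, 0.5, 2001)
      A = np.exp(2j * np.pi * np.outer(t, grid))
      P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0)**2
      pk = (P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]) & (P[1:-1] > 10 * P.mean())
      print(grid[1:-1][pk])                   # close to 0.12 and 0.31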

  12. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We consider single trees in the ALS-recovered canopy height model (CHM) as a realization of a point process of circles. Unlike the traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term, which judges the fitness of the model with respect to the data, and a prior term, which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and experiments show the effectiveness of the proposed method.
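
    As a rough illustration of the energy terms, the Python sketch below scores candidate circles on a synthetic CHM with a data term (canopy height inside the disc versus a surrounding ring) plus an overlap prior, and grows the configuration by greedy, energy-decreasing birth moves. This greedy descent is a simplified stand-in for the constrained sampling and steepest gradient scheme of the paper, and all thresholds are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic canopy height model: three Gaussian crowns on a 100 x 100 grid
      yy, xx = np.mgrid[0:100, 0:100]
      crowns = [(25, 30, 6), (60, 62, 8), (40, 75, 5)]          # (cx, cy, radius)
      chm = sum(12 * np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * r**2))
                for cx, cy, r in crowns) + rng.normal(0, 0.3, (100, 100))

      def data_term(cx, cy, r):
          # Reward (negative energy) for discs covering canopy higher than ring
          d2 = (xx - cx)**2 + (yy - cy)**2
          inside, ring = d2 <= r**2, (d2 > r**2) & (d2 <= (1.5 * r)**2)
          return -(chm[inside].mean() - chm[ring].mean())

      def prior_term(c, config):
          # Hard-core-like prior: heavy penalty for overlapping circles
          return sum(50.0 for x2, y2, r2 in config
                     if np.hypot(c[0] - x2, c[1] - y2) < c[2] + r2)

      # Greedy 'birth' moves: keep a candidate only if it lowers the Gibbs energy
      config = []
      for _ in range(2000):
          cand = (rng.uniform(5, 95), rng.uniform(5, 95), rng.uniform(4, 10))
          if data_term(*cand) + prior_term(cand, config) < -2.0:
              config.append(cand)
      print([tuple(round(v, 1) for v in c) for c in config])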

  13. Nearly 1000 Protein Identifications from 50 ng of Xenopus laevis Zygote Homogenate Using Online Sample Preparation on a Strong Cation Exchange Monolith Based Microreactor Coupled with Capillary Zone Electrophoresis.

    Science.gov (United States)

    Zhang, Zhenbin; Sun, Liangliang; Zhu, Guijie; Cox, Olivia F; Huber, Paul W; Dovichi, Norman J

    2016-01-05

    A sulfonate-silica hybrid strong cation exchange monolith microreactor was synthesized and coupled to a linear polyacrylamide coated capillary for online sample preparation and capillary zone electrophoresis-tandem mass spectrometry (CZE-MS/MS) bottom-up proteomic analysis. The protein sample was loaded onto the microreactor in an acidic buffer. After online reduction, alkylation, and digestion with trypsin, the digests were eluted with 200 mM ammonium bicarbonate at pH 8.2 for CZE-MS/MS analysis using 1 M acetic acid as the background electrolyte. This combination of basic elution and acidic background electrolytes results in both sample stacking and formation of a dynamic pH junction. 369 protein groups and 1274 peptides were identified from 50 ng of Xenopus laevis zygote homogenate, which is comparable with an offline sample preparation method, but the time required for sample preparation was decreased from over 24 h to less than 40 min. Dramatically improved performance was produced by coupling the reactor to a longer separation capillary (∼100 cm) and a Q Exactive HF mass spectrometer. 975 protein groups and 3749 peptides were identified from 50 ng of Xenopus protein using the online sample preparation method.

  14. Microstructure and Mechanical Behavior of Microwave Sintered Cu50Ti50 Amorphous Alloy Reinforced Al Metal Matrix Composites

    Science.gov (United States)

    Reddy, M. Penchal; Ubaid, F.; Shakoor, R. A.; Mohamed, A. M. A.

    2018-06-01

    In the present work, Al metal matrix composites reinforced with Cu-based (Cu50Ti50) amorphous alloy particles synthesized by ball milling followed by a microwave sintering process were studied. The amorphous powders of Cu50Ti50 produced by ball milling were used to reinforce the aluminum matrix. They were examined by x-ray diffraction (XRD), scanning electron microscopy (SEM), microhardness and compression testing. The analysis of XRD patterns of the samples containing 5 vol.%, 10 vol.% and 15 vol.% Cu50Ti50 indicates the presence of Al and Cu50Ti50 peaks. SEM images of the sintered composites show the uniform distribution of reinforced particles within the matrix. Mechanical properties of the composites were found to increase with an increasing volume fraction of Cu50Ti50 reinforcement particles. The hardness and compressive strength were enhanced to 89 Hv and 449 MPa, respectively, for the Al-15 vol.% Cu50Ti50 composites.

  15. Evaluation of solid sampling high-resolution continuum source graphite furnace atomic absorption spectrometry for direct determination of chromium in medicinal plants

    Energy Technology Data Exchange (ETDEWEB)

    Virgilio, Alex; Nobrega, Joaquim A. [Department of Chemistry, Federal University of Sao Carlos, Post Office Box 676, 13560-970, Sao Carlos-SP (Brazil); Rego, Jardes F. [Department of Analytical Chemistry, Institute of Chemistry, Sao Paulo State University-UNESP, Post Office Box 355, 14801-970, Araraquara-SP (Brazil); Neto, Jose A. Gomes, E-mail: anchieta@iq.unesp.br [Department of Analytical Chemistry, Institute of Chemistry, Sao Paulo State University-UNESP, Post Office Box 355, 14801-970, Araraquara-SP (Brazil)

    2012-12-01

    A method for Cr determination in medicinal plants using direct solid sampling graphite furnace high-resolution continuum source atomic absorption spectrometry was developed. Modifiers were dispensable. Pyrolysis and atomization temperatures were 1500 °C and 2400 °C, respectively. Slopes of calibration curves (50-750 pg Cr, R² > 0.999) using aqueous and solid standards coincided at 96%, indicating the feasibility of aqueous calibration for solid sampling of medicinal plants. Accuracy was checked by analysis of four plant certified reference materials. Results were in agreement at the 95% confidence level with certified and non-certified values. Ten samples of medicinal plants were analyzed and Cr contents were in the 1.3-17.7 μg g⁻¹ Cr range. The highest RSD (n = 5) was 15.4% for the sample Melissa officinalis containing 13.9 ± 2.1 μg g⁻¹ Cr. The limit of detection was 3.3 ng g⁻¹ Cr. - Highlights: ► Direct solid sampling is employed for the first time for Cr in plant materials. ► Calibration curves with liquids and solids are coincident. ► Microanalysis of plants for Cr is validated by reference materials. ► The proposed HR-CS GF AAS method is environmentally friendly.

  16. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection.

    Science.gov (United States)

    Heydari, Rouhollah; Hosseini, Mohammad; Zarabi, Sanaz

    2015-01-01

    In this paper, a simple and cost-effective method was developed for the extraction and pre-concentration of carmine in food samples by using cloud point extraction (CPE) prior to its spectrophotometric determination. Carmine was extracted from aqueous solution using Triton X-100 as the extraction solvent. The effects of the main parameters, such as solution pH, surfactant and salt concentrations, and incubation time and temperature, were investigated and optimized. The calibration graph was linear in the range of 0.04-5.0 μg mL(-1) of carmine in the initial solution, with a regression coefficient of 0.9995. The limit of detection (LOD) and limit of quantification were 0.012 and 0.04 μg mL(-1), respectively. The relative standard deviation (RSD) at a low concentration level (0.05 μg mL(-1)) of carmine was 4.8% (n=7). Recovery values at different concentration levels were in the range of 93.7-105.8%. The obtained results demonstrate that the proposed method can be applied satisfactorily to determine carmine in food samples. Copyright © 2015 Elsevier B.V. All rights reserved.
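
    The reported figures of merit follow the usual calibration arithmetic (LOD = 3·SD(blank)/slope, LOQ = 10·SD(blank)/slope), as in the minimal numpy sketch below; the absorbance values and the blank standard deviation are invented, chosen only to reproduce figures of the same order as those reported.

      import numpy as np

      # Hypothetical calibration data: absorbance of carmine standards (ug/mL)
      conc = np.array([0.04, 0.1, 0.5, 1.0, 2.0, 5.0])
      absb = np.array([0.006, 0.015, 0.074, 0.149, 0.301, 0.748])

      slope, intercept = np.polyfit(conc, absb, 1)
      r = np.corrcoef(conc, absb)[0, 1]

      sd_blank = 0.0006                  # assumed SD of replicate blank readings
      lod = 3 * sd_blank / slope         # LOD = 3  * SD(blank) / slope
      loq = 10 * sd_blank / slope        # LOQ = 10 * SD(blank) / slope
      print(f"slope={slope:.4f}, r={r:.4f}, LOD={lod:.3f}, LOQ={loq:.3f} ug/mL")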

  17. Neighborhood sampling: how many streets must an auditor walk?

    Science.gov (United States)

    McMillan, Tracy E; Cubbin, Catherine; Parmenter, Barbara; Medina, Ashley V; Lee, Rebecca E

    2010-03-12

    This study tested the representativeness of four street segment sampling protocols using the Pedestrian Environment Data Scan (PEDS) in eleven neighborhoods surrounding public housing developments in Houston, TX. The following four street segment sampling protocols were used: (1) all segments, both residential and arterial, contained within the 400 meter radius buffer from the center point of the housing development (the core) were compared with all segments contained between the 400 meter radius buffer and the 800 meter radius buffer (the ring); all residential segments in the core were compared with (2) 75%, (3) 50% and (4) 25% samples of randomly selected residential street segments in the core. Analyses were conducted on five key variables: sidewalk presence; ratings of attractiveness and safety for walking; connectivity; and number of traffic lanes. Some differences were found when comparing all street segments, both residential and arterial, in the core to the ring. Findings suggested that sampling 25% of residential street segments within the 400 m radius of a residence sufficiently represents the pedestrian built environment. Conclusions support more cost effective environmental data collection for physical activity research.
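
    The protocol comparison amounts to simple random sampling of audited segments, as in the Python sketch below with invented audit data: the share of segments with sidewalks estimated from 75%, 50% and 25% samples is compared against the full core.

      import random

      random.seed(3)

      # Hypothetical audit of a core: 1 = segment has a sidewalk, 0 = it does not
      core = [random.random() < 0.6 for _ in range(120)]
      full_share = sum(core) / len(core)

      for frac in (0.75, 0.50, 0.25):
          k = round(frac * len(core))
          sample = random.sample(core, k)
          print(f"{int(frac * 100)}% sample: sidewalk share {sum(sample) / k:.2f} "
                f"(full core {full_share:.2f})")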

  18. Modeling Spatial Data within Object-Relational Databases

    Directory of Open Access Journals (Sweden)

    Iuliana BOTHA

    2011-03-01

    Spatial data can refer to elements that help place a certain object in a certain area. These elements are latitude, longitude, points, geometric figures represented by points, etc. However, when translating these elements into data that can be stored in a computer, it all comes down to numbers. The interesting part, which requires attention, is how to store them in order to support fast and varied spatial queries. This is where the DBMS (Database Management System) that contains the database comes in. In this paper, we analyzed and compared two object-relational DBMSs that work with spatial data: Oracle and PostgreSQL.

  19. IfcWall Reconstruction from Unstructured Point Clouds

    Science.gov (United States)

    Bassier, M.; Klein, R.; Van Genechten, B.; Vergauwen, M.

    2018-05-01

    The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data. In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by the intersection of the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the proposed method is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the implementation of room information reduces the rate of false positives for the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.

  1. Comparative Study by MS and XRD of Fe50Al50 Alloys Produced by Mechanical Alloying, Using Different Ball Mills

    Energy Technology Data Exchange (ETDEWEB)

    Rojas Martinez, Y., E-mail: yarojas@ut.edu.co [University of Tolima, Department of Physics (Colombia); Perez Alcazar, G. A. [University of Valle, Department of Physics (Colombia); Bustos Rodriguez, H.; Oyola Lozano, D., E-mail: doyolalozano@yahoo.com.mx [University of Tolima, Department of Physics (Colombia)

    2005-02-15

    In this work we report a comparative study of the magnetic and structural properties of Fe50Al50 alloys produced by mechanical alloying using two different planetary ball mills with the same ball-mass-to-powder-mass ratio. The Fe50Al50 sample milled for 48 h using the Fritsch planetary ball mill Pulverisette 5 and balls of 20 mm presents only a bcc alloy phase with a majority of paramagnetic sites, whereas the sample milled for the same time using the Fritsch planetary ball mill Pulverisette 7 with balls of 15 mm presents a bcc alloy phase with a paramagnetic site (doublet) and a majority of ferromagnetic sites, which include pure Fe. However, after 72 h of milling this sample presents a bcc paramagnetic phase, very similar to that prepared with the first system during 48 h. These results show that the conditions used in the first ball mill make the milling process more efficient.

  2. A New Cryogenic Sample Manipulator For SRC's Scienta 2002 System

    International Nuclear Information System (INIS)

    Gundelach, Chad T.; Fisher, Mike V.; Hoechst, Hartmut

    2004-01-01

    We discuss the first bench tests of a sample manipulator which was recently designed at SRC for the Scienta 2002 User system. The manipulator concept utilizes the 10 deg. angular window of the Scienta in the horizontal plane (angle dispersion) by rotating the sample normal around the vertical axis while angular scans along the vertical axis (energy dispersion) are continuous within ±30 deg. relative to the electron lens by rotating the sample around the horizontal axis. With this concept it is possible to precisely map the entire two-dimensional k-space of a crystal by means of stitching together 10 deg. wide stripes centered +15 deg. to -50 deg. relative to the sample normal. Three degrees of translational freedom allow positioning the sample surface at the focal point of the analyzer. Two degrees of rotational freedom are available at this position for manipulating the sample. Samples are mounted to a standard holder and transferred to the manipulator via a load-lock system attached to a prep chamber. The manipulator is configured with a cryogenic cold head, an electrical heater, and a temperature sensor permitting continuous closed-loop operation for 20-380 K

  3. Calculation of absorbed dose and biological effectiveness from photonuclear reactions in a bremsstrahlung beam of end point 50 MeV.

    Science.gov (United States)

    Gudowska, I; Brahme, A; Andreo, P; Gudowski, W; Kierkegaard, J

    1999-09-01

    The absorbed dose due to photonuclear reactions in soft tissue, lung, breast, adipose tissue and cortical bone has been evaluated for a scanned bremsstrahlung beam of end point 50 MeV from a racetrack accelerator. The Monte Carlo code MCNP4B was used to determine the photon source spectrum from the bremsstrahlung target and to simulate the transport of photons through the treatment head and the patient. Photonuclear particle production in tissue was calculated numerically using the energy distributions of photons derived from the Monte Carlo simulations. The transport of photoneutrons in the patient and the photoneutron absorbed dose to tissue were determined using MCNP4B; the absorbed dose due to charged photonuclear particles was calculated numerically assuming total energy absorption in tissue voxels of 1 cm3. The photonuclear absorbed dose to soft tissue, lung, breast and adipose tissue is about (0.11-0.12)+/-0.05% of the maximum photon dose at a depth of 5.5 cm. The absorbed dose to cortical bone is about 45% larger than that to soft tissue. If the contributions from all photoparticles (n, p, 3He and 4He particles and recoils of the residual nuclei) produced in the soft tissue and the accelerator, and from positron radiation and gammas due to induced radioactivity and excited states of the nuclei, are taken into account the total photonuclear absorbed dose delivered to soft tissue is about 0.15+/-0.08% of the maximum photon dose. It has been estimated that the RBE of the photon beam of 50 MV acceleration potential is approximately 2% higher than that of conventional 60Co radiation.

  4. Vehicle Localization by LIDAR Point Correlation Improved by Change Detection

    Science.gov (United States)

    Schlichting, A.; Brenner, C.

    2016-06-01

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every containing point as static. In the next step, we merge the data of the single measurement epochs to one reference dataset, whereby we only use static points. Further, we also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertical aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. The localization then can be done by a point to image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.

  5. Reviving common standards in point-count surveys for broad inference across studies

    Science.gov (United States)

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and

  6. Urban birds in the Sonoran Desert: estimating population density from point counts

    Directory of Open Access Journals (Sweden)

    Karina Johnston López

    2015-01-01

    We conducted bird surveys in Hermosillo, Sonora, using distance sampling to characterize detection functions at point transects for native and non-native urban birds in a desert environment. From March to August 2013 we sampled 240 plots in the city and its surroundings; each plot was visited three times. Our purpose was to provide information for a rapid assessment of bird density in this region by using point counts. We identified 72 species, including six non-native species. Sixteen species had sufficient detections to accurately estimate the parameters of the detection functions. To illustrate the estimation of density from bird count data using our inferred detection functions, we estimated the density of the Eurasian Collared-Dove (Streptopelia decaocto) under two different levels of urbanization: highly urbanized (90-100% urban impact) and moderately urbanized zones (39-50% urban impact). Density of S. decaocto in the highly urbanized and moderately urbanized zones was 3.97±0.52 and 2.92±0.52 individuals/ha, respectively. By using our detection functions, avian ecologists can efficiently reallocate the time and effort regularly spent on the estimation of detection distances, in order to increase the number of sites surveyed and to collect other relevant ecological information.
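
    A minimal version of the point-transect estimation can be written directly: with a half-normal detection function g(r) = exp(-r^2/(2 sigma^2)), detection distances have density proportional to r*g(r), sigma can be fitted by maximum likelihood, and density follows from the effective detection area. The Python sketch below uses simulated counts with invented parameter values, not the Hermosillo data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Birds placed uniformly within w = 50 m of k = 240 points, D birds per m^2
      k, w, D, sigma_true = 240, 50.0, 4.0 / 1e4, 18.0
      n_birds = rng.poisson(D * k * np.pi * w**2)
      r = w * np.sqrt(rng.uniform(size=n_birds))        # distance density 2r/w^2
      r_det = r[rng.uniform(size=n_birds) < np.exp(-r**2 / (2 * sigma_true**2))]

      # Grid-search MLE for sigma: f(r) ~ r*exp(-r^2/(2 sigma^2)) on [0, w]
      sig = np.linspace(5, 40, 351)
      norm = sig**2 * (1 - np.exp(-w**2 / (2 * sig**2)))  # integral of r g(r) dr
      ll = [np.sum(-r_det**2 / (2 * s**2)) - r_det.size * np.log(m)
            for s, m in zip(sig, norm)]
      sigma_hat = sig[np.argmax(ll)]

      # Density = detections / (points * effective detection area)
      nu = 2 * np.pi * sigma_hat**2 * (1 - np.exp(-w**2 / (2 * sigma_hat**2)))
      print(f"sigma = {sigma_hat:.1f} m, "
            f"density = {1e4 * r_det.size / (k * nu):.2f} birds/ha")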

  7. Estimated neutron-activation data for TFTR. Part II. Biological dose rate from sample-materials activation

    International Nuclear Information System (INIS)

    Ku, L.; Kolibal, J.G.

    1982-06-01

    The neutron-induced material activation dose rate data are summarized for TFTR operation. This report marks the completion of the second phase of the systematic study of the activation problem on the TFTR. Estimates of the neutron-induced activation dose rates were made for spherical and slab objects, based on a point kernel method, for a wide range of materials. The dose rates as a function of cooling time for standard samples are presented for a number of typical neutron spectra expected during TFTR DD and DT operations. The factors which account for the variations of the pulsing history, the characteristic size of the object and the distance of observation relative to the standard samples are also presented

  8. Application of Micro-cloud point extraction for spectrophotometric determination of Malachite green, Crystal violet and Rhodamine B in aqueous samples

    Science.gov (United States)

    Ghasemi, Elham; Kaykhaii, Massoud

    2016-07-01

    A novel, green, simple and fast method was developed for the spectrophotometric determination of Malachite Green, Crystal Violet, and Rhodamine B in water samples, based on micro-cloud point extraction (MCPE) at room temperature. This is the first report on the application of MCPE to dyes. In this method, to reach the cloud point at room temperature, the MCPE procedure was carried out in brine using Triton X-114 as a non-ionic surfactant. The factors influencing the extraction efficiency were investigated and optimized. Under the optimized conditions, calibration curves were found to be linear in the concentration ranges of 0.06-0.60 mg/L, 0.10-0.80 mg/L, and 0.03-0.30 mg/L, with enrichment factors of 29.26, 85.47 and 28.36, respectively, for Malachite Green, Crystal Violet, and Rhodamine B. Limits of detection were between 2.2 and 5.1 μg/L.

  9. 75 FR 33995 - Safety Zone; Michigan Orthopaedic Society 50th Anniversary Fireworks, Lake Huron, Mackinac Island...

    Science.gov (United States)

    2010-06-16

    ... yards south of Biddle Point, at position 45°50'32.82'' N., 084°37'03.18'' W: [DATUM: NAD 83... fireworks launch site, approximately 460 yards south of Biddle Point, at position 45°50'32.82'' N, 084...

  10. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    The article shows the relevance of object-oriented programming approaches to modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system units, two basic objects, namely a flow cavity and a material point, were highlighted. The basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented in object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e., the calculation of one coefficient of the method. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' proposed method features easy enhancement, code reuse, and high reliability
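
    The integration scheme described above can be sketched as follows: objects derived from a base class expose their state derivatives, are held in a list, and a driver polls each object for the four Runge-Kutta coefficients of a step. The Python sketch below uses a single material point (a mass on a spring standing in for a piston) with invented parameters; cavity objects and their interactions are omitted.

      import math

      class Unit:
          """Base object: holds a state vector y and exposes its derivatives."""
          def __init__(self, y):
              self.y = list(y)
              self.shutdown = False
          def rates(self, t, y):
              raise NotImplementedError

      class MaterialPoint(Unit):
          """Mass on a spring, y = [position, velocity]."""
          def __init__(self, x0, v0, m=1.0, c=40.0):
              super().__init__([x0, v0])
              self.m, self.c = m, c
          def rates(self, t, y):
              x, v = y
              return [v, -self.c * x / self.m]

      def rk4_step(units, t, h):
          """One step = four 'tacts'; each tact asks an object for one coefficient."""
          for u in units:
              k1 = u.rates(t, u.y)
              k2 = u.rates(t + h / 2, [y + h / 2 * k for y, k in zip(u.y, k1)])
              k3 = u.rates(t + h / 2, [y + h / 2 * k for y, k in zip(u.y, k2)])
              k4 = u.rates(t + h, [y + h * k for y, k in zip(u.y, k3)])
              u.y = [y + h / 6 * (a + 2 * b + 2 * c + d)
                     for y, a, b, c, d in zip(u.y, k1, k2, k3, k4)]

      units, t, h = [MaterialPoint(0.05, 0.0)], 0.0, 1e-3
      while t < 1.0 and not any(u.shutdown for u in units):
          rk4_step(units, t, h)
          t += h
      print(units[0].y[0], 0.05 * math.cos(math.sqrt(40.0) * t))  # vs exact solution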

  11. CERN: 50 years against a backdrop of social crisis

    CERN Multimedia

    2004-01-01

    On 19th October, CERN celebrates its 50th anniversary. This is an opportunity for the trade unions to point out their struggle for more social equity; no demonstration is foreseen, but a leaflet will be distributed

  12. The effect of point defects on ferroelastic phase transition of lanthanum-doped calcium titanate ceramics

    International Nuclear Information System (INIS)

    Ni, Yan; Zhang, Zhen; Wang, Dong; Wang, Yu; Ren, Xiaobing

    2013-01-01

    Highlights: ► The effect of point defects on phase transitions in Ca(1-x)La(2x/3)TiO3 was studied. ► When x = 0.45, a normal ferroelastic phase transition occurs. ► When x = 0.7, a “glassy-like” freezing process appears. ► Point defects weaken the thermodynamic stability of the ferroelastic phase. ► Point defects induce a “glassy-like” freezing process. -- Abstract: In the present paper, La-doped CaTiO3 is studied to investigate the effect of point defects on the ferroelastic phase transition of the ceramics. Dynamic mechanical measurements show that the transition temperature of the orthorhombic-to-tetragonal phase transition of Ca(1-x)La(2x/3)TiO3 decreases with increasing dopant (La) concentration x. The samples with dopant contents of x = 0.45 and 0.7 exhibit different structure evolution features during their transition processes, as revealed by in situ powder X-ray diffraction (XRD) measurements. Moreover, when x = 0.7, the storage modulus shows a frequency-dependent minimum at Tg, which can be well fitted with the Vogel-Fulcher relation, and the corresponding internal friction also exhibits a frequency-dependent peak within the same temperature regime. These results thus indicate that doping with La suppresses the ferroelastic phase transition in CaTiO3 and induces a “glassy-like” behavior in Ca(1-x)La(2x/3)TiO3, similar to the “strain glass” in Ni-doped Ti(50-x)Ni(50+x)
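
    The Vogel-Fulcher fit mentioned above can be reproduced with a few lines of scipy. Written for the freezing temperature, the relation f = f0*exp(-Ea/(kB(Tg - T0))) inverts to Tg(f) = T0 + Ea/(kB*ln(f0/f)). The data below are synthetic, generated from assumed parameters with small scatter, not the measured values for this ceramic.

      import numpy as np
      from scipy.optimize import curve_fit

      kB = 8.617e-5                       # Boltzmann constant, eV/K

      def tg_model(f, f0, Ea, T0):
          # Vogel-Fulcher, f = f0*exp(-Ea/(kB*(Tg - T0))), solved for Tg(f)
          return T0 + Ea / (kB * np.log(f0 / f))

      # Synthetic freezing temperatures from f0 = 1e12 Hz, Ea = 0.05 eV, T0 = 190 K
      f = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])              # Hz
      Tg = (tg_model(f, 1e12, 0.05, 190.0)
            + np.array([0.05, -0.03, 0.02, -0.04, 0.01, -0.02]))  # small scatter

      popt, _ = curve_fit(tg_model, f, Tg, p0=(1e12, 0.05, 190.0), maxfev=50000)
      print("f0 = %.2e Hz, Ea = %.3f eV, T0 = %.1f K" % tuple(popt))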

  13. Formation flying within a constellation of nano-satellites: the QB50 mission

    NARCIS (Netherlands)

    Gill, E.K.A.; Sundaramoorthy, P.; Bouwmeester, J.; Zandbergen, B.; Reinhard, R.

    2010-01-01

    QB50 is a mission establishing an international network of 50 nano-satellites for multi-point, in-situ measurements in the lower thermosphere and re-entry research. As part of the QB50 mission, the Delft University of Technology intends to contribute two nano-satellites both being equipped with a

  14. Landscape form as a criterion for sampling optimization of an oxisol under cultivation of sugar cane

    Directory of Open Access Journals (Sweden)

    Rafael Montanari

    2005-01-01

    The number of sampling points is essential to establish an evaluation program of the spatial variability of soil attributes. The objective of this work was to use the form of the landscape as an auxiliary criterion in the optimization of the sampling scheme for the evaluation of the chemical attributes of an oxisol, in areas under sugarcane cultivation. A continuous area with two landforms was chosen: concave, which occurs in the higher positions of the area, and linear, comprising the steep, midslope and lower-slope segments. A regular 50x50 m spacing in a grid of 300x3,000 m (total area 94 ha), with 421 sampling points, was used. Soil samples were collected at depths of 0.0-0.2 m and 0.6-0.8 m at each point of the grid, in order to evaluate the soil chemical attributes. In the concave landform, there was larger spatial variability of the soil chemical attributes. The application of the Sanos 0.1 program to the sample grid (concave and linear landforms) revealed that the concave landform, at both depths, presents larger spatial variability of the chemical attributes than the linear landform.

  15. An integrated paper-based sample-to-answer biosensor for nucleic acid testing at the point of care.

    Science.gov (United States)

    Choi, Jane Ru; Hu, Jie; Tang, Ruihua; Gong, Yan; Feng, Shangsheng; Ren, Hui; Wen, Ting; Li, XiuJun; Wan Abas, Wan Abu Bakar; Pingguan-Murphy, Belinda; Xu, Feng

    2016-02-07

    With advances in point-of-care testing (POCT), lateral flow assays (LFAs) have been explored for nucleic acid detection. However, biological samples generally contain complex compositions and low amounts of target nucleic acids, and currently require laborious off-chip nucleic acid extraction and amplification processes (e.g., tube-based extraction and polymerase chain reaction (PCR)) prior to detection. To the best of our knowledge, even though the integration of DNA extraction and amplification into a paper-based biosensor has been reported, a combination of LFA with the aforementioned steps for simple colorimetric readout has not yet been demonstrated. Here, we demonstrate for the first time an integrated paper-based biosensor incorporating nucleic acid extraction, amplification and visual detection or quantification using a smartphone. A handheld battery-powered heating device was specially developed for nucleic acid amplification in POC settings, which is coupled with this simple assay for rapid target detection. The biosensor can successfully detect Escherichia coli (as a model analyte) in spiked drinking water, milk, blood, and spinach with a detection limit as low as 10-1000 CFU mL⁻¹, and Streptococcus pneumoniae in clinical blood samples, highlighting its potential use in medical diagnostics, food safety analysis and environmental monitoring. As compared to the lengthy conventional assay, which requires more than 5 hours for the entire sample-to-answer process, our integrated biosensor takes about 1 hour. The integrated biosensor holds great potential for the detection of various target analytes in wide applications in the near future.

  16. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
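
    The flavour of the random sampling approach can be illustrated as follows: repeatedly draw one subject per nominal time point for each matrix, form the concentration-time profiles, and take the ratio of trapezoidal AUCs. The Python sketch below uses invented concentrations and is a simplification, not the authors' exact 2-phase algorithm.

      import numpy as np

      rng = np.random.default_rng(5)

      # One plasma and one tissue concentration per subject at each nominal time
      times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
      plasma = {0.5: [8.1, 7.4], 1.0: [6.0, 6.6], 2.0: [4.2, 3.9],
                4.0: [2.1, 2.4], 8.0: [0.9, 1.1]}           # hypothetical values
      tissue = {0.5: [3.9, 4.4], 1.0: [3.6, 3.1], 2.0: [2.5, 2.2],
                4.0: [1.4, 1.2], 8.0: [0.6, 0.5]}

      ratios = []
      for _ in range(2000):                 # repeat the random sampling
          p = [rng.choice(plasma[t]) for t in times]  # one subject per time point
          q = [rng.choice(tissue[t]) for t in times]
          ratios.append(np.trapz(q, times) / np.trapz(p, times))

      ratios = np.array(ratios)
      print(f"tissue-to-plasma ratio {ratios.mean():.2f} "
            f"(90% interval {np.percentile(ratios, 5):.2f}"
            f"-{np.percentile(ratios, 95):.2f})")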

  17. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated by GPS / INS (Inertial Navigation Systems), has allowed rapid and accurate determination of position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results of vanishing point coordinates and their error distributions are shown and analyzed.
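
    A concrete version of the simulation: for a camera with square pixels and zero skew, the principal point is the orthocenter of the triangle of the three orthogonal vanishing points, so VZ follows from two perpendicularity conditions once VX, VY and the principal point are given. The Python sketch below propagates assumed Gaussian error ellipses of VX and VY to VZ by Monte Carlo; all coordinates and covariances are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      P = np.array([960.0, 540.0])          # assumed principal point (pixels)
      VX = np.array([3200.0, 700.0])        # illustrative vanishing points
      VY = np.array([-1100.0, 620.0])

      def third_vp(vx, vy, p):
          # P is the orthocenter of the triangle (VX, VY, VZ):
          # (VZ - VX).(P - VY) = 0 and (VZ - VY).(P - VX) = 0
          A = np.stack([p - vy, p - vx])
          b = np.array([(p - vy) @ vx, (p - vx) @ vy])
          return np.linalg.solve(A, b)

      # Monte Carlo propagation of the error ellipses of VX and VY to VZ
      cov_x = np.diag([40.0**2, 15.0**2])   # illustrative covariances (pixels^2)
      cov_y = np.diag([60.0**2, 20.0**2])
      vz = np.array([third_vp(rng.multivariate_normal(VX, cov_x),
                              rng.multivariate_normal(VY, cov_y), P)
                     for _ in range(5000)])
      print("VZ mean:", vz.mean(axis=0).round(1))
      print("VZ covariance:\n", np.cov(vz.T).round(1))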

  1. Development of object oriented program `SONSHO` for strength evaluation. Manual of Version 4.0 program

    Energy Technology Data Exchange (ETDEWEB)

    Hosogai, Hiromi [Joyo Industries Co. Ltd., Tokai, Ibaraki (Japan); Kasahara, Naoto

    1998-07-01

    The object-oriented program 'SONSHO' predicts creep-fatigue damage factors, based on the Elevated Temperature Structural Design Guide for 'Monju' and various other procedures, from stress classification data obtained from structural analysis results. From the viewpoint of program implementation, an interface to external programs and frequent revision following updates of material data and creep-fatigue evaluation methods are required. An object-oriented approach was continuously introduced to improve these aspects of the program. Version 4.0 has the following new functions. (1) The material strength library was implemented as an independent program module, based on Microsoft ActiveX control and 32-bit DLL technologies, which can be accessed by general Windows programs. (2) A self-instruction system, 'Wizard', enables manual-less operation. (3) The Microsoft Component Object Model (COM) was adopted for the program interface, and the program can communicate with Excel sheet data in memory. SONSHO Ver. 4.0 runs on Windows 95 or Windows NT 4.0. Microsoft Visual Basic 5.0 (Enterprise Edition) and Microsoft FORTRAN PowerStation 4.0 were adopted for the program. (author)

  2. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM) Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Md Nabiul Islam Khan

    In the Point-Centred Quarter Method (PCQM), the mean distance to the first nearest plant in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of the simple PCQM (PCQM1) and higher-order ones (PCQM2 and PCQM3, which use the distance to the second and third nearest plants, respectively) show discrepancies. This study attempts to review the PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in the accuracy of density estimation, i.e. the higher-order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ΣR²) but not 12N/(π ΣR²), of PCQM2 is 4(8N − 1)/(π ΣR²) but not 28N/(π ΣR²), and of PCQM3 is 4(12N − 1)/(π ΣR²) but not 44N/(π ΣR²) as published. If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator equation and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process.
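
    The corrected PCQM1 estimator is easy to check by simulation, as in the Python sketch below: plants are scattered as a Poisson process of known density, the nearest-plant distance is taken in each quadrant of N random sample points, and the corrected form 4(4N − 1)/(π ΣR²) is compared with the published 12N/(π ΣR²).

      import numpy as np

      rng = np.random.default_rng(7)

      # Poisson plant population of known density on a 100 x 100 m plot
      density_true = 0.12                                   # plants per m^2
      pts = rng.uniform(0, 100, size=(rng.poisson(density_true * 1e4), 2))

      N = 60                                                # sample points (> 50)
      R = []
      for sp in rng.uniform(10, 90, size=(N, 2)):           # keep away from edges
          dx, dy = pts[:, 0] - sp[0], pts[:, 1] - sp[1]
          d = np.hypot(dx, dy)
          for qx, qy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
              in_quadrant = (np.sign(dx) == qx) & (np.sign(dy) == qy)
              R.append(d[in_quadrant].min())                # nearest per quadrant

      R = np.array(R)
      corrected = 4 * (4 * N - 1) / (np.pi * np.sum(R**2))  # 4(4N-1)/(pi sum R^2)
      published = 12 * N / (np.pi * np.sum(R**2))           # 12N/(pi sum R^2)
      print(f"true {density_true}, corrected {corrected:.3f}, "
            f"published {published:.3f}")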

  3. Tank 50H Tetraphenylborate Destruction Results

    International Nuclear Information System (INIS)

    Peters, T.B.

    2003-01-01

    We conducted several scoping tests with both Tank 50H surrogate materials (KTPB and phenol) and actual Tank 50H solids. These tests examined whether we could destroy the tetraphenylborate in the surrogates or the actual Tank 50H material, either by use of Fenton's Reagent or by hydrolysis (under Tank 50H conditions at a maximum temperature of 50 degrees C), under a range of conditions. The results of these tests showed that destruction of the solids occurred only under a minority of conditions. (1) Using Fenton's Reagent and KTPB as the Tank 50H surrogate, no reaction occurred at pH values greater than 9. (2) Using Fenton's Reagent and phenol as the Tank 50H surrogate, no reaction occurred at a pH of 14. (3) Using Fenton's Reagent and actual Tank 50H slurry, a reaction occurred at a pH of 9.5 in the presence of ECC additives. (4) Using Fenton's Reagent and actual Tank 50H slurry, after a thirty-three day period, all attempts at hydrolysis (at pH 14) were too slow to be viable. This happened even in the case of higher temperature (50 degrees C) and added (100 ppm) copper. Tank 50H is scheduled to return to HLW Tank Farm service with capabilities of transferring and receiving salt supernate solutions to and from the Tank Farms and staging feed for the Saltstone Facility. Before returning Tank 50H to Tank Farm service as a non-organic tank, less than 5 kg of TPB must remain in Tank 50H. Recently, camera inspections in Tank 50H revealed two large mounds of solid material, one in the vicinity of the B5 Riser Transfer Pump and the other on the opposite side of the tank. Personnel sampled and analyzed this material to determine its composition. The sample analysis indicated the presence of a significant quantity of organics in the solid material. This quantity of organic material exceeds the 5 kg limit for declaring that only trace amounts of organic material remain in Tank 50H. Additionally, these large volumes of solids, calculated as approximately 61K gallons, present other

  4. Multiparty Compatibility for Concurrent Objects

    Directory of Open Access Journals (Sweden)

    Roly Perera

    2016-06-01

    Objects and actors are communicating state machines, offering and consuming different services at different points in their lifecycle. Two complementary challenges arise when programming such systems. When objects interact, their state machines must be "compatible", so that services are requested only when they are available. Dually, when objects refine other objects, their state machines must be "compliant", so that services are honoured whenever they are promised. In this paper we show how the idea of multiparty compatibility from the session types literature can be applied to both of these problems. We present an untyped language in which concurrent objects are checked automatically for compatibility and compliance. For simple objects, checking can be exhaustive and has the feel of a type system. More complex objects can be partially validated via test cases, leading to a methodology closer to continuous testing. Our proof-of-concept implementation is limited in some important respects, but demonstrates the potential value of the approach and the relationship to existing software development practices.
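
    A toy version of the compatibility check can be phrased over communicating state machines, as below: each object maps states to transitions, '!x' requests service x and '?x' offers it, and a breadth-first walk of the product verifies that every reachable request meets a matching offer. This two-party Python sketch only hints at the multiparty, session-typed treatment of the paper; the machines and service names are invented.

      from collections import deque

      # Each object: state -> list of (action, next_state)
      buyer  = {0: [('!quote', 1)], 1: [('?price', 2)],
                2: [('!buy', 0), ('!quit', 3)], 3: []}
      seller = {0: [('?quote', 1)], 1: [('!price', 2)],
                2: [('?buy', 0), ('?quit', 3)], 3: []}

      def compatible(a, b):
          """Every request reachable in the product must meet a matching offer."""
          seen, todo = set(), deque([(0, 0)])
          while todo:
              sa, sb = todo.popleft()
              if (sa, sb) in seen:
                  continue
              seen.add((sa, sb))
              for act, na in a[sa]:
                  if act.startswith('!'):
                      offers = [nb for oact, nb in b[sb] if oact == '?' + act[1:]]
                      if not offers:
                          return False      # request with no available offer
                      todo.extend((na, nb) for nb in offers)
              for act, nb in b[sb]:         # symmetric check for b's requests
                  if act.startswith('!'):
                      offers = [na for oact, na in a[sa] if oact == '?' + act[1:]]
                      if not offers:
                          return False
                      todo.extend((na, nb) for na in offers)
          return True

      print(compatible(buyer, seller))      # True for this pair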

  5. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes ... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful.

  6. Feature-fused SSD: fast detection for small objects

    Science.gov (United States)

    Cao, Guimei; Xie, Xuemei; Yang, Wenzhe; Liao, Quan; Shi, Guangming; Wu, Jinjian

    2018-04-01

    Small object detection is a challenging task in computer vision due to limited resolution and information. In order to solve this problem, the majority of existing methods sacrifice speed for improvements in accuracy. In this paper, we aim to detect small objects at a fast speed, using the Single Shot Multibox Detector (SSD), the best object detector with respect to the accuracy-vs-speed trade-off, as the base architecture. We propose a multi-level feature fusion method for introducing contextual information in SSD, in order to improve the accuracy for small objects. In the detailed fusion operation, we design two feature fusion modules, a concatenation module and an element-sum module, which differ in the way contextual information is added. Experimental results show that these two fusion modules obtain higher mAP on PASCAL VOC2007 than the baseline SSD by 1.6 and 1.7 points respectively, with a 2-3 point improvement on some small object categories in particular. Their testing speeds are 43 and 40 FPS respectively, superior to the state-of-the-art Deconvolutional Single Shot Detector (DSSD) by 29.4 and 26.4 FPS.
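
    The two fusion modules can be sketched with plain numpy, treating a 1x1 convolution as a channel-mixing matrix multiply. The map sizes below follow SSD300's conv4_3 (512 x 38 x 38) and fc7 (1024 x 19 x 19) layers, but the weights are random and the exact projection and normalization layers of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)

      # Feature maps from two SSD levels: shallow and deep
      shallow = rng.normal(size=(512, 38, 38)).astype(np.float32)
      deep    = rng.normal(size=(1024, 19, 19)).astype(np.float32)

      def upsample2x(x):                  # nearest-neighbour upsampling (38 = 2*19)
          return x.repeat(2, axis=1).repeat(2, axis=2)

      def conv1x1(x, w):                  # 1x1 convolution as a channel matmul
          c, h, wd = x.shape
          return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

      deep_up = upsample2x(deep)

      # Concatenation module: stack channels, then 1x1 conv back to 512 channels
      w_cat = rng.normal(size=(512, 512 + 1024)).astype(np.float32) * 0.01
      fused_cat = conv1x1(np.concatenate([shallow, deep_up], axis=0), w_cat)

      # Element-sum module: project deep features to 512 channels, then add
      w_sum = rng.normal(size=(512, 1024)).astype(np.float32) * 0.01
      fused_sum = shallow + conv1x1(deep_up, w_sum)
      print(fused_cat.shape, fused_sum.shape)   # both (512, 38, 38)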

  7. DETECTION AND CLASSIFICATION OF POLE-LIKE OBJECTS FROM MOBILE MAPPING DATA

    Directory of Open Access Journals (Sweden)

    K. Fukano

    2015-08-01

    Full Text Available Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
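    The cross-section idea behind the extraction step can be sketched as follows (an illustrative simplification that assumes a coarsely pre-segmented cloud; thresholds and names are mine, not the paper's):

    ```python
    import numpy as np

    def pole_candidates(points, slice_h=0.5, max_radius=0.3, min_slices=4):
        """Flag pole-like locations in an (N, 3) point cloud by cutting it into
        horizontal slices and stacking compact cross-sections. Assumes the cloud
        covers a single object (e.g., after coarse segmentation); thresholds are
        illustrative, not the paper's values."""
        z0 = points[:, 2].min()
        n_slices = int(np.ceil((points[:, 2].max() - z0) / slice_h))
        hits = {}  # rounded (x, y) cell -> number of slices with a compact section
        for i in range(n_slices):
            band = points[(points[:, 2] >= z0 + i * slice_h) &
                          (points[:, 2] <  z0 + (i + 1) * slice_h)]
            if len(band) < 5:
                continue
            centre = band[:, :2].mean(axis=0)
            spread = np.linalg.norm(band[:, :2] - centre, axis=1).max()
            if spread <= max_radius:       # small, compact footprint => pole-like
                key = tuple(np.round(centre, 1))
                hits[key] = hits.get(key, 0) + 1
        # A pole is a location whose compact footprint persists over many slices.
        return [xy for xy, n in hits.items() if n >= min_slices]
    ```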

  8. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique combined collimated scanning with experimental measurements and Monte Carlo simulations to identify inhomogeneities in large volume samples and to correct for their effect on the interpretation of gamma-spectrometry data. Corrections were applied for neutron self-shielding, gamma-ray attenuation, the geometrical factor, and the heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA, and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate, non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that must be preserved intact and cannot be damaged for sampling purposes. (author)
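    The gamma-ray attenuation correction mentioned follows the usual exponential law (standard physics; the geometry-dependent details are in the paper):

    ```latex
    % Attenuation of gamma rays traversing thickness x of sample material:
    I \;=\; I_0 \, e^{-\mu(E)\, x},
    % so a measured count rate is corrected by the factor e^{+\mu(E)\, x},
    % where \mu(E) is the linear attenuation coefficient at photon energy E.
    ```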

  9. Nutrition content of brisket point end of part Simental Ongole Crossbred meat in boiled various temperature

    Science.gov (United States)

    Riyanto, J.; Sudibya; Cahyadi, M.; Aji, A. P.

    2018-01-01

    The aim of this study was to determine the nutritional content of the brisket point end cut of Simental Ongole Crossbred beef at various boiling times. Simental Ongole Crossbred cattle had been fattened for 9 months. They were then slaughtered at a slaughterhouse, and the brisket point end cut was prepared for analysis of its nutritional content using Food Scan. The samples were boiled at 100°C for 0 (TR), 15 (R15), and 30 (R30) minutes, respectively. The data were analysed using a Completely Randomized Design (CRD), and Duncan's multiple range test (DMRT) was conducted to differentiate among the three treatments. The results showed that boiling time significantly affected the moisture and cholesterol contents of the beef (P<0.05), while fat content was not significantly affected. Boiling decreased beef water content from 72.77 to 70.84%; on the other hand, it increased beef protein and cholesterol contents from 20.77 to 25.14% and from 47.55 to 50.45 mg/100 g of sample, respectively. The conclusion of this study was that boiling beef at 100°C for 15 and 30 minutes decreases the water content and increases the protein and cholesterol contents of the brisket point end of Simental Ongole Crossbred beef.

  10. Prism adaptation does not alter object-based attention in healthy participants

    Science.gov (United States)

    Bultitude, Janet H.

    2013-01-01

    Hemispatial neglect (‘neglect’) is a disabling condition that can follow damage to the right side of the brain, in which patients show difficulty in responding to or orienting towards objects and events that occur on the left side of space. Symptoms of neglect can manifest in both space- and object-based frames of reference. Although patients can show a combination of these two forms of neglect, they are considered separable and have distinct neurological bases. In recent years considerable evidence has emerged to demonstrate that spatial symptoms of neglect can be reduced by an intervention called prism adaptation. Patients point towards objects viewed through prismatic lenses that shift the visual image to the right. Approximately five minutes of repeated pointing results in a leftward recalibration of pointing and improved performance on standard clinical tests for neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Here we examined the effect of prism adaptation on the performance of healthy participants who completed a computerised test of space- and object-based attention. Participants underwent adaptation to leftward- or rightward-shifting prisms, or performed neutral pointing according to a between-groups design. Significant pointing after-effects were found for both prism groups, indicating successful adaptation. In addition, the results of the computerised test revealed larger reaction-time costs associated with shifts of attention between two objects compared to shifts of attention within the same object, replicating previous work. However there were no differences in the performance of the three groups, indicating that prism adaptation did not influence space- or object-based attention for this task. When combined with existing literature, the results are consistent with the proposal that prism

  11. Empirical study on mutual fund objective classification.

    Science.gov (United States)

    Jin, Xue-jun; Yang, Xiao-lan

    2004-05-01

    Mutual funds are usually classified on the basis of their objectives. If the activities of mutual funds are consistent with their stated objectives, investors may treat the latter as signals of risk and income. This work analyzes mutual fund objective classification in China using the statistical methods of distance analysis and discriminant analysis, and examines whether the stated investment objectives of mutual funds adequately represent their attributes to investors. That is, if mutual funds adhere to their stated objectives, attributes must be heterogeneous between investment objective groups and homogeneous within them. Our conclusion is that, to some degree, the group of optimized exponential funds is heterogeneous with respect to the other groups. As a whole, there are no significant differences between different objective groups, and 50% of mutual funds are not consistent with their objective groups.
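    The homogeneity question can be probed with off-the-shelf discriminant analysis; below is a minimal scikit-learn sketch using invented stand-in attributes (not the study's data or exact procedure):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Toy stand-in for fund attributes (e.g., return, volatility, turnover);
    # the real study's variables are not reproduced here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))            # 200 funds, 3 attributes
    y = rng.integers(0, 4, size=200)         # 4 stated objective groups

    # If stated objectives carry little information about attributes, the
    # cross-validated discriminant accuracy stays near chance level (25%).
    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, X, y, cv=5).mean())
    ```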

  12. Determination of Sr-90 in milk samples from the study of statistical results

    Directory of Open Access Journals (Sweden)

    Otero-Pazos Alberto

    2017-01-01

    Full Text Available The determination of 90Sr in milk samples is a main objective of radiation monitoring laboratories because of its environmental importance. In this paper the activity concentration of 39 milk samples was obtained through radiochemical separation based on selective retention of Sr in a cationic resin (Dowex 50WX8, 50-100 mesh) and subsequent determination with a low-level proportional gas counter. The results were checked by measuring the Sr concentration with the flame atomic absorption spectroscopy technique, to finally obtain the mass of 90Sr. The data obtained were given a statistical treatment using linear regressions. A reliable estimate was obtained of the mass of 90Sr, based on the gravimetric technique, and, secondly, of the counts per minute of the third measurement at the 90Sr and 90Y equilibrium, without having to perform the analysis. These estimates were verified with 19 milk samples, obtaining overlapping results. The novelty of the manuscript is the possibility of determining the concentration of 90Sr in milk samples without the need to perform the third measurement at equilibrium.

  13. Influence of atomic ordering on sigma phase precipitation of the Fe{sub 50}Cr{sub 50} alloy

    Energy Technology Data Exchange (ETDEWEB)

    Vélez, G.Y., E-mail: g.y.velezcastillo@gmail.com [Universidad del Valle, Departamento de Física, A.A. 25360 Cali (Colombia); Instituto de Física, Universidad Autónoma de San Luis Potosí, avenida Manuel Nava 6, zona universitaria, 78290 San Luis Potosí, SLP México (Mexico); Pérez Alcázar, G.A. [Universidad del Valle, Departamento de Física, A.A. 25360 Cali (Colombia)

    2015-09-25

    Highlights: • σ-FeCr phase formation can be delayed when the α-FeCr phase is ordered. • The formation of the σ phase is favored by concentration gradients in the α phase. • We determine the iron occupation numbers of the five sites of σ-Fe{sub 50}Cr{sub 50}. - Abstract: In this work we report a study of the kinetics of formation of the σ-Fe{sub 50}Cr{sub 50} alloy, which is obtained by heat treatment of α-FeCr samples with different degrees of atomic ordering. Two α-FeCr alloys were prepared, one by mechanical alloying and the other by arc-melting. Both alloys were heated at 925 K for 170 h and then quenched into ice water. Before heat treatment, both alloys exhibited the disordered α-FeCr structure, with stronger ferromagnetic behavior in the alloy obtained by mechanical alloying due to its higher atomic disorder. The sigma phase precipitation is influenced by the atomic ordering of the bcc samples: in the alloy obtained by mechanical alloying, the bcc phase is completely transformed into the σ phase; in the arc-melted alloy, the α–σ transformation is only partial.

  14. Magnetic and structural investigation of growth induced magnetic anisotropies in Fe50Co50 thin films

    Directory of Open Access Journals (Sweden)

    Neri I.

    2013-01-01

    Full Text Available In this paper, we investigate the magnetic properties of Fe50Co50 polycrystalline thin films, grown by dc-magnetron sputtering, with thickness (t) ranging from 2.5 nm up to 100 nm. We focused on the magnetic properties of the samples to highlight the effects of possible intrinsic stress that may develop during growth, and their dependence on film thickness. Indeed, during film deposition, depending on the growth technique and growth conditions, a metallic film may develop an intrinsic compressive or tensile stress. In our case, owing to the magnetoelastic properties of Fe50Co50, this stress may in turn promote the development of magnetic anisotropies. The samples' magnetic properties were monitored with a SQUID magnetometer and a magneto-optic Kerr effect apparatus, using both in-plane and out-of-plane magnetic fields. Magnetoresistance measurements were collected as well, to further investigate the magnetic behavior of the samples. Indications of the presence of intrinsic stress were obtained by measuring sample curvature with an optical profilometer. For t ≤ 20 nm, the shape of the in-plane magnetization loops is square and the coercivity increases with t, possibly because, for small t values, the grain size grows with t. The magnetoresistive response is anisotropic in character. For t > 20 nm, the coercivity smoothly decreases, the approach to saturation becomes slower, and the loop shape becomes less and less square. The magnetoresistive effect becomes almost isotropic and its intensity increases by about one order of magnitude. These results suggest that the magnetization reorientation process changes for t > 20 nm, and are consistent with the progressive development of an out-of-plane easy axis. This hypothesis is substantiated by profilometric analysis, which reveals the presence of an in-plane compressive stress.

  15. Hybrid image and blood sampling input function for quantification of small animal dynamic PET data

    International Nuclear Information System (INIS)

    Shoghi, Kooresh I.; Welch, Michael J.

    2007-01-01

    We describe and validate a hybrid image and blood sampling (HIBS) method to derive the input function for quantification of microPET mouse data. The HIBS algorithm derives the peak of the input function from the image, corrected for recovery, while the tail is derived from 5-6 optimally placed blood sampling points. A Bezier interpolation algorithm is used to link the rightmost image peak data point to the leftmost blood sampling point. To assess the performance of HIBS, 4 mice underwent 60-min microPET imaging sessions following a 0.40-0.50-mCi bolus administration of 18F-FDG. In total, 21 blood samples (yielding the blood-sampled plasma time-activity curve, bsPTAC) were obtained throughout the imaging session for comparison against the proposed HIBS method. MicroPET images were reconstructed using filtered back projection with a zoom of 2.75 on the heart. Volumetric regions of interest (ROIs) were composed by drawing circular ROIs 3 pixels in diameter on 3-4 transverse planes of the left ventricle. Performance was characterized by kinetic simulations in terms of bias in parameter estimates when bsPTAC and HIBS are used as input functions. The peak of the bsPTAC curve was distorted relative to the HIBS-derived curve due to temporal limitations and delays in blood sampling, which affected the estimated rates of bidirectional exchange between plasma and tissue. The results highlight limitations in using bsPTAC. The HIBS method, however, yields consistent results and is thus a viable substitute for bsPTAC.
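    The Bezier link between the image-derived peak and the first blood sample can be sketched as follows (the control-point placement is my assumption; the abstract does not specify it):

    ```python
    import numpy as np

    def bezier_link(p0, p1, n=50):
        """Cubic Bezier curve joining two (time, activity) points, with control
        points placed at one-third and two-thirds of the time gap (an assumed
        placement; the HIBS paper's exact construction may differ)."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        c0 = p0 + np.array([(p1[0] - p0[0]) / 3.0, 0.0])   # hold the peak level
        c1 = p1 - np.array([(p1[0] - p0[0]) / 3.0, 0.0])   # approach tail level
        s = np.linspace(0.0, 1.0, n)[:, None]
        return ((1 - s)**3 * p0 + 3 * (1 - s)**2 * s * c0
                + 3 * (1 - s) * s**2 * c1 + s**3 * p1)

    # Example: last image peak point at t = 1.5 min, first blood sample at t = 10 min.
    curve = bezier_link((1.5, 220.0), (10.0, 35.0))
    print(curve[[0, -1]])   # the endpoints match the two input samples exactly
    ```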

  16. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.

    Science.gov (United States)

    Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar

    2018-01-31

    Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. Techniques that rely on estimating distance from samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand, and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. Experimental testing verified that the proposed approach can identify the location of the access point accurately and efficiently.
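    For context, the prior knowledge that conventional RSS-distance methods require (and that this approach avoids needing) is typically the log-distance path-loss model; a minimal sketch with assumed parameters:

    ```python
    import numpy as np

    def rss_to_distance(rss_dbm, rss_d0=-40.0, d0=1.0, n=3.0):
        """Invert the log-distance path-loss model
        RSS(d) = RSS(d0) - 10*n*log10(d/d0).
        rss_d0 and the path-loss exponent n are environment-dependent
        assumptions -- exactly the prior knowledge the paper avoids."""
        return d0 * 10 ** ((rss_d0 - rss_dbm) / (10.0 * n))

    print(rss_to_distance(-70.0))   # ~10 m under the assumed parameters
    ```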

  17. Repeatability of Objective Measurements of Linear Udder and Body ...

    African Journals Online (AJOL)

    The objective of this study was to estimates the repeatability of objective measurements on linear udder and body conformation traits and to evaluate the objectivity of the measurements in Friesian x Bunaji cows. Data from 50 (F1) Frisian X Bunaji cows collected between 2007 and 2008 at the Dairy Research Farm of the ...

  18. Ecotoxicological screen of Potential Release Site 50-006(d) of Operable Unit 1147 of Mortandad Canyon and relationship to the Radioactive Liquid Waste Treatment Facilities project

    International Nuclear Information System (INIS)

    Gonzales, G.J.; Newell, P.G.

    1996-04-01

    Potential ecological risk associated with soil contaminants in Potential Release Site (PRS) 50-006(d) of Mortandad Canyon at the Los Alamos National Laboratory was assessed by performing an ecotoxicological risk screen. The PRS surrounds Outfall 051, which discharges treated effluent from the Radioactive Liquid Waste Treatment Facility. Discharge at the outfall is permitted under the Clean Water Act National Pollution Discharge Elimination System. Radionuclide discharge is regulated by US Department of Energy (DOE) Order 5400.5. Ecotoxicological Screening Action Levels (ESALSs) were computed for nonradionuclide constituents in the soil, and human risk SALs for radionuclides were used as ESALs. Within the PRS and beginning at Outfall 051, soil was sampled at three points along each of nine linear transects at 100-ft intervals. Soil samples from 3 depths for each sampling point were analyzed for the concentration of a total of 121 constituents. Only the results of the surface sampling are reported in this report

  19. The motivational function of an objective in physical activity and sport

    Directory of Open Access Journals (Sweden)

    Mariusz Lipowski

    2017-12-01

    Full Text Available Background As a conscious activity of an individual, physical activity (PA) constitutes an element of the free-time dimension. The type of goal allows us to distinguish between sport and PA: sport performance vs. psychophysical health. Drawing on the theory of the motivational function of an objective, this study examined the motivational function of an objective in physical activity and sport. Participants and procedures The sample consisted of 2141 individuals: 1163 women aged 16-64 years (M = 23.90, SD = 8.30) and 978 men aged 16-66 years (M = 24.50, SD = 9.40) who completed the Inventory of Physical Activity Objectives (IPAO), which includes the following scales: 1) motivational value, 2) time management, 3) persistence in action, and 4) motivational conflict. There are also questions that allow one to control for variables such as the variety of forms, duration, and frequency of PA, and socio-demographic variables. Results Males presented different motives for physical activity than females. Motives related to a shapely body and health were more important for females. The most important motives for males were physical fitness and a shapely body. The gender of participants moderates the motivational value of the specific objectives of physical activity and persistence in action. Conclusions With knowledge about the purposefulness of actions, it is possible to support and shape the additional motivation experienced by an individual by setting new, realistic objectives.

  20. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that the closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values", of the distances that contain information about the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...
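    The core idea can be sketched in a few lines: cluster the rows of the pairwise distance matrix (the "distance vectors") instead of the raw coordinates (an illustration of the idea, not the paper's exact algorithm):

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans

    # HDLSS toy data: 20 samples, 1000 dimensions, two shifted clusters.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 1000))
    X[10:] += 0.5          # small per-coordinate shift, large in aggregate

    # Each row of D records one sample's distances to all others; rows of
    # samples in the same cluster share a distinctive pattern of values.
    D = cdist(X, X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(D)
    print(labels)          # rows 0-9 and rows 10-19 should separate
    ```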

  1. YOUNG STELLAR OBJECTS IN THE GOULD BELT

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Michael M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS 78, Cambridge, MA 02138 (United States); Allen, Lori E. [National Optical Astronomy Observatories, Tucson, AZ (United States); Evans II, Neal J.; Harvey, Paul M. [Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712-1205 (United States); Broekhoven-Fiene, Hannah [Department of Physics and Astronomy, University of Victoria, Victoria, BC, V8W 3P6 (Canada); Cieza, Lucas A. [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441, Santiago (Chile); Di Francesco, James; Johnstone, Doug; Matthews, Brenda C. [National Research Council of Canada, Herzberg Astronomy and Astrophysics Programs, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada); Gutermuth, Robert A. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Hatchell, Jennifer [Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL (United Kingdom); Heiderman, Amanda [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Huard, Tracy L. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Kirk, Jason M. [Jeremiah Horrocks Institute, University of Central Lancashire, Preston, PR1 2HE (United Kingdom); Miller, Jennifer F. [Gemini Observatory, 670 N. A’ohoku Place, Hilo, HI 96720 (United States); Peterson, Dawn E. [Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, CO 80301 (United States); Young, Kaisa E., E-mail: mdunham@cfa.harvard.edu [Department of Physical Sciences, Nicholls State University, P.O. Box 2022, Thibodaux, LA 70310 (United States)

    2015-09-15

    We present the full catalog of Young Stellar Objects (YSOs) identified in the 18 molecular clouds surveyed by the Spitzer Space Telescope “cores to disks” (c2d) and “Gould Belt” (GB) Legacy surveys. Using standard techniques developed by the c2d project, we identify 3239 candidate YSOs in the 18 clouds, 2966 of which survive visual inspection and form our final catalog of YSOs in the GB. We compile extinction corrected spectral energy distributions for all 2966 YSOs and calculate and tabulate the infrared spectral index, bolometric luminosity, and bolometric temperature for each object. We find that 326 (11%), 210 (7%), 1248 (42%), and 1182 (40%) are classified as Class 0 + I, Flat-spectrum, Class II, and Class III, respectively, and show that the Class III sample suffers from an overall contamination rate by background Asymptotic Giant Branch stars between 25% and 90%. Adopting standard assumptions, we derive durations of 0.40–0.78 Myr for Class 0 + I YSOs and 0.26–0.50 Myr for Flat-spectrum YSOs, where the ranges encompass uncertainties in the adopted assumptions. Including information from (sub)millimeter wavelengths, one-third of the Class 0 + I sample is classified as Class 0, leading to durations of 0.13–0.26 Myr (Class 0) and 0.27–0.52 Myr (Class I). We revisit infrared color–color diagrams used in the literature to classify YSOs and propose minor revisions to classification boundaries in these diagrams. Finally, we show that the bolometric temperature is a poor discriminator between Class II and Class III YSOs.
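    The classification statistic used here is the standard infrared spectral index (the class boundaries below are the commonly used ones of Greene et al. 1994; the exact photometric fit range is my assumption):

    ```latex
    % Infrared spectral index, fit over near-to-mid-IR photometry
    % (commonly ~2-24 \mu m for c2d/GB YSOs):
    \alpha \;=\; \frac{d \, \log(\lambda F_\lambda)}{d \, \log \lambda},
    % with the usual class boundaries:
    % Class 0+I: \alpha \ge 0.3;      Flat spectrum: -0.3 \le \alpha < 0.3;
    % Class II: -1.6 \le \alpha < -0.3;   Class III: \alpha < -1.6.
    ```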

  2. YOUNG STELLAR OBJECTS IN THE GOULD BELT

    International Nuclear Information System (INIS)

    Dunham, Michael M.; Allen, Lori E.; Evans II, Neal J.; Harvey, Paul M.; Broekhoven-Fiene, Hannah; Cieza, Lucas A.; Di Francesco, James; Johnstone, Doug; Matthews, Brenda C.; Gutermuth, Robert A.; Hatchell, Jennifer; Heiderman, Amanda; Huard, Tracy L.; Kirk, Jason M.; Miller, Jennifer F.; Peterson, Dawn E.; Young, Kaisa E.

    2015-01-01

    We present the full catalog of Young Stellar Objects (YSOs) identified in the 18 molecular clouds surveyed by the Spitzer Space Telescope “cores to disks” (c2d) and “Gould Belt” (GB) Legacy surveys. Using standard techniques developed by the c2d project, we identify 3239 candidate YSOs in the 18 clouds, 2966 of which survive visual inspection and form our final catalog of YSOs in the GB. We compile extinction corrected spectral energy distributions for all 2966 YSOs and calculate and tabulate the infrared spectral index, bolometric luminosity, and bolometric temperature for each object. We find that 326 (11%), 210 (7%), 1248 (42%), and 1182 (40%) are classified as Class 0 + I, Flat-spectrum, Class II, and Class III, respectively, and show that the Class III sample suffers from an overall contamination rate by background Asymptotic Giant Branch stars between 25% and 90%. Adopting standard assumptions, we derive durations of 0.40–0.78 Myr for Class 0 + I YSOs and 0.26–0.50 Myr for Flat-spectrum YSOs, where the ranges encompass uncertainties in the adopted assumptions. Including information from (sub)millimeter wavelengths, one-third of the Class 0 + I sample is classified as Class 0, leading to durations of 0.13–0.26 Myr (Class 0) and 0.27–0.52 Myr (Class I). We revisit infrared color–color diagrams used in the literature to classify YSOs and propose minor revisions to classification boundaries in these diagrams. Finally, we show that the bolometric temperature is a poor discriminator between Class II and Class III YSOs

  3. Point-of-Care Healthcare Databases Are an Overall Asset to Clinicians, but Different Databases May Vary in Usefulness Based on Personal Preferences. A Review of: Chan, R. & Stieda, V. (2011). Evaluation of three point-of-care healthcare databases: BMJ Point-of-Care, Clin-eguide and Nursing Reference Centre. Health and Information Libraries Journal, 28(1), 50-58. doi: 10.1111/j.1471-1842.2010.00920.x

    OpenAIRE

    Carol D. Howe

    2011-01-01

    Objective – To evaluate the usefulness of three point-of-care healthcare databases (BMJ Point-of-Care, Clin-eguide, and Nursing Reference Centre) in clinical practice. Design – A descriptive study analyzing questionnaire results. Setting – Hospitals within Alberta, Canada's two largest health regions (at the time of this study), with a third health region submitting a small number of responses. Subjects – A total of 46 Alberta hospital personnel answered the questionnaire, including 19 clinician...

  4. Point Cloud Analysis for Uav-Borne Laser Scanning with Horizontally and Vertically Oriented Line Scanners - Concept and First Results

    Science.gov (United States)

    Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surroundings of the UAV. In this regard, using a line scanner that scans along the vertical direction, perpendicular to the flight direction, results in a point cloud with low point density if the UAV moves fast. Using a line scanner that scans along the horizontal direction delivers data corresponding only to the altitude of the UAV, and thus low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. We downsample the original point cloud and use different neighborhood types to extract fundamental geometric features, which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
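    The eigenvalue-based geometric features alluded to are typically computed as follows (the standard definitions from the point-cloud literature; a sketch, not necessarily the authors' exact feature set):

    ```python
    import numpy as np

    def eigen_features(neighborhood):
        """Linearity, planarity and sphericity of an (N, 3) local neighborhood,
        from the sorted eigenvalues l1 >= l2 >= l3 of its covariance matrix
        (the standard definitions used for point-cloud scene interpretation)."""
        centered = neighborhood - neighborhood.mean(axis=0)
        cov = centered.T @ centered / len(neighborhood)
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
        return ((l1 - l2) / l1,        # linearity  (high for cables, edges)
                (l2 - l3) / l1,        # planarity  (high for facades, ground)
                l3 / l1)               # sphericity (high for volumetric clutter)

    # A nearly planar patch should score high on planarity.
    rng = np.random.default_rng(0)
    patch = np.c_[rng.uniform(size=(100, 2)), 0.01 * rng.normal(size=100)]
    print(np.round(eigen_features(patch), 3))
    ```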

  5. Integrated plant-safety assessment, Systematic Evaluation Program: Big Rock Point Plant (Docket No. 50-155)

    International Nuclear Information System (INIS)

    1983-09-01

    The Systematic Evaluation Program was initiated in February 1977 by the US Nuclear Regulatory Commission to review the designs of older operating nuclear reactor plants to reconfirm and document their safety. This report documents the review of the Big Rock Point Plant, which is one of ten plants reviewed under Phase II of this program. This report indicates how 137 topics selected for review under Phase I of the program were addressed. It also addresses a majority of the pending licensing actions for Big Rock Point, which include TMI Action Plan requirements and implementation criteria for resolved generic issues. Equipment and procedural changes have been identified as a result of the review

  6. Sampling system for fast single pulses; Realisation d'un dispositif d'echantillonnage d'un signal bref unique

    Energy Technology Data Exchange (ETDEWEB)

    Zenatti, D. [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires

    1969-07-01

    Development of a device that extends the domain of application of classical oscilloscopes to the observation of fast single pulses by applying the sampling principle. Its principal characteristics are: bandwidth of 700 MHz; maximum sensitivity of 50 mV; maximum input signal amplitude of ± 1 V; number of samples of 16; sample separation of 0.2 ns. (author)

  7. Reflective and refractive objects for mixed reality.

    Science.gov (United States)

    Knecht, Martin; Traxler, Christoph; Winklhofer, Christoph; Wimmer, Michael

    2013-04-01

    In this paper, we present a novel rendering method that integrates reflective or refractive objects into a differential instant radiosity (DIR) framework usable for mixed-reality (MR) applications. These kinds of objects are very special from the light-interaction point of view, as they reflect and refract incident rays, and may therefore cause high-frequency lighting effects known as caustics. Using instant-radiosity (IR) methods to approximate these high-frequency lighting effects would require a large number of virtual point lights (VPLs) and is therefore not desirable under real-time constraints. Instead, our approach combines differential instant radiosity with three other methods. One method handles reflections more accurately than simple cubemaps by using impostors. Another method is able to calculate two refractions in real time, and the third method uses small quads to create caustic effects. Our proposed method replaces the parts of light paths that belong to reflective or refractive objects using these three methods and thus integrates tightly into DIR. In contrast to previous methods that introduce reflective or refractive objects into MR scenarios, our method produces caustics that also emit additional indirect light. The method runs at real-time frame rates, and the results show that reflective and refractive objects with caustics improve the overall impression of MR scenarios.

  8. Concentrations of tylvalosin and 3-O-acetyltylosin attained in the synovial fluid of swine after administration by oral gavage at 50 and 5 mg/kg.

    Science.gov (United States)

    Canning, P; Bates, J; Hammen, K; Coetzee, J; Wulf, L; Rajewski, S; Wang, C; Karriker, L

    2016-12-01

    The objectives of this study were to determine the concentration of tylvalosin (TVN) and its metabolite, 3-O-acetyltylosin (3AT) in the synovial fluid of growing pigs when administered as a single bolus by oral gavage at target doses of 50 mg/kg (Trial 1) and 5 mg/kg (Trial 2). TVN is a water soluble macrolide antimicrobial used in swine production. The stability of the drug in synovial fluid samples stored at -70 °C up to 28 days was also evaluated in Trial 2. In Trial 1, eight pigs were randomly assigned to one of eight time points for euthanasia and synovial fluid collection: 0, 1, 2, 3, 4, 6, 9, 12 h postgavage. For Trial 2, 24 pigs were randomly allocated to one terminal collection time point at 0, 2, 4, 6, 8 or 10 h postgavage. Synovial fluid was analyzed to determine TVN and 3AT concentrations. TVN and 3AT were detected in Trial 1 at all time points, except 0 h. At 2 h postgavage for trial 2, the mean concentrations peaked at 31.17 ng/mL (95% CI: 18.62-52.16) for TVN and at 58.82 ng/mL (95% CI: 35.14-98.46) for 3AT. Storage duration did not impact TVN or 3AT concentrations (P-value 0.9732). © 2016 John Wiley & Sons Ltd.

  9. Tensile behavior of Cu50Zr50 metallic glass nanowire with a B2 crystalline precipitate

    Science.gov (United States)

    Sepulveda-Macias, Matias; Amigo, Nicolas; Gutierrez, Gonzalo

    2018-02-01

    A molecular dynamics study of the effect of a single B2-CuZr precipitate on the mechanical properties of Cu50Zr50 metallic glass nanowires is presented. Four different samples are considered: three with a precipitate of 2, 4, or 6 nm radius, and one precipitate-free sample. These systems are subjected to a uniaxial tensile test up to 25% strain. The interface region between the precipitate and the glass matrix exhibits high local atomic shear strain, activating shear transformation zones that concentrate in the neighborhood of the precipitate. The plastic regime is dominated by necking, and no localized shear band is observed for the samples with 4 and 6 nm radius precipitates. In addition, the yield stress decreases as the size of the precipitate increases. Regarding the precipitate structure, no martensitic phase transformation is observed, since neither does a shear band hit the precipitate nor is the stress provided by the tensile test sufficient to initiate the transformation. It is concluded that, in contrast to the case in which multiple precipitates are present in the sample, a single precipitate concentrates the shear strain around its surface, eventually causing the failure of the nanowire.

  10. Zero curvature-surface driven small objects

    Science.gov (United States)

    Dou, Xiaoxiao; Li, Shanpeng; Liu, Jianlin

    2017-08-01

    In this study, we investigate the spontaneous migration of small objects driven by surface tension on a catenoid, formed by a layer of soap constrained by two rings. Although the average curvature of the catenoid is zero at each point, the small objects always migrate to the position near the ring. The force and energy analyses have been performed to uncover the mechanism, and it is found that the small objects distort the local shape of the liquid film, thus making the whole system energetically favorable. These findings provide some inspiration to design microfluidics, aquatic robotics, and miniature boats.
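    The zero-mean-curvature property invoked here is the standard one for the catenoid:

    ```latex
    % Catenoid of waist radius c, as a surface of revolution:
    r(z) \;=\; c \, \cosh\!\left(\frac{z}{c}\right),
    % whose two principal curvatures cancel everywhere, so the mean
    % curvature vanishes at every point:
    H \;=\; \tfrac{1}{2}\left(\kappa_1 + \kappa_2\right) \;=\; 0 .
    ```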

  11. AN ADAPTIVE APPROACH FOR SEGMENTATION OF 3D LASER POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2012-09-01

    Full Text Available Automatic processing and object extraction from 3D laser point cloud is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point cloud, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is firstly established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best fitted plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in the object space. Experimental results show the potential and feasibility of applying this method for segmentation of both airborne and terrestrial laser data.
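    The clustering attribute described (the projection of the origin onto each point's best-fitted plane) can be sketched as follows; plane fitting via SVD is my implementation assumption:

    ```python
    import numpy as np

    def origin_projection_attribute(neighborhood):
        """Project the coordinate origin onto the plane best fitting an (N, 3)
        neighborhood (least-squares plane via SVD). Points lying on the same
        planar surface map to nearly identical attribute vectors, which makes
        them cluster together in attribute space."""
        centroid = neighborhood.mean(axis=0)
        centered = neighborhood - centroid
        # Plane normal = direction of the smallest singular value.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]
        # Signed distance from origin to plane, then the foot of the perpendicular.
        d = np.dot(normal, centroid)
        return d * normal            # projection of (0, 0, 0) onto the plane

    # Two patches of the same plane z = 5 yield the same attribute (~[0, 0, 5]).
    rng = np.random.default_rng(0)
    patch = np.c_[rng.uniform(0, 1, (50, 2)), np.full(50, 5.0)]
    print(np.round(origin_projection_attribute(patch), 2))
    ```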

  12. Electric field modulation of magnetic anisotropy and microwave absorption properties in Fe50Ni50/Teflon composite films

    Directory of Open Access Journals (Sweden)

    Zhenjun Xia

    2016-05-01

    Full Text Available Fe50Ni50 nanoparticle films with a particle size of about 6 nm were deposited by a high-energy cluster deposition source. An electric field of about 0-40 kV was applied to the sample platform while the films were being prepared. This field-assisted deposition technique can dramatically induce in-plane magnetic anisotropy. To probe the microwave absorption properties, the Fe50Ni50 nanoparticles were deliberately deposited on a dielectric Teflon sheet. The laminated Fe50Ni50/Teflon composites were then used for reflection loss scans. The results show that applying an electric field is an effective avenue for improving the GHz microwave absorption performance of our magnetic nanoparticle films, as expressed by the movement of the reflection loss peak toward the high-GHz region for the composites.

  13. FPFH-based graph matching for 3D point cloud registration

    Science.gov (United States)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and can help obtain a reliable initial alignment. In this paper, we put forward an advanced point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method that transforms the NP-hard optimization problem into an O(n3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
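    The first stage, deriving initial correspondences from FPFH descriptors, can be sketched as below; the descriptors themselves are assumed precomputed (e.g., by a point-cloud library), and mutual nearest-neighbour pruning is a common heuristic rather than the paper's full graph-matching pipeline:

    ```python
    import numpy as np

    def mutual_nn_correspondences(feat_src, feat_dst):
        """Initial correspondence set between two point clouds from their
        per-point FPFH descriptors (arrays of shape (N, 33) and (M, 33)).
        Keeps only mutual nearest neighbours; the graph matching stage
        would then filter these further."""
        # Pairwise squared distances between the two descriptor sets.
        d2 = (np.sum(feat_src**2, axis=1)[:, None]
              + np.sum(feat_dst**2, axis=1)[None, :]
              - 2.0 * feat_src @ feat_dst.T)
        nn_fwd = np.argmin(d2, axis=1)          # src -> dst
        nn_bwd = np.argmin(d2, axis=0)          # dst -> src
        return [(i, j) for i, j in enumerate(nn_fwd) if nn_bwd[j] == i]

    # Toy descriptors: dst is a shuffled copy of src plus small noise.
    rng = np.random.default_rng(0)
    src = rng.random((100, 33))
    perm = rng.permutation(100)
    dst = src[perm] + 0.01 * rng.normal(size=(100, 33))
    pairs = mutual_nn_correspondences(src, dst)
    print(all(perm[j] == i for i, j in pairs), len(pairs))
    ```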

  14. Uav-Based Photogrammetric Point Clouds and Hyperspectral Imaging for Mapping Biodiversity Indicators in Boreal Forests

    Science.gov (United States)

    Saarinen, N.; Vastaranta, M.; Näsi, R.; Rosnell, T.; Hakala, T.; Honkavaara, E.; Wulder, M. A.; Luoma, V.; Tommaselli, A. M. G.; Imai, N. N.; Ribeiro, E. A. W.; Guimarães, R. B.; Holopainen, M.; Hyyppä, J.

    2017-10-01

    Biodiversity is commonly understood as species diversity, but in forest ecosystems variability in structural and functional characteristics can also be treated as a measure of biodiversity. Small unmanned aerial vehicles (UAVs) provide a means of characterizing a forest ecosystem at high spatial resolution, permitting the measurement of physical characteristics of the ecosystem from the viewpoint of biodiversity. The objective of this study is to examine the applicability of photogrammetric point clouds and hyperspectral imaging acquired with a small UAV helicopter for mapping biodiversity indicators, such as structural complexity and the amount of deciduous and dead trees, at plot level in southern boreal forests. The standard deviation of tree heights within a sample plot, used as a proxy for structural complexity, was the most accurately derived biodiversity indicator, with a mean error of 0.5 m and a standard deviation of 0.9 m. The volume predictions for deciduous and dead trees were underestimated by 32.4 m3/ha and 1.7 m3/ha, respectively, with standard deviations of 50.2 m3/ha for deciduous and 3.2 m3/ha for dead trees. Spectral features describing brightness (i.e. higher reflectance values) predominated in feature selection, but several wavelengths were represented. Thus, it can be concluded that structural complexity can be predicted reliably but can be expected to be underestimated with photogrammetric point clouds obtained with a small UAV. Additionally, the plot-level volume of dead trees can be predicted with a small mean error, whereas identifying deciduous species was more challenging at plot level.

  15. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented
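    For reference, the point kinetics equations being normalized take the standard form (textbook form; the report's exact normalization may differ):

    ```latex
    % Point kinetics with six delayed-neutron groups:
    \frac{dn}{dt} \;=\; \frac{\rho - \beta}{\Lambda}\, n \;+\; \sum_{i=1}^{6} \lambda_i C_i ,
    \qquad
    \frac{dC_i}{dt} \;=\; \frac{\beta_i}{\Lambda}\, n \;-\; \lambda_i C_i ,
    % where n is the neutron population, \rho the reactivity,
    % \beta = \sum_i \beta_i the delayed-neutron fraction, \Lambda the prompt
    % generation time, and \lambda_i, C_i the decay constant and precursor
    % concentration of delayed group i.
    ```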

  16. Comparative effectiveness and acceptability of home-based and clinic-based sampling methods for sexually transmissible infections screening in females aged 14-50 years: a systematic review and meta-analysis.

    Science.gov (United States)

    Odesanmi, Tolulope Y; Wasti, Sharada P; Odesanmi, Omolola S; Adegbola, Omololu; Oguntuase, Olubukola O; Mahmood, Sajid

    2013-12-01

    Home-based sampling is a strategy to enhance uptake of sexually transmissible infection (STI) screening. This review aimed to compare the screening uptake levels of home-based self-sampling and clinic-based specimen collection for STIs (chlamydia (Chlamydia trachomatis), gonorrhoea (Neisseria gonorrhoeae) and trichomoniasis) in females aged 14-50 years. Acceptability and effect on specimen quality were determined. Sixteen electronic databases were searched from inception to September 2012. Randomised controlled trials (RCTs) comparing the uptake levels of home-based self-sampling and clinic-based sampling for chlamydia, gonorrhoea and trichomoniasis in females aged 14-50 years were eligible for inclusion. The risk of bias in the trials was assessed. Risk ratios (RRs) for dichotomous outcomes were meta-analysed. Of 3065 papers, six studies with seven RCTs contributed to the final review. Compared with clinic-based methods, home-based screening increased uptake significantly (P=0.001-0.05) in five trials and was substantiated in a meta-analysis (RR: 1.55; 95% confidence interval: 1.30-1.85; P=0.00001) of two trials. In three trials, a significant preference for home-based testing (P=0.001-0.05) was expressed. No significant difference was observed in specimen quality. Sampling was rated as easy by a significantly higher number of women (P=0.01) in the clinic group in one trial. The review provides evidence that home-based testing results in greater uptake of STI screening in females (14-50 years) than clinic-based testing without compromising quality in the developed world. Home collection strategies should be added to clinic-based screening programs to enhance uptake.
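    A pooled risk ratio such as the RR of 1.55 reported here is conventionally obtained by inverse-variance weighting on the log scale (the generic method; whether the review used a fixed- or random-effects variant is not restated here):

    ```latex
    % Inverse-variance pooling of k trial risk ratios on the log scale:
    \log \widehat{RR} \;=\; \frac{\sum_{i=1}^{k} w_i \, \log RR_i}{\sum_{i=1}^{k} w_i},
    \qquad
    w_i \;=\; \frac{1}{\operatorname{Var}(\log RR_i)},
    % with 95\% CI \exp\!\big(\log \widehat{RR} \pm 1.96 / \sqrt{\sum_i w_i}\big).
    ```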

  17. Pb-210 behaviour in environmental samples from the Cuban East in 1993

    International Nuclear Information System (INIS)

    Perez T, L.; Suarez P, W.

    1996-01-01

    A purely experimental method based on gross alpha and beta counting is applied to determine the behaviour of Pb-210 in deposition samples from six points in the eastern part of Cuba during 1993. The results were similar at the five points away from densely inhabited centers: an evident Pb-210 maximum (more than 50% annual) coinciding with the rainier trimester in spring, and a contribution of this radionuclide of 80-100% of the total deposited beta activity. At the sixth point, located in an industrial zone, the alpha-to-beta activity ratios are abnormally high (1.5-3.0) and show peaks in the rainiest and driest months. Hypotheses are formulated to explain these results, relating them to the cyclic input of cool continental air masses rich in radon and its daughters into the island's atmosphere in the first case, and to the input into the atmosphere of particles rich in alpha emitters of industrial origin in the second case. (authors). 3 refs

  18. Evidence for secondary gravitationally lensed images in radio quasistellar objects

    International Nuclear Information System (INIS)

    Rousey, C.E.

    1977-01-01

    Evidence is sought for the observability of the gravitational lens effect by studying the internal radio structures of quasistellar objects. Since the majority of radio-emitting quasars are observed to be multiply structured at radio wavelengths, and since the gravitational deflection of light is essentially frequency independent, these sources are very suitable objects for the investigation of gravitational imaging. From the theoretical framework of gravitational imaging, particularly the treatment of gravitational lenses as "point-mass" deflectors, several selection criteria were imposed on a sample of 208 radio-emitting quasars in order to filter out only those sources which may exhibit radio imaging. The application of further selection criteria, obtained from consideration of the observed optical fields around the quasars, resulted in a small filtered sample of 10 quasars which are good candidates for exhibiting the gravitational lens effect. In particular, two quasars, 3C 268.4 and 3C 286, show good evidence for the presence of suitable gravitational lenses. Image models were computed for the image candidates, predicting the masses and distances of the gravitational deflectors as well as estimates of the "time delays" of the images. It is also suggested that measurements of these image time delays may enable one to place stringent limits on the value of the Hubble constant
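    For a "point-mass" deflector, image positions follow the standard lens equation (a textbook result, not derived from the thesis itself):

    ```latex
    % Point-mass lens: source at angle \beta, images at angles \theta, with
    \beta \;=\; \theta \;-\; \frac{\theta_E^{\,2}}{\theta},
    \qquad
    \theta_E \;=\; \sqrt{\frac{4 G M}{c^{2}} \, \frac{D_{ls}}{D_l D_s}} ,
    % giving two images \theta_\pm = \tfrac{1}{2}\big(\beta \pm \sqrt{\beta^2 + 4\theta_E^2}\big),
    % where D_l, D_s, D_{ls} are the lens, source and lens-to-source distances.
    ```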

  19. Application of a Pelletron accelerator to study total dose radiation effects on 50 GHz SiGe HBTs

    Energy Technology Data Exchange (ETDEWEB)

    Praveen, K.C.; Pushpa, N.; Naik, P.S. [Department of Studies in Physics, University of Mysore, Manasagangotri, Mysore 570 006 (India); Cressler, John D. [School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA (United States); Tripathi, Ambuj [Inter University Accelerator Centre (IUAC), New Delhi 110 067 (India); Gnana Prakash, A.P., E-mail: gnanaprakash@physics.uni-mysore.ac.in [Department of Studies in Physics, University of Mysore, Manasagangotri, Mysore 570 006 (India)

    2012-02-15

    Highlights: • Total dose effects of 50 MeV Li{sup 3+} ions on 50 GHz SiGe HBTs are investigated. • Ion irradiation results are compared with Co-60 gamma results. • 50 MeV Li ions create more damage in the E-B spacer oxide than Co-60 gamma radiation. • Co-60 gamma radiation creates more damage in the STI oxide than 50 MeV Li ions. • Worst-case total dose radiation effects can be studied using Pelletron accelerator facilities. - Abstract: We have investigated the effects of 50 MeV lithium ion irradiation on the DC electrical characteristics of first-generation silicon-germanium heterojunction bipolar transistors (50 GHz SiGe HBTs) in the dose range of 600 krad to 100 Mrad. The results of 50 MeV Li{sup 3+} ion irradiation of the SiGe HBTs are compared with 63 MeV proton and Co-60 gamma irradiation results in the same dose range in order to understand the damage induced by species of different LET. The radiation response of the emitter-base (EB) spacer oxide and the shallow trench isolation (STI) oxide to the different irradiation types is discussed in this paper. We also focus on the efficacy of a Pelletron accelerator for total dose irradiation studies of SiGe HBTs.

  20. Two General Extension Algorithms of Latin Hypercube Sampling

    Directory of Open Access Journals (Sweden)

    Zhi-zhao Liu

    2015-01-01

    Full Text Available To preserve original sampling points and reduce the number of simulation runs, two general extension algorithms for Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that contains as many of the original points as possible. In order to obtain a strict LHS of larger size, some original points may have to be deleted. The relationship between original sampling points in the new LHS structure is represented by a simple undirected acyclic graph. A basic general extension algorithm is proposed that retains the most original points, but it is computationally expensive. Therefore, a general extension algorithm based on a greedy algorithm is proposed to reduce the extension time, though it cannot guarantee retaining the maximum number of original points. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
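    For reference, the size-m LHS that the extension algorithms start from can be constructed in a few lines (the standard construction; the extension step itself is not shown):

    ```python
    import numpy as np

    def latin_hypercube(m, dims, rng=None):
        """Standard LHS on [0, 1)^dims: for each dimension, permute the m
        equal-width strata independently and draw one uniform point per
        stratum, so every 1-D projection hits each stratum exactly once."""
        rng = np.random.default_rng(rng)
        # Stratum index for each sample, permuted independently per dimension.
        strata = np.array([rng.permutation(m) for _ in range(dims)]).T  # (m, dims)
        return (strata + rng.uniform(size=(m, dims))) / m

    pts = latin_hypercube(5, 2, rng=0)
    print(np.sort((pts * 5).astype(int), axis=0))  # each column: 0, 1, 2, 3, 4
    ```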

  1. Estimation of object motion parameters from noisy images.

    Science.gov (United States)

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
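    The "fundamental ambiguity of the central projection image model" can be written out for the one-dimensional-image case considered here (a schematic statement under my reading of the setup):

    ```latex
    % 1-D image of a 2-D object point (x_i, z_i) under central projection,
    % with focal length normalised to 1:
    u_i(t) \;=\; \frac{x_i(t)}{z_i(t)} ,
    % so uniformly scaling all positions, (x_i, z_i) \mapsto (k x_i, k z_i),
    % leaves every measurement u_i unchanged (loss of depth information);
    % the iterated EKF can therefore estimate motion and structure only up
    % to this overall scale.
    ```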

  2. Variability of Extragalactic Objects in Relation to Redshift, Color ...

    Indian Academy of Sciences (India)


    properties of the objects, viz., redshift, color indices, radio spectral index ... properties of different types of closely related objects are expected to throw light on the ...... z = 3.0, OVV objects are concentrated at the lower range of the scale, mostly at ..... from the practical point of view in the sense that redshifts can be predicted ...

  3. Point prevalence of neurosis in the Lundby Study 1947-1997.

    Science.gov (United States)

    Nilsson, Erik; Bogren, Mats; Mattisson, Cecilia; Nettelbladt, Per

    2007-01-01

    The objective of this article is to report and discuss the changing point prevalence rate of neurosis 1947-1997 in the Lundby cohort. The Lundby Study is a prospective longitudinal study of a geographically defined total population in the south of Sweden. Field investigations were performed in 1947, 1957, 1972 and in 1997, with psychiatrists interviewing the probands in a semi-structured way. Additional information was gathered from registers, case notes and key informants. Throughout the period of 50 years, the Lundby Study used its own diagnostic system with neurosis referring to non-psychotic mental illness in the absence of an organic brain disease. After 1957, no newcomers were included, and therefore only probands 40 years of age or older at the cross-sectional surveys are included in the present paper. For men aged 40-59 and 60 years or older, respectively, the age-specific point prevalence of neurosis increased from 2.5% and 0.5% in 1947, to 8.3% and 8.4% in 1972. The corresponding figures for women were 8.0% and 1.3% in 1947, and 24.2% and 20.1% in 1972. The increase could be seen in all degrees of impairment, but it was most pronounced in the mild and medium impairment groups. Except for a slight decrease in point prevalence in the female group 40-59 years of age, there were no significant changes from 1972 to 1997. A large increase in the point prevalence rate of neurosis could be seen 1947-1972, but not 1972-1997. Because of the many biases inherent in longitudinal psychiatric studies, our results must be interpreted with caution.

  4. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). Measuring SimMC is crucial for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances from points sampled on the model to the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances from points in the point cloud to the model. In order to avoid the heavy computational burden incurred by calculating DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measures that can only capture global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
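    The proposed ratio can be sketched directly (uniform weights are my simplification; the paper's distance weighting is what enables partial matching):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def sim_mc(model_points, model_area, cloud, weights=None):
        """SimMC as described: ratio of the (weighted) model surface area to
        DistMC, where DistMC is the weighted mean distance from points sampled
        on the model surface to the point cloud. Uniform weights here are an
        assumption; the paper weights distances to handle partial overlap."""
        if weights is None:
            weights = np.full(len(model_points), 1.0 / len(model_points))
        d, _ = cKDTree(cloud).query(model_points)   # model sample -> nearest cloud point
        dist_mc = float(np.dot(weights, d))
        return model_area / dist_mc

    # Toy check: a unit square sampled as the "model" against a noisy scan of it.
    rng = np.random.default_rng(0)
    samples = rng.uniform(0, 1, (200, 2))
    scan = samples + 0.001 * rng.normal(size=(200, 2))
    print(sim_mc(samples, model_area=1.0, cloud=scan))  # large value => very similar
    ```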

  5. The role of the interface on the magnetic behaviour of granular Fe{sub 50}Ag{sub 50} film

    Energy Technology Data Exchange (ETDEWEB)

    Fdez-Gubieda, M.L. [Dpto. Electricidad y Electronica. Universidad del Pais Vasco Apdo 644. 48080 Bilbao (Spain)]. E-mail: malu@we.lc.ehu.es; Sarmiento, G. [Dpto. Electricidad y Electronica. Universidad del Pais Vasco Apdo 644. 48080 Bilbao (Spain); Fernandez Barquin, L. [CITIMAC, Universidad de Cantabria, Avda. de los Castros s/n, 39005 Santander (Spain); Orue, I. [SGIKER, Servicios Generales de medidas magneticas, Universidad del Pais Vasco (Spain)

    2007-03-15

    The magnetic behaviour of a Fe{sub 50}Ag{sub 50} granular thin film has been studied by means of AC and DC magnetic measurements. Exchange coupling between magnetic nanoparticles appears at T ≤ 200 K, decreasing the coercive field of the sample. Additionally, an exchange bias is observed at low temperature, related to the existence of a spin-disordered interface around the nanoparticles.

  6. Investigation of potential factors affecting the measurement of dew point temperature in oil-soaked transformers

    Science.gov (United States)

    Kraus, Adam H.

    Moisture within a transformer's insulation system has been proven to degrade its dielectric strength. When installing a transformer in situ, one method used to calculate the moisture content of the transformer insulation is to measure the dew point temperature of the internal gas volume of the transformer tank. Two instruments designed for dew point temperature measurement are commercially available: the Alnor Model 7000 Dewpointer and the Vaisala DRYCAP® Hand-Held Dewpoint Meter DM70. Although these instruments perform an identical task, the design technology behind each instrument is vastly different. When the Alnor Dewpointer and the Vaisala DM70 are used simultaneously to measure the dew point of the internal gas volume of a pressurized transformer, their dew point measurements have been observed to differ by as much as 30 °F. There is minimal scientific research available that focuses on the process of measuring the dew point of a gas inside a pressurized transformer, let alone this observed phenomenon. The primary objective of this work was to determine what effect certain factors potentially have on dew point measurements of a transformer's internal gas volume, in hopes of understanding the root cause of this phenomenon. Three factors were studied: (1) human error, (2) the use of calibrated and out-of-calibration instruments, and (3) the presence of oil vapor gases in the dry air sample, and their subsequent effects on the Q-value of the sampled gas. After completing this portion of testing, none of the selected variables proved to be a direct cause of the observed discrepancies between the two instruments. The secondary objective was to validate the accuracy of each instrument as compared to its respective published range by testing against a known dew point temperature produced by a humidity generator. In a select operating range of -22 °F to -4 °F, both instruments were found to be accurate and within their
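    The downstream step these instruments support, converting a measured dew point into a moisture figure for the gas volume, can be sketched with the Magnus approximation (a common parameterisation, not either vendor's internal algorithm):

    ```python
    import math

    def vapour_pressure_hpa(dew_point_c):
        """Saturation vapour pressure at the dew point via the Magnus formula
        (one common parameterisation; instrument firmware may differ)."""
        return 6.112 * math.exp(17.62 * dew_point_c / (243.12 + dew_point_c))

    def ppmv(dew_point_c, total_pressure_hpa=1013.25):
        """Water content of the gas volume in parts per million by volume."""
        e = vapour_pressure_hpa(dew_point_c)
        return 1e6 * e / (total_pressure_hpa - e)

    # A -20 C (-4 F) dew point in dry air at atmospheric pressure:
    print(round(ppmv(-20.0)))   # about 1245 ppmv of water vapour
    ```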

  7. Discovering Symmetry in Everyday Environments: A Creative Approach to Teaching Symmetry and Point Groups

    Science.gov (United States)

    Fuchigami, Kei; Schrandt, Matthew; Miessler, Gary L.

    2016-01-01

    A hands-on symmetry project is proposed as an innovative way of teaching point groups to undergraduate chemistry students. Traditionally, courses teaching symmetry require students to identify the point group of a given object. This project asks the reverse: students are instructed to identify an object that matches each point group. Doing so…

  8. Correlation between Body Mass Index, Gender, and Skeletal Muscle Mass Cut off Point in Bandung

    Directory of Open Access Journals (Sweden)

    Richi Hendrik Wattimena

    2017-09-01

    Objective: To determine the average skeletal muscle mass (SMM) value in young adults as a reference population; to analyze the correlation of gender and body mass index with the cut-off point; and to determine skeletal muscle mass cut-off points for the population of Bandung, Indonesia. Methods: This was a cross-sectional study involving 199 participants, 122 females and 77 males. The sampling technique used was multistage random sampling. The participants lived in four major regions of Bandung, Indonesia: Sukajadi, Cicadas, Buah Batu, and Cibaduyut. Results: The average appendicular skeletal mass index (ASMI) in females and males was determined for each body mass index (BMI) category. The average ASMI value for normal BMI was 5.982±0.462 kg/m2 in females and 7.581±0.744 kg/m2 in males. Conclusions: A statistically significant correlation between BMI and ASMI was found in females (0.7712; p<0.05), and a highly significant correlation was seen in males (0.870; p<0.05). The cut-off points, defined from the normal-BMI group, were 5.059 kg/m2 for females and 6.093 kg/m2 for males.
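
    The reported cut-offs are consistent with the common "mean minus two standard deviations" convention; the abstract does not state the formula, so this is an inference, but it can be checked arithmetically:

        # Assumed convention: cut-off = mean ASMI - 2 * SD (not stated in the abstract)
        female_cutoff = 5.982 - 2 * 0.462   # = 5.058, close to the reported 5.059 kg/m2
        male_cutoff   = 7.581 - 2 * 0.744   # = 6.093, matching the reported value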

  9. Exploring the relationship between object realism and object-based attention effects.

    Science.gov (United States)

    Roque, Nelson; Boot, Walter R

    2015-09-01

    Visual attention prioritizes processing of locations in space, and evidence also suggests that the benefits of attention can be shaped by the presence of objects (object-based attention). However, the prevalence of object-based attention effects has been called into question recently by evidence from a large-sampled study employing classic attention paradigms (Pilz et al., 2012). We conducted two experiments to explore factors that might determine when and if object-based attention effects are observed, focusing on the degree to which the concreteness and realism of objects might contribute to these effects. We adapted the classic attention paradigm first reported by Egly, Driver, and Rafal (1994) by replacing abstract bar stimuli in some conditions with objects that were more concrete and familiar to participants: items of silverware. Furthermore, we varied the realism of these items of silverware, presenting either cartoon versions or photo-realistic versions. Contrary to predictions, increased realism did not increase the size of object-based effects. In fact, no clear object-based effects were observed in either experiment, consistent with previous failures to replicate these effects in similar paradigms. While object-based attention may exist, and may have important influences on how we parse the visual world, these and other findings suggest that the two-object paradigm typically relied upon to study object-based effects may not be the best paradigm to investigate these issues.

  10. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    Directory of Open Access Journals (Sweden)

    Wen-long Li

    The blade is one of the most critical parts of an aviation engine, and a small change in blade geometry may significantly affect the engine's dynamic performance. Rapid advancements in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection on a point-cloud representation: section curve reconstruction and mean-camber curve extraction. Mathematical morphology is extended and applied to suppress the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, an energy term and a distance term are minimized to iteratively smooth the measured points, approximate the section curve, and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation, and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection, and vision-guided robot grinding localization.
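
    A minimal sketch of an energy-plus-distance smoothing iteration of the kind described above (the weights, iteration count, and discretization are assumptions; the abstract does not give the exact functional):

        import numpy as np

        def smooth_section(points, lam=0.4, mu=0.2, n_iter=100):
            # Iteratively smooth an ordered 2D point sequence: the Laplacian
            # term lowers bending energy, while the distance term keeps the
            # curve close to the measured points.
            p = np.asarray(points, dtype=float).copy()
            q = p.copy()
            for _ in range(n_iter):
                lap = 0.5 * (q[:-2] + q[2:]) - q[1:-1]   # smoothness (energy) term
                fid = p[1:-1] - q[1:-1]                  # distance-to-data term
                q[1:-1] += lam * lap + mu * fid
            return q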

  11. Multiple-Sensor Discrimination of Closely-Spaced Objects on a Ballistic Trajectory

    Science.gov (United States)

    2015-05-18

    Modeling of two-body orbit dynamics was utilized to generate ballistic trajectories between the desired burnout and reentry points. The dispersion of object ... trajectories within the target complex was achieved by varying the velocity of each object at the burnout points. The generated trajectories served ... utilized as it removes several limitations associated with using the Euclidean distance, mainly that it accounts for the scaling of the coordinate
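
    The distance metric alluded to at the end of the excerpt is presumably the Mahalanobis distance (an inference, since the snippet is truncated); unlike the Euclidean distance, it accounts for the scaling and correlation of the coordinates:

        import numpy as np

        def mahalanobis(x, y, cov):
            # Distance between feature vectors x and y, rescaled by the data
            # covariance so that coordinate scaling does not dominate.
            d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
            return float(np.sqrt(d @ np.linalg.inv(cov) @ d))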

  12. Matching Ge detector element geometry to sample size and shape: One does not fit all!

    International Nuclear Information System (INIS)

    Keyser, R.M.; Twomey, T.R.; Sangsingkeow, P.

    1998-01-01

    For 25 yr, coaxial germanium detector performance has been specified using the methods and values specified in Ref. 1. These specifications are the full-width at half-maximum (FWHM), FW.1M, FW.02M, peak-to-Compton ratio, and relative efficiency. All of these measurements are made with a 60 Co source 25 cm from the cryostat endcap and centered on the axis of the detector. These measurements are easy to reproduce, both because they are simple to set up and use a common source. These standard tests have been useful in guiding the user to an appropriate detector choice for the intended measurement. Most users of germanium gamma-ray detectors do not make measurements in this simple geometry. Germanium detector manufacturers have worked over the years to make detectors with better resolution, better peak-to-Compton ratios, and higher efficiency--but all based on measurements using the IEEE standard. Advances in germanium crystal growth techniques have made it relatively easy to provide detector elements of different shapes and sizes. Many of these different shapes and sizes can give better results for a specific application than other shapes and sizes. But, the detector specifications must be changed to correspond to the actual application. Both the expected values and the actual parameters to be specified should be changed. In many cases, detection efficiency, peak shape, and minimum detectable limit for a particular detector/sample combination are valuable specifications of detector performance. For other situations, other parameters are important, such as peak shape as a function of count rate. In this work, different sample geometries were considered. The results show the variation in efficiency with energy for all of these sample and detector geometries. The point source at 25 cm from the endcap measurement allows the results to be compared with the currently given IEEE criteria. The best sample/detector configuration for a specific measurement requires more and

  13. Discontinuous Patterns of Cigarette Smoking From Ages 18 to 50 in the United States: A Repeated-Measures Latent Class Analysis.

    Science.gov (United States)

    Terry-McElrath, Yvonne M; O'Malley, Patrick M; Johnston, Lloyd D

    2017-12-13

    Effective cigarette smoking prevention and intervention programming is enhanced by accurate understanding of developmental smoking pathways across the life span. This study investigated within-person patterns of cigarette smoking from ages 18 to 50 among a US national sample of high school graduates, focusing on identifying ages of particular importance for change in smoking involvement. Using data from approximately 15,000 individuals participating in the longitudinal Monitoring the Future study, trichotomous measures of past 30-day smoking obtained at 11 time points were modeled using repeated-measures latent class analyses. Sex differences in latent class structure and membership were examined. Twelve latent classes were identified, three characterized by consistent smoking patterns across age (no smoking; smoking …) … developing effective smoking prevention and intervention programming. This study examined cigarette smoking among a national longitudinal US sample of high school graduates from ages 18 to 50 and identified distinct latent classes characterized by patterns of movement between no cigarette use, light-to-moderate smoking, and the conventional definition of heavy smoking at 11 time points via repeated-measures latent class analysis. Membership probabilities for each smoking class were estimated, and critical ages of susceptibility to change in smoking behaviors were identified.

  14. Edge Detection and Feature Line Tracing in 3D-Point Clouds by Analyzing Geometric Properties of Neighborhoods

    Directory of Open Access Journals (Sweden)

    Huan Ni

    2016-09-01

    This paper presents an automated and effective method for detecting 3D edges and tracing feature lines from 3D point clouds. The method, named Analysis of Geometric Properties of Neighborhoods (AGPN), includes two main steps: edge detection and feature line tracing. In the edge detection step, AGPN analyzes the geometric properties of each query point's neighborhood and then combines RANdom SAmple Consensus (RANSAC) with an angular gap metric to detect edges. In the feature line tracing step, feature lines are traced by a hybrid method based on region growing and model fitting applied to the detected edges. Our approach is experimentally validated on complex man-made objects and large-scale urban scenes with millions of points. Comparative studies with state-of-the-art methods demonstrate that our method achieves promising, reliable, and high performance in detecting edges and tracing feature lines in 3D point clouds. Moreover, AGPN is insensitive to the point density of the input data.
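
    A minimal sketch of an angular-gap test of the kind AGPN combines with RANSAC (the local plane normal would come from the RANSAC fit; the threshold and all names are assumptions, not the authors' parameters):

        import numpy as np

        def angular_gap(neighbors, point, normal):
            # Largest angular gap between neighbours projected onto the local
            # tangent plane; a large gap suggests a boundary or edge point.
            u = np.cross(normal, [1.0, 0.0, 0.0])
            if np.linalg.norm(u) < 1e-8:            # normal parallel to x-axis
                u = np.cross(normal, [0.0, 1.0, 0.0])
            u /= np.linalg.norm(u)
            v = np.cross(normal, u)
            v /= np.linalg.norm(v)
            d = neighbors - point
            ang = np.sort(np.arctan2(d @ v, d @ u))
            gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
            return gaps.max()

        def is_edge(neighbors, point, normal, thresh=np.pi / 2):
            # Illustrative criterion: flag an edge when the gap exceeds 90 degrees.
            return angular_gap(neighbors, point, normal) > thresh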

  15. 41 CFR 109-27.5001 - Objectives.

    Science.gov (United States)

    2010-07-01

    ... misappropriation; (c) The maintenance of an efficient operation; and (d) The standardization of inventories to the...-INVENTORY MANAGEMENT 27.50-Inventory Management Policies, Procedures, and Guidelines § 109-27.5001 Objectives. Necessary inventories shall be established and maintained at reasonable levels, consistent with...

  16. UO3 deactivation end point criteria

    International Nuclear Information System (INIS)

    Stefanski, L.D.

    1994-01-01

    The UO 3 Deactivation End Point Criteria are necessary to facilitate the transfer of the UO 3 Facility from the Office of Facility Transition and Management (EM-60) to the Office of Environmental Restoration (EM-40). The criteria were derived from a logical process for determining end points for the systems and spaces at the UO 3 Facility based on the objectives, tasks, and expected future uses pertinent to each system or space. Furthermore, the established criteria meet the intent of and support the draft guidance for acceptance criteria prepared by EM-40, "U.S. Department of Energy Office of Environmental Restoration (EM-40) Decontamination and Decommissioning Guidance Document (Draft)." For the UO 3 Facility, the overall objective of deactivation is to achieve a safe, stable, and environmentally sound condition, suitable for an extended period, as quickly and economically as possible. Once deactivated, the facility is kept in its stable condition by means of a methodical surveillance and maintenance (S&M) program, pending ultimate decontamination and decommissioning (D&D). Deactivation work involves a range of tasks, such as removal of hazardous material, elimination or shielding of radiation fields, partial decontamination to permit access for inspection, and installation of monitors and alarms. It is important that the end point of each of these tasks be established clearly and in advance, for the following reasons: (1) End points must be such that the central element of the deactivation objective - to achieve stability - is unquestionably achieved. (2) Much of the deactivation work involves worker exposure to radiation or dangerous materials. This can be minimized by avoiding unnecessary work. (3) Each task is, in effect, competing for resources with other deactivation tasks and other facilities. By assuring that each task is appropriately bounded, DOE's overall resources can be used most fully and effectively.

  17. Astronomers Detect Powerful Bursting Radio Source Discovery Points to New Class of Astronomical Objects

    Science.gov (United States)

    2005-03-01

    Astronomers at Sweet Briar College and the Naval Research Laboratory (NRL) have detected a powerful new bursting radio source whose unique properties suggest the discovery of a new class of astronomical objects. The researchers have monitored the center of the Milky Way Galaxy for several years and reveal their findings in the March 3, 2005 edition of the journal Nature. [Image caption: This radio image of the central region of the Milky Way Galaxy holds a new radio source, GCRT J1745-3009. The arrow points to an expanding ring of debris expelled by a supernova. CREDIT: N.E. Kassim et al., Naval Research Laboratory, NRAO/AUI/NSF.] Principal investigator Dr. Scott Hyman, professor of physics at Sweet Briar College, said the discovery came after analyzing additional observations from 2002 provided by researchers at Northwestern University. "We hit the jackpot!" Hyman said, referring to the observations. "An image of the Galactic center, made by collecting radio waves of about 1 meter in wavelength, revealed multiple bursts from the source during a seven-hour period from Sept. 30 to Oct. 1, 2002 - five bursts in fact, repeating at remarkably constant intervals." Hyman, four Sweet Briar students, and his NRL collaborators, Drs. Namir Kassim and Joseph Lazio, happened upon transient emission from two radio sources while studying the Galactic center in 1998. This prompted the team to propose an ongoing monitoring program using the National Science Foundation's Very Large Array (VLA) radio telescope in New Mexico. The National Radio Astronomy Observatory, which operates the VLA, approved the program, and the data collected laid the groundwork for the detection of the new radio source. "Amazingly, even though the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths," NRL astronomer Dr. Joseph Lazio pointed out, "very little has been done to look for radio bursts, which are often easier for astronomical objects to produce."

  18. Why a Gunk World is Compatible with Nihilism about Objects

    Directory of Open Access Journals (Sweden)

    Baptiste Le Bihan

    2015-07-01

    Full Text Available Ted Sider argues that nihilism about objects is incompatible with the metaphysical possibility of gunk and takes this point to show that nihilism is flawed. I shall describe one kind of nihilism able to answer this objection. I believe that most of the things we usually encounter do not exist. That is, I take talk of macroscopic objects and macroscopic properties to refer to sets of fundamental properties, which are invoked as a matter of linguistic convention. This view is a kind of nihilism: it rules out the existence of objects; that is, from an ontological point of view, there are no objects. But unlike the moderate nihilism of Mark Heller, Peter van Inwagen and Trenton Merricks that claims that most objects do not exist, I endorse a radical nihilism according to which there are no objects in the world, but only properties instantiated in spacetime. As I will show, radical nihilism is perfectly compatible with the metaphysical possibility of gunk. It is also compatible with the epistemic possibility that we actually live in a gunk world. The objection raised by Ted Sider only applies to moderate nihilism that admits some objects in its ontology.

  19. Inhibition effect of calcium hydroxide point and chlorhexidine point on root canal bacteria of necrosis teeth

    Directory of Open Access Journals (Sweden)

    Andry Leonard Je

    2006-03-01

    Calcium hydroxide points and chlorhexidine points are new drugs for eliminating bacteria in the root canal. The points release calcium hydroxide and chlorhexidine into the root canal slowly and in a controlled manner. The purpose of the study was to determine the effectiveness of the calcium hydroxide point (Calcium Hydroxide Plus point) and the chlorhexidine point in eliminating the root canal bacteria of necrotic teeth. In this study, 14 subjects were divided into 2 groups. The first group was treated with calcium hydroxide points and the second with chlorhexidine points. The bacteriological samples were measured with spectrophotometry. Paired t-test analysis (before and after) showed a significant difference within both the first and the second group. The independent t-test, which compared the effectiveness of the two groups, showed no significant difference. Although there was no significant difference in the statistical test, the second group eliminated more bacteria than the first group. The present findings indicate that the use of the chlorhexidine point was better than the calcium hydroxide point over a seven-day period. The conclusion is that the chlorhexidine point and the calcium hydroxide point, as root canal medicaments, effectively eliminate the root canal bacteria of necrotic teeth.

  20. Towards 4d Virtual City Reconstruction from LIDAR Point Cloud Sequences

    Science.gov (United States)

    Józsa, O.; Börcs, A.; Benedek, C.

    2013-05-01

    In this paper we propose a joint approach to virtual city reconstruction and dynamic scene analysis based on point cloud sequences from a single car-mounted Rotating Multi-Beam (RMB) Lidar sensor. The aim of this work is to create 4D spatio-temporal models of large dynamic urban scenes containing various moving and static objects. Standalone RMB Lidar devices have frequently been applied in robot navigation tasks and have proved efficient in moving object detection and recognition. However, they have not yet been widely exploited for geometric approximation of ground surfaces and building facades, due to the sparseness and inhomogeneous density of the individual point cloud scans. In our approach, we propose an automatic method for registering the consecutive scans without any additional sensor information such as an IMU, and we introduce a process for simultaneously extracting reconstructed surfaces, motion information, and objects from the registered dense point cloud, completed with per-point time stamp information.
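
    The abstract does not spell out the registration algorithm; a standard point-to-point ICP step (a sketch, not the authors' method) illustrates how consecutive scans can be aligned without IMU data:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_step(src, dst):
            # One ICP iteration: match each source point to its nearest target
            # point, then solve for the rigid transform (R, t) via the SVD of
            # the cross-covariance (Kabsch algorithm).
            _, idx = cKDTree(dst).query(src)
            matched = dst[idx]
            sc, dc = src.mean(axis=0), matched.mean(axis=0)
            H = (src - sc).T @ (matched - dc)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = dc - R @ sc
            return src @ R.T + t, R, t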

  1. Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm

    Directory of Open Access Journals (Sweden)

    Fahed Awad

    2018-01-01

    Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
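
    For contrast, the conventional baseline that the paper improves upon - estimating the access point location from RSS samples under an assumed log-distance path-loss model - can be sketched as follows (p0 and the exponent n are assumed propagation parameters, exactly the prior knowledge the proposed approach avoids):

        import numpy as np
        from scipy.optimize import least_squares

        def locate_ap(positions, rss, p0=-40.0, n=3.0):
            # positions: (k, 2) robot sampling positions; rss: (k,) dBm samples.
            # Model: rss = p0 - 10 * n * log10(||pos - ap||).
            def residual(ap):
                d = np.linalg.norm(positions - ap, axis=1)
                return p0 - 10.0 * n * np.log10(d) - rss
            return least_squares(residual, x0=positions.mean(axis=0)).x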

  2. 40 CFR 406.50 - Applicability; description of the normal rice milling subcategory.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Applicability; description of the normal rice milling subcategory. 406.50 Section 406.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GRAIN MILLS POINT SOURCE CATEGORY Normal Rice...

  3. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    Science.gov (United States)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). First, the AC/DC MOPF model including wind farms is constructed, with three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method is derived; it involves an individual learning operator, a social learning operator, a random exploration learning operator, and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGAII) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method obtains a more complete Pareto front than the NSGAII method. How to choose the final solution from the Pareto front, however, depends mainly on the decision makers, who may judge from the economic point of view or from the energy-saving and emission-reduction point of view.
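
    A minimal sketch of the crowding distance index used in NSGA-II-style non-dominated sorting (illustrative; the paper's exact implementation is not given in the abstract):

        import numpy as np

        def crowding_distance(F):
            # F: objective matrix (rows = solutions, columns = objectives).
            # Boundary solutions receive infinite distance so the extremes
            # of the Pareto front are always preserved.
            n, m = F.shape
            dist = np.zeros(n)
            for j in range(m):
                order = np.argsort(F[:, j])
                f = F[order, j]
                span = f[-1] - f[0]
                if span == 0:
                    continue
                dist[order[0]] = dist[order[-1]] = np.inf
                dist[order[1:-1]] += (f[2:] - f[:-2]) / span
            return dist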

  4. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality can be used to improve color segmentation of the human body or of precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.
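
    Gray-code patterns are one common choice of coded structured light (an assumption; the abstract does not name the code). Decoding a captured stack of binarized patterns into projector column indices might look like this:

        import numpy as np

        def decode_gray(images, thresh=0.5):
            # images: N pattern images (N x H x W, values in [0, 1]), most
            # significant bit first. Returns per-pixel projector column indices.
            g = (np.stack(images) > thresh).astype(np.uint32)
            b = g.copy()                 # Gray -> binary: b[i] = b[i-1] XOR g[i]
            for i in range(1, len(g)):
                b[i] = b[i - 1] ^ g[i]
            weights = 2 ** np.arange(len(g) - 1, -1, -1)
            return (b * weights[:, None, None]).sum(axis=0)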

  5. POINT CLOUD ANALYSIS FOR UAV-BORNE LASER SCANNING WITH HORIZONTALLY AND VERTICALLY ORIENTED LINE SCANNERS – CONCEPT AND FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction, perpendicular to the flight direction, results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction delivers data only at the altitude of the UAV and thus low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. We use a downsampled version of the original point cloud and different neighborhood types to extract fundamental geometric features, which in turn can be used for scene interpretation with respect to linear, planar, or volumetric structures.
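
    The fundamental geometric features mentioned above are commonly derived from the eigenvalues of the local 3D covariance matrix (a standard construction, assumed here; the abstract does not list the exact feature set):

        import numpy as np

        def eigen_features(neighbors):
            # neighbors: (k, 3) points of one neighborhood (k >= 3, non-degenerate).
            # Linearity, planarity and sphericity indicate linear, planar or
            # volumetric structure, respectively.
            cov = np.cov(neighbors.T)
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
            return {
                "linearity":  (l1 - l2) / l1,
                "planarity":  (l2 - l3) / l1,
                "sphericity": l3 / l1,
            }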

  6. Humans use visual and remembered information about object location to plan pointing movements

    NARCIS (Netherlands)

    Brouwer, A.-M.; Knill, D.C.

    2009-01-01

    We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to

  7. Diffusion from a Ground Level Point Source Experiment with Thermoluminescence Dosimeters and Kr 85 as Tracer Substance

    Energy Technology Data Exchange (ETDEWEB)

    Gyllander, Ch; Hollman, S; Widemo, U

    1969-04-15

    Within the framework of the IRIS-project (Iodine Research in Safety Project) an experiment to study diffusion at near-ground level was carried out on 19 December 1967 using {sup 85}Kr as the tracer element. The object of the experiment was (a) to test the method of using β-sensitive thermoluminescence dosimeters under actual field conditions, and (b) to study the initial dilution from a ground level point source. The test area chosen was the Tranvik valley just south of Trobbofjaerden, an inland bay of the Baltic. Dose distributions have been studied at two sections, 50 and 200 m respectively, from the release point. At each level various dispersion parameters have been experimentally determined and their conformity to a normal distribution has been calculated. Dilution factors valid for the centre of the plume are related to the values reported in the literature. The experiment was made under ideal weather conditions above snow-free ground. Results of the next experiment, a point release at ground level from a building at Studsvik, are expected to yield valuable information concerning the effect of buildings on the diffusion pattern.
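
    For context, the centreline dilution from a continuous ground-level point source is conventionally described by the Gaussian plume relation (a textbook formula, not taken from the report):

        import numpy as np

        def centerline_dilution(sigma_y, sigma_z, u):
            # chi/Q (s/m^3) at ground level on the plume centreline for a
            # ground-level source: chi/Q = 1 / (pi * sigma_y * sigma_z * u),
            # with dispersion parameters in metres and wind speed u in m/s.
            return 1.0 / (np.pi * sigma_y * sigma_z * u)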

  8. Diffusion from a Ground Level Point Source Experiment with Thermoluminescence Dosimeters and Kr 85 as Tracer Substance

    International Nuclear Information System (INIS)

    Gyllander, Ch.; Hollman, S.; Widemo, U.

    1969-04-01

    Within the framework of the IRIS-project (Iodine Research in Safety Project) an experiment to study diffusion at near-ground level was carried out on 19 December 1967 using 85 Kr as the tracer element. The object of the experiment was (a) to test the method of using β-sensitive thermoluminescence dosimeters under actual field conditions, and (b) to study the initial dilution from a ground level point source. The test area chosen was the Tranvik valley just south of Trobbofjaerden, an inland bay of the Baltic. Dose distributions have been studied at two sections, 50 and 200 m respectively, from the release point. At each level various dispersion parameters have been experimentally determined and their conformity to a normal distribution has been calculated. Dilution factors valid for the centre of the plume are related to the values reported in the literature. The experiment was made under ideal weather conditions above snow-free ground. Results of the next experiment, a point release at ground level from a building at Studsvik, are expected to yield valuable information concerning the effect of buildings on the diffusion pattern.

  9. Joint classification and contour extraction of large 3D point clouds

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours from large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and for handling strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows one both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10⁹ points.

  10. Improving a variation of the DSC technique for measuring the boiling points of pure compounds at low pressures

    International Nuclear Information System (INIS)

    Troni, Kelly L.; Damaceno, Daniela S.; Ceriani, Roberta

    2016-01-01

    Highlights: • Improvement of a variation of the DSC technique for boiling points at low pressures. • Use of a ballpoint pen ball over the pinhole of the DSC crucible. • Effects of configuration variables of the DSC technique accounted for by factorial design. • An optimized region was obtained and tested for selected compounds. - Abstract: This study aims to improve a variation of the differential scanning calorimetry (DSC) technique for measuring boiling points of pure compounds at low pressures. Using a well-known n-paraffin (n-hexadecane), experimental boiling points at a pressure of 3.47 kPa with u(P) = 0.07 kPa were obtained using a variation of the DSC technique, which consists of placing samples inside hermetically sealed aluminum crucibles, with a pinhole (diameter of 0.8 mm) made in the lid and a tungsten carbide ball with a diameter of 1.0 mm over it. Experiments were configured at nine different combinations of heating rate (K·min⁻¹) and sample size (mg) following a full factorial design (2² trials plus a star configuration and three central points). Individual and combined effects of these two independent variables on the difference between experimental and estimated boiling points (NIST Thermo Data Engine v. 5.0 - Aspen Plus v. 8.4) were investigated. The results obtained in this work reveal that although both factors individually affect the accuracy of this variation of the DSC technique, the effect of heating rate is the most important. An optimized region of combinations of heating rate and sample size for determining boiling points of pure compounds at low pressures was obtained using the response-surface methodology (RSM). Within this optimized region, a selected condition, combining a heating rate of 24.52 K·min⁻¹ and a sample size of (4.6 ± 0.5) mg, was tested for six different compounds (92.094-302.37 g·mol⁻¹) comprising four fatty compounds (tributyrin, monocaprylin, octanoic acid and 1-octadecanol), glycerol and n

  11. Object Detection and Tracking using Modified Diamond Search Block Matching Motion Estimation Algorithm

    Directory of Open Access Journals (Sweden)

    Apurva Samdurkar

    2018-06-01

    Object tracking is one of the main fields within computer vision. Among the various approaches to object detection and tracking, the background subtraction approach makes detection of the object easier. The proposed block matching algorithm is then applied to the detected object to generate motion vectors. The existing diamond search (DS) and cross diamond search (CDS) algorithms are studied, and experiments are carried out on various standard video data sets and user-defined data sets. Based on the study and analysis of these two existing algorithms, a modified diamond search pattern (MDS) algorithm is proposed, using a small diamond-shaped search pattern in the initial step and a large diamond shape (LDS) in further steps for motion estimation. The initial search pattern consists of five points in a small diamond shape and gradually grows into a large diamond-shaped pattern, based on the point with the minimum cost function. The algorithm ends with the small-shape pattern in the last step. The proposed MDS algorithm finds smaller motion vectors and uses fewer search points than the existing DS and CDS algorithms. Further, object detection is carried out using the background subtraction approach, and finally, the MDS motion estimation algorithm is used for tracking the object in color video sequences. The experiments are carried out using different video data sets containing a single object. The results are evaluated and compared using evaluation parameters such as average search points per frame and average computational time per frame. The experimental results show that MDS performs better than DS and CDS on average search points and average computation time.
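
    For reference, a sketch of the classic diamond search that MDS modifies (the MDS variant starts with the small pattern and grows, whereas the classic DS below starts with the large pattern; block size and cost function details are illustrative):

        import numpy as np

        LDS = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
               (1, 1), (1, -1), (-1, 1), (-1, -1)]          # large diamond pattern
        SDS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]    # small diamond pattern

        def sad(ref, cur, bx, by, dx, dy, bs=16):
            # Sum of absolute differences between the current block at (bx, by)
            # and the reference block displaced by (dx, dy).
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bs > ref.shape[1] or y + bs > ref.shape[0]:
                return np.inf
            return np.abs(cur[by:by+bs, bx:bx+bs].astype(int) -
                          ref[y:y+bs, x:x+bs].astype(int)).sum()

        def diamond_search(ref, cur, bx, by, bs=16):
            # Repeat the large pattern until the best match stays at the
            # centre, then refine once with the small pattern.
            mvx = mvy = 0
            while True:
                costs = [sad(ref, cur, bx, by, mvx+dx, mvy+dy, bs) for dx, dy in LDS]
                best = int(np.argmin(costs))
                if best == 0:
                    break
                mvx += LDS[best][0]; mvy += LDS[best][1]
            costs = [sad(ref, cur, bx, by, mvx+dx, mvy+dy, bs) for dx, dy in SDS]
            best = int(np.argmin(costs))
            return mvx + SDS[best][0], mvy + SDS[best][1]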

  12. Object recognition with video-theodolites and without targeting the object

    International Nuclear Information System (INIS)

    Kahmen, H.; Seixas, A. de

    1999-01-01

    At the Department of Applied Geodesy and Engineering Geodesy (TU Vienna) a new kind of theodolite measurement system is under development, enabling measurements with an accuracy of 1:30,000 with and without targeting the object. The main goal is to develop an intelligent multi-sensor system, so that an operator is only needed to supervise the system. Results are gained on-line and can be stored in a CAD system. If no artificial targets are used, identification of points has to be performed by the Master-Theodolite. The method used in our project is based on interest operators. The Slave-Theodolite has to track the master by searching for homologous regions. The method described above can only be used if there is some texture on the surface of the object. If that is not fulfilled, a 'grid-line method' can be used to get information about the surface of the object. In the case of a cartesian co-ordinate system, for instance, the grid-lines can be chosen by the operator before the measurement process is started. The theodolite measurement system is then able to detect the grid-lines and to find the positions where the grid-lines intersect the surface of the object. This system could be used for positioning the different components of a particle accelerator. (author)

  13. Object recognition with video-theodolites and without targeting the object

    Energy Technology Data Exchange (ETDEWEB)

    Kahmen, H.; Seixas, A. de [University of Technology Vienna, Institute of Geodesy and Geophysics, Vienna (Austria)

    1999-07-01

    At the Department of Applied Geodesy and Engineering Geodesy (TU Vienna) a new kind of theodolite measurement system is under development, enabling measurements with an accuracy of 1:30,000 with and without targeting the object. The main goal is to develop an intelligent multi-sensor system, so that an operator is only needed to supervise the system. Results are gained on-line and can be stored in a CAD system. If no artificial targets are used, identification of points has to be performed by the Master-Theodolite. The method used in our project is based on interest operators. The Slave-Theodolite has to track the master by searching for homologous regions. The method described above can only be used if there is some texture on the surface of the object. If that is not fulfilled, a 'grid-line method' can be used to get information about the surface of the object. In the case of a cartesian co-ordinate system, for instance, the grid-lines can be chosen by the operator before the measurement process is started. The theodolite measurement system is then able to detect the grid-lines and to find the positions where the grid-lines intersect the surface of the object. This system could be used for positioning the different components of a particle accelerator. (author)

  14. Point-driven Mathematics Teaching. Studying and Intervening in Danish Classrooms

    DEFF Research Database (Denmark)

    Mogensen, Arne

    … secondary schools emphasize such points in their teaching. Thus, 50 randomly selected mathematics teachers are filmed in one grade 8 math lesson each and the dialogue investigated. The study identifies large variations and many influential components; there seems to be room for improvement. In order to examine possibilities to strengthen the presence and role of mathematical points in teaching, two intervention studies are conducted. First, a focus group of 5 of the original 50 teachers from each school are offered peer coaching by the researcher. This study indicates that different teachers appreciate peer coaching … be supported in significant changes to a point-oriented mathematics teaching. The teachers emphasized joint planning of study lessons, and they regarded the peer coaching after each of these lessons as valuable. The studies with the two teacher groups indicate different opportunities and challenges …

  15. Low-complexity object detection with deep convolutional neural network for embedded systems

    Science.gov (United States)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that deploying CNN-based object detection on an embedded system is more challenging than problems like image classification, due to its computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully-convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO [1]. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case. We evaluate the proposed model for face detection on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves comparable accuracy to state-of-the-art CNN-based object detection methods while reducing the model size by 3× and memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed-point model is capable of achieving 20× faster inference speed compared with the floating-point model. Thus, the proposed method is promising for embedded implementations.
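
    The abstract does not detail the quantization scheme; a common symmetric per-tensor 8-bit fixed-point scheme (a sketch, with assumed rounding and clipping choices) illustrates the ~4× size reduction mentioned above:

        import numpy as np

        def quantize_int8(w):
            # Symmetric per-tensor quantization: store int8 values plus one
            # float scale; int8 storage is ~4x smaller than float32.
            scale = float(np.abs(w).max()) / 127.0
            if scale == 0.0:
                scale = 1.0
            q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
            return q, scale

        def dequantize(q, scale):
            # Recover approximate float weights for an accuracy check.
            return q.astype(np.float32) * scale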

  16. 50 CFR 660.383 - Open access fishery management measures.

    Science.gov (United States)

    2010-10-01

    .... south to the U.S./Mexico border) is permitted within the non-trawl RCA with fixed gear only under the..., President Jackson Seamount, Cordell Bank (50-fm (91-m) isobath), Harris Point, Richardson Rock, Scorpion...

  17. Object recognition memory in zebrafish.

    Science.gov (United States)

    May, Zacnicte; Morrill, Adam; Holcombe, Adam; Johnston, Travis; Gallup, Joshua; Fouad, Karim; Schalomon, Melike; Hamilton, Trevor James

    2016-01-01

    The novel object recognition, or novel-object preference (NOP) test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects, then after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown if this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. Contrary to rodents, zebrafish preferentially explored familiar over novel objects. Familiarity preference disappeared after delays of 5 min. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1 min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50 mg/L) prior to stimuli presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate that zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, exploration biases, from which memory is inferred, depend on object size. These findings suggest that zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour.

  18. Automatic and objective oral cancer diagnosis by Raman spectroscopic detection of keratin with multivariate curve resolution analysis

    Science.gov (United States)

    Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.

    2016-01-01

    We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral-matching comparison of the MCR-decomposed Raman spectra against the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, and 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the "molecular fingerprint" of keratin.
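
    The spectral-matching step can be illustrated with a cosine similarity between each MCR component and the keratin reference (an illustrative criterion; the paper's matching measure and threshold are not given in the abstract):

        import numpy as np

        def spectral_match(component, keratin_ref):
            # Cosine similarity between an MCR-decomposed component spectrum
            # and a keratin reference; a high score flags a keratin-like component.
            a = component / np.linalg.norm(component)
            b = keratin_ref / np.linalg.norm(keratin_ref)
            return float(a @ b)

        # e.g. call a tissue OSCC-positive if any MCR component matches keratin
        # above an assumed threshold such as 0.9.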

  19. Reproducibility of subgingival bacterial samples from patients with peri-implant mucositis

    DEFF Research Database (Denmark)

    Hallström, Hadar; Persson, G Rutger; Strömberg, Ulf

    2015-01-01

    OBJECTIVE: The aim of the present study was to investigate the reproducibility of bacterial enumeration from subsequent subgingival samples collected from patients with peri-implant mucositis. MATERIAL AND METHODS: Duplicate microbial samples from 222 unique implant sites in 45 adult subjects were collected with paper points and analyzed using the checkerboard DNA-DNA hybridization technique. Whole genomic probes of 74 preselected bacterial species were used. Based on the bacterial scores, Cohen's kappa coefficient was used to calculate the inter-annotator agreement for categorical data. The percentage agreement was considered as "good" when the two samples showed the same score or differed by 1 to the power of 10. RESULTS: Moderate to fair kappa values were displayed for all bacterial species in the test panel (range 0.21-0.58). There were no significant differences between Gram
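
    A minimal sketch of the agreement statistic used above, Cohen's kappa for two sets of categorical scores (standard definition; the category count and names are illustrative):

        import numpy as np

        def cohens_kappa(a, b, n_categories):
            # Observed agreement corrected for the agreement expected by chance.
            conf = np.zeros((n_categories, n_categories))
            for i, j in zip(a, b):
                conf[i, j] += 1
            conf /= conf.sum()
            po = np.trace(conf)                       # observed agreement
            pe = conf.sum(axis=1) @ conf.sum(axis=0)  # chance agreement
            return (po - pe) / (1 - pe)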

  20. Sampling point selection for energy estimation in the quasicontinuum method

    NARCIS (Netherlands)

    Beex, L.A.A.; Peerlings, R.H.J.; Geers, M.G.D.

    2010-01-01

    The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of