WorldWideScience

Sample records for automated sampling assessment

  1. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B

    2014-01-01

Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters...

  2. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    Science.gov (United States)

The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil and Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available: AGWA 1.5 for
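Watershed delineation from a DEM, which AGWA automates, typically begins by assigning each grid cell a flow direction toward its steepest downhill neighbor (the D8 scheme). A minimal sketch of that step, assuming a toy elevation grid and ignoring pit filling and boundary cells (illustrative only, not AGWA's actual implementation):

```python
def d8_flow_direction(dem):
    """For each interior cell, return the (dr, dc) offset of the steepest
    downhill neighbor, or None for pits (cells with no lower neighbor)."""
    rows, cols = len(dem), len(dem[0])
    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    directions = {}
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            best, best_drop = None, 0.0
            for dr, dc in offsets:
                dist = (dr * dr + dc * dc) ** 0.5  # diagonal steps are longer
                drop = (dem[r][c] - dem[r + dr][c + dc]) / dist
                if drop > best_drop:
                    best, best_drop = (dr, dc), drop
            directions[(r, c)] = best
    return directions

dem = [[9, 9, 9, 9],
       [9, 5, 4, 9],
       [9, 3, 2, 9],
       [9, 9, 1, 9]]
flow = d8_flow_direction(dem)
# cell (1,1) drains diagonally toward its steepest-descent neighbor (2,2)
```

Following these offsets from cell to cell traces the drainage network; cells whose paths reach the user-selected outlet form the delineated watershed.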

  3. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar eBrunborg

    2014-10-01

Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are also useful for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary greatly in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.

  4. Technology modernization assessment flexible automation

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, D.W.; Boyd, D.R.; Hansen, N.H.; Hansen, M.A.; Yount, J.A.

    1990-12-01

The objectives of this report are: to present technology assessment guidelines to be considered in conjunction with defense regulations before an automation project is developed; to give examples showing how assessment guidelines may be applied to a current project; and to present several potential areas where automation might be applied successfully in the depot system. Depots perform primarily repair and remanufacturing operations, with limited small batch manufacturing runs. While certain activities (such as Management Information Systems and warehousing) are directly applicable to either environment, the majority of applications will require combining existing and emerging technologies in different ways to meet the special needs of the depot remanufacturing environment. Industry generally enjoys the ability to make revisions to its product lines seasonally, followed by batch runs of thousands or more. Depot batch runs are in the tens, at best the hundreds, of parts with a potential for large variation in product mix; reconfiguration may be required on a week-to-week basis. This need for a higher degree of flexibility suggests a higher level of operator interaction, and, in turn, control systems that go beyond the state of the art for less flexible automation and industry in general. This report investigates the benefits and barriers to automation and concludes that, while significant benefits do exist for automation, depots must be prepared to carefully investigate the technical feasibility of each opportunity and the life-cycle costs associated with implementation. Implementation is suggested in two ways: (1) develop an implementation plan for automation technologies based on results of small demonstration automation projects; (2) use phased implementation for both these and later stage automation projects to allow major technical and administrative risk issues to be addressed. 10 refs., 2 figs., 2 tabs. (JF)

  5. An Office Automation Needs Assessment Model

    Science.gov (United States)

    1985-08-01

office automation needs of an Army Hospital. Based on a literature review and interviews with industry experts, a model was developed to assess office automation needs. The model was applied against the needs of the Clinical Support Division. The author identified a need for a strategic plan for office automation prior to analysis of a specific service for automation. He recommended establishment of a Hospital Automation Advisory Council to centralize policy recommendations for office automation.

  6. Automated Assessment in Massive Open Online Courses

    Science.gov (United States)

    Ivaniushin, Dmitrii A.; Shtennikov, Dmitrii G.; Efimchick, Eugene A.; Lyamin, Andrey V.

    2016-01-01

This paper describes an approach to using automated assessments in online courses. The Open edX platform is used as the online course platform. The new assessment type uses Scilab as the learning and solution-validation tool. This approach allows the use of automated individual variant generation and automated solution checks without involving the course…
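The two automated mechanisms named here, individual variant generation and solution checking, can be sketched as follows. This is an illustrative Python sketch only (the course platform described uses Scilab on Open edX; the function names, parameter ranges, and seeding scheme are invented assumptions):

```python
import random

def generate_variant(student_id, problem_seed=42):
    """Deterministically derive a student's problem parameters: the same
    student id always yields the same variant, but students differ."""
    rng = random.Random(f"{student_id}:{problem_seed}")  # string seed
    a = rng.randint(2, 9)
    b = rng.randint(10, 99)
    return {"a": a, "b": b, "question": f"Compute {a} * x + {b} for x = 3"}

def check_solution(variant, submitted, tol=1e-6):
    """Automated check: compare the submitted number to the expected
    answer for this student's variant, within a numeric tolerance."""
    expected = variant["a"] * 3 + variant["b"]
    return abs(submitted - expected) <= tol

v = generate_variant("student-007")
```

Seeding the generator from the student identifier means no per-student answer key needs to be stored: the grader regenerates the variant at check time.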

  7. Automated sampling and control of gaseous simulations

    KAUST Repository

    Huang, Ruoguan

    2013-05-04

In this work, we describe a method that automates the sampling and control of gaseous fluid simulations. Several recent approaches have provided techniques for artists to generate high-resolution simulations based on a low-resolution simulation. However, often in applications the overall flow in the low-resolution simulation that an animator observes and intends to preserve is composed of even lower frequencies than the low resolution itself. In such cases, attempting to match the low-resolution simulation precisely is unnecessarily restrictive. We propose a new sampling technique to efficiently capture the overall flow of a fluid simulation, at the scale of the user's choice, in such a way that the sampled information is sufficient to represent what is virtually perceived and no more. Thus, by applying control based on the sampled data, we ensure that in the resulting high-resolution simulation, the overall flow is matched to the low-resolution simulation and the fine details on the high resolution are preserved. The samples we obtain have both spatial and temporal continuity that allows smooth keyframe matching and direct manipulation of visible elements such as smoke density through temporal blending of samples. We demonstrate that a user can easily configure a simulation with our system to achieve desired results. © 2013 Springer-Verlag Berlin Heidelberg.

  8. Automated system for fractionation of blood samples

    Energy Technology Data Exchange (ETDEWEB)

    Lee, N. E.; Genung, R. K.; Johnson, W. F.; Mrochek, J. E.; Scott, C. D.

    1978-01-01

A prototype system for preparing multiple fractions of blood components (plasma, washed red cells, and hemolysates) using automated techniques has been developed. The procedure is based on centrifugal separation and differential pressure-induced transfer in a rotor that has been designed to process numerous samples simultaneously. Red cells are sedimented against the outer walls of the sample chamber, and plasma is siphoned, by imposition of either a slight positive or negative pressure, into individual reservoirs in a collection ring. Washing of cells is performed in situ; samples of washed cells, either packed or in saline solution, can be recovered. Cellular hemolysates are prepared and automatically transferred to individual, commercially available collection vials ready for storage in liquid nitrogen or immediate analysis. The system has potential application in any biomedical area which requires high sample throughput and in which one or more of the blood fractions will be used. A separate unit has been designed and developed for the semiautomated cleaning of the blood processing vessel.

  9. How to assess sustainability in automated manufacturing

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Rödger, Jan-Markus; Bey, Niki

    2015-01-01

The aim of this paper is to describe how sustainability in automation can be assessed. The assessment method is illustrated using a case study of a robot. Three aspects of sustainability assessment in automation are identified. Firstly, we consider automation as part of a larger system that fulfills the market demand for a given functionality. Secondly, three aspects of sustainability have to be assessed: environment, economy, and society. Thirdly, automation is part of a system with many levels, with different actors on each level, that together meet the market demand. In this system, (sustainability) specifications move top-down, which helps avoid sub-optimization and problem shifting. From these three aspects, sustainable automation is defined as automation that contributes to products that fulfill a market demand in a more sustainable way. The case study presents the carbon footprints...

  10. Impact of Office Automation: An Empirical Assessment

    Science.gov (United States)

    1988-12-01

Naval Postgraduate School thesis, Monterey, California. Title: Impact of Office Automation: An Empirical Assessment. Subject terms: Productivity Assessment; SACONS; Office Automation. (The abstract is not recoverable from the scanned report documentation page in this record.)

  11. Automated blood sampling systems for positron emission tomography

    International Nuclear Information System (INIS)

    Eriksson, L.; Holte, S.; Bohm, C.; Kesselberg, M.; Hovander, B.

    1988-01-01

    An automated blood sampling system has been constructed and evaluated. Two different detector units in the blood sampling system are compared. Results from studies of blood-brain barrier transfer of a C-11 labelled receptor antagonist will be discussed

  12. Automation of a dust sampling train | Akinola | Journal of Modeling ...

    African Journals Online (AJOL)

    The results obtained from this work show that the flue gas sampling process can be automated using microprocessor-based control system and a sampling train. Using the sampling train developed by the British Coal Utilization Research Association (BCURA) with both the internal and external flow-meter arrangements, the ...

  13. On-line Automated Sample Preparation-Capillary Gas Chromatography for the Analysis of Plasma Samples.

    NARCIS (Netherlands)

    Louter, A.J.H.; van der Wagt, R.A.C.A.; Brinkman, U.A.T.

    1995-01-01

    An automated sample preparation module, (the automated sample preparation with extraction columns, ASPEC), was interfaced with a capillary gas chromatograph (GC) by means of an on-column interface. The system was optimised for the determination of the antidepressant trazodone in plasma. The clean-up

  14. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

Full Text Available This study investigates automated data accuracy assessment as described in the data quality literature for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry over a 10-year publication period, retrieved from two bibliometric data sources, Web of Science and Scopus. The bibliographic records are assessed against the original publication (gold standard) and an automatic assessment method is compared to a manual one. The results show that the manual assessment method yields truer accuracy scores. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method of bibliographic data accuracy as well as defining the impact of data accuracy on the citation matching process.
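The field-versus-record distinction the study reports (higher accuracy per field than per record) can be illustrated with a small sketch; the field names and the exact-match rule here are invented for illustration, not the study's actual scoring rules:

```python
def field_accuracy(record, gold):
    """Fraction of fields that exactly match the gold-standard record."""
    matches = sum(1 for f in gold if record.get(f) == gold[f])
    return matches / len(gold)

def record_accurate(record, gold):
    """A record counts as accurate only if every field matches."""
    return all(record.get(f) == gold[f] for f in gold)

gold = {"title": "On X", "year": "1999", "journal": "J. Chem."}
rec  = {"title": "On X", "year": "1999", "journal": "J Chem"}

# One mismatched field: 2/3 field accuracy, but 0 record-level accuracy,
# so per-field scores are necessarily >= per-record scores.
assert abs(field_accuracy(rec, gold) - 2 / 3) < 1e-9
assert not record_accurate(rec, gold)
```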

  15. Automated Autonomy Assessment System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has expressed the need to assess crew autonomy relative to performance and evaluate an optimal level of autonomy that maximizes individual and team performance....

  16. Automated Bone Age Assessment: Motivation, Taxonomies, and Challenges

    Directory of Open Access Journals (Sweden)

    Marjan Mansourvar

    2013-01-01

Full Text Available Bone age assessment (BAA) of unknown people is one of the most important topics in clinical procedures for evaluating the biological maturity of children. BAA is usually performed by comparing an X-ray of the left hand wrist with an atlas of known sample bones. Recently, BAA has gained remarkable ground in academia and medicine. Manual methods of BAA are time-consuming and prone to observer variability. This is a motivation for developing automated methods of BAA. However, there is considerable research on automated assessment, much of which is still in the experimental stage. This survey provides a taxonomy of automated BAA approaches and discusses the challenges. Finally, we present suggestions for future research.

  17. Automated Bone Age Assessment: Motivation, Taxonomies, and Challenges

    Science.gov (United States)

    Ismail, Maizatul Akmar; Herawan, Tutut; Gopal Raj, Ram; Abdul Kareem, Sameem; Nasaruddin, Fariza Hanum

    2013-01-01

Bone age assessment (BAA) of unknown people is one of the most important topics in clinical procedures for evaluating the biological maturity of children. BAA is usually performed by comparing an X-ray of the left hand wrist with an atlas of known sample bones. Recently, BAA has gained remarkable ground in academia and medicine. Manual methods of BAA are time-consuming and prone to observer variability. This is a motivation for developing automated methods of BAA. However, there is considerable research on automated assessment, much of which is still in the experimental stage. This survey provides a taxonomy of automated BAA approaches and discusses the challenges. Finally, we present suggestions for future research. PMID:24454534

  18. Non-Contact Conductivity Measurement for Automated Sample Processing Systems

    Science.gov (United States)

    Beegle, Luther W.; Kirby, James P.

    2012-01-01

A new method has been developed for monitoring and control of automated sample processing and preparation, especially focusing on desalting of samples before analysis (described in more detail in Automated Desalting Apparatus (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid phase sample preparation media, allows monitoring of the process and acts as a trigger for the start of the next step in the sequence (see figure). At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of the sample processing protocol, and greatly minimizes the use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement defines three key conditions for the sample preparation process: first, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement will not only facilitate automation of sample preparation and processing, but will also help optimize operational time and the use of consumables.
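The triggering logic described, in which an outlet conductivity reading advances the protocol to its next stage, can be sketched as a small state machine. The stage names and threshold values below are illustrative assumptions, not parameters of the actual apparatus:

```python
LOW, HIGH = 50.0, 500.0  # illustrative thresholds, microsiemens/cm

def next_step(step, outlet_conductivity):
    """Advance the desalting protocol when the outlet reading confirms
    the condition the current stage is waiting for."""
    if step == "flush" and outlet_conductivity < LOW:
        return "acidify"      # system neutralized: start the acid wash
    if step == "acidify" and outlet_conductivity > HIGH:
        return "basify"       # acid breakthrough seen: switch to base
    if step == "basify" and outlet_conductivity > HIGH:
        return "done"         # basic (high-pH) condition reached
    return step               # condition not yet met: keep waiting

step = "flush"
for reading in [800.0, 30.0, 120.0, 650.0, 700.0]:
    step = next_step(step, reading)
```

Driving the sequence from measured readings rather than fixed wash times is what minimizes consumable use: each stage runs only as long as the sensor says it must.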

  19. Automation of motor dexterity assessment.

    Science.gov (United States)

    Heyer, Patrick; Castrejon, Luis R; Orihuela-Espina, Felipe; Sucar, Luis Enrique

    2017-07-01

Motor dexterity assessment is regularly performed in rehabilitation wards to establish patient status, and automation of this routine task is sought. A system for automating the assessment of motor dexterity based on the Fugl-Meyer scale and with loose restrictions on sensing technologies is presented. The system consists of two main elements: 1) a data representation that abstracts the low-level information obtained from a variety of sensors into a highly separable, low-dimensionality encoding employing t-distributed Stochastic Neighbourhood Embedding, and 2) central to this communication, a multi-label classifier that boosts classification rates by exploiting the fact that the classes corresponding to the individual exercises are naturally organized as a network. Depending on the targeted therapeutic movement, class labels (i.e., exercise scores) are highly correlated: patients who perform well in one tend to perform well in related exercises; critically, no node can be used as a proxy for others, since an exercise does not encode the information of other exercises. Over data from a cohort of 20 patients, the novel classifier outperforms classical Naive Bayes, random forest and variants of support vector machines (ANOVA: p ...), supporting rehabilitation and telerehabilitation alternatives.

  20. An automated blood sampling system used in positron emission tomography

    International Nuclear Information System (INIS)

    Eriksson, L.; Bohm, C.; Kesselberg, M.

    1988-01-01

Fast dynamic function studies with positron emission tomography (PET) have the potential to give accurate information on physiological functions of the brain. This capability can be realised if the positron camera system accurately quantitates the tracer uptake in the brain with sufficiently high efficiency and in sufficiently short time intervals. However, in addition, the tracer concentration in blood, as a function of time, must be accurately determined. This paper describes and evaluates an automated blood sampling system. Two different detector units are compared. The use of the automated blood sampling system is demonstrated in studies of cerebral blood flow, in studies of the blood-brain barrier transfer of amino acids and of the cerebral oxygen consumption. 5 refs.; 7 figs

  1. Optimizing centrifugation of coagulation samples in laboratory automation.

    Science.gov (United States)

    Suchsland, Juliane; Friedrich, Nele; Grotevendt, Anne; Kallner, Anders; Lüdemann, Jan; Nauck, Matthias; Petersmann, Astrid

    2014-08-01

High acceleration centrifugation conditions are used in laboratory automation systems to reduce the turnaround time (TAT) of clinical chemistry samples, but not of coagulation samples. This often requires separate sample flows. The CLSI guideline and manufacturers' recommendations for coagulation assays aim at reducing platelet counts. For measurement of prothrombin time (PT) and activated partial thromboplastin time (APTT), platelet counts (Plt) below 200 × 10⁹/L are recommended. Other coagulation assays may require even lower platelet counts, e.g., less than 10 × 10⁹/L. Unifying centrifugation conditions can facilitate the integration of coagulation samples in the overall workflow of a laboratory automation system. We evaluated centrifugation conditions of coagulation samples by using high acceleration centrifugation conditions (5 min; 3280×g) in a single run and in two consecutive runs. Results of coagulation assays [PT, APTT, coagulation factor VIII (F. VIII) and protein S] and platelet counts were compared after the first and second centrifugation. Platelet counts below 200 × 10⁹/L were obtained in all samples after the first centrifugation, and less than 10 × 10⁹/L in 73% of the samples after a second centrifugation. Passing-Bablok regression analyses showed an equal performance of PT, APTT and F. VIII after first and second centrifugation, whereas protein S measurements require a second centrifugation. Coagulation samples can be integrated into the workflow of a laboratory automation system using high acceleration centrifugation. A single centrifugation was sufficient for PT, APTT and F. VIII, whereas two successive centrifugations appear to be sufficient for protein S activity.
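The Passing-Bablok regression used here to compare results after the first and second centrifugation can be sketched in simplified form: the slope is the median of all pairwise slopes and the intercept the median residual, with slope near 1 and intercept near 0 indicating equal performance. The published procedure also handles ties, negative slopes, and confidence intervals, which this sketch omits; the data are toy values, not the study's:

```python
import statistics

def passing_bablok(x, y):
    """Simplified Passing-Bablok fit: median pairwise slope, then the
    median of the residuals y - b*x as the intercept."""
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    b = statistics.median(slopes)
    a = statistics.median(yi - b * xi for xi, yi in zip(x, y))
    return b, a

first  = [10.2, 11.0, 12.5, 13.1, 14.0, 15.2]  # e.g. APTT, 1st spin
second = [10.3, 11.1, 12.4, 13.2, 14.1, 15.1]  # same samples, 2nd spin
slope, intercept = passing_bablok(first, second)
```

Unlike ordinary least squares, this estimator assumes measurement error in both methods and is robust to outliers, which is why it is standard for method-comparison studies.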

  2. Automated reliability assessment for spectroscopic redshift measurements

    Science.gov (United States)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

Context: Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10⁶) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aims: In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for
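Descriptive features of the redshift posterior PDF of the kind such a classifier could consume, for example its width, entropy, and peak height, can be sketched as follows. The specific feature set is an illustrative assumption, not the paper's: a narrow, strongly peaked posterior (low entropy, small sigma) corresponds to a reliable redshift, a broad or multimodal one to an unreliable measurement.

```python
import math

def pdf_features(z, p):
    """z: redshift grid; p: posterior values (normalized internally).
    Returns simple summary features of the posterior PDF."""
    total = sum(p)
    p = [pi / total for pi in p]
    mean = sum(zi * pi for zi, pi in zip(z, p))
    var = sum((zi - mean) ** 2 * pi for zi, pi in zip(z, p))
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return {"mean": mean, "sigma": math.sqrt(var),
            "entropy": entropy, "peak": max(p)}

grid = [0.01 * i for i in range(100)]
# Two toy posteriors centered at z = 0.5: sharply peaked vs broad.
narrow = [math.exp(-((zi - 0.5) ** 2) / (2 * 0.01 ** 2)) for zi in grid]
broad  = [math.exp(-((zi - 0.5) ** 2) / (2 * 0.2 ** 2)) for zi in grid]
f_narrow, f_broad = pdf_features(grid, narrow), pdf_features(grid, broad)
```

Feature vectors like these are what an unsupervised partitioning (as in the paper) would cluster into reliability classes.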

  3. Automating Groundwater Sampling At Hanford, The Next Step

    International Nuclear Information System (INIS)

    Connell, C.W.; Conley, S.F.; Hildebrand, R.D.; Cunningham, D.E.

    2010-01-01

Historically, the groundwater monitoring activities at the Department of Energy's Hanford Site in southeastern Washington State have been very 'people intensive.' Approximately 1500 wells are sampled each year by field personnel or 'samplers.' These individuals have been issued pre-printed forms showing information about the well(s) for a particular sampling evolution. This information is taken from 2 official electronic databases: the Hanford Well Information System (HWIS) and the Hanford Environmental Information System (HEIS). The samplers used these hardcopy forms to document the groundwater samples and well water-levels. After recording the entries in the field, the samplers turned the forms in at the end of the day and other personnel posted the collected information onto a spreadsheet that was then printed and included in a log book. The log book was then used to make manual entries of the new information into the software application(s) for the HEIS and HWIS databases. A pilot project for automating this extremely tedious process was launched in 2008. Initially, the automation was focused on water-level measurements. Now, the effort is being extended to automate the metadata associated with collecting groundwater samples. The project allowed electronic forms produced in the field by samplers to be used in a workflow process in which the data are transferred to the database and the electronic form is filed in managed records, thus eliminating manually completed forms. Eliminating the manual forms and streamlining the data entry not only improved the accuracy of the information recorded, but also enhanced the efficiency and sampling capacity of field office personnel.

  4. [Evaluation of an automated streaking system of urine samples for urine cultures].

    Science.gov (United States)

    Bustamante, Verónica; Meza, Paulina; Román, Juan C; García, Patricia

    2014-12-01

Automated systems have simplified laboratory workflow, improved standardization and traceability, and diminished human errors and workload. Although microbiology laboratories have little automation, in recent years new tools for automating preanalytical steps have appeared. To assess the performance of an automated streaking machine for urine cultures and its agreement with the conventional manual plating method for semiquantitative colony counts, 495 urine samples for urinary culture were inoculated on CPS® agar using our standard protocol and the PREVI™ Isola. Rates of positivity, negativity, polymicrobial growth, bacterial species, colony counts and re-isolation requirements were compared. Agreement was achieved in 98.97% of the positive/negative results, in 99.39% of the polymicrobial growth, 99.76% of bacterial species isolated and in 98.56% of colony counts. The need for re-isolation of colonies decreased from 12.1% to 1.1% using the automated system. PREVI™ Isola's performance was as expected, saving time and improving bacterial isolation. It represents a helpful tool for laboratory automation.

  5. Automated high-resolution NMR with a sample changer

    International Nuclear Information System (INIS)

    Wade, C.G.; Johnson, R.D.; Philson, S.B.; Strouse, J.; McEnroe, F.J.

    1989-01-01

Within the past two years, it has become possible to obtain high-resolution NMR spectra using automated commercial instrumentation. Software control of all spectrometer functions has reduced most of the tedious manual operations to typing a few computer commands or even making selections from a menu. Addition of an automatic sample changer is the next natural step in improving efficiency and sample throughput; it has a significant (and even unexpected) impact on how NMR laboratories are run and how NMR is taught. Such an instrument makes even sophisticated experiments routine, so that people with no previous exposure to NMR can run these experiments after a training session of an hour or less. This A/C Interface examines the impact of such instrumentation on both the academic and the industrial laboratory

  6. Validation of Automated Scoring of Science Assessments

    Science.gov (United States)

    Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.

    2016-01-01

    Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…

  7. Neural Multi-task Learning in Automated Assessment

    OpenAIRE

    Cummins, Ronan; Rei, Marek

    2018-01-01

    Grammatical error detection and automated essay scoring are two tasks in the area of automated assessment. Traditionally these tasks have been treated independently with different machine learning models and features used for each task. In this paper, we develop a multi-task neural network model that jointly optimises for both tasks, and in particular we show that neural automated essay scoring can be significantly improved. We show that while the essay score provides little evidence to infor...

  8. Current advances and strategies towards fully automated sample preparation for regulated LC-MS/MS bioanalysis.

    Science.gov (United States)

    Zheng, Naiyu; Jiang, Hao; Zeng, Jianing

    2014-09-01

    Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.

  9. Automated Ecological Assessment of Physical Activity: Advancing Direct Observation

    Directory of Open Access Journals (Sweden)

    Jordan A. Carlson

    2017-12-01

Full Text Available Technological advances provide opportunities for automating direct observations of physical activity, which allow for continuous monitoring and feedback. This pilot study evaluated the initial validity of computer vision algorithms for ecological assessment of physical activity. The sample comprised 6630 seconds of video per camera (three cameras in total), capturing up to nine participants engaged in sitting, standing, walking, and jogging in an open outdoor space while wearing accelerometers. Computer vision algorithms were developed to assess the number and proportion of people in sedentary, light, moderate, and vigorous activity, and group-based metabolic equivalents of tasks (MET-minutes). Means and standard deviations (SD) of bias/difference values, and intraclass correlation coefficients (ICC), assessed the criterion validity compared to accelerometry separately for each camera. The number and proportion of participants sedentary and in moderate-to-vigorous physical activity (MVPA) had small biases (within 20% of the criterion mean) and the ICCs were excellent (0.82–0.98). Total MET-minutes were slightly underestimated, by 9.3–17.1%, and the ICCs were good (0.68–0.79). The standard deviations of the bias estimates were moderate-to-large relative to the means. The computer vision algorithms appeared to have acceptable sample-level validity (i.e., across a sample of time intervals) and are promising for automated ecological assessment of activity in open outdoor settings, but further development and testing is needed before such tools can be used in a diverse range of settings.
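The validity statistics reported above, mean bias and intraclass correlation, can be sketched for the two-method case (camera estimate versus accelerometer criterion) using a two-way random-effects ICC(2,1). The data below are toy values, not the study's:

```python
def icc_2_1(y):
    """Two-way random-effects ICC(2,1) for absolute agreement.
    y: list of [method1, method2, ...] rows, one row per time interval."""
    n, k = len(y), len(y[0])
    m = sum(sum(row) for row in y) / (n * k)          # grand mean
    rows = [sum(row) / k for row in y]                 # per-interval means
    cols = [sum(y[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((r - m) ** 2 for r in rows) / (n - 1)   # between rows
    msc = n * sum((c - m) ** 2 for c in cols) / (k - 1)   # between methods
    sse = sum((y[i][j] - rows[i] - cols[j] + m) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                    # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

camera = [2.0, 5.0, 8.0, 3.0, 9.0]   # e.g. people in MVPA per interval
accel  = [2.1, 5.2, 7.9, 3.1, 9.2]   # accelerometer criterion
bias = sum(c - a for c, a in zip(camera, accel)) / len(camera)
icc = icc_2_1([list(pair) for pair in zip(camera, accel)])
```

With between-interval variation much larger than the camera-accelerometer disagreement, the ICC is close to 1, mirroring the "excellent (0.82-0.98)" range reported.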

  10. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    Directory of Open Access Journals (Sweden)

    Catherine Middag

    2009-01-01

    Full Text Available It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. Considerable effort has therefore been put into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in applying objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods that can be shown to produce ratings corresponding well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that builds on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
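    The final prediction step above amounts to fitting a small regression model to pathological speech samples and reporting the root mean squared error between perceived and computed intelligibilities. A minimal single-feature sketch (the feature values and scores are hypothetical, and the paper's actual model is richer):

```python
from math import sqrt
from statistics import mean

def fit_and_rmse(x, y):
    """Ordinary least-squares fit of intelligibility score y on one
    (hypothetical) phonological feature x; returns (slope, intercept, rmse)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # RMSE of the residuals on the training data
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    return slope, intercept, sqrt(mean(r * r for r in resid))
```

    An RMSE of 8 on a 0–100 intelligibility scale, as reported, would correspond to predictions typically within about 8 points of the perceptual ratings.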

  11. Energy Assessment of Automated Mobility Districts

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yuche [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-03

    Automated vehicles (AVs) are increasingly being discussed as the basis for on-demand mobility services, introducing a new paradigm in which a fleet of AVs displaces private automobiles for day-to-day travel in dense activity districts. This project examines such a concept to displace privately owned automobiles within a region containing dense activity generators (jobs, retail, entertainment, etc.), referred to as an automated mobility district (AMD). The project reviews several such districts, including airports, college campuses, business parks, downtown urban cores, and military bases, with examples of previous attempts to meet mobility needs without private automobiles, some with automated technology and others with more traditional transit-based solutions. The issues and benefits of AMDs are framed from the perspective of intra-district, inter-district, and border issues, and the requirements for a modeling framework are identified to adequately reflect the breadth of the mobility, energy, and emissions impacts anticipated with AMDs.

  12. Automated remedial assessment methodology software system

    International Nuclear Information System (INIS)

    Whiting, M.; Wilkins, M.; Stiles, D.

    1994-11-01

    The Automated Remedial Analysis Methodology (ARAM) software system has been developed by the Pacific Northwest Laboratory to assist the U.S. Department of Energy (DOE) in evaluating cleanup options for over 10,000 contaminated sites across the DOE complex. The automated methodology comprises modules for decision logic diagrams, technology applicability and effectiveness rules, mass balance equations, cost and labor estimating factors and equations, and contaminant stream routing. ARAM is used to select technologies for meeting cleanup targets; determine the effectiveness of the technologies in destroying, removing, or immobilizing contaminants; decide the nature and amount of secondary waste requiring further treatment; and estimate the cost and labor involved in applying the technologies.

  13. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    The U.S. Air Force School of Aerospace Medicine Operational Based Vision Assessment (OBVA) Laboratory has developed a set of computer-based, automated vision tests. Calibration files are stored under the user's application data folder: [username of your computer] > "App Data" > "Roaming" > "Automated Vision Test" > "Settings" > "Calibration".

  14. Automated force volume image processing for biological samples.

    Directory of Open Access Journals (Sweden)

    Pavel Polyakov

    2011-04-01

    Full Text Available Atomic force microscopy (AFM) has now become a powerful technique for investigating, at the molecular level, surface forces, nanomechanical properties of deformable particles, biomolecular interactions, kinetics, and dynamic processes. This paper specifically focuses on the analysis of AFM force curves collected on biological systems, in particular, bacteria. The goal is to provide fully automated tools to achieve theoretical interpretation of force curves on the basis of adequate, available physical models. In this respect, we propose two algorithms, one for the processing of approach force curves and another for the quantitative analysis of retraction force curves. In the former, electrostatic interactions prior to contact between the AFM probe and the bacterium are accounted for, and mechanical interactions operating after contact are described in terms of the Hertz-Hooke formalism. Retraction force curves are analyzed on the basis of the Freely Jointed Chain model. For both algorithms, the quantitative reconstruction of force curves is based on the robust detection of critical points (jumps, changes of slope, or changes of curvature) which mark the transitions between the various relevant interactions taking place between the AFM tip and the studied sample during approach and retraction. Once the key regions of separation distance and indentation are detected, the physical parameters describing the relevant interactions operating in these regions are extracted, making use of a regression procedure for fitting experiments to theory. The flexibility, accuracy and strength of the algorithms are illustrated with the processing of two force-volume images, which collect a large set of approach and retraction curves measured on a single biological surface. For each force-volume image, several maps are generated, representing the spatial distribution of the searched physical parameters as estimated for each pixel of the force-volume image.
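    Two of the building blocks described above, detecting abrupt jumps in a retraction curve and fitting the post-contact region with a Hertz-type contact model, can be sketched as follows. The one-parameter power-law form and the simple threshold rule are illustrative simplifications, not the paper's full algorithms.

```python
def fit_hertz(indentation, force):
    """Least-squares fit of a simplified Hertzian contact model F = k * d**1.5,
    where k lumps tip geometry and elastic modulus (illustrative assumption).
    Linear least squares in the transformed variable u = d**1.5."""
    u = [d ** 1.5 for d in indentation]
    return sum(ui * fi for ui, fi in zip(u, force)) / sum(ui * ui for ui in u)

def detect_jumps(force, threshold):
    """Indices where the force changes abruptly between consecutive samples,
    i.e. candidate critical points (e.g. unbinding events) in a retraction curve."""
    return [i for i in range(1, len(force))
            if abs(force[i] - force[i - 1]) > threshold]
```

    In a fuller treatment, the detected critical points would delimit the regions of the curve to which each physical model (electrostatic, Hertz-Hooke, Freely Jointed Chain) is fitted.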

  15. Automation bias: empirical results assessing influencing factors.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2014-05-01

    To investigate the rate of automation bias - the propensity of people to over-rely on automated advice - and the factors associated with it. Tested factors were attitudinal (trust and confidence), non-attitudinal (decision support experience and clinical experience), and environmental (task difficulty). The paradigm of simulated decision support advice within a prescribing context was used. The study employed a within-participant before-after design, whereby 26 UK NHS General Practitioners were shown 20 hypothetical prescribing scenarios with prevalidated correct and incorrect answers; advice was incorrect in 6 scenarios. They were asked to prescribe for each case, then shown simulated advice, and then asked whether they wished to change their prescription; the post-advice prescription was recorded. The rate of overall decision switching was captured. Automation bias was measured by negative consultations - switching from a correct to an incorrect prescription. Participants changed prescriptions in 22.5% of scenarios. The pre-advice accuracy rate of the clinicians was 50.38%, which improved to 58.27% post-advice. The CDSS improved the decision accuracy in 13.1% of prescribing cases. The rate of automation bias, as measured by decision switches from correct pre-advice to incorrect post-advice, was 5.2% of all cases - a net improvement of 8%. More immediate factors, such as trust in the specific CDSS, decision confidence, and task difficulty, influenced the rate of decision switching. Lower clinical experience was associated with more decision switching. Age, DSS experience, and trust in CDSS generally were not significantly associated with decision switching. This study adds to the literature surrounding automation bias in terms of its potential frequency and influencing factors. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
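    The switching and automation-bias rates reported above are simple proportions over scenarios; a minimal sketch, where the per-scenario encoding as (pre-advice correct, post-advice correct, switched) booleans is an assumption for illustration:

```python
def automation_bias_rates(cases):
    """cases: one (pre_correct, post_correct, switched) boolean triple per
    scenario. Returns (switch_rate, bias_rate), where automation bias counts
    only switches from a correct pre-advice answer to an incorrect one."""
    n = len(cases)
    switch_rate = sum(1 for _, _, s in cases if s) / n
    bias_rate = sum(1 for pre, post, s in cases if s and pre and not post) / n
    return switch_rate, bias_rate
```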

  16. Rapid and Automated Determination of Plutonium and Neptunium in Environmental Samples

    DEFF Research Database (Denmark)

    Qiao, Jixin

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in thi...... environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas....... for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including...

  17. Automated Aqueous Sample Concentration Methods for in situ Astrobiological Instrumentation

    Science.gov (United States)

    Aubrey, A. D.; Grunthaner, F. J.

    2009-12-01

    The era of wet chemical experiments for in situ planetary science investigations is upon us, as evidenced by recent results from the surface of Mars by Phoenix’s microscopy, electrochemistry, and conductivity analyzer, MECA [1]. Studies suggest that traditional thermal volatilization methods for planetary science in situ investigations induce organic degradation during sample processing [2], an effect that is enhanced in the presence of oxidants [3]. Recent developments have trended towards adaptation of non-destructive aqueous extraction and analytical methods for future astrobiological instrumentation. Wet chemical extraction techniques under investigation include subcritical water extraction, SCWE [4], aqueous microwave-assisted extraction, MAE, and organic solvent extraction [5]. Similarly, miniaturized analytical space flight instruments under development that require aqueous extracts include microfluidic capillary electrophoresis chips, μCE [6], liquid chromatography mass spectrometers, LC-MS [7], and life marker chips, LMC [8]. If organics are present on the surface of Mars, they are expected to be present at extremely low concentrations (parts-per-billion), orders of magnitude below the sensitivities of most flight instrument technologies. Therefore, it becomes necessary to develop and integrate concentration mechanisms for in situ sample processing before delivery to analytical flight instrumentation. We present preliminary results of automated solid-phase extraction (SPE) sample purification and concentration methods for the treatment of highly saline aqueous soil extracts. These methods take advantage of the affinity of low molecular weight organic compounds with natural and synthetic scavenger materials. These interactions allow for the separation of target organic analytes from unfavorable background species (i.e. salts) during inline treatment, and a clever method for selective desorption is utilized to obtain concentrated solutions on the order

  18. Automated assessment of cognitive health using smart home technologies.

    Science.gov (United States)

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen; Parsey, Carolyn

    2013-01-01

    The goal of this work is to develop intelligent systems to monitor the wellbeing of individuals in their home environments. This paper introduces a machine learning-based method to automatically predict activity quality in smart homes and to automatically assess cognitive health based on activity quality. It describes an automated framework to extract a set of features from smart home sensor data that reflects the performance or ability of an individual to complete an activity, which can be input to machine learning algorithms. Outputs from learning algorithms, including principal component analysis, support vector machine, and logistic regression algorithms, are used to quantify activity quality for a complex set of smart home activities and to predict the cognitive health of participants. Smart home activity data were gathered from volunteer participants (n=263) who performed a complex set of activities in our smart home testbed. We compare our automated activity quality and cognitive health predictions with direct observation scores and health assessments obtained from neuropsychologists. With all samples included, we obtained a statistically significant correlation (r=0.54) between direct observation scores and predicted activity quality. Similarly, using a support vector machine classifier, we obtained reasonable classification accuracy (area under the ROC curve=0.80, g-mean=0.73) in classifying participants into two cognitive classes, dementia and cognitively healthy. The results suggest that it is possible to automatically quantify the task quality of smart home activities and to perform a limited assessment of the cognitive health of an individual if smart home activities are properly chosen and learning algorithms are appropriately trained.
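    The area under the ROC curve used above to summarize classification accuracy can be computed directly as a rank statistic: the probability that a randomly chosen positive (dementia) case scores above a randomly chosen negative (cognitively healthy) one, with ties counting half. A minimal sketch with hypothetical classifier scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney rank statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.80, as reported, means a dementia participant's classifier score exceeds a cognitively healthy participant's score in roughly 80% of such pairs.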

  19. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new...

  20. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time-series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost-efficient and fast way to acquire, process and administer large amounts of voltage clamp data that may be too laborious and time-consuming to handle manually. (communication)

  1. A continuous flow from sample collection to data acceptability determination using an automated system

    International Nuclear Information System (INIS)

    Fisk, J.F.; Leasure, C.; Sauter, A.D.

    1993-01-01

    In its role as regulator, EPA is the recipient of enormous reams of analytical data, especially within the Superfund Program. In order to better manage the volume of paper that comes in daily, Superfund has required its laboratories to deliver the data contained on reporting forms also on diskette, for uploading into databases for various purposes, such as checking for contractual compliance, tracking quality assurance parameters, and, ultimately, reviewing the data by computer. This last area, automated review of the data, has generated programs that are not necessarily appropriate for use by clients other than Superfund. Such is the case with Los Alamos National Laboratory's Environmental Chemistry Group and its emerging subcontractor community, designed to meet the needs of the remedial action program at LANL. LANL is in the process of implementing an automated system that will be used from the planning stage of sample collection to the production of a project-specific report on analytical data quality. Included are electronic scheduling and tracking of samples; data entry, checking, and transmission; data assessment and qualification for use; and report generation that ties the analytical data quality back to the performance criteria defined prior to sample collection. Industry-standard products (e.g., ORACLE, Microsoft Excel) will be used to ensure support for users, prevent dependence on proprietary software, and protect LANL's investment for the future.

  2. Sample preparation automation for dosing plutonium in urine

    International Nuclear Information System (INIS)

    Jeanmaire, Lucien; Ballada, Jean; Ridelle Berger, Ariane

    1969-06-01

    After having indicated that the determination of urinary plutonium by the Henry technique can be divided into three stages (plutonium concentration by precipitation; passing the solution over an anionic resin column and eluting the plutonium; and evaporating the eluate to obtain a source whose radioactivity is measured), and recalled that the automation of the second stage has been reported in another document, this document describes the automation of the first stage, i.e. obtaining from urine a residue that contains the plutonium and is sufficiently mineralized to be analyzed by means of ion-exchange resins. Two techniques are proposed, leading to slightly different devices. The different operations to be performed are indicated. The different components of the apparatus are described: beakers, hot-plate stirrers, reagent circuits, a system for supernatant suction, and a control-command circuit. The operation and use are then described, and results are given.

  3. Automated washing of FTA Card punches and PCR setup for reference samples using a LIMS-controlled Sias Xantus automated liquid handler

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Olsen, Addie Nina; Frøslev, Tobias G.

    2009-01-01

    We have implemented and validated automated methods for washing FTA Card punches containing buccal samples and subsequent PCR setup using a Sias Xantus automated liquid handler. The automated methods were controlled by worklists generated by our LabWare Laboratory Information Management System...

  4. Evaluation of an automated protocol for efficient and reliable DNA extraction of dietary samples.

    Science.gov (United States)

    Wallinger, Corinna; Staudacher, Karin; Sint, Daniela; Thalinger, Bettina; Oehm, Johannes; Juen, Anita; Traugott, Michael

    2017-08-01

    Molecular techniques have become an important tool to empirically assess feeding interactions. The increased use of next-generation sequencing approaches has stressed the need for fast DNA extraction that does not compromise DNA quality. Dietary samples pose a particular challenge here, as they demand high-quality DNA extraction procedures for obtaining the minute quantities of short-fragmented food DNA. Automated high-throughput procedures significantly decrease time and costs and allow extraction of total DNA to be standardized. However, these approaches have not yet been evaluated for dietary samples. We tested the efficiency of an automated DNA extraction platform and a traditional CTAB protocol, employing a variety of dietary samples including invertebrate whole-body extracts as well as invertebrate and vertebrate gut content samples and feces. Extraction efficacy was quantified using the proportions of successful PCR amplifications of both total and prey DNA, and cost was estimated in terms of time and material expense. For extraction of total DNA, the automated platform performed better for both invertebrate and vertebrate samples. This was also true for prey detection in vertebrate samples. For dietary analysis in invertebrates, there is still room for improvement when using the high-throughput system for optimal DNA yields. Overall, the automated DNA extraction system turned out to be a promising alternative to labor-intensive, low-throughput manual extraction methods such as CTAB, opening up the opportunity for extensive use of this cost-efficient and innovative methodology, at low contamination risk, in trophic ecology as well.
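    Comparing PCR success proportions between two extraction protocols, as done above, is a two-proportion comparison; a sketch of the pooled two-proportion z statistic (the counts are hypothetical, and the study's own statistical analysis may differ):

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic comparing success rates of two protocols (e.g. proportions
    of successful PCR amplifications) using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

    A |z| above roughly 1.96 would indicate a difference in success rates significant at the 5% level under the usual normal approximation.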

  5. An Automated Sample Preparation System for Large-Scale DNA Sequencing

    Science.gov (United States)

    Marziali, Andre; Willis, Thomas D.; Federspiel, Nancy A.; Davis, Ronald W.

    1999-01-01

    Recent advances in DNA sequencing technologies, both in the form of high lane-density gels and automated capillary systems, will lead to an increased requirement for sample preparation systems that operate at low cost and high throughput. As part of the development of a fully automated sequencing system, we have developed an automated subsystem capable of producing 10,000 sequence-ready ssDNA templates per day from libraries of M13 plaques at a cost of $0.29 per sample. This Front End has been in high throughput operation since June, 1997 and has produced > 400,000 high-quality DNA templates. PMID:10330125

  6. Technology assessment of automation trends in the modular home industry

    Science.gov (United States)

    Phil Mitchell; Robert Russell Hurst

    2009-01-01

    This report provides an assessment of technology used in manufacturing modular homes in the United States, and that used in the German prefabricated wooden home industry. It is the first step toward identifying the research needs in automation and manufacturing methods that will facilitate mass customization in the home manufacturing industry. Within the United States...

  7. Human and Automated Assessment of Oral Reading Fluency

    Science.gov (United States)

    Bolaños, Daniel; Cole, Ron A.; Ward, Wayne H.; Tindal, Gerald A.; Hasbrouck, Jan; Schwanenflugel, Paula J.

    2013-01-01

    This article describes a comprehensive approach to fully automated assessment of children's oral reading fluency (ORF), one of the most informative and frequently administered measures of children's reading ability. Speech recognition and machine learning techniques are described that model the 3 components of oral reading fluency: word accuracy,…

  8. The Automation of Nowcast Model Assessment Processes

    Science.gov (United States)

    2016-09-01

    9 km (m2), and 3 km (m3) will be evaluated over D1 (o1), D2 (o2), and D3 (o3), respectively. The goal would be to assess and calculate error...consistent domain. For RDA, the innermost domain masking files are needed and should be placed in the same directory as the Point-Stat configuration

  9. Automated volumetric breast density estimation: A comparison with visual assessment

    International Nuclear Information System (INIS)

    Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.

    2013-01-01

    Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target differed significantly between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable, objective method of measuring breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors.

  10. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  11. SASSI: Subsystems for Automated Subsurface Sampling Instruments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Future robotic planetary exploration missions will benefit greatly from the ability to capture rock and/or regolith core samples that deliver the stratigraphy of the...

  12. SASSI: Subsystems for Automated Subsurface Sampling Instruments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous surface sampling systems are necessary, near term, to construct a historical view of planetary significant events; as well as allow for the identification...

  13. Rapid and automated determination of plutonium and neptunium in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, J.

    2011-03-15

    This thesis presents improved analytical methods for rapid and automated determination of plutonium and neptunium in environmental samples using sequential injection (SI) based chromatography and inductively coupled plasma mass spectrometry (ICP-MS). The progress of methodology development in this work consists of 5 subjects stated as follows: (1) Development and optimization of an SI-anion exchange chromatographic method for rapid determination of plutonium in environmental samples in combination with inductively coupled plasma mass spectrometry detection (Paper II); (2) Methodology development and optimization for rapid determination of plutonium in environmental samples using SI-extraction chromatography prior to inductively coupled plasma mass spectrometry (Paper III); (3) Development of an SI-chromatographic method for simultaneous determination of plutonium and neptunium in environmental samples (Paper IV); (4) Investigation of the suitability and applicability of 242Pu as a tracer for rapid neptunium determination using anion exchange chromatography in an SI-network coupled with inductively coupled plasma mass spectrometry (Paper V); (5) Exploration of macro-porous anion exchange chromatography for rapid and simultaneous determination of plutonium and neptunium within an SI system (Paper VI). The results demonstrate that the developed methods in this study are reliable and efficient for accurate assays of trace levels of plutonium and neptunium as demanded in different situations including environmental risk monitoring and assessment, emergency preparedness and surveillance of contaminated areas. (Author)

  14. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
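    Two ingredients of the algorithm described above, bootstrap resampling of the dataset and a correlation metric for grouping similar mode shapes, can be sketched as follows. The Modal Assurance Criterion (MAC) shown here is a standard stand-in for illustration; the paper's own correlation metrics are more elaborate, as they also handle eigenvector aliasing and coalescent modes.

```python
import random

def bootstrap_samples(data, n_sets, seed=0):
    """Draw n_sets bootstrap resamples (sampling with replacement) of the
    dataset, each the same size as the original, one per model to identify."""
    rng = random.Random(seed)
    n = len(data)
    return [[data[rng.randrange(n)] for _ in range(n)] for _ in range(n_sets)]

def mac(phi1, phi2):
    """Modal Assurance Criterion between two real mode-shape vectors:
    squared normalized inner product; values near 1 suggest the same mode."""
    dot = sum(a * b for a, b in zip(phi1, phi2))
    return dot * dot / (sum(a * a for a in phi1) * sum(b * b for b in phi2))
```

    Modes from the resampled models that correlate highly with each other (and with the physical-mode subspace) would then be clustered together, while poorly correlated modes fall into the Trashbox cluster.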

  15. Automated Blood Sample Preparation Unit (ABSPU) for Portable Microfluidic Flow Cytometry.

    Science.gov (United States)

    Chaturvedi, Akhil; Gorthi, Sai Siva

    2017-02-01

    Portable microfluidic diagnostic devices, including flow cytometers, are being developed for point-of-care settings, especially in conjunction with inexpensive imaging devices such as mobile phone cameras. However, two pervasive drawbacks of these have been the lack of automated sample preparation processes and cells settling out of sample suspensions, leading to inaccurate results. We report an automated blood sample preparation unit (ABSPU) that prevents blood samples from settling in a reservoir during sample loading in flow cytometers. This apparatus automates the preanalytical steps of diluting and staining blood cells prior to microfluidic loading. It employs an assembly with a miniature vibration motor to drive turbulence in a sample reservoir. To validate the performance of this system, we present experimental evidence demonstrating prevention of blood cell settling, preservation of cell integrity, and staining of cells prior to flow cytometric analysis. This setup is further integrated with a microfluidic imaging flow cytometer to investigate cell count variability. With no need for prior sample preparation, a drop of whole blood can be introduced directly to the setup without manual premixing with buffers. Our results show that integration of this assembly with microfluidic analysis provides a competent automation tool for low-cost, point-of-care, blood-based diagnostics.

  16. Development of an automated fracture assessment system for nuclear structures

    International Nuclear Information System (INIS)

    Mikkola, T.P.J.; Raiko, H.

    1991-01-01

    A program system for automated fracture mechanics analyses with three-dimensional (3D) finite element (FE) models has been developed. The accuracy of the generated models has been extensively tested. The system is aimed at safety analyses of nuclear power plant components. Moreover, the results of the fracture mechanics FE analyses can be implemented in an easy-to-use fracture assessment program based on the use of weight functions. (author)

  17. Assessment of Reproducibility – Automated and Digital Caliper ECG Measurement in the Framingham Heart Study

    Science.gov (United States)

    Burke, Gordon M.; Wang, Na; Blease, Sue; Levy, Daniel; Magnani, Jared W.

    2014-01-01

    Background: Digitized electrocardiography permits the rapid, automated quantification of electrocardiograms (ECGs) for analysis. Community- and population-based studies have increasingly integrated such data. Assessing the reproducibility of automated ECG measures with manual measures is a critical step in preparation for using automated measures for research purposes. We recently established an ECG repository of digitally recorded ECGs for the Framingham Heart Study and we sought to assess the reproducibility of automated and manual measures. Methods: We selected 185 digitally recorded ECGs from routine visits of Framingham Heart Study participants spanning from 1986 to 2012. We selected the following ECG measures for their relevance to clinical and epidemiologic research: P wave duration, P wave amplitude, and PR interval in lead II; QRS duration and R wave amplitude in lead V6; and QT interval in lead V5. We obtained automated values for each waveform, and used a digital caliper for manual measurements. Digital caliper measurements were repeated in a subset (n=81) of the samples for intrarater assessment. Results: We calculated the intraclass correlation coefficient (ICC) values for the interrater and intrarater assessments. P wave duration had the lowest interrater ICC (r=0.46) and lowest intrarater ICC (r=0.57). R wave amplitude had the highest interrater and intrarater ICC (r=0.98) indicating excellent reproducibility. The remaining measures had interrater and intrarater ICCs of r≥0.81. Conclusions: The interrater reproducibility findings for P wave amplitude, PR interval, QT interval, QRS duration, and R wave amplitude were excellent. In contrast, the reproducibility of P wave duration was more modest. These findings indicate high reproducibility of most automated and manual ECG measurements. PMID:24792985
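    The agreement statistic used in this study can be reproduced with a short routine. Below is a minimal ICC(2,1) (two-way random effects, absolute agreement, single measurement) in plain NumPy for an n-subjects-by-k-raters matrix; the study does not state which ICC variant it used, so treat this as one common choice:

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, for an (n_subjects, k_raters) matrix."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-rater means
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Feeding the automated and manual values for one measure (e.g. P wave duration) as the two columns yields the interrater ICC reported above.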

  18. Assessment of reproducibility--automated and digital caliper ECG measurement in the Framingham Heart Study.

    Science.gov (United States)

    Burke, Gordon M; Wang, Na; Blease, Sue; Levy, Daniel; Magnani, Jared W

    2014-01-01

    Digitized electrocardiography permits the rapid, automated quantification of electrocardiograms (ECGs) for analysis. Community- and population-based studies have increasingly integrated such data. Assessing the reproducibility of automated ECG measures with manual measures is a critical step in preparation for using automated measures for research purposes. We recently established an ECG repository of digitally recorded ECGs for the Framingham Heart Study and we sought to assess the reproducibility of automated and manual measures. We selected 185 digitally recorded ECGs from routine visits of Framingham Heart Study participants spanning from 1986 to 2012. We selected the following ECG measures for their relevance to clinical and epidemiologic research: P wave duration, P wave amplitude, and PR interval in lead II; QRS duration and R wave amplitude in lead V6; and QT interval in lead V5. We obtained automated values for each waveform, and used a digital caliper for manual measurements. Digital caliper measurements were repeated in a subset (n=81) of the samples for intrarater assessment. We calculated the intraclass correlation coefficient (ICC) values for the interrater and intrarater assessments. P wave duration had the lowest interrater ICC (r=0.46) and lowest intrarater ICC (r=0.57). R wave amplitude had the highest interrater and intrarater ICC (r=0.98) indicating excellent reproducibility. The remaining measures had interrater and intrarater ICCs of r≥0.81. The interrater reproducibility findings for P wave amplitude, PR interval, QT interval, QRS duration, and R wave amplitude were excellent. In contrast, the reproducibility of P wave duration was more modest. These findings indicate high reproducibility of most automated and manual ECG measurements. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Automated Sample Preparation for Radiogenic and Non-Traditional Metal Isotopes: Removing an Analytical Barrier for High Sample Throughput

    Science.gov (United States)

    Field, M. Paul; Romaniello, Stephen; Gordon, Gwyneth W.; Anbar, Ariel D.; Herrmann, Achim; Martinez-Boti, Miguel A.; Anagnostou, Eleni; Foster, Gavin L.

    2014-05-01

    MC-ICP-MS has dramatically improved the analytical throughput for high-precision radiogenic and non-traditional isotope ratio measurements, compared to TIMS. The generation of large data sets, however, remains hampered by tedious manual drip chromatography required for sample purification. A new, automated chromatography system reduces the laboratory bottle neck and expands the utility of high-precision isotope analyses in applications where large data sets are required: geochemistry, forensic anthropology, nuclear forensics, medical research and food authentication. We have developed protocols to automate ion exchange purification for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U) using the new prepFAST-MC™ (ESI, Nebraska, Omaha). The system is not only inert (all-flouropolymer flow paths), but is also very flexible and can easily facilitate different resins, samples, and reagent types. When programmed, precise and accurate user defined volumes and flow rates are implemented to automatically load samples, wash the column, condition the column and elute fractions. Unattended, the automated, low-pressure ion exchange chromatography system can process up to 60 samples overnight. Excellent reproducibility, reliability, recovery, with low blank and carry over for samples in a variety of different matrices, have been demonstrated to give accurate and precise isotopic ratios within analytical error for several isotopic systems (B, Ca, Fe, Cu, Zn, Sr, Cd, Pb and U). This illustrates the potential of the new prepFAST-MC™ (ESI, Nebraska, Omaha) as a powerful tool in radiogenic and non-traditional isotope research.

  20. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    Science.gov (United States)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high-resolution impervious cover data set is not only significant for urbanization studies but also needed by global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to supervised classification. Here we developed global-scale training samples from fine-resolution (about 1 m) satellite data (QuickBird and WorldView-2) and then aggregated the fine-resolution impervious cover maps to 30 m resolution. To improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally; in Europe alone, there are 174 training sites, ranging in size from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the training samples number over six million. Therefore, we developed this automated, statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; samples falling within each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar screening procedure with a looser threshold is applied, allowing for the possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
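    A two-stage screen of this kind can be sketched as follows. The exact statistics used by GLS-IMP are not given here, so this version flags univariate outliers by per-band z-score and multivariate outliers by Mahalanobis distance within one 10%-impervious group; the thresholds are illustrative:

```python
import numpy as np

def screen_group(X, z_thresh=3.0, md_thresh=3.0):
    """Keep-mask for one training-sample group: drop rows with any
    per-band |z-score| above z_thresh (univariate) or a Mahalanobis
    distance above md_thresh (multivariate)."""
    X = np.asarray(X, dtype=float)
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    uni_ok = (z <= z_thresh).all(axis=1)
    d = X - X.mean(axis=0)
    # pinv tolerates the near-singular covariances of homogeneous groups
    inv_cov = np.linalg.pinv(np.atleast_2d(np.cov(X, rowvar=False)))
    md = np.sqrt(np.einsum('ij,jk,ik->i', d, inv_cov, d))
    return uni_ok & (md <= md_thresh)
```

Applying the same routine with a looser threshold per scene would correspond to the second, scene-level pass.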

  1. Automated injection of a radioactive sample for preparative HPLC with feedback control

    International Nuclear Information System (INIS)

    Iwata, Ren; Yamazaki, Shigeki

    1990-01-01

    The injection of a radioactive reaction mixture into a preparative HPLC column has been automated with computer control for rapid purification of routinely prepared positron-emitting radiopharmaceuticals. Using pneumatic valves, a motor-driven pump and a liquid-level sensor, two intelligent injection methods for the automation were compared with regard to efficient and rapid sample loading into a 2 mL loop of the 6-way valve. One, a precise but rather slow method, was demonstrated to be suitable for purification of 18F-radiopharmaceuticals, while the other, due to its rapid operation, was more suitable for 11C-radiopharmaceuticals. A sample volume of approximately 0.5 mL can be injected onto a preparative HPLC column with over 90% efficiency with the present automated system. (author)

  2. Automated Research Impact Assessment: A New Bibliometrics Approach.

    Science.gov (United States)

    Drew, Christina H; Pettibone, Kristianna G; Finch, Fallis Owen; Giles, Douglas; Jordan, Paul

    2016-03-01

    As federal programs are held more accountable for their research investments, the National Institute of Environmental Health Sciences (NIEHS) has developed a new method to quantify the impact of our funded research on the scientific and broader communities. In this article we review traditional bibliometric analyses, address challenges associated with them, and describe a new bibliometric analysis method, the Automated Research Impact Assessment (ARIA). ARIA taps into a resource that has only rarely been used for bibliometric analyses: references cited in "important" research artifacts, such as policies, regulations, clinical guidelines, and expert panel reports. The approach includes new statistics that science managers can use to benchmark contributions to research by funding source. This new method provides the ability to conduct automated impact analyses of federal research that can be incorporated in program evaluations. We apply this method to several case studies to examine the impact of NIEHS funded research.
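    The core ARIA idea, tracing funded publications into the references cited by policies and guidelines, reduces to a set intersection. The statistic below (share of a program's publications cited in at least one such artifact) is a hypothetical simplification for illustration, not one of ARIA's published statistics:

```python
def policy_citation_share(funded_dois, artifact_references):
    """Fraction of funded publications (by DOI) appearing in the
    reference lists of 'important' artifacts such as policies or
    clinical guidelines. Purely illustrative of the matching step."""
    funded = {doi.strip().lower() for doi in funded_dois}
    cited = {doi.strip().lower()
             for refs in artifact_references for doi in refs}
    if not funded:
        return 0.0
    return len(funded & cited) / len(funded)
```

Computed per funding source, such a share would support the benchmarking use the abstract describes.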

  3. A bench-top automated workstation for nucleic acid isolation from clinical sample types.

    Science.gov (United States)

    Thakore, Nitu; Garber, Steve; Bueno, Arial; Qu, Peter; Norville, Ryan; Villanueva, Michael; Chandler, Darrell P; Holmberg, Rebecca; Cooney, Christopher G

    2018-04-18

    Systems that automate extraction of nucleic acid from cells or viruses in complex clinical matrices have tremendous value even in the absence of an integrated downstream detector. We describe our bench-top automated workstation that integrates our previously-reported extraction method - TruTip - with our newly-developed mechanical lysis method. This is the first report of this method for homogenizing viscous and heterogeneous samples and lysing difficult-to-disrupt cells using "MagVor": a rotating magnet that rotates a miniature stir disk amidst glass beads confined inside of a disposable tube. Using this system, we demonstrate automated nucleic acid extraction from methicillin-resistant Staphylococcus aureus (MRSA) in nasopharyngeal aspirate (NPA), influenza A in nasopharyngeal swabs (NPS), human genomic DNA from whole blood, and Mycobacterium tuberculosis in NPA. The automated workstation yields nucleic acid with comparable extraction efficiency to manual protocols, which include commercially-available Qiagen spin column kits, across each of these sample types. This work expands the scope of applications beyond previous reports of TruTip to include difficult-to-disrupt cell types and automates the process, including a method for removal of organics, inside a compact bench-top workstation. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Biological Environmental Sampling Technologies Assessment

    Science.gov (United States)

    2015-12-01

    assay performance for the detection of target pathogens or protein biomarkers in liquid matrices. The nanomanipulation technology provides a dramatic...

  5. Feasibility of surface sampling in automated inspection of concrete aggregates during bulk transport on a conveyor

    NARCIS (Netherlands)

    Bakker, M.C.M.; Di Maio, F.; Lotfi, S.; Bakker, M.; Hu, M.; Vahidi, A.

    2017-01-01

    Automated optical inspection of concrete aggregates for pollutants (e.g. wood, plastics, gypsum and brick) is required to establish their suitability for reuse in new concrete products. Inspection is more efficient when directly sampling the materials on the conveyor belt instead of feeding them in a

  6. Novel diffusion cell for in vitro transdermal permeation, compatible with automated dynamic sampling

    NARCIS (Netherlands)

    Bosman, I.J; Lawant, A.L; Avegaart, S.R.; Ensing, K; de Zeeuw, R.A

    The development of a new diffusion cell for in vitro transdermal permeation is described. The so-called Kelder cells were used in combination with the ASPEC system (Automatic Sample Preparation with Extraction Columns), which is designed for the automation of solid-phase extractions (SPE). Instead

  7. Thermophilic Campylobacter spp. in turkey samples: evaluation of two automated enzyme immunoassays and conventional microbiological techniques

    DEFF Research Database (Denmark)

    Borck, Birgitte; Stryhn, H.; Ersboll, A.K.

    2002-01-01

    Aims: To determine the sensitivity and specificity of two automated enzyme immunoassays (EIA), EiaFoss and Minividas, and a conventional microbiological culture technique for detecting thermophilic Campylobacter spp. in turkey samples. Methods and Results: A total of 286 samples (faecal, meat......, neckskin and environmental samples) were collected over a period of 4 months at a turkey slaughterhouse and meat-cutting plant in Denmark. Faecal and environmental samples were tested by the conventional culture method and by the two EIAs, whereas meat and neckskin samples were tested by the two EIAs only...

  8. Integrated Automation of High-Throughput Screening and Reverse Phase Protein Array Sample Preparation

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    multiplexing readouts, but this has a natural limitation. High-content screening via image acquisition and analysis allows multiplexing of few parameters, but is connected to substantial time consumption and complex logistics. We report on integration of Reverse Phase Protein Arrays (RPPA)-based readouts...... into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high...

  9. Experiences of Using Automated Assessment in Computer Science Courses

    Directory of Open Access Journals (Sweden)

    John English

    2015-10-01

    Full Text Available In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.

  10. Automated assessment of the quality of depression websites.

    Science.gov (United States)

    Griffiths, Kathleen M; Tang, Thanh Tin; Hawking, David; Christensen, Helen

    2005-12-30

    Since health information on the World Wide Web is of variable quality, methods are needed to assist consumers to identify health websites containing evidence-based information. Manual assessment tools may assist consumers to evaluate the quality of sites. However, these tools are poorly validated and often impractical. There is a need to develop better consumer tools, and in particular to explore the potential of automated procedures for evaluating the quality of health information on the web. This study (1) describes the development of an automated quality assessment procedure (AQA) designed to automatically rank depression websites according to their evidence-based quality; (2) evaluates the validity of the AQA relative to human-rated evidence-based quality scores; and (3) compares the validity of Google PageRank and the AQA as indicators of evidence-based quality. The AQA was developed using a quality feedback technique and a set of training websites previously rated manually according to their concordance with statements in the Oxford University Centre for Evidence-Based Mental Health's guidelines for treating depression. The validation phase involved 30 websites compiled from the DMOZ, Yahoo! and LookSmart Depression Directories by randomly selecting six sites from each of the Google PageRank bands of 0, 1-2, 3-4, 5-6 and 7-8. Evidence-based ratings from two independent raters (based on concordance with the Oxford guidelines) were then compared with scores derived from the automated AQA and Google algorithms. There was no overlap in the websites used in the training and validation phases of the study. The correlation between the AQA score and the evidence-based ratings was high and significant (r=0.85, P<.001). The correlation between Google PageRank and the evidence-based score was lower than that for the AQA. When sites with zero PageRanks were included the association was weak and non-significant (r=0.23, P=.22).
When sites with zero PageRanks were excluded, the correlation was moderate (r=.61
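    The validation step, correlating automated scores against expert evidence-based ratings, reduces to a Pearson coefficient. A dependency-free sketch follows; the study would additionally report significance, which is omitted here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between automated quality scores and
    manual evidence-based ratings of the same websites."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Running this once on AQA scores and once on PageRank values against the same ratings reproduces the comparison the abstract reports.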

  11. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    Science.gov (United States)

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were ten 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone-collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video and telephone acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.

  12. Core sampling system spare parts assessment

    International Nuclear Information System (INIS)

    Walter, E.J.

    1995-01-01

    Soon, there will be four independent core sampling systems obtaining samples from the underground tanks. It is desirable that these systems be available for sampling during the next two years. This assessment was prepared to evaluate the adequacy of the spare parts identified for the core sampling system and to provide recommendations that may remediate overages or inadequacies of spare parts.

  13. Theoretical methods in the assessment of vision and automated perimetry.

    Science.gov (United States)

    Jindra, Lawrence F

    2006-01-01

    An analytic understanding of automated perimetry requires an appreciation of the fundamental theories of vision and an understanding of the basic mathematical rudiments of signal processing theory. The theories of vision of Weber, Fechner, and Stevens are evaluated, and the mathematical bases of logarithmic, exponential, and power functions are considered as they relate to various models of visual functioning. Presenting perimetry results as actual, linear stimulus values rather than theoretical, non-linear response values could better allow clinicians to examine the testing data directly and to evaluate their patients' visual function more correctly and accurately.
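    The contrast between these models is easy to state numerically: under Weber-Fechner, sensation grows with the logarithm of stimulus intensity; under Stevens, with a power of it. The sketch below also converts perimetric dB sensitivity back to a linear stimulus value, assuming the common 0.1-log-unit-per-dB attenuation convention (the exact scaling and maximum intensity are instrument-specific assumptions here):

```python
import math

def fechner(I, I0=1.0, k=1.0):
    """Weber-Fechner law: sensation = k * ln(I / I0)."""
    return k * math.log(I / I0)

def stevens(I, k=1.0, a=0.33):
    """Stevens power law: sensation = k * I**a."""
    return k * I ** a

def db_to_linear(db, max_intensity=10000.0):
    """Convert perimetric sensitivity in dB to a linear stimulus value:
    0 dB = brightest stimulus, each 10 dB = one log-unit attenuation."""
    return max_intensity * 10.0 ** (-db / 10.0)
```

Reporting `db_to_linear(threshold_db)` rather than the dB figure itself is one way to present the linear stimulus values the abstract argues for.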

  14. Trends and applications of integrated automated ultra-trace sample handling and analysis (T9)

    International Nuclear Information System (INIS)

    Kingston, H.M.S.; Ye Han; Stewart, L.; Link, D.

    2002-01-01

    Full text: Automated analysis, sub-ppt detection limits, and the trend toward speciated analysis (rather than just elemental analysis) force the innovation of sophisticated and integrated sample preparation and analysis techniques. Traditionally, the ability to handle samples at ppt and sub-ppt levels has been limited to clean laboratories and special sample handling techniques and equipment. The world of sample handling has passed a threshold where older or 'old fashioned' traditional techniques no longer provide the ability to see the sample, due to the influence of the analytical blank and the fragile nature of the analyte. When samples require decomposition, extraction, separation and manipulation, newer, more sophisticated sample handling systems are emerging that enable ultra-trace analysis and species manipulation. In addition, new instrumentation has emerged which integrates sample preparation and analysis to enable on-line, near real-time analysis. Examples of these newer sample-handling methods will be discussed and current examples provided as alternatives to traditional sample handling. Two new techniques applying microwave-energy-enhanced ultra-trace sample handling have been developed that permit sample separation and refinement while performing species manipulation during decomposition; a demonstration that applies to semiconductor materials will be presented. Next, a new approach to the old problem of sample evaporation without losses will be demonstrated that is capable of retaining all elements and species tested. Both of these methods require microwave energy manipulation in specialized systems and are not accessible through convection, conduction, or other traditional energy applications. A new automated integrated method for handling samples for ultra-trace analysis has been developed. An on-line near real-time measurement system will be described that enables many new automated sample handling and measurement capabilities. This

  15. Integrated Automation of High-Throughput Screening and Reverse Phase Protein Array Sample Preparation

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

    into automated robotic high-throughput screens, which allows subsequent protein quantification. In this integrated solution, samples are directly forwarded to automated cell lysate preparation and preparation of dilution series, including reformatting to a protein spotter-compatible format after the high......High-throughput screening of genome wide siRNA- or compound libraries is currently applied for drug target and drug discovery. Commonly, these approaches deal with sample numbers ranging from 100,000 to several millions. Efforts to decrease costs and to increase information gained include......-throughput screening. Tracking of huge sample numbers and data analysis from a high-content screen to RPPAs is accomplished via MIRACLE, a custom made software suite developed by us. To this end, we demonstrate that the RPPAs generated in this manner deliver reliable protein readouts and that GAPDH and TFR levels can...

  16. Automated analysis of carbon in powdered geological and environmental samples by Raman spectroscopy.

    Science.gov (United States)

    Sparkes, Robert; Hovius, Niels; Galy, Albert; Kumar, R Vasant; Liu, James T

    2013-07-01

    Raman spectroscopy can be used to assess the structure of naturally occurring carbonaceous materials (CM), which exist in a wide range of crystal structures. The sources of these geological and environmental materials include rocks, soils, river sediments, and marine sediment cores, all of which can contain carbonaceous material ranging from highly crystalline graphite to amorphous-like organic compounds. In order to fully characterize a geological sample and its intrinsic heterogeneity, several spectra must be collected and analyzed in a precise and repeatable manner. Here, we describe a suitable processing and analysis technique. We show that short-period ball-mill grinding does not introduce structural changes to semi-graphitized material and allows for easy collection of Raman spectra from the resulting powder. Two automated peak-fitting procedures are defined that allow for rapid processing of large datasets: for very disordered CM, Lorentzian profiles are fitted to five characteristic peaks; for highly graphitized material, three Voigt profiles are fitted. Peak area ratios and peak width measurements are used to classify each spectrum and allow easy comparison between samples. By applying this technique to samples collected in Taiwan after Typhoon Morakot, sources of carbon to offshore sediments have been identified. Carbon eroded from different areas of Taiwan can be seen mixed and deposited in the offshore flood sediments, and both graphite and amorphous-like carbon have been recycled from terrestrial to marine deposits. The practicality of this application illustrates the potential for this technique to be deployed to sediment-sourcing problems in a wide range of geological settings.
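    The peak-fitting step can be sketched with SciPy. Below, a single Lorentzian is fitted to a synthetic disordered-carbon "D band"; the actual procedure fits five Lorentzians (disordered CM) or three Voigt profiles (graphitic CM), so the single-peak setup, positions and widths here are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, cen, gamma):
    """Lorentzian profile; gamma is the half-width at half-maximum."""
    return amp * gamma**2 / ((x - cen)**2 + gamma**2)

# Synthetic "D band" near 1350 cm^-1 (illustrative position and width)
x = np.linspace(1200.0, 1500.0, 301)
rng = np.random.default_rng(0)
y = lorentzian(x, 100.0, 1350.0, 25.0) + rng.normal(0.0, 1.0, x.size)

popt, _ = curve_fit(lorentzian, x, y, p0=[80.0, 1340.0, 20.0])
amp, cen, gamma = popt
area = np.pi * amp * abs(gamma)  # analytic area under a Lorentzian peak
```

With several peaks, ratios of the fitted areas and the fitted widths provide the classification features the abstract describes.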

  17. Small sample sorting of primary adherent cells by automated micropallet imaging and release.

    Science.gov (United States)

    Shah, Pavak K; Herrera-Loeza, Silvia Gabriela; Sims, Christopher E; Yeh, Jen Jen; Allbritton, Nancy L

    2014-07-01

    Primary patient samples are the gold standard for molecular investigations of tumor biology yet are difficult to acquire, heterogeneous in nature and variable in size. Patient-derived xenografts (PDXs), comprising primary tumor tissue cultured in host organisms such as nude mice, permit the propagation of human tumor samples in an in vivo environment and closely mimic the phenotype and gene expression profile of the primary tumor. Although PDX models reduce the cost and complexity of acquiring sample tissue and permit repeated sampling of the primary tumor, these samples are typically contaminated by immune, blood, and vascular tissues from the host organism while also being limited in size. For very small tissue samples (on the order of 10^3 cells) purification by fluorescence-activated cell sorting (FACS) is not feasible, while magnetic-activated cell sorting (MACS) of small samples results in very low purity, low yield, and poor viability. We developed a platform for imaging cytometry integrated with micropallet array technology to perform automated cell sorting on very small samples obtained from PDX models of pancreatic and colorectal cancer, using antibody staining of EpCAM (CD326) as the selection criterion. These data demonstrate the ability to automate and efficiently separate samples with very low numbers of cells. © 2014 International Society for Advancement of Cytometry.

  18. Design aspects of automation system for initial processing of fecal samples

    International Nuclear Information System (INIS)

    Sawant, Pramilla D.; Prabhu, Supreetha P.; Suja, A.; Wankhede, Sonal; Chaudhary, Seema; Rao, D.D.; Pradeepkumar, K.S.; Das, A.P.; Badodkar, B.D.

    2014-01-01

    The procedure for initial handling of fecal samples at the Bioassay Lab., Trombay is as follows: overnight fecal samples are collected from the worker in a kit consisting of a polythene bag placed in a wide-mouth polythene container closed with an inner lid and a screw cap. The occupational worker collects the sample in the polythene bag. On receiving the sample, the polythene container along with the sample is weighed; the polythene bag containing the fecal sample is then lifted out of the container using a pair of tongs, placed inside a crucible and ashed inside a muffle furnace at 450℃. After complete ashing, the crucible containing white ash is taken up for further radiochemical processing. This paper describes the various steps in developing a prototype automated system intended to automate the above procedure for initial handling of fecal samples. The system, once developed, will help eliminate manual intervention up to the ashing stage and reduce the biological hazard involved in handling such samples.

  19. Development of an automated fracture assessment system for nuclear structures

    International Nuclear Information System (INIS)

    Mikkola, T.P.J.; Raiko, H.

    1992-01-01

    A program system for fracture assessment of nuclear power plant structures has been developed. The system consists of an easy-to-use program for engineering analysis and an automated finite element (FE) program system for more accurate analysis with solid three-dimensional (3D) models. The VTTSIF (SIF = stress intensity factor) program for engineering fracture assessment applies either the weight function method or the superposition method in calculating the stress intensity factor, and the fatigue crack growth analysis is based on the Paris equation. The structural geometry cases of the VTTSIF program are organized in an extendable subroutine database. The generation of a 3D FE model of a cracked structure is automated by the ACR program (automatic finite element model generation for part-through cracks). The FE analyses are performed with generally accepted commercial programs, and the virtual crack extension (VCE) method is used for fracture parameter evaluation by the VTTVIRT postprocessor program (a program for J-integral evaluation using the virtual crack extension method). Several test cases have demonstrated that the accuracy of the present system is satisfactory for practical applications. (author)
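    The fatigue step named above, Paris-equation crack growth, integrates readily. A minimal sketch follows (this is not the VTTSIF implementation; the material constants C and m, the stress range, and the geometry factor Y are placeholders):

```python
import math

def cycles_to_failure(a0, af, C, m, dsigma, Y=1.0, steps=20000):
    """Integrate the Paris equation da/dN = C*(dK)**m with
    dK = Y*dsigma*sqrt(pi*a), from crack length a0 to af
    (trapezoidal rule in a); returns the number of load cycles."""
    N = 0.0
    da = (af - a0) / steps
    a = a0
    for _ in range(steps):
        rate1 = C * (Y * dsigma * math.sqrt(math.pi * a)) ** m
        rate2 = C * (Y * dsigma * math.sqrt(math.pi * (a + da))) ** m
        N += 0.5 * da * (1.0 / rate1 + 1.0 / rate2)
        a += da
    return N
```

For constant Y and m ≠ 2 the integral also has a closed form, which makes a convenient sanity check on the numerical result.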

  20. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
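Quality screening of this kind can start from very simple frame statistics. The sketch below computes mean luminance and RMS contrast for a grayscale frame and flags low-contrast frames; the 0.1 threshold and the choice of metrics are illustrative assumptions, not ONC's actual algorithm:

```python
def frame_quality(frame):
    """Simple per-frame quality indicators for underwater video: mean
    luminance (exposure), luminance std (uneven lighting), and RMS
    contrast (std / mean). Threshold is illustrative only."""
    pixels = [p for row in frame for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    rms_contrast = std / mean if mean > 0 else 0.0
    return {"mean": mean, "std": std, "rms_contrast": rms_contrast,
            "low_contrast": rms_contrast < 0.1}

# A flat, dim frame is flagged as low contrast:
flat = [[40, 41, 40], [41, 40, 41]]
report = frame_quality(flat)
```

In practice such per-frame scores would be aggregated over a clip before deciding whether to pass it on to full analysis.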

  1. Automated Clinical Assessment from Smart home-based Behavior Data

    Science.gov (United States)

    Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen

    2016-01-01

    Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behaviour in the home and predicting standard clinical assessment scores of the residents. To accomplish this goal, we propose a Clinical Assessment using Activity Behavior (CAAB) approach to model a smart home resident’s daily behavior and predict the corresponding standard clinical assessment scores. CAAB uses statistical features that describe characteristics of a resident’s daily activity performance to train machine learning algorithms that predict the clinical assessment scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years using prediction and classification-based experiments. In the prediction-based experiments, we obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive assessment scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. Similarly, for the classification-based experiments, we find CAAB has a classification accuracy of 72% while classifying cognitive assessment scores and 76% while classifying mobility scores. These prediction and classification results suggest that it is feasible to predict standard clinical scores using smart home sensor data and learning-based data analysis. PMID:26292348
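The reported agreement between CAAB-predicted and clinician-provided scores is a Pearson correlation, which can be computed directly. A minimal sketch; the score vectors below are invented for illustration, not CAAB data:

```python
def pearson_r(x, y):
    """Pearson correlation between predicted and clinician scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical evaluation: model-predicted vs. clinician-provided scores
predicted = [24.0, 26.5, 21.0, 28.0, 23.5]
clinician = [25.0, 27.0, 20.0, 29.0, 24.0]
r = pearson_r(predicted, clinician)
```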

  2. Security Measures in Automated Assessment System for Programming Courses

    Directory of Open Access Journals (Sweden)

    Jana Šťastná

    2015-12-01

    A desirable characteristic of programming code assessment is to provide the learner with the most appropriate information regarding the code's functionality, as well as a chance to improve. This is hard to achieve when the number of learners is high (500 or more). In this paper we address the problem of testing risky code and the availability of our assessment platform, Arena, dealing with the potential security risks of providing automated assessment for a large body of source code. Looking at students' programs as if they were potentially malicious inspired us to investigate the separated execution environments used by security experts for secure software analysis. The results also show that availability issues of our assessment platform can be conveniently resolved with task queues. Special attention is paid to Docker, a virtual container ensuring that no risky code can affect the security of the assessment system. The assessment platform Arena makes it possible to regularly, effectively and securely assess students' source code in various programming courses. In addition, it is a motivating factor and helps students engage in the educational process.

  3. Automatic sample changer control software for automation of neutron activation analysis process in Malaysian Nuclear Agency

    Science.gov (United States)

    Yussup, N.; Ibrahim, M. M.; Rahman, N. A. A.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.; Lombigit, L.; Azman, A.; Omar, S. A.

    2018-01-01

    Most of the procedures in the neutron activation analysis (NAA) process that have been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s are performed manually. These manual procedures, carried out by NAA laboratory personnel, are time-consuming and inefficient, especially for the sample counting and measurement process: the sample must be changed and the measurement software set up for every one-hour counting period, and both steps are performed manually for every sample. Hence, an automatic sample changer system (ASC) consisting of hardware and software was developed to automate the counting of up to 30 samples consecutively. This paper describes the ASC control software for the NAA process, which is designed and developed to control the ASC hardware and invoke the GammaVision software for sample measurement. The software is developed using the National Instruments LabVIEW development package.

  4. Cost efficiency assessment of automated quality control of precast structures

    Directory of Open Access Journals (Sweden)

    Kaverzina Liudmila

    2018-01-01

    The relevance of this research stems from the need to enhance factory quality control of reinforced concrete structures through integral assessment of their reliability. The current system of selective quality control of precast concrete structures does not assure the reliability of the whole lot of products. The present research aims to develop an operational procedure and assess the economic feasibility of automated quality control of precast RC structures. Quality control is performed each shift using the developed software system, which is based on probabilistic methods that account for the statistical variability of the controlled parameters. The critical criterion of the structures' operational integrity is an integral assessment of their reliability indicators. The following theoretical research methods were used in the study: probabilistic-statistical methods and methods of system and economic analysis. The validity of the obtained results and the economic feasibility were confirmed by experimental studies, including full-scale tests.

  5. Quantitative Vulnerability Assessment of Cyber Security for Distribution Automation Systems

    Directory of Open Access Journals (Sweden)

    Xiaming Ye

    2015-06-01

    The distribution automation system (DAS) is vulnerable to cyber-attacks due to the widespread use of terminal devices and standard communication protocols. On account of the cost of defense, it is impossible to ensure the security of every device in the DAS. Given this background, a novel quantitative vulnerability assessment model of cyber security for DAS is developed in this paper. In the assessment model, the potential physical consequences of cyber-attacks are analyzed from two levels: terminal device level and control center server level. Then, the attack process is modeled based on game theory and the relationships among different vulnerabilities are analyzed by introducing a vulnerability adjacency matrix. Finally, the application process of the proposed methodology is illustrated through a case study based on bus 2 of the Roy Billinton Test System (RBTS). The results demonstrate the reasonability and effectiveness of the proposed methodology.
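A vulnerability adjacency matrix of this kind can be explored with ordinary graph traversal. A minimal sketch, assuming a hypothetical four-vulnerability DAS in which exploiting one device exposes the next (the matrix below is invented, not the paper's case study):

```python
from collections import deque

def reachable_vulnerabilities(adj, start):
    """Given an adjacency matrix adj (adj[i][j] = 1 when exploiting
    vulnerability i enables an attack on j), return the set of
    vulnerabilities reachable from the entry point `start` via BFS."""
    n = len(adj)
    seen = {start}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if adj[i][j] and j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

# Hypothetical chain: terminal device (0) -> comms gateway (1) -> server (2)
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
chain = reachable_vulnerabilities(adj, 0)
```

The reachable set identifies which physical consequences an attacker at a given entry point could ultimately trigger.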

  6. Portable Automation of Static Chamber Sample Collection for Quantifying Soil Gas Flux

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Morgan P.; Groh, Tyler A.; Parkin, Timothy B.; Williams, Ryan J.; Isenhart, Thomas M.; Hofmockel, Kirsten S.

    2018-01-01

    Quantification of soil gas flux using the static chamber method is labor intensive. The number of chambers that can be sampled is limited by the spacing between chambers and the availability of trained research technicians. An automated system for collecting gas samples from chambers in the field would eliminate the need for personnel to return to the chamber during a flux measurement period and would allow a single technician to sample multiple chambers simultaneously. This study describes Chamber Automated Sampling Equipment (FluxCASE) to collect and store chamber headspace gas samples at assigned time points for the measurement of soil gas flux. The FluxCASE design and operation is described, and the accuracy and precision of the FluxCASE system is evaluated. In laboratory measurements of nitrous oxide (N2O), carbon dioxide (CO2), and methane (CH4) concentrations of a standardized gas mixture, coefficients of variation associated with automated and manual sample collection were comparable, indicating no loss of precision. In the field, soil gas fluxes measured from FluxCASEs were in agreement with manual sampling for both N2O and CO2. Slopes of regression equations were 1.01 for CO2 and 0.97 for N2O. The 95% confidence limits of the slopes of the regression lines included the value of one, indicating no bias. Additionally, an expense analysis found a cost recovery ranging from 0.6 to 2.2 yr. Implementing the FluxCASE system is an alternative to improve the efficiency of the static chamber method for measuring soil gas flux while maintaining the accuracy and precision of manual sampling.
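The bias check reported above (a regression slope whose 95% confidence limits include 1) can be reproduced with ordinary least squares. A sketch with invented paired flux values, not the study's data; the t value assumes n = 10 points:

```python
def slope_with_ci(x, y, t_crit=2.306):
    """Least-squares slope and its 95% confidence interval; t_crit is
    the two-sided t value for n - 2 degrees of freedom (2.306 for n = 10)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    se = (sse / (n - 2) / sxx) ** 0.5          # standard error of the slope
    return slope, slope - t_crit * se, slope + t_crit * se

# Hypothetical paired fluxes: manual (x) vs. automated (y); unbiased if CI spans 1
manual = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
auto = [1.1, 1.9, 3.2, 3.9, 5.1, 5.8, 7.2, 7.9, 9.1, 9.9]
slope, lo, hi = slope_with_ci(manual, auto)
unbiased = lo <= 1.0 <= hi
```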

  7. Highly Reproducible Automated Proteomics Sample Preparation Workflow for Quantitative Mass Spectrometry.

    Science.gov (United States)

    Fu, Qin; Kowalski, Michael P; Mastali, Mitra; Parker, Sarah J; Sobhani, Kimia; van den Broek, Irene; Hunter, Christie L; Van Eyk, Jennifer E

    2018-01-05

    Sample preparation for protein quantification by mass spectrometry requires multiple processing steps, including denaturation, reduction, alkylation, protease digestion, and peptide cleanup. Scaling these procedures for the analysis of numerous complex biological samples can be tedious and time-consuming, as there are many liquid transfer steps and timed reactions where technical variations can be introduced and propagated. We established an automated sample preparation workflow with a total processing time for 96 samples of 5 h, including a 2 h incubation with trypsin. Peptide cleanup is accomplished by online diversion during the LC/MS/MS analysis. In a selected reaction monitoring (SRM) assay targeting 6 plasma biomarkers and spiked β-galactosidase, mean intraday and interday coefficients of variation (CVs) for 5 serum and 5 plasma samples over 5 days were low, and samples repeated on 3 separate days had total CVs below 20%. Similar results were obtained when the workflow was transferred to a second site: 93% of peptides had CVs below 20%. An automated trypsin digestion workflow yields uniformly processed samples in less than 5 h. Reproducible quantification of peptides was observed across replicates, days, instruments, and laboratory sites, demonstrating the broad applicability of this approach.
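The precision metric used throughout is the coefficient of variation. A minimal sketch of the <20% acceptance check, using hypothetical peak-area replicates rather than the study's measurements:

```python
def coefficient_of_variation(values):
    """Coefficient of variation (%) = 100 * SD / mean, the precision
    metric used for intraday/interday reproducibility checks."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

# Hypothetical peptide peak areas from one plasma sample run on 5 days
areas = [10500.0, 10230.0, 9980.0, 10410.0, 10150.0]
cv = coefficient_of_variation(areas)
passes = cv < 20.0
```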

  8. LAVA: a conceptual framework for automated risk assessment

    International Nuclear Information System (INIS)

    Smith, S.T.; Brown, D.C.; Erkkila, T.H.; FitzGerald, P.D.; Lim, J.J.; Massagli, L.; Phillips, J.R.; Tisinger, R.M.

    1986-01-01

    At the Los Alamos National Laboratory we are developing the framework for generating knowledge-based systems that perform automated risk analyses on an organization's assets. An organization's assets can be subdivided into tangible and intangible assets. Tangible assets include facilities, materiel, personnel, and time, while intangible assets include such factors as reputation, employee morale, and technical knowledge. The potential loss exposure of an asset is dependent upon the threats (both static and dynamic), the vulnerabilities in the mechanisms protecting the assets from the threats, and the consequences of the threats successfully exploiting the protective systems' vulnerabilities. The methodology is based upon decision analysis, fuzzy set theory, natural-language processing, and event-tree structures. The Los Alamos Vulnerability and Risk Assessment (LAVA) methodology has been applied to computer security. LAVA is modeled using an interactive questionnaire in natural language and is fully automated on a personal computer. The program generates summary reports for management personnel and detailed reports for operations staff. LAVA has been in use by the Nuclear Regulatory Commission and the National Bureau of Standards for nearly two years and is presently under evaluation by other governmental agencies. 7 refs

  9. Fully Automated Deep Learning System for Bone Age Assessment.

    Science.gov (United States)

    Lee, Hyunkwang; Tajmir, Shahein; Lee, Jenny; Zissen, Maurice; Yeshiwas, Bethel Ayele; Alkasab, Tarik K; Choy, Garry; Do, Synho

    2017-08-01

    Skeletal maturity progresses through discrete phases, a fact that is used routinely in pediatrics where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, little has changed to improve the tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32 and 61.40% accuracies for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% and within 2 years 98.11% of the time. Male test radiographs were assigned a BAA within 1 year 94.18% and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created which reveal what features the trained model uses to perform BAA. These correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision-support system for more accurate and efficient BAAs at much faster interpretation time (<2 s) than the conventional method.

  10. Oak ridge national laboratory automated clean chemistry for bulk analysis of environmental swipe samples

    Energy Technology Data Exchange (ETDEWEB)

    Bostick, Debra A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hexel, Cole R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ticknor, Brian W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tevepaugh, Kayron N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Metzger, Shalina C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    To shorten the lengthy and costly manual chemical purification procedures, sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment. This addresses a serious need in the nuclear safeguards community to debottleneck the separation of U and Pu in environmental samples—currently performed by overburdened chemists—with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on current COTS equipment that was modified for U/Pu separations utilizing Eichrom™ TEVA and UTEVA resins. Initial verification of individual columns yielded small elution volumes with consistent elution profiles and good recovery. Combined column calibration demonstrated ample separation without cross-contamination of the eluent. Automated packing and unpacking of the built-in columns initially showed >15% deviation in resin loading by weight, which can lead to inconsistent separations. Optimization of the packing and unpacking methods led to a reduction in the variability of the packed resin to less than 5% daily. The reproducibility of the automated system was tested with samples containing 30 ng U and 15 pg Pu, which were separated in a series with alternating reagent blanks. These experiments showed very good washout of both the resin and the sample from the columns as evidenced by low blank values. Analysis of the major and minor isotope ratios for U and Pu provided values well within data quality limits for the International Atomic Energy Agency. Additionally, system process blanks spiked with 233U and 244Pu tracers were separated using the automated system after it was moved outside of a clean room and yielded levels equivalent to clean room blanks, confirming that the system can produce high quality results without the need for expensive clean room infrastructure. Comparison of the amount of personnel time necessary for successful manual vs

  11. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely the Balance Error Scoring System (BESS). The main innovation of the AAPS is its balance error detection algorithm, which has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High-definition videos of the BESS trials were scored offline by three experienced observers to obtain reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected.
Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0
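One way to turn a tracked sway signal into discrete BESS-style error counts is a thresholded event detector with a refractory gap, so that one visible loss of balance is scored as a single error. This is an illustrative sketch, not the published AAPS algorithm; the threshold and gap values are assumptions:

```python
def count_balance_errors(sway, threshold=0.1, min_gap=5):
    """Count discrete balance errors in a mediolateral sway trace
    (metres, one sample per frame). An error is scored when |sway|
    exceeds `threshold`; samples within `min_gap` frames of an ongoing
    excursion are merged into it, mimicking how human BESS raters score
    one error per visible loss of balance."""
    errors = 0
    last = -min_gap - 1
    for i, s in enumerate(sway):
        if abs(s) > threshold and i - last > min_gap:
            errors += 1        # new excursion -> new error
            last = i
        elif abs(s) > threshold:
            last = i           # still inside the same excursion
    return errors

# Two well-separated excursions -> two errors
trace = [0.0] * 10 + [0.15, 0.2, 0.12] + [0.0] * 20 + [-0.18] + [0.0] * 10
n_errors = count_balance_errors(trace)
```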

  12. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    Science.gov (United States)

    Habershon, Scott

    2016-04-12

    In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enables determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.
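The kinetic-network simulation step can be illustrated with a toy mass-action model. The two-step cycle below is a stand-in for the full DFT-derived network, not the actual hydroformylation mechanism, and the rate constants are invented:

```python
def simulate_cycle(k1, k2, reactant0, cat0, dt=1e-3, steps=20000):
    """Forward-Euler integration of a toy two-step catalytic cycle
    R + Cat -k1-> Int,  Int -k2-> P + Cat  (mass-action kinetics).
    Returns final concentrations (r, cat, inter, p)."""
    r, cat, inter, p = reactant0, cat0, 0.0, 0.0
    for _ in range(steps):
        v1 = k1 * r * cat          # rate of catalyst binding
        v2 = k2 * inter            # rate of product release
        r -= v1 * dt
        cat += (v2 - v1) * dt
        inter += (v1 - v2) * dt
        p += v2 * dt
    return r, cat, inter, p

r, cat, inter, p = simulate_cycle(k1=1.0, k2=0.5, reactant0=1.0, cat0=0.1)
```

Sweeping initial concentrations and fitting the observed product formation rate is what recovers the phenomenological rate law without steady-state assumptions.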

  13. RoboDiff: combining a sample changer and goniometer for highly automated macromolecular crystallography experiments.

    Science.gov (United States)

    Nurizzo, Didier; Bowler, Matthew W; Caserotto, Hugo; Dobias, Fabien; Giraud, Thierry; Surr, John; Guichard, Nicolas; Papp, Gergely; Guijarro, Matias; Mueller-Dieckmann, Christoph; Flot, David; McSweeney, Sean; Cipriani, Florent; Theveneau, Pascal; Leonard, Gordon A

    2016-08-01

    Automation of the mounting of cryocooled samples is now a feature of the majority of beamlines dedicated to macromolecular crystallography (MX). Robotic sample changers have been developed over many years, with the latest designs increasing capacity, reliability and speed. Here, the development of a new sample changer deployed at the ESRF beamline MASSIF-1 (ID30A-1), based on an industrial six-axis robot, is described. The device, named RoboDiff, includes a high-capacity dewar, acts as both a sample changer and a high-accuracy goniometer, and has been designed for completely unattended sample mounting and diffraction data collection. This aim has been achieved using a high level of diagnostics at all steps of the process from mounting and characterization to data collection. The RoboDiff has been in service on the fully automated endstation MASSIF-1 at the ESRF since September 2014 and, at the time of writing, has processed more than 20 000 samples completely automatically.

  14. Automated combustion accelerator mass spectrometry for the analysis of biomedical samples in the low attomole range.

    Science.gov (United States)

    van Duijn, Esther; Sandman, Hugo; Grossouw, Dimitri; Mocking, Johannes A J; Coulier, Leon; Vaes, Wouter H J

    2014-08-05

    The increasing role of accelerator mass spectrometry (AMS) in biomedical research necessitates modernization of the traditional sample handling process. AMS was originally developed and used for carbon dating, therefore focusing on a very high precision but with a comparably low sample throughput. Here, we describe the combination of automated sample combustion with an elemental analyzer (EA) online coupled to an AMS via a dedicated interface. This setup allows direct radiocarbon measurements for over 70 samples daily by AMS. No sample processing is required apart from the pipetting of the sample into a tin foil cup, which is placed in the carousel of the EA. In our system, up to 200 AMS analyses are performed automatically without the need for manual interventions. We present results on direct total 14C count measurements in <2 μL human plasma samples. The method shows linearity over a range of 0.65-821 mBq/mL, with a lower limit of quantification of 0.65 mBq/mL (corresponding to 0.67 amol for acetaminophen). At these extremely low levels of activity, it becomes important to quantify plasma specific carbon percentages. This carbon percentage is automatically generated upon combustion of a sample on the EA. Apparent advantages of the present approach include complete omission of sample preparation (reduced hands-on time) and fully automated sample analysis. These improvements clearly stimulate the standard incorporation of microtracer research in the drug development process. In combination with the particularly low sample volumes required and extreme sensitivity, AMS strongly improves its position as a bioanalysis method.

  15. Automated low energy photon absorption equipment for measuring internal moisture and density distributions of wood samples

    International Nuclear Information System (INIS)

    Tiitta, M.; Olkkonen, H.; Lappalainen, T.; Kanko, T.

    1993-01-01

    Automated equipment for measuring the moisture and density distributions of wood samples was developed. Using a narrow beam of gamma rays, the equipment scans the wood samples, which are placed on a moving belt. The moisture measurement is based on the 241Am photon absorption technique (59.5 keV), where the difference of the linear absorption coefficients of the moist and dry wood is measured. The method requires no knowledge of the thickness of the specimen. The density estimation method is based on the measurement of the linear attenuation coefficient of wood. Comprehensive software including image processing was developed for treatment of the numerical values of the measurements. (author)
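The measurement principle is the Beer-Lambert law, I = I0 exp(-μx). The sketch below recovers a linear attenuation coefficient from count rates and forms a simple moist-minus-dry moisture indicator; the counts and the uncalibrated indicator are illustrative assumptions, not the instrument's calibrated 59.5 keV relation:

```python
import math

def linear_attenuation(i0, i, thickness_cm):
    """Linear attenuation coefficient mu (1/cm) from the Beer-Lambert
    law I = I0 * exp(-mu * x), given incident (i0) and transmitted (i)
    photon counts and the sample thickness."""
    return math.log(i0 / i) / thickness_cm

def moisture_indicator(mu_moist, mu_dry):
    """Relative moisture indicator from the difference of the moist and
    dry linear absorption coefficients (illustrative only)."""
    return (mu_moist - mu_dry) / mu_dry

# Hypothetical counts for the same specimen before and after drying
mu_wet = linear_attenuation(i0=10000, i=7000, thickness_cm=2.0)
mu_dry = linear_attenuation(i0=10000, i=8000, thickness_cm=2.0)
m = moisture_indicator(mu_wet, mu_dry)
```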

  16. Human mixed lymphocyte cultures. Evaluation of microculture technique utilizing the multiple automated sample harvester (MASH)

    Science.gov (United States)

    Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.

    1973-01-01

    Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568

  17. LAVA: A conceptual framework for automated risk assessment

    International Nuclear Information System (INIS)

    Smith, S.T.; Brown, D.C.; Erkkila, T.H.; FitzGerald, P.D.; Lim, J.J.; Massagli, L.; Phillips, J.R.; Tisinger, R.M.

    1986-01-01

    At the Los Alamos National Laboratory the authors are developing the framework for generating knowledge-based systems that perform automated risk analyses on an organization's assets. An organization's assets can be subdivided into tangible and intangible assets. Tangible assets include facilities, material, personnel, and time, while intangible assets include such factors as reputation, employee morale, and technical knowledge. The potential loss exposure of an asset is dependent upon the threats (both static and dynamic), the vulnerabilities in the mechanisms protecting the assets from the threats, and the consequences of the threats successfully exploiting the protective systems' vulnerabilities. The methodology is based upon decision analysis, fuzzy set theory, natural language processing, and event tree structures. The Los Alamos Vulnerability and Risk Assessment (LAVA) methodology has been applied to computer security. The program generates both summary reports for use by management personnel and detailed reports for use by operations staff

  18. Fully automated gamma spectrometry gauge observing possible radioactive contamination of melting-shop samples

    International Nuclear Information System (INIS)

    Kroos, J.; Westkaemper, G.; Stein, J.

    1999-01-01

    At Salzgitter AG, several monitoring systems have been installed to check the scrap transport by rail and by car. At the moment, the scrap transport by ship is reloaded onto wagons for monitoring afterwards. In the future, a detection system will be mounted onto a crane for a direct check on scrap upon the departure of ship. Furthermore, at Salzgitter AG Central Chemical Laboratory, a fully automated gamma spectrometry gauge is installed in order to observe a possible radioactive contamination of the products. The gamma spectrometer is integrated into the automated OE spectrometry line for testing melting shop samples after performing the OE spectrometry. With this technique the specific activity of selected nuclides and dose rate will be determined. The activity observation is part of the release procedure. The corresponding measurement data are stored in a database for quality management reasons. (author)

  19. Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip

    Science.gov (United States)

    Mora, Maria F.; Stockton, Amanda M.; Willis, Peter A.

    2013-01-01

    There are no existing ultra-sterile lab-on-a-chip systems that can accept solid samples and perform complete chemical analyses without human intervention. The proposed solution is to demonstrate completely automated lab-on-a-chip manipulation of powdered solid samples, followed by on-chip liquid extraction and chemical analysis. This technology utilizes a newly invented glass micro-device for solid manipulation, which mates with existing lab-on-a-chip instrumentation. Devices are fabricated in a Class 10 cleanroom at the JPL MicroDevices Lab, and are plasma-cleaned before and after assembly. Solid samples enter the device through a drilled hole in the top. Existing micro-pumping technology is used to transfer milligrams of powdered sample into an extraction chamber where it is mixed with liquids to extract organic material. Subsequent chemical analysis is performed using portable microchip capillary electrophoresis systems (CE). These instruments have been used for ultra-highly sensitive (parts-per-trillion, pptr) analysis of organic compounds including amines, amino acids, aldehydes, ketones, carboxylic acids, and thiols. Fully autonomous amino acid analyses in liquids were demonstrated; however, to date there have been no reports of completely automated analysis of solid samples on chip. This approach utilizes an existing portable instrument that houses optics, high-voltage power supplies, and solenoids for fully autonomous microfluidic sample processing and CE analysis with laser-induced fluorescence (LIF) detection. Furthermore, the entire system can be sterilized and placed in a cleanroom environment for analyzing samples returned from extraterrestrial targets, if desired. This is an entirely new capability never demonstrated before. 
The ability to manipulate solid samples, coupled with lab-on-a-chip analysis technology, will enable ultraclean and ultrasensitive end-to-end analysis of samples that is orders of magnitude more sensitive than the ppb goal given

  20. Radiologist assessment of breast density by BI-RADS categories versus fully automated volumetric assessment.

    Science.gov (United States)

    Gweon, Hye Mi; Youk, Ji Hyun; Kim, Jeong-Ah; Son, Eun Ju

    2013-09-01

    The objective of our study was to estimate mammographic breast density using a fully automated volumetric breast density measurement method in comparison with BI-RADS breast density categories determined by radiologists. A total of 791 full-field digital mammography examinations with standard views were evaluated by three blinded radiologists as BI-RADS density categories 1-4. For fully automated volumetric analysis, volumetric breast density was calculated with fully automated software. The volume of fibroglandular tissue, the volume of the breast, and the volumetric percentage density were provided. The weighted overall kappa was 0.48 (moderate agreement) for the three radiologists' estimates of BI-RADS density. Pairwise comparisons of the radiologists' measurements of BI-RADS density revealed moderate to substantial agreement, with kappa values ranging from 0.51 to 0.64. There was a significant difference in mean volumetric breast density among the BI-RADS density categories, and the mean volumetric breast density increased as the BI-RADS density category increased (p < 0.001). There was a significant positive correlation between BI-RADS categories and fully automated volumetric breast density (ρ = 0.765, p < 0.001). Mammographic density assessment with the fully automated volumetric method may be used to assign BI-RADS density categories.
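The weighted kappa agreement reported above can be reproduced from two raters' category assignments. A minimal sketch with invented ratings; linear disagreement weights are assumed here, since the abstract does not state the weighting scheme:

```python
from collections import Counter

def weighted_kappa(r1, r2, k):
    """Weighted Cohen's kappa for ordinal categories 0..k-1 (linear weights)."""
    n = len(r1)
    # disagreement weights: 0 on the diagonal, growing with category distance
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = Counter(zip(r1, r2))   # observed pair counts
    p1 = Counter(r1)             # marginal counts, rater 1
    p2 = Counter(r2)             # marginal counts, rater 2
    num = sum(w[i][j] * obs[(i, j)] for i in range(k) for j in range(k)) / n
    den = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k)) / n**2
    return 1.0 - num / den

# hypothetical BI-RADS-style ratings (categories 0-3) from two readers
a = [0, 1, 1, 2, 2, 3, 3, 0, 1, 2]
b = [0, 1, 2, 2, 3, 3, 3, 0, 1, 1]
print(round(weighted_kappa(a, b, 4), 3))  # prints 0.75
```

Identical rating vectors give kappa = 1, and chance-level agreement drives it toward 0, matching the interpretation of the 0.48-0.64 values quoted in the abstract.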

  1. Assessment of organic matter resistance to biodegradation in volcanic ash soils assisted by automated interpretation of infrared spectra from humic acid and whole soil samples by using partial least squares

    Science.gov (United States)

    Hernández, Zulimar; Pérez Trujillo, Juan Pedro; Hernández-Hernández, Sergio Alexander; Almendros, Gonzalo; Sanz, Jesús

    2014-05-01

    From a practical viewpoint, the most interesting possibilities of applying infrared (IR) spectroscopy to soil studies lie in processing IR spectra of whole-soil (WS) samples [1] in order to forecast functional descriptors at high organizational levels of the soil system, such as soil C resilience. Currently, there is a debate on whether the resistance to biodegradation of soil organic matter (SOM) depends on its molecular composition or on environmental interactions between SOM and mineral components, such as the physical encapsulation of particulate SOM or organo-mineral derivatives, e.g., those formed with amorphous oxides [2]. A set of about 200 dependent variables from WS and isolated, ash-free humic acids (HA) [3] was obtained in 30 volcanic ash soils from Tenerife Island (Spain). Soil biogeochemical properties such as SOM, allophane (Alo + ½Feo), total mineralization coefficient (TMC) and aggregate stability were determined in WS. In addition, structural information on SOM was obtained from the isolated HA fractions by visible spectroscopy and analytical pyrolysis (Py-GC/MS). Aiming to explore the potential of partial least squares (PLS) regression in forecasting soil dependent variables exclusively from the information extracted from WS and HA IR spectral profiles, the data were processed using the ParLeS [4] and Unscrambler programs. Data pre-treatments must be chosen carefully: the most significant PLS models from IR spectra of HA were obtained after second-derivative pre-treatment, which suppressed the effects of the intrinsically broadband spectral profiles typical of macromolecular heterogeneous material such as HA. Conversely, when using IR spectra of WS, the best forecasting models were obtained using linear baseline correction and maximum normalization pre-treatment.
With WS spectra, the most successful prediction models were obtained for SOM, magnetite, allophane, aggregate stability, clay and total aromatic compounds, whereas the PLS

  2. Development and evaluation of an automated fall risk assessment system.

    Science.gov (United States)

    Lee, Ju Young; Jin, Yinji; Piao, Jinshi; Lee, Sun-Mi

    2016-04-01

    Fall risk assessment is the first step toward prevention, and a risk assessment tool with high validity should be used. This study aimed to develop and validate an automated fall risk assessment system (Auto-FallRAS) to assess fall risks based on electronic medical records (EMRs) without additional data collected or entered by nurses. The study was conducted in a 1335-bed university hospital in Seoul, South Korea. The Auto-FallRAS was developed using 4211 fall-related clinical data items extracted from EMRs. Participants included fall and non-fall patients (868 and 3472 for the development study; 752 and 3008 for the validation study; and 58 and 232 for validation after clinical application, respectively). The system was evaluated for predictive and concurrent validity. The final 10 predictors were included in the logistic regression model for the risk-scoring algorithm. The results of the Auto-FallRAS were shown as high/moderate/low risk on the EMR screen. The predictive validity analyzed after clinical application of the Auto-FallRAS was as follows: sensitivity = 0.95, NPV = 0.97 and Youden index = 0.44. The validity of the Morse Fall Scale assessed by nurses was as follows: sensitivity = 0.68, NPV = 0.88 and Youden index = 0.28. This study found that the Auto-FallRAS results were better than the nurses' predictions. The advantage of the Auto-FallRAS is that it automatically analyzes information and shows patients' fall risk assessment results without requiring additional time from nurses. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
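The validity indices quoted above (sensitivity, NPV, Youden index) all follow from a 2x2 confusion matrix. A small sketch with made-up counts, not the study's data:

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, NPV and Youden index from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of fallers flagged as at-risk
    specificity = tn / (tn + fp)   # fraction of non-fallers correctly cleared
    npv = tn / (tn + fn)           # negative predictive value
    youden = sensitivity + specificity - 1
    return sensitivity, specificity, npv, youden

# hypothetical counts for illustration only
sens, spec, npv, youden = screening_metrics(tp=90, fn=10, tn=80, fp=20)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} npv={npv:.3f} Youden={youden:.2f}")
```

Note that a Youden index of 0.44 with sensitivity 0.95 implies a specificity of about 0.49, i.e. the system trades false positives for very few missed fallers.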

  3. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies.

    Science.gov (United States)

    Abdi, Asad; Idris, Norisma; Alguliyev, Rasim M; Aliguliyev, Ramiz M

    2016-01-01

    Summarization is a process of selecting important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be an important tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess the students' summaries, and these tasks are very time-consuming. Thus, computer-assisted assessment can be used to help teachers conduct this task more effectively. This paper proposes an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of the algorithm lies in its ability to identify summarizing strategies at both the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing.
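The evaluation measures named above compare the set of strategies the algorithm identifies against a teacher's annotations. A minimal sketch; the strategy labels in the example are invented:

```python
def precision_recall_f1(predicted, gold):
    """Precision, recall and F-measure for two sets of identified items."""
    tp = len(predicted & gold)  # items found by both algorithm and teacher
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# hypothetical strategy annotations for one student summary
gold = {"deletion", "sentence-combination", "paraphrase", "topic-sentence"}
pred = {"deletion", "paraphrase", "copy-verbatim"}
p, r, f = precision_recall_f1(pred, gold)
print(f"P={p:.2f} R={r:.2f} F={f:.2f}")
```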

  4. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    International Nuclear Information System (INIS)

    Nelsen, L.A.

    2009-01-01

    The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine whether the added cost of underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision to sample settler sludge underwater needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation compares and contrasts the present above-water sampling method with the underwater method planned by the Sludge Treatment Project (STP) and determines whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

  5. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    Energy Technology Data Exchange (ETDEWEB)

    NELSEN LA

    2009-01-30

    The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine whether the added cost of underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision to sample settler sludge underwater needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation compares and contrasts the present above-water sampling method with the underwater method planned by the Sludge Treatment Project (STP) and determines whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining doses to workers As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

  6. Construction and calibration of a low cost and fully automated vibrating sample magnetometer

    Energy Technology Data Exchange (ETDEWEB)

    El-Alaily, T.M., E-mail: toson_alaily@yahoo.com [Physics Department, Faculty of Science, Tanta University, Tanta (Egypt); El-Nimr, M.K.; Saafan, S.A.; Kamel, M.M.; Meaz, T.M. [Physics Department, Faculty of Science, Tanta University, Tanta (Egypt); Assar, S.T. [Engineering Physics and Mathematics Department, Faculty of Engineering, Tanta University, Tanta (Egypt)

    2015-07-15

    A low-cost vibrating sample magnetometer (VSM) has been constructed using an electromagnet and an audio loudspeaker, both controlled by a data acquisition device. The constructed VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. The apparatus has been calibrated and tested using magnetic hysteresis data of ferrite samples measured by two scientifically calibrated magnetometers (Lake Shore model 7410 and LDJ Electronics Inc., Troy, MI). Our new lab-built VSM design proved successful and reliable. - Highlights: • A low-cost automated vibrating sample magnetometer (VSM) has been constructed. • The VSM records the magnetic hysteresis loop up to 8.3 kG at room temperature. • The VSM has been calibrated and tested using measured ferrite samples. • Our new lab-built VSM design proved successful and reliable.

  7. Automated Device for Asynchronous Extraction of RNA, DNA, or Protein Biomarkers from Surrogate Patient Samples.

    Science.gov (United States)

    Bitting, Anna L; Bordelon, Hali; Baglia, Mark L; Davis, Keersten M; Creecy, Amy E; Short, Philip A; Albert, Laura E; Karhade, Aditya V; Wright, David W; Haselton, Frederick R; Adams, Nicholas M

    2016-12-01

    Many biomarker-based diagnostic methods are inhibited by nontarget molecules in patient samples, necessitating biomarker extraction before detection. We have developed a simple device that purifies RNA, DNA, or protein biomarkers from complex biological samples without robotics or fluid pumping. The device design is based on functionalized magnetic beads, which capture biomarkers and remove background biomolecules by magnetically transferring the beads through processing solutions arrayed within small-diameter tubing. The process was automated by wrapping the tubing around a disc-like cassette and rotating it past a magnet with a programmable motor. The device recovered biomarkers at ~80% of the yield of the operator-dependent extraction method published previously. It was validated by extracting biomarkers from a panel of surrogate patient samples containing clinically relevant concentrations of (1) influenza A RNA in nasal swabs, (2) Escherichia coli DNA in urine, (3) Mycobacterium tuberculosis DNA in sputum, and (4) Plasmodium falciparum protein and DNA in blood. The device successfully extracted each biomarker type from samples representing low levels of clinically relevant infectivity (i.e., 7.3 copies/µL of influenza A RNA, 405 copies/µL of E. coli DNA, 0.22 copies/µL of TB DNA, 167 copies/µL of malaria parasite DNA, and 2.7 pM of malaria parasite protein). © 2015 Society for Laboratory Automation and Screening.

  8. Automated sample exchange and tracking system for neutron research at cryogenic temperatures

    Science.gov (United States)

    Rix, J. E.; Weber, J. K. R.; Santodonato, L. J.; Hill, B.; Walker, L. M.; McPherson, R.; Wenzel, J.; Hammons, S. E.; Hodges, J.; Rennich, M.; Volin, K. J.

    2007-01-01

    An automated system for sample exchange and tracking in a cryogenic environment under remote computer control was developed. Up to 24 sample "cans" per cycle can be inserted and retrieved in a programmed sequence. A video camera acquires a unique identification marked on each sample can to provide a record of the sequence. All operations are coordinated via a LABVIEW™ program that can be operated locally or over a network. The samples are contained in vanadium cans 6-10 mm in diameter, equipped with a hermetically sealed lid that interfaces with the sample handler. The system uses a closed-cycle refrigerator (CCR) for cooling. The sample was delivered to a precooling location at a temperature of ~25 K; after several minutes, it was moved onto a "landing pad" at ~10 K that locates the sample in the probe beam. After the sample was released onto the landing pad, the sample handler was retracted. Reading the sample identification and performing the exchange operation takes approximately 2 min. The time to cool the sample from ambient temperature to ~10 K was approximately 7 min including precooling time, increasing to approximately 12 min if precooling is not used. Small differences in cooling rate were observed between sample materials and between sample can sizes. Filling the sample well and the sample can with low-pressure helium is essential to provide heat transfer and achieve useful cooling rates. A resistive heating coil can be used to offset the refrigeration so that temperatures up to ~350 K can be accessed and controlled using a proportional-integral-derivative control loop. The time for the landing pad to cool to ~10 K after it has been heated to ~240 K was approximately 20 min.

  9. Automated mango fruit assessment using fuzzy logic approach

    Science.gov (United States)

    Hasan, Suzanawati Abu; Kin, Teoh Yeong; Sauddin@Sa'duddin, Suraiya; Aziz, Azlan Abdul; Othman, Mahmod; Mansor, Ab Razak; Parnabas, Vincent

    2014-06-01

    In terms of value and volume of production, mango is the third most important fruit crop after pineapple and banana. Accurate size assessment of mango fruits during harvesting is vital to ensure that they are classified into the appropriate grade. However, the current practice in the mango industry is to grade the fruit manually using human graders, a method that is inconsistent, inefficient and labor-intensive. In this project, a new method of automated mango size and grade assessment was developed using an RGB fiber optic sensor and a fuzzy logic approach. Maximum, minimum and mean values are calculated from the RGB fiber optic sensor readings, and a decision-making scheme based on a minimum entropy formulation analyses the data and classifies the mango fruit. The proposed method is capable of differentiating three different grades of mango fruit automatically, with 77.78% overall accuracy compared with sorting by human graders. This method was found to be helpful for application in the current agricultural industry.

  10. Surveillance cultures of samples obtained from biopsy channels and automated endoscope reprocessors after high-level disinfection of gastrointestinal endoscopes

    Directory of Open Access Journals (Sweden)

    Chiu King-Wah

    2012-09-01

    Background: The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and the internal surfaces of AERs. Methods: We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse and swab samples were collected from BCs and the internal surfaces of AERs, respectively. Of the 420 rinse samples collected from the BCs of the GI endoscopes, 300 were obtained from gastroscopes and 120 from colonoscopes. Samples were collected by flushing the BCs with sterile distilled water and swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. Results: The number of culture-positive samples obtained from BCs (13.6%, 57/420) was significantly higher than that obtained from AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Conclusions: Culturing rinse samples obtained from BCs provides a better indication of the effectiveness of decontamination of GI endoscopes after HLD than culturing swab samples obtained from the inner surfaces of AERs, as the swab samples only indicate whether the AERs themselves are free from microbial contamination.
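The significance of the BC-versus-AER difference (57/420 vs. 7/420 culture-positive) can be checked with a chi-square test on the 2x2 contingency table. A sketch using the counts quoted in the abstract; the test choice is ours, as the abstract does not name its statistical method:

```python
from scipy.stats import chi2_contingency

# rows: biopsy channels, AER internal surfaces
# columns: culture-positive, culture-negative
table = [[57, 420 - 57],
         [7, 420 - 7]]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p_value:.2e}")
```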

  11. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    Science.gov (United States)

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of water quality in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed to improve power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters: dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved over three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, and while achieving around the same Normalized Mean Error (NME), DDASA saves 5.31% more battery energy.
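The abstract does not give DDASA's update rule, but the general idea of data-driven adaptive sampling, lengthening the interval when readings are stable and shortening it when they change quickly, can be sketched as follows. Thresholds, growth factors, and bounds are illustrative assumptions:

```python
def next_interval(interval_s, prev_value, new_value,
                  threshold=0.2, min_s=60, max_s=3600):
    """Shorten the sampling interval when the parameter (e.g. DO) changes fast,
    lengthen it when readings are stable; clamp to [min_s, max_s]."""
    if abs(new_value - prev_value) > threshold:
        interval_s /= 2          # parameter is moving: sample more often
    else:
        interval_s *= 1.5        # parameter is stable: save battery
    return max(min_s, min(max_s, interval_s))

# stable dissolved-oxygen readings let the interval grow...
interval = 300.0
for do in (8.0, 8.05, 8.02, 8.04):
    interval = next_interval(interval, 8.0, do)
print(interval)  # prints 1518.75

# ...while a sudden drop pulls it back down
print(next_interval(interval, 8.0, 6.5))  # prints 759.375
```

Longer intervals mean fewer sensor wake-ups and radio transmissions, which is where the battery savings reported in the abstract come from.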

  12. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    Directory of Open Access Journals (Sweden)

    Tongxin Shu

    2017-11-01

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of water quality in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed to improve power efficiency while ensuring the accuracy of the sampled data. The developed algorithm is evaluated using two distinct key parameters: dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved over three months of continuous water quality monitoring. Compared with a traditional adaptive sampling algorithm (ASA) on the same dataset, and while achieving around the same Normalized Mean Error (NME), DDASA saves 5.31% more battery energy.

  13. A proposed protocol for remote control of automated assessment devices

    International Nuclear Information System (INIS)

    Kissock, P.S.; Pritchard, D.A.

    1996-01-01

    Systems and devices that are controlled remotely are becoming more common in security systems in the US Air Force and other government agencies to provide protection of valuable assets. These systems reduce the number of personnel needed while still providing a high level of protection. However, each remotely controlled device usually has its own communication protocol. This limits the ability to change devices without changing the system that provides communications control to the device. Sandia is pursuing a standard protocol that can be used to communicate with the different devices currently in use, or that may be used in the future, in the US Air Force and other government agencies throughout the security community. Devices to be controlled include intelligent pan/tilt mounts, day/night video cameras, thermal imaging cameras, and remote data processors. Important features of this protocol include the ability to send messages of varying length, identify the sender, and, more importantly, control remote data processors. This paper describes the proposed public domain protocol, its features, and examples of its use. The authors hope to elicit comments from security technology developers regarding the format and use of remotely controlled automated assessment devices.
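A protocol supporting variable-length messages and sender identification is typically framed as a fixed header (sender ID, payload length) followed by the payload. A hypothetical sketch of such a frame; this is not Sandia's actual wire format, and the field widths are assumptions:

```python
import struct

# 2-byte sender ID, 4-byte payload length, big-endian (network byte order)
HEADER = struct.Struct(">HI")

def encode(sender_id: int, payload: bytes) -> bytes:
    """Frame a variable-length message tagged with its sender's ID."""
    return HEADER.pack(sender_id, len(payload)) + payload

def decode(frame: bytes):
    """Recover the sender ID and payload from a frame."""
    sender_id, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    return sender_id, payload

frame = encode(0x0102, b"PAN +15.0 TILT -3.5")  # e.g. a pan/tilt command
print(decode(frame))  # prints (258, b'PAN +15.0 TILT -3.5')
```

The explicit length field is what lets one controller talk to heterogeneous devices, including forwarding opaque blocks to remote data processors, without per-device framing logic.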

  14. Automated Clean Chemistry for Bulk Analysis of Environmental Swipe Samples - FY17 Year End Report

    Energy Technology Data Exchange (ETDEWEB)

    Ticknor, Brian W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Metzger, Shalina C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McBay, Eddy H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hexel, Cole R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Tevepaugh, Kayron N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bostick, Debra A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-11-30

    Sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment to shorten lengthy and costly manual chemical purification procedures. This development addresses a serious need in the International Atomic Energy Agency’s Network of Analytical Laboratories (IAEA NWAL) to increase efficiency in the Bulk Analysis of Environmental Samples for Safeguards program with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on COTS equipment. It was modified for uranium/plutonium separations using renewable columns packed with Eichrom TEVA and UTEVA resins, with a chemical separation method based on the Oak Ridge National Laboratory (ORNL) NWAL chemical procedure. The newly designed prepFAST-SR has had several upgrades compared with the original prepFAST-MC2. Both systems are currently installed in the Ultra-Trace Forensics Science Center at ORNL.

  15. Automated high-volume aerosol sampling station for environmental radiation monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Toivonen, H.; Honkamaa, T.; Ilander, T.; Leppaenen, A.; Nikkinen, M.; Poellaenen, R.; Ylaetalo, S

    1998-07-01

    An automated high-volume aerosol sampling station, known as CINDERELLA.STUK, for environmental radiation monitoring has been developed by the Radiation and Nuclear Safety Authority (STUK), Finland. The sample is collected on a glass fibre filter (attached to a cassette); the airflow through the filter is 800 m³/h at maximum. During sampling, the filter is continuously monitored with NaI scintillation detectors. After sampling, the large filter is automatically cut into 15 pieces that form a small sample, and after ageing, the pile of filter pieces is moved onto an HPGe detector. These actions are performed automatically by a robot. The system is operated at a duty cycle of 1 d sampling, 1 d decay and 1 d counting. Minimum detectable concentrations of radionuclides in air are typically 1-10 x 10⁻⁶ Bq/m³. The station is equipped with various sensors to reveal unauthorized admittance. These sensors can be monitored remotely in real time via the Internet or telephone lines. The processes and operation of the station are monitored and partly controlled by computer. The present approach fulfils the requirements of the CTBTO for aerosol monitoring. The concept is also well suited to nuclear material safeguards.
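Minimum detectable concentrations of the order quoted above follow from the Currie detection limit for a counting measurement. A sketch under assumed parameters; the efficiency, gamma yield, counting time, and sampled volume below are invented for illustration and omit decay and filter-collection corrections:

```python
import math

def mdc_bq_per_m3(background_counts, efficiency, gamma_yield,
                  count_time_s, sampled_volume_m3):
    """Currie minimum detectable concentration for a gamma line on an air filter.
    Decay and collection-efficiency corrections are omitted for simplicity."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit, counts
    return ld / (efficiency * gamma_yield * count_time_s * sampled_volume_m3)

# assumed values: 1000 background counts in the peak region, 30% detection
# efficiency, 100% gamma yield, 1 d of counting, 800 m3/h for 1 d of sampling
mdc = mdc_bq_per_m3(1000, 0.30, 1.0, 86400, 800 * 24)
print(f"MDC = {mdc:.1e} Bq/m^3")  # of order 1e-7 to 1e-6 Bq/m^3
```

With these assumptions the result lands near the 10⁻⁶ Bq/m³ figure the station reports, showing why high airflow and long counting times are needed.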

  16. Automated biphasic morphological assessment of hepatitis B-related liver fibrosis using second harmonic generation microscopy

    Science.gov (United States)

    Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting

    2015-08-01

    Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.05). A quantitative scoring method was then applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective grading is needed, interobserver discrepancies could be avoided using this fully automated method.
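The two-model idea, a support vector machine for early-stage assessment and a generalized linear model (here logistic regression) for late stages, can be sketched with scikit-learn. Features and labels below are simulated stand-ins, not the SHG/TPEF measurements:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# simulate 11 morphological features for 200 biopsies; label 1 = advanced fibrosis
X = rng.standard_normal((200, 11))
y = (X[:, :3].sum(axis=1) + 0.3 * rng.standard_normal(200) > 0).astype(int)

early_model = SVC(kernel="rbf").fit(X, y)           # SVM for early-stage calls
late_model = LogisticRegression(max_iter=1000).fit(X, y)  # GLM for late stages

print(f"SVM training accuracy: {early_model.score(X, y):.2f}")
print(f"GLM training accuracy: {late_model.score(X, y):.2f}")
```

In the paper the two models are applied to different ends of the staging scale; here both are simply fitted to the same synthetic labels to show the mechanics.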

  17. Fully Automated Laser Ablation Liquid Capture Sample Analysis using NanoElectrospray Ionization Mass Spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Lorenz, Matthias [ORNL; Ovchinnikova, Olga S [ORNL; Van Berkel, Gary J [ORNL

    2014-01-01

    RATIONALE: Laser ablation provides the possibility of sampling a large variety of surfaces with high spatial resolution. This type of sampling, when employed in conjunction with liquid capture followed by nanoelectrospray ionization, provides the opportunity for sensitive and prolonged interrogation of samples by mass spectrometry, as well as the ability to analyze surfaces not amenable to direct liquid extraction. METHODS: A fully automated, reflection-geometry, laser ablation liquid capture spot sampling system was achieved by incorporating appropriate laser fiber optics and a focusing lens into a commercially available, liquid extraction surface analysis (LESA) ready Advion TriVersa NanoMate system. RESULTS: Under optimized conditions, about 10% of the laser-ablated material could be captured in a droplet positioned vertically over the ablation region using the NanoMate robot-controlled pipette. The sampling spot size with this laser ablation liquid capture surface analysis (LA/LCSA) mode of operation (typically about 120 µm x 160 µm) was approximately 50 times smaller than that achievable by direct liquid extraction using LESA (ca. 1 mm diameter liquid extraction spot). The setup was successfully applied to the analysis of ink on glass and paper as well as the endogenous components in Alstroemeria Yellow King flower petals. In a second mode of operation with a comparable sampling spot size, termed laser ablation/LESA, the laser system was used to drill through, penetrate, or otherwise expose material beneath a solvent-resistant surface. Once drilled, LESA was effective in sampling soluble material exposed at that location on the surface. CONCLUSIONS: Incorporating the capability for different laser ablation liquid capture spot sampling modes of operation into a LESA ready Advion TriVersa NanoMate enhanced the spot sampling spatial resolution of this device and broadened the surface types amenable to analysis to include absorbent and solvent-resistant surfaces.

  18. Artificial Neural Network for Total Laboratory Automation to Improve the Management of Sample Dilution.

    Science.gov (United States)

    Ialongo, Cristiano; Pieri, Massimo; Bernardini, Sergio

    2017-02-01

    Diluting a sample to obtain a measurement within the analytical range is a common task in clinical laboratories. However, for urgent samples it can delay test reporting, which can put patients' safety at risk. The aim of this work is to show that a simple artificial neural network, using only the information available through the laboratory information system, can make it unnecessary to predilute a sample. In particular, a multilayer perceptron neural network built on a data set of 16,106 cardiac troponin I test records produced a correct inference rate of 100% for samples not requiring predilution and 86.2% for those requiring predilution. With respect to inference reliability, the most relevant inputs were the presence of a cardiac event or surgery and the result of the previous assay. Such an artificial neural network can therefore be easily implemented into a total automation framework to noticeably reduce the turnaround time of critical orders delayed by the operations required to retrieve, dilute, and retest the sample.
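A multilayer perceptron of the kind described can be sketched with scikit-learn on synthetic LIS-style inputs. The features (previous troponin result, cardiac event/surgery flag), the dilution rule, and all numbers below are made up for illustration, not the authors' data set:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# synthetic orders: previous troponin I result and a cardiac event/surgery flag
n = 2000
prev_result = rng.lognormal(mean=2.0, sigma=1.5, size=n)  # ng/L, skewed
cardiac_flag = rng.integers(0, 2, size=n)
X = np.column_stack([np.log1p(prev_result), cardiac_flag])

# assumed ground truth for the sketch: very high previous results need predilution
needs_dilution = (prev_result > 50).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, needs_dilution)
print(f"training accuracy: {mlp.score(X, needs_dilution):.2f}")
```

In a total automation framework, a positive prediction would route the tube to a dilution step before the first measurement instead of after a failed one.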

  19. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    Science.gov (United States)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  20. Design of an automated rapid vapor concentrator and its application in nitroaromatic vapor sampling

    Science.gov (United States)

    Gehrke, Mark; Kapila, Shubhender; Hambacker, Kurt L.; Flanigan, Virgil I.

    2000-08-01

    An automated, rapid-cycling vapor concentrator and sample introduction device was designed and evaluated. The device consists of an inert deactivated fused silica capillary sampling loop. The temperature of the loop was manipulated through contact with a cold plate or a hot plate, maintained at pre-selected temperatures with a thermoelectric cooler and heating cartridge, respectively. The position of the loop was controlled with a stepper motor under microprocessor control. The low mass of the loop permits its rapid cooling and heating. This permits efficient trapping of adsorptive vapors such as the nitroaromatics from the air stream and also allows rapid and quantitative transfer of the trapped analytes to the detection system. The use of a thermoelectric cooler permits variable trapping temperatures and increased sampling selectivity without the use of cumbersome cryogenic fluids. Chemically inert sampling train surfaces prevent analyte loss due to irreversible adsorption and cross contamination between samples. The device was evaluated for rapid analysis of nitroaromatic and chlorinated aromatic vapors from an air stream at trace concentrations with a selective electron capture detection system. Trapping efficiencies of > 95 percent can be readily obtained with the device for nitroaromatics at ppb and sub-ppb concentrations.

  1. Completely automated short-term genotoxicity testing for the assessment of chemicals and characterisation of contaminated soils and waste waters.

    Science.gov (United States)

    Brinkmann, Corinna; Eisentraeger, Adolf

    2008-05-01

    The umu-test was developed for the detection of effects of chemical mutagens and carcinogens in environmental samples. It is performed according to ISO 13829 with Salmonella choleraesuis subsp. choleraesuis (strain TA1535/pSK1002). By automating the entire test, large numbers of toxicants and environmental samples as well as more treatments and parallels can be tested and, additionally, only low sample volumes are needed. In this work, an automated umu-test has been set up by installing a robotic XYZ-platform and a microplate reader inside a cabin. The use of established technical equipment for the automation in combination with a performance according to ISO standards was the essential aim of the approach. After initial preparation, the test is conducted under software control, follows the standard and fulfils the validity criteria of the standard procedure. For the optimization of the automated test, umu-tests with a single concentration of methyl methanesulfonate (MMS), 166.7 mg/L, were carried out. After optimization of incubation and pipetting conditions in the automated test, dose-response curves of various chemicals and environmental samples were assessed. The results of the automated umu-test have been compared with those of the standard manual test. The aim of the study was to show the applicability of an automated test system for the assessment of the genotoxic effects of various chemicals and environmental samples. During optimization, tests with 166.7 mg/L of MMS in every well of the microplate are carried out. Chemicals with different physical, chemical and toxicological properties are applied in both test systems. Water samples from different waste water treatment plants, and water extracts of contaminated and uncontaminated soils are assessed in the umu-test. The test is performed in parallel manually according to the standard and automatically using the robotic platform. Dose-response relationships and DLI-values are recorded and compared.
The umu-test is applied

  2. Automated Three-Dimensional Microbial Sensing and Recognition Using Digital Holography and Statistical Sampling

    Directory of Open Access Journals (Sweden)

    Inkyu Moon

    2010-09-01

    Full Text Available We overview an approach to providing automated three-dimensional (3D) sensing and recognition of biological micro/nanoorganisms integrating Gabor digital holographic microscopy and statistical sampling methods. For 3D data acquisition of biological specimens, a coherent beam propagates through the specimen and its transversely and longitudinally magnified diffraction pattern observed by the microscope objective is optically recorded with an image sensor array interfaced with a computer. 3D visualization of the biological specimen from the magnified diffraction pattern is accomplished by using the computational Fresnel propagation algorithm. For 3D recognition of the biological specimen, a watershed image segmentation algorithm is applied to automatically remove the unnecessary background parts in the reconstructed holographic image. Statistical estimation and inference algorithms are applied to the automatically segmented holographic image. Overviews of preliminary experimental results illustrate how the holographic image reconstructed from the Gabor digital hologram of a biological specimen contains important information for microbial recognition.

  3. Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor

    Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused...... electrolytic or aqueous leaching, on-line dialysis/microdialysis, in-line filtration, and pervaporation-based procedures have been successfully implemented in continuous flow/flow injection systems. In this communication, the new generation of flow analysis, including sequential injection, multicommutated flow...... with the potential hyphenation with modern analytical instrumentation for automated monitoring of the content of targeted species in the on-line generated extracts [3,4]. [1] Z.-L. Zhi, A. Ríos, M. Valcárcel, Crit. Rev. Anal. Chem., 26 (1996) 239. [2] M. Miró, E.H. Hansen, R. Chomchoei, W. Frenzel, TRAC-Trends Anal...

  4. Subjective and objective assessment of manual, supported, and automated vehicle control

    NARCIS (Netherlands)

    Vos, A.P. de; Godthelp, J.; Käppler, W.D.

    1998-01-01

    In this paper subjective and objective assessments of vehicle control are illustrated by means of experiments concerning manipulation of vehicle dynamics, driver support, and automated driving. Subjective ratings are discussed in relation to objective performance measures.

  5. Socio-Economic Impact Assessment of Automated Transit Information Systems Technology

    Science.gov (United States)

    1984-03-01

    This report is the final product of a program to assess the socio-economic impacts of automated transit information system (ATIS) technology deployments on the transit industry's telephone information/marketing function. In the course of this program...

  6. Evaluation of Sample Stability and Automated DNA Extraction for Fetal Sex Determination Using Cell-Free Fetal DNA in Maternal Plasma

    Directory of Open Access Journals (Sweden)

    Elena Ordoñez

    2013-01-01

    Full Text Available Objective. The detection of paternally inherited sequences in maternal plasma, such as the SRY gene for fetal sexing or RHD for fetal blood group genotyping, is becoming part of daily routine in diagnostic laboratories. Due to the low percentage of fetal DNA, it is crucial to ensure sample stability and the efficiency of DNA extraction. We evaluated blood stability at 4°C for at least 24 hours and automated DNA extraction, for fetal sex determination in maternal plasma. Methods. A total of 158 blood samples were collected, using EDTA-K tubes, from women in their 1st trimester of pregnancy. Samples were kept at 4°C for at least 24 hours before processing. An automated DNA extraction was evaluated, and its efficiency was compared with a standard manual procedure. The SRY marker was used to quantify cfDNA by real-time PCR. Results. Although lower cfDNA amounts were obtained by automated DNA extraction (mean 107.35 GE/mL versus 259.43 GE/mL), the SRY sequence was successfully detected in all 108 samples from pregnancies with male fetuses. Conclusion. We successfully evaluated the suitability of standard blood tubes for the collection of maternal blood and assessed samples to be suitable for analysis at least 24 hours later. This would allow shipping to a central reference laboratory almost from anywhere in Europe.

  7. ASSESSING SMALL SAMPLE WAR-GAMING DATASETS

    Directory of Open Access Journals (Sweden)

    W. J. HURLEY

    2013-10-01

    Full Text Available One of the fundamental problems faced by military planners is the assessment of changes to force structure. An example is whether to replace an existing capability with an enhanced system. This can be done directly with a comparison of measures such as accuracy, lethality, survivability, etc. However this approach does not allow an assessment of the force multiplier effects of the proposed change. To gauge these effects, planners often turn to war-gaming. For many war-gaming experiments, it is expensive, both in terms of time and dollars, to generate a large number of sample observations. This puts a premium on the statistical methodology used to examine these small datasets. In this paper we compare the power of three tests to assess population differences: the Wald-Wolfowitz test, the Mann-Whitney U test, and resampling. We employ a series of Monte Carlo simulation experiments. Not unexpectedly, we find that the Mann-Whitney test performs better than the Wald-Wolfowitz test. Resampling is judged to perform slightly better than the Mann-Whitney test.
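
    The resampling alternative the authors compare can be sketched as a permutation test on the difference in means, with power estimated by Monte Carlo exactly as in their simulation experiments. All sample sizes, shifts, and replication counts below are invented for illustration and are not taken from the paper.

```python
import random
import statistics

random.seed(1)

def perm_test(a, b, n_perm=400):
    """Two-sided permutation (resampling) test on the difference in means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(pa) - statistics.mean(pb)) >= observed:
            extreme += 1
    return extreme / n_perm

def power(shift, n=8, trials=150, alpha=0.05):
    """Monte Carlo power: fraction of small war-game-sized experiments
    (n observations per force structure) in which the test detects a
    true mean shift of `shift` standard deviations."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(shift, 1.0) for _ in range(n)]
        if perm_test(a, b) < alpha:
            hits += 1
    return hits / trials

p_null = power(0.0)   # no real difference: rejection rate stays near alpha
p_alt = power(1.5)    # large force-multiplier effect: detected most of the time
print(p_null, p_alt)
```

    Extending this harness with rank-based statistics in place of the mean difference would reproduce the paper's three-way comparison.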

  8. Automated Scoring in Context: Rapid Assessment for Placed Students

    Science.gov (United States)

    Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal

    2013-01-01

    This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…

  9. Automated negotiation in environmental resource management: Review and assessment.

    Science.gov (United States)

    Eshragh, Faezeh; Pooyandeh, Majeed; Marceau, Danielle J

    2015-10-01

    Negotiation is an integral part of our daily life and plays an important role in resolving conflicts and facilitating human interactions. Automated negotiation, which aims at capturing the human negotiation process using artificial intelligence and machine learning techniques, is well-established in e-commerce, but its application in environmental resource management remains limited. This is due to the inherent uncertainties and complexity of environmental issues, along with the diversity of stakeholders' perspectives when dealing with these issues. The objective of this paper is to describe the main components of automated negotiation, review and compare machine learning techniques in automated negotiation, and provide a guideline for the selection of suitable methods in the particular context of stakeholders' negotiation over environmental resource issues. We advocate that automated negotiation can facilitate the involvement of stakeholders in the exploration of a plurality of solutions in order to reach a mutually satisfying agreement and contribute to informed decisions in environmental management; further studies are needed to consolidate the potential of this modeling approach. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Assessment of Automated Driving Systems using real-life scenarios

    NARCIS (Netherlands)

    Gelder, E. de; Paardekooper, J.P.

    2017-01-01

    More and more Advanced Driver Assistance Systems (ADAS) are entering the market for improving both safety and comfort by assisting the driver with their driving task. An important aspect in developing future ADAS and Automated Driving Systems (ADS) is testing and validation. Validating the failure

  11. Automating the Fireshed Assessment Process with ArcGIS

    Science.gov (United States)

    Alan Ager; Klaus Barber

    2006-01-01

    A library of macros was developed to automate the Fireshed process within ArcGIS. The macros link a number of vegetation simulation and wildfire behavior models (FVS, SVS, FARSITE, and FlamMap) with ESRI geodatabases, desktop software (Access, Excel), and ArcGIS. The macros provide for (1) an interactive linkage between digital imagery, vegetation data, FVS-FFE, and...

  12. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation, however, is not easy: the automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Assessment of sampling mortality of larval fishes

    International Nuclear Information System (INIS)

    Cada, G.F.; Hergenrader, G.L.

    1978-01-01

    A study was initiated to assess the mortality of larval fishes that were entrained in the condenser cooling systems of two nuclear power plants on the Missouri River in Nebraska. High mortalities were observed not only in the discharge collections but also in control samples taken upriver from the plants where no entrainment effects were possible. As a result, entrainment mortality generally could not be demonstrated. A technique was developed which indicated that (1) a significant portion of the observed mortality above the power plants was the result of net-induced sampling mortality, and (2) a direct relationship existed between observed mortality and water velocity in the nets when sampling at the control sites, which was described by linear regression equations. When these equations were subsequently used to remove the effects of wide differences in sampling velocities between control and discharge collections, significant entrainment mortality was noted in all cases. The equations were also used to derive estimates of the natural mortality of ichthyoplankton in this portion of the Missouri River
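
    The correction described above amounts to fitting a control-site regression of observed mortality on net water velocity and subtracting the predicted net-induced component from discharge observations. The sketch below uses a simple additive correction with invented numbers; the paper's actual regression equations and data are not reproduced here, and the real adjustment may differ in form.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Control-site collections: (net water velocity in m/s, observed % mortality).
# All values are hypothetical, chosen only to show a positive velocity effect.
velocity = [0.5, 0.8, 1.0, 1.2, 1.5]
control_mortality = [12.0, 18.0, 22.0, 27.0, 33.0]
slope, intercept = fit_line(velocity, control_mortality)

# Discharge collection at 1.1 m/s with 45% observed mortality: subtract the
# predicted net-induced (sampling) mortality to estimate entrainment mortality.
observed = 45.0
predicted_sampling = slope * 1.1 + intercept
entrainment_mortality = observed - predicted_sampling
print(round(entrainment_mortality, 1))
```

    With the control regression removing the velocity-dependent sampling artifact, the residual mortality in the discharge sample is what the study attributes to entrainment.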

  14. Automated Generation and Assessment of Autonomous Systems Test Cases

    Science.gov (United States)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues concerning verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios; the work during the Dawn mission's tests is used as an example. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes; or, creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it's likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage. For example, generate cases for all possible fault monitors, and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities, where a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage. Even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predicts individually is impractical, and generating predicts with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges.
A good approach to address this problem is to automatically score the results

  15. A simple and automated sample preparation system for subsequent halogens determination: Combustion followed by pyrohydrolysis.

    Science.gov (United States)

    Pereira, L S F; Pedrotti, M F; Vecchia, P Dalla; Pereira, J S F; Flores, E M M

    2018-06-20

    A simple and automated system based on combustion followed by a pyrohydrolysis reaction was proposed for further halogens determination. This system was applied for digestion of soils containing high (90%) and also low (10%) organic matter content for further halogens determination. The following parameters were evaluated: sample mass, use of microcrystalline cellulose and heating time. For analyte absorption, a diluted alkaline solution (6 mL of 25 mmol L⁻¹ NH₄OH) was used in all experiments. Up to 400 mg of soil with high organic matter content and 100 mg of soil with low organic matter content (mixed with 400 mg of cellulose) could be completely digested using the proposed system. Quantitative results for all halogens were obtained using less than 12 min for the sample preparation step (about 1.8 min for sample combustion and 10 min for pyrohydrolysis). The accuracy was evaluated using a certified reference material of coal and spiked samples. No statistical difference was observed between the certified values and results obtained by the proposed method. Additionally, the recoveries obtained using spiked samples were in the range of 98-103% with relative standard deviation values lower than 5%. The limits of quantification obtained for F, Cl, Br and I for soil with high (400 mg of soil) and low (100 mg of soil) organic matter were in the range of 0.01-2 μg g⁻¹ and 0.07-59 μg g⁻¹, respectively. The proposed system was considered as a simple and suitable alternative for soils digestion for further halogens determination by ion chromatography and inductively coupled plasma mass spectrometry techniques. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Automated quantification of epicardial adipose tissue (EAT) in coronary CT angiography; comparison with manual assessment and correlation with coronary artery disease.

    Science.gov (United States)

    Mihl, Casper; Loeffen, Daan; Versteylen, Mathijs O; Takx, Richard A P; Nelemans, Patricia J; Nijssen, Estelle C; Vega-Higuera, Fernando; Wildberger, Joachim E; Das, Marco

    2014-01-01

    Epicardial adipose tissue (EAT) is emerging as a risk factor for coronary artery disease (CAD). The aim of this study was to determine the applicability and efficiency of automated EAT quantification. EAT volume was assessed both manually and automatically in 157 patients undergoing coronary CT angiography. Manual assessment consisted of a short-axis-based manual measurement, whereas automated assessment on both contrast and non-contrast-enhanced data sets was achieved through novel prototype software. Duration of both quantification methods was recorded, and EAT volumes were compared with paired samples t test. Correlation of volumes was determined with intraclass correlation coefficient; agreement was tested with Bland-Altman analysis. The association between EAT and CAD was estimated with logistic regression. Automated quantification was significantly less time consuming than manual quantification (17 ± 2 seconds vs 280 ± 78 seconds). Manual EAT volume differed significantly from automated EAT volume (75 ± 33 cm³ vs 95 ± 45 cm³). EAT volume was positively associated with the presence of CAD. Stronger predictive value for the severity of CAD was achieved through automated quantification on both contrast-enhanced and non-contrast-enhanced data sets. Automated EAT quantification is a quick method to estimate EAT and may serve as a predictor for CAD presence and severity. Copyright © 2014 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  17. Invention and validation of an automated camera system that uses optical character recognition to identify patient name mislabeled samples.

    Science.gov (United States)

    Hawker, Charles D; McCarthy, William; Cleveland, David; Messinger, Bonnie L

    2014-03-01

    Mislabeled samples are a serious problem in most clinical laboratories. Published error rates range from 0.39/1000 to as high as 1.12%. Standardization of bar codes and label formats has not yet achieved the needed improvement. The mislabel rate in our laboratory, although low compared with published rates, prompted us to seek a solution to achieve zero errors. To reduce or eliminate our mislabeled samples, we invented an automated device using 4 cameras to photograph the outside of a sample tube. The system uses optical character recognition (OCR) to look for discrepancies between the patient name in our laboratory information system (LIS) vs the patient name on the customer label. All discrepancies detected by the system's software then require human inspection. The system was installed on our automated track and validated with production samples. We obtained 1 009 830 images during the validation period, and every image was reviewed. OCR passed approximately 75% of the samples, and no mislabeled samples were passed. The 25% failed by the system included 121 samples actually mislabeled by patient name and 148 samples with spelling discrepancies between the patient name on the customer label and the patient name in our LIS. Only 71 of the 121 mislabeled samples detected by OCR were found through our normal quality assurance process. We have invented an automated camera system that uses OCR technology to identify potential mislabeled samples. We have validated this system using samples transported on our automated track. Full implementation of this technology offers the possibility of zero mislabeled samples in the preanalytic stage.
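
    Downstream of the OCR step, the decision logic reduces to comparing the label name against the LIS name and routing every discrepancy, including near-miss spellings, to human review. The sketch below is a guess at that comparison logic, not the authors' software; the names, the normalization rules, and the edit-distance threshold are all invented.

```python
def normalize(name):
    """Case-, comma- and whitespace-insensitive form of a patient name."""
    return " ".join(name.upper().replace(",", " ").split())

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def needs_review(lis_name, label_name):
    """Pass only exact (normalized) matches; everything else goes to a human,
    mirroring the abstract's rule that all discrepancies require inspection."""
    return normalize(lis_name) != normalize(label_name)

def is_spelling_discrepancy(lis_name, label_name, max_dist=2):
    """Close-but-unequal names suggest a spelling error rather than a swap."""
    a, b = normalize(lis_name), normalize(label_name)
    return a != b and edit_distance(a, b) <= max_dist

print(needs_review("SMITH, JOHN", "Smith,  John"))
print(needs_review("SMITH, JOHN", "SMYTH, JOHN"))
print(is_spelling_discrepancy("SMITH, JOHN", "SMYTH, JOHN"))
```

    Separating exact mismatches from small edit distances would let such a system triage the two failure classes the abstract reports: 121 true patient-name mislabels versus 148 spelling discrepancies.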

  18. Microassay for interferon, using [3H]uridine, microculture plates, and a multiple automated sample harvester.

    Science.gov (United States)

    Richmond, J Y; Polatnick, J; Knudsen, R C

    1980-01-01

    A microassay for interferon is described which uses target cells grown in microculture wells, [3H]uridine to measure vesicular stomatitis virus replication in target cells, and a multiple automated sample harvester to collect the radioactively labeled viral ribonucleic acid onto glass fiber filter disks. The disks were placed in minivials, and radioactivity was counted in a liquid scintillation spectrophotometer. Interferon activity was calculated as the reciprocal of the highest titer which inhibited the incorporation of [3H]uridine into viral ribonucleic acid by 50%. Interferon titers determined by the microassay were similar to the plaque reduction assay when 100 plaque-forming units of challenge vesicular stomatitis virus was used. However, it was found that the interferon titers decreased approximately 2-fold for each 10-fold increase in the concentration of challenge vesicular stomatitis virus when tested in the range of 10² to 10⁵ plaque-forming units. Interferon titers determined by the microassay show a high degree of repeatability, and the assay can be used to measure small and large numbers of interferon samples. PMID:6155105

  19. Automation and integration of multiplexed on-line sample preparation with capillary electrophoresis for DNA sequencing

    Energy Technology Data Exchange (ETDEWEB)

    Tan, H.

    1999-03-31

    The purpose of this research is to develop a multiplexed sample processing system in conjunction with multiplexed capillary electrophoresis for high-throughput DNA sequencing. The concept from DNA template to called bases was first demonstrated with a manually operated single capillary system. Later, an automated microfluidic system with 8 channels based on the same principle was successfully constructed. The instrument automatically processes 8 templates through reaction, purification, denaturation, pre-concentration, injection, separation and detection in a parallel fashion. A multiplexed freeze/thaw switching principle and a distribution network were implemented to manage flow direction and sample transportation. Dye-labeled terminator cycle-sequencing reactions are performed in an 8-capillary array in a hot air thermal cycler. Subsequently, the sequencing ladders are directly loaded into a corresponding size-exclusion chromatographic column operated at approximately 60 °C for purification. On-line denaturation and stacking injection for capillary electrophoresis is simultaneously accomplished at a cross assembly set at approximately 70 °C. Not only the separation capillary array but also the reaction capillary array and purification columns can be regenerated after every run. DNA sequencing data from this system allow base calling up to 460 bases with accuracy of 98%.

  20. Determining sample size when assessing mean equivalence.

    Science.gov (United States)

    Asberg, Arne; Solem, Kristine B; Mikkelsen, Gustav

    2014-11-01

    When we want to assess whether two analytical methods are equivalent, we could test if the difference between the mean results is within the specification limits of 0 ± an acceptance criterion. Testing the null hypothesis of zero difference is less interesting, and so is the sample size estimation based on testing that hypothesis. Power function curves for equivalence testing experiments are not widely available. In this paper we present power function curves to help decide on the number of measurements when testing equivalence between the means of two analytical methods. Computer simulation was used to calculate the probability that the 90% confidence interval for the difference between the means of two analytical methods would exceed the specification limits of 0 ± 1, 0 ± 2 or 0 ± 3 analytical standard deviations (SDa), respectively. The probability of getting a nonequivalence alarm increases with increasing difference between the means when the difference is well within the specification limits. The probability increases with decreasing sample size and with smaller acceptance criteria. We may need at least 40-50 measurements with each analytical method when the specification limits are 0 ± 1 SDa, and 10-15 and 5-10 when the specification limits are 0 ± 2 and 0 ± 3 SDa, respectively. The power function curves provide information of the probability of false alarm, so that we can decide on the sample size under less uncertainty.
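
    The simulation behind such power function curves is straightforward to reproduce. The sketch below follows the abstract's setup, two methods with a common analytical SD, but simplifies it by taking SDa as known so a z-based 90% CI can be used instead of the paper's exact procedure; all counts and seeds are illustrative.

```python
import math
import random
import statistics

random.seed(2)

def nonequivalence_rate(n, true_diff=0.0, limit=2.0, trials=2000, z=1.645):
    """Fraction of simulated experiments whose 90% CI for the mean
    difference crosses the specification limits 0 +/- limit (SDa units)."""
    alarms = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]        # method A
        b = [random.gauss(true_diff, 1.0) for _ in range(n)]  # method B
        diff = statistics.mean(b) - statistics.mean(a)
        se = math.sqrt(2.0 / n)       # SDa = 1 assumed known for both methods
        if diff - z * se < -limit or diff + z * se > limit:
            alarms += 1
    return alarms / trials

# Larger samples shrink the CI, so false nonequivalence alarms become
# rarer when the two methods are truly equivalent (true_diff = 0).
r5, r15 = nonequivalence_rate(5), nonequivalence_rate(15)
print(r5, r15)
```

    Sweeping `n` and `true_diff` over a grid and plotting the alarm rate reproduces the power-function-curve reading of the abstract: with limits of 0 ± 2 SDa, roughly 10-15 measurements per method keep the false alarm probability low.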

  1. Assessing the Performance of Human-Automation Collaborative Planning Systems

    Science.gov (United States)

    2011-06-01

    ...the features of the DCAP system and its embedded automated algorithm. Chapter 4, Performance Validation Testing, describes the creation of the...on aircraft and deck resource failures to the user. The remaining features of the interface are supporting features, such as sort options and

  2. Assessing bat detectability and occupancy with multiple automated echolocation detectors

    Science.gov (United States)

    Gorresen, P.M.; Miles, A.C.; Todd, C.M.; Bonaccorso, F.J.; Weller, T.J.

    2008-01-01

    Occupancy analysis and its ability to account for differential detection probabilities is important for studies in which detecting echolocation calls is used as a measure of bat occurrence and activity. We examined the feasibility of remotely acquiring bat encounter histories to estimate detection probability and occupancy. We used echolocation detectors coupled to digital recorders operating at a series of proximate sites on consecutive nights in 2 trial surveys for the Hawaiian hoary bat (Lasiurus cinereus semotus). Our results confirmed that the technique is readily amenable for use in occupancy analysis. We also conducted a simulation exercise to assess the effects of sampling effort on parameter estimation. The results indicated that the precision and bias of parameter estimation were often more influenced by the number of sites sampled than number of visits. Acceptable accuracy often was not attained until at least 15 sites or 15 visits were used to estimate detection probability and occupancy. The method has significant potential for use in monitoring trends in bat activity and in comparative studies of habitat use. © 2008 American Society of Mammalogists.
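
    The occupancy framework used here treats each site's nightly detection history as arising from an occupancy probability ψ and a per-visit detection probability p. A minimal single-season likelihood, maximized below by brute-force grid search, illustrates the idea; the encounter histories are invented for the sketch and are not the Hawaiian hoary bat data, and real analyses use dedicated estimators rather than a grid.

```python
import math
from itertools import product

# Invented encounter histories: 8 sites x 4 nightly detector visits
# (1 = echolocation calls detected on that visit).
histories = [
    (1, 0, 1, 1), (0, 0, 0, 0), (1, 1, 0, 1), (0, 0, 0, 0),
    (0, 1, 0, 0), (0, 0, 0, 0), (1, 0, 0, 1), (0, 0, 1, 0),
]

def log_lik(psi, p):
    """Single-season occupancy log-likelihood: an all-zero history is either
    an occupied site missed on every visit, or a genuinely unoccupied site."""
    ll = 0.0
    for h in histories:
        d, k = sum(h), len(h)
        occupied = psi * (p ** d) * ((1 - p) ** (k - d))
        unoccupied = (1 - psi) if d == 0 else 0.0
        ll += math.log(occupied + unoccupied)
    return ll

# Crude maximum-likelihood fit by grid search over (psi, p).
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = max(product(grid, grid), key=lambda t: log_lik(*t))
# psi_hat should exceed the naive 5/8 occupancy estimate, because the model
# attributes some of the all-zero histories to imperfect detection.
print(psi_hat, p_hat)
```

    This correction for imperfect detection is exactly what makes the design question in the abstract (more sites versus more visits) a trade-off worth simulating.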

  3. Automated dispersive liquid-liquid microextraction coupled to high performance liquid chromatography - cold vapour atomic fluorescence spectroscopy for the determination of mercury species in natural water samples.

    Science.gov (United States)

    Liu, Yao-Min; Zhang, Feng-Ping; Jiao, Bao-Yu; Rao, Jin-Yu; Leng, Geng

    2017-04-14

    An automated, home-constructed, and low cost dispersive liquid-liquid microextraction (DLLME) device directly coupled to a high performance liquid chromatography (HPLC) - cold vapour atomic fluorescence spectroscopy (CVAFS) system was designed and developed for the determination of trace concentrations of methylmercury (MeHg⁺), ethylmercury (EtHg⁺) and inorganic mercury (Hg²⁺) in natural waters. With a simple, miniaturized and efficient automated DLLME system, nanogram amounts of these mercury species were extracted from natural water samples and injected into a hyphenated HPLC-CVAFS for quantification. The complete analytical procedure, including chelation, extraction, phase separation, collection and injection of the extracts, as well as HPLC-CVAFS quantification, was automated. Key parameters, such as the type and volume of the chelation, extraction and dispersive solvents, aspiration speed, sample pH, salt effect and matrix effect, were thoroughly investigated. Under the optimum conditions, the linear range was 10-1200 ng L⁻¹ for EtHg⁺ and 5-450 ng L⁻¹ for MeHg⁺ and Hg²⁺. Limits of detection were 3.0 ng L⁻¹ for EtHg⁺ and 1.5 ng L⁻¹ for MeHg⁺ and Hg²⁺. Reproducibility and recoveries were assessed by spiking three natural water samples with different Hg concentrations, giving recoveries from 88.4-96.1% and relative standard deviations <5.1%. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Automated Diagnosis of Myocardial Infarction ECG Signals Using Sample Entropy in Flexible Analytic Wavelet Transform Framework

    Directory of Open Access Journals (Sweden)

    Mohit Kumar

    2017-09-01

    Full Text Available Myocardial infarction (MI) is a silent condition that irreversibly damages the heart muscles. It expands rapidly and, if not treated in time, continues to damage the heart muscles. An electrocardiogram (ECG) is generally used by clinicians to diagnose MI patients. Manual identification of the changes introduced by MI is a time-consuming and tedious task, and there is also a possibility of misinterpretation of the changes in the ECG. Therefore, a method for automatic diagnosis of MI using ECG beats with the flexible analytic wavelet transform (FAWT) is proposed in this work. First, the segmentation of ECG signals into beats is performed. Then, FAWT is applied to each ECG beat, which decomposes them into subband signals. Sample entropy (SEnt) is computed from these subband signals and fed to the random forest (RF), J48 decision tree, back propagation neural network (BPNN), and least-squares support vector machine (LS-SVM) classifiers to choose the highest performing one. We achieved the highest classification accuracy of 99.31% using the LS-SVM classifier. We also incorporated Wilcoxon and Bhattacharya ranking methods and observed no improvement in the performance. The proposed automated method can be installed in the intensive care units (ICUs) of hospitals to aid clinicians in confirming their diagnosis.
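Sample entropy, the feature extracted from each subband, has a compact definition. A plain-Python sketch follows (naive O(n²) counting; the `m` and `r` defaults are common choices in the SampEn literature, not necessarily those of the paper):

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m
    matching within tolerance r*SD (Chebyshev distance, self-matches
    excluded) and A counts pairs of length m+1."""
    n = len(x)
    mean = sum(x) / n
    tol = r * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def pairs(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= tol:
                    count += 1
        return count

    b, a = pairs(m), pairs(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# A predictable signal should score lower than white noise.
rng = random.Random(0)
regular = [math.sin(0.5 * i) for i in range(200)]
noise = [rng.gauss(0.0, 1.0) for _ in range(200)]
```

Lower SampEn indicates more regularity, which is why it can discriminate the altered beat morphology of MI from normal beats after subband decomposition.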

  5. Development of an Automated Security Risk Assessment Methodology Tool for Critical Infrastructures.

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Calvin Dell; Roehrig, Nathaniel S.; Torres, Teresa M.

    2008-12-01

    This document presents the security automated Risk Assessment Methodology (RAM) prototype tool developed by Sandia National Laboratories (SNL). This work leverages SNL's capabilities and skills in security risk analysis and the development of vulnerability assessment/risk assessment methodologies to develop an automated prototype security RAM tool for critical infrastructures (RAM-CI™). The prototype automated RAM tool provides a user-friendly, systematic, and comprehensive risk-based tool to assist CI sector and security professionals in assessing and managing security risk from malevolent threats. The current tool is structured on the basic RAM framework developed by SNL. It is envisioned that this prototype tool will be adapted to meet the requirements of different CI sectors and thereby provide additional capabilities.

  6. A novel flow injection chemiluminescence method for automated and miniaturized determination of phenols in smoked food samples.

    Science.gov (United States)

    Vakh, Christina; Evdokimova, Ekaterina; Pochivalov, Aleksei; Moskvin, Leonid; Bulatov, Andrey

    2017-12-15

    An easily performed fully automated and miniaturized flow injection chemiluminescence (CL) method for determination of phenols in smoked food samples has been proposed. This method includes the ultrasound assisted solid-liquid extraction coupled with gas-diffusion separation of phenols from smoked food sample and analytes absorption into a NaOH solution in a specially designed gas-diffusion cell. The flow system was designed to focus on automation and miniaturization with minimal sample and reagent consumption by inexpensive instrumentation. The luminol - N-bromosuccinimide system in an alkaline medium was used for the CL determination of phenols. The limit of detection of the proposed procedure was 3·10 -8 ·molL -1 (0.01mgkg -1 ) in terms of phenol. The presented method demonstrated to be a good tool for easy, rapid and cost-effective point-of-need screening phenols in smoked food samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Correction of an input function for errors introduced with automated blood sampling

    Energy Technology Data Exchange (ETDEWEB)

    Schlyer, D.J.; Dewey, S.L. [Brookhaven National Lab., Upton, NY (United States)

    1994-05-01

    Accurate kinetic modeling of PET data requires a precise arterial plasma input function. The use of automated blood sampling machines has greatly improved the accuracy, but errors can be introduced by the dispersion of the radiotracer in the sampling tubing. This dispersion results from three effects. The first is the spreading of the radiotracer in the tube due to mass transfer. The second is due to the mechanical action of the peristaltic pump and can be determined experimentally from the width of a step function. The third is the adsorption of the radiotracer on the walls of the tubing during transport through the tube. This is a more insidious effect, since the amount recovered from the end of the tube can be significantly different from that introduced into the tubing. We have measured the simple mass transport using [{sup 18}F]fluoride in water, which we have shown to be quantitatively recovered with no interaction with the tubing walls. We have also carried out experiments with several radiotracers including [{sup 18}F]Haloperidol, [{sup 11}C]L-deprenyl, [{sup 18}F]N-methylspiroperidol ([{sup 18}F]NMS) and [{sup 11}C]buprenorphine. In all cases there was some retention of the radiotracer by untreated silicone tubing. The amount retained in the tubing ranged from 6% for L-deprenyl to 30% for NMS. The retention of the radiotracer was essentially eliminated after pretreatment with the relevant unlabeled compound. For example, less than 2% of the [{sup 18}F]NMS was retained in tubing treated with unlabeled NMS. Similar results were obtained with baboon plasma, although the amount retained in the untreated tubing was less in all cases. From these results it is possible to apply a mathematical correction to the measured input function to account for mechanical dispersion, and to apply a chemical passivation to the tubing to reduce the dispersion due to adsorption of the radiotracer on the tubing walls.
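A common form of such a dispersion correction (a standard model in the PET literature, not necessarily the exact one used by the authors) treats the tubing as a monoexponential dispersion kernel with time constant tau, which inverts to adding tau times the derivative of the measured curve:

```python
import math

def correct_dispersion(measured, dt, tau):
    """Invert monoexponential dispersion (time constant tau):
        measured = true (*) (1/tau) * exp(-t/tau)
    =>  true(t) = measured(t) + tau * d(measured)/dt
    using central differences (one-sided at the ends)."""
    n = len(measured)
    out = []
    for i in range(n):
        if i == 0:
            deriv = (measured[1] - measured[0]) / dt
        elif i == n - 1:
            deriv = (measured[n - 1] - measured[n - 2]) / dt
        else:
            deriv = (measured[i + 1] - measured[i - 1]) / (2.0 * dt)
        out.append(measured[i] + tau * deriv)
    return out

# Example: for a monoexponential measured curve exp(-t/T), the corrected
# curve is (1 - tau/T) * exp(-t/T).
dt, tau, T = 0.1, 0.5, 2.0
g = [math.exp(-i * dt / T) for i in range(100)]
corrected = correct_dispersion(g, dt, tau)
```

The time constant tau would be calibrated from the experimentally measured step response of the pump and tubing.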

  8. Designing an Automated Assessment of Public Speaking Skills Using Multimodal Cues

    Science.gov (United States)

    Chen, Lei; Feng, Gary; Leong, Chee Wee; Joe, Jilliam; Kitchen, Christopher; Lee, Chong Min

    2016-01-01

    Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of…

  9. Fully automated algorithm for wound surface area assessment.

    Science.gov (United States)

    Deana, Alessandro Melo; de Jesus, Sérgio Henrique Costa; Sampaio, Brunna Pileggi Azevedo; Oliveira, Marcelo Tavares; Silva, Daniela Fátima Teixeira; França, Cristiane Miranda

    2013-01-01

    Worldwide, clinicians, dentists, nurses, researchers, and other health professionals need to monitor the wound healing progress and to quantify the rate of wound closure. The aim of this study is to demonstrate, step by step, a fully automated numerical method to estimate the size of the wound and the percentage damaged relative to the body surface area (BSA) in images, without the requirement for human intervention. We included the formula for BSA in rats in the algorithm. The methodology was validated in experimental wounds and human ulcers and was compared with the analysis of an experienced pathologist, with good agreement. Therefore, this algorithm is suitable for experimental wounds and burns and human ulcers, as they have a high contrast with adjacent normal skin. © 2013 by the Wound Healing Society.

  10. Validation of a fully automated robotic setup for preparation of whole blood samples for LC-MS toxicology analysis

    DEFF Research Database (Denmark)

    Andersen, David Wederkinck; Rasmussen, Brian; Linnet, Kristian

    2012-01-01

    A fully automated setup was developed for preparing whole blood samples using a Tecan Evo workstation. By integrating several add-ons to the robotic platform, the flexible setup was able to prepare samples from sample tubes to a 96-well sample plate ready for injection on liquid chromatography-mass spectrometry using several preparation techniques, including protein precipitation, solid-phase extraction and centrifugation, without any manual intervention. Pipetting of a known aliquot of whole blood was achieved by integrating a balance and performing gravimetric measurements. The system was able to handle 1,073 of 1,092 (98.3%) samples of whole blood from forensic material, including postmortem samples, without any need for repeating sample preparation. Only three samples required special treatment such as dilution. The addition of internal and calibration standards was validated by pipetting...

  11. ASPIRE: An automated sample positioning and irradiation system for radiation biology experiments at Inter University Accelerator Centre, New Delhi

    International Nuclear Information System (INIS)

    Kothari, Ashok; Barua, P.; Archunan, M.; Rani, Kusum; Subramanian, E.T.; Pujari, Geetanjali; Kaur, Harminder; Satyanarayanan, V.V.V.; Sarma, Asitikantha; Avasthi, D.K.

    2015-01-01

    An automated irradiation setup for biology samples has been built at Inter University Accelerator Centre (IUAC), New Delhi, India. It can automatically load and unload 20 biology samples in a run of an experiment. It takes about 20 min [2% of the cell doubling time] to irradiate all 20 samples. The cell doubling time is the time taken by the cells (kept in the medium) to double in number. The cells in the samples keep growing during the entire experiment. The fluence delivered to the samples is measured with two silicon surface barrier detectors. Tests show that the uniformity of fluence and dose of heavy ions reaches 2% over the sample area, 40 mm in diameter. The accuracy of the mean fluence at the centre of the target area is within 1%. The irradiation setup can be used for studies of radiation therapy, radiation dosimetry and molecular biology at the heavy ion accelerator. - Highlights: • Automated positioning and irradiation setup for biology samples at IUAC is built. • Loading and unloading of 20 biology samples can be carried out automatically. • Biological cells keep growing during the entire experiment. • Fluence and dose of heavy ions are measured by two silicon barrier detectors. • Uniformity of fluence and dose of heavy ions at the sample position reaches 2%

  12. Assessment of automated disease detection in diabetic retinopathy screening using two-field photography.

    Directory of Open Access Journals (Sweden)

    Keith Goatman

    Full Text Available To assess the performance of automated disease detection in diabetic retinopathy screening using two-field mydriatic photography. Images from 8,271 sequential patient screening episodes from a South London diabetic retinopathy screening service were processed by the Medalytix iGrading™ automated grading system. For each screening episode, macular-centred and disc-centred images of both eyes were acquired and independently graded according to the English national grading scheme. Where discrepancies were found between the automated result and the original manual grade, internal and external arbitration was used to determine the final study grades. Two versions of the software were used: one that detected microaneurysms alone, and one that detected blot haemorrhages and exudates in addition to microaneurysms. Results for each version were calculated once using both fields and once using the macula-centred field alone. Of the 8,271 episodes, 346 (4.2%) were considered unassessable. Referable disease was detected in 587 episodes (7.1%). The sensitivity of the automated system for detecting unassessable images ranged from 97.4% to 99.1% depending on configuration. The sensitivity of the automated system for referable episodes ranged from 98.3% to 99.3%. All episodes that included proliferative or pre-proliferative retinopathy were detected by the automated system regardless of configuration (192/192, 95% confidence interval 98.0% to 100%). If implemented as the first step in grading, the automated system would have reduced the manual grading effort by between 2,183 and 3,147 patient episodes (26.4% to 38.1%). Automated grading can safely reduce the workload of manual grading using two-field, mydriatic photography in a routine screening service.

  13. Suitability of semi-automated tumor response assessment of liver metastases using a dedicated software package

    International Nuclear Information System (INIS)

    Kalkmann, Janine; Ladd, S.C.; Greiff, A. de; Forsting, M.; Stattaus, J.

    2010-01-01

    Purpose: to evaluate the suitability of semi-automated compared to manual tumor response assessment (TRA) of liver metastases. Materials and methods: in total, 32 patients with colorectal cancer and liver metastases were followed over an average of 2.8 contrast-enhanced CT scans. Two observers (O1, O2) measured the longest diameter (LD) of 269 liver metastases manually and semi-automatically using software installed as a thin client on a PACS workstation (LMS-Liver, MEDIAN Technologies). LD and TRA ("progressive", "stable", "partial remission") were performed according to RECIST (Response Evaluation Criteria in Solid Tumors) and analyzed for between-method, interobserver and intraobserver variability. The time needed for evaluation was compared for both methods. Results: all measurements correlated excellently (r ≥ 0.96). Intraobserver (semi-automated), interobserver (manual) and between-method differences (by O1) in LD of 1.4 ± 2.6 mm, 1.9 ± 1.9 mm and 2.1 ± 2.0 mm, respectively, were not significant. Interobserver (semi-automated) and between-method (by O2) differences in LD of 3.0 ± 3.0 mm and 2.6 ± 2.0 mm, respectively, reflected a significant variability (p < 0.01). The interobserver agreement in manual and semi-automated TRA was 91.4%. The intraobserver agreement in semi-automated TRA was 84.5%. Between both methods a TRA agreement of 86.2% was obtained. Semi-automated evaluation (2.7 min) took slightly more time than manual evaluation (2.3 min). Conclusion: semi-automated and manual evaluation of liver metastases yield comparable results in response assessment and require comparable effort. (orig.)
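The RECIST rule applied to those LD measurements reduces to two thresholds. Below is a generic sketch of the published criteria (complete response and new-lesion rules omitted; this is not the LMS-Liver software):

```python
def recist_response(baseline_sum, current_sum, nadir_sum=None):
    """Classify tumor response from sums of longest diameters (LD)
    following the RECIST thresholds: partial remission at >= 30% decrease
    from baseline, progression at >= 20% increase from the nadir,
    otherwise stable."""
    nadir = nadir_sum if nadir_sum is not None else baseline_sum
    if current_sum <= 0.7 * baseline_sum:
        return "partial remission"
    if current_sum >= 1.2 * nadir:
        return "progressive"
    return "stable"
```

For example, a baseline LD sum of 100 mm shrinking to 65 mm is partial remission, while growth from a nadir of 70 mm back to 90 mm is progression even though it remains below baseline.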

  14. Automated tissue classification framework for reproducible chronic wound assessment.

    Science.gov (United States)

    Mukherjee, Rashmi; Manohar, Dhiraj Dhane; Das, Dev Kumar; Achar, Arun; Mitra, Analava; Chakraborty, Chandan

    2014-01-01

    The aim of this paper was to develop a computer assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images grabbed by normal digital camera were first transformed into HSI (hue, saturation, and intensity) color space and subsequently the "S" component of HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely, Bayesian classification and support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated by ground truth images labeled by clinical experts. It was observed that SVM with 3rd order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53%, for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with highest kappa statistic value (0.793).
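The "S" channel the authors select is a fixed color-space formula; the Gonzalez-Woods form of HSI saturation used in most image processing texts is shown below (a sketch of the conversion, not the authors' implementation):

```python
def hsi_saturation(r, g, b):
    """Saturation channel of the HSI color model (Gonzalez-Woods form):
        S = 1 - 3 * min(R, G, B) / (R + G + B)
    with S defined as 0 for pure black. Inputs may be in [0, 1] or
    [0, 255]; the formula is scale-invariant."""
    total = r + g + b
    if total == 0:
        return 0.0
    return 1.0 - 3.0 * min(r, g, b) / total
```

Achromatic pixels (R = G = B) give S = 0, which is why wound tissue against pale skin tends to show higher contrast in the S channel than in any single RGB channel.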

  15. Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment

    Directory of Open Access Journals (Sweden)

    Rashmi Mukherjee

    2014-01-01

    Full Text Available The aim of this paper was to develop a computer assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images grabbed by normal digital camera were first transformed into HSI (hue, saturation, and intensity) color space and subsequently the “S” component of HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely, Bayesian classification and support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated by ground truth images labeled by clinical experts. It was observed that SVM with 3rd order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53%, for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with highest kappa statistic value (0.793).

  16. [Automated Assessment for Bone Age of Left Wrist Joint in Uyghur Teenagers by Deep Learning].

    Science.gov (United States)

    Hu, T H; Huo, Z; Liu, T A; Wang, F; Wan, L; Wang, M W; Chen, T; Wang, Y H

    2018-02-01

    To realize automated bone age assessment by applying deep learning to digital radiography (DR) image recognition of the left wrist joint in Uyghur teenagers, and to explore its practical application value in forensic bone age assessment. The pretreated X-ray films of the left wrist joint, taken from 245 male and 227 female Uyghur teenagers in the Uygur Autonomous Region aged from 13.0 to 19.0 years old, were chosen as subjects. AlexNet was used as a regression model for image recognition. From the total samples above, 60% of the male and female DR images of the left wrist joint were selected as the training set, and 10% of the samples were selected as the validation set. The remaining 30% were used as the test set to obtain the image recognition accuracy within error ranges of ±1.0 and ±0.7 years, respectively, compared to the real age. The modelling results of the deep learning algorithm showed that for error ranges of ±1.0 and ±0.7 years, the accuracy on the training set was 81.4% and 75.6% in males, and 80.5% and 74.8% in females, respectively. For error ranges of ±1.0 and ±0.7 years, the accuracy on the test set was 79.5% and 71.2% in males, and 79.4% and 66.2% in females, respectively. The combination of bone age research on the teenage left wrist joint with deep learning, which has high accuracy and good feasibility, can serve as the research basis of an automatic bone age assessment system for the other joints of the body. Copyright© by the Editorial Department of Journal of Forensic Medicine.
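The accuracy figure used above is simply the share of age estimates falling inside the stated error range. A trivial sketch with hypothetical numbers:

```python
def accuracy_within(predicted, actual, tol):
    """Fraction of age estimates within +/- tol years of the true age,
    the accuracy measure behind the +/-1.0- and +/-0.7-year error ranges."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tol)
    return hits / len(actual)

# Hypothetical predictions vs. chronological ages (years):
pred = [14.2, 15.9, 17.5, 13.0]
true = [14.0, 15.0, 18.5, 13.6]
```

Here `accuracy_within(pred, true, 1.0)` counts all four estimates as correct, while the stricter ±0.7-year range counts only two of them.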

  17. Using Automated Assessment Feedback to Enhance the Quality of Student Learning in Universities: A Case Study

    Science.gov (United States)

    Biggam, John

    There are many different ways of providing university students with feedback on their assessment performance, ranging from written checklists and handwritten commentaries to individual verbal feedback. Regardless of whether the feedback is summative or formative in nature, it is widely recognized that providing consistent, meaningful written feedback to students on assessment performance is not a simple task, particularly where a module is delivered by a team of staff. Typical student complaints about such feedback include: inconsistency of comment between lecturers; illegible handwriting; difficulty in relating feedback to assessment criteria; and vague remarks. For staff themselves, there is the problem that written comments, to be of any benefit to students, are immensely time-consuming. This paper illustrates, through a case study, the enormous benefits of Automated Assessment Feedback for staff and students. A clear strategy on how to develop an automated assessment feedback system, using the simplest of technologies, is provided.

  18. Assessing Writing in MOOCs: Automated Essay Scoring and Calibrated Peer Review™

    Science.gov (United States)

    Balfour, Stephen P.

    2013-01-01

    Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for the way they will score and provide feedback on essays students submit. EdX, MIT and Harvard's non-profit MOOC federation, recently announced that they will use a machine-based Automated Essay Scoring (AES) application to assess written work in…

  19. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

    Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  20. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    Science.gov (United States)

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

    High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI yielding irreproducible results, from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm in the quality assessment of structural brain images, using global and region of interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed, by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  1. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    Science.gov (United States)

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting rate and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution as well as parasitaemia with a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering identical results compared to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size as well as location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of functionality. The second, non-malaria specific, analysis mode of ARAM offers the functionality to detect arbitrary objects
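Parasitaemia is the fraction of parasitized red blood cells among all detected red cells. One reasonable way to attach a confidence interval to such a count is the Wilson score interval; the abstract does not say which method ARAM actually uses, so treat this as an illustrative choice:

```python
import math

def parasitaemia_wilson(parasitized, total, z=1.96):
    """Parasitaemia as the fraction of infected red blood cells, with a
    Wilson score confidence interval for the binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = parasitized / total
    denom = 1.0 + z * z / total
    centre = (p + z * z / (2.0 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / total
                                   + z * z / (4.0 * total * total))
    return p, (centre - half, centre + half)
```

For 50 parasitized cells out of 1,000 counted, the point estimate is 5% with a 95% interval of roughly 3.8% to 6.5%; the interval narrows as more cell objects are analysed.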

  2. Advances toward fully automated in vivo assessment of oral epithelial dysplasia by nuclear endomicroscopy-A pilot study.

    Science.gov (United States)

    Liese, Jan; Winter, Karsten; Glass, Änne; Bertolini, Julia; Kämmerer, Peer Wolfgang; Frerich, Bernhard; Schiefke, Ingolf; Remmerbach, Torsten W

    2017-11-01

    Uncertainties in the detection of oral epithelial dysplasia (OED) frequently result from sampling error, especially in inflammatory oral lesions. Endomicroscopy allows non-invasive, "en face" imaging of the upper oral epithelium, but parameters of OED are unknown. Mucosal nuclei were imaged in 34 toluidine blue-stained oral lesions with a commercial endomicroscopy system. Histopathological diagnosis placed four biopsies in the "dys-/neoplastic," 23 in the "inflammatory," and seven in the "others" disease groups. The strength of different assessment strategies of nuclear scoring, nuclear count, and automated nuclear analysis was measured by area under the ROC curve (AUC) to identify the histopathological "dys-/neoplastic" group. Nuclear objects from automated image analysis were visually corrected. The best-performing parameters of nuclear-to-image ratios were the count of large nuclei (AUC=0.986) and the 6-nearest neighborhood relation (AUC=0.896), and the best parameters of nuclear polymorphism were the count of atypical nuclei (AUC=0.996) and the compactness of nuclei (AUC=0.922). Excluding low-grade OED, nuclear scoring and count reached 100% sensitivity and 98% specificity for the detection of dys-/neoplastic lesions. In automated analysis, the combination of parameters enhanced diagnostic strength. A sensitivity of 100% and a specificity of 87% were seen for distances of 6-nearest neighbors and aspect ratios even in uncorrected objects. Correction improved measures of nuclear polymorphism only. The hue of the background color was stronger than nuclear density (AUC=0.779 vs 0.687) at detecting the dys-/neoplastic group, indicating that the macroscopic aspect is biased. Nuclear-to-image ratios are applicable for automated optical in vivo diagnostics for oral potentially malignant disorders. Nuclear endomicroscopy may promote non-invasive, early detection of dys-/neoplastic lesions by reducing sampling error. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
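The AUC values quoted above can be computed directly from per-lesion scores via the Mann-Whitney formulation, without constructing an explicit ROC curve. A generic sketch with hypothetical scores (the variable names and numbers are illustrative, not the study's data):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """Area under the ROC curve as the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores higher than
    a randomly chosen negative one; ties count as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical per-lesion scores (e.g. counts of atypical nuclei):
dys = [12, 15, 9, 20]       # dys-/neoplastic group
inflam = [2, 5, 9, 1]       # inflammatory group
```

Here `auc_mann_whitney(dys, inflam)` is about 0.97; an AUC of 1.0 means the score separates the two groups perfectly.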

  3. New frontiers in automated assessment: using latent semantic ...

    African Journals Online (AJOL)

    learning technique which has been developed for the computerised assessment of knowledge. LSA employs linear algebra techniques to induce and represent knowledge in high dimensional spaces, and can be used to compare documents in ...

  4. Assessment of Automated Measurement and Verification (M&V) Methods

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Touzani, Samir [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Custodio, Claudine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sohn, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fernandes, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jump, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-07-01

    This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.

  5. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

    Science.gov (United States)

    Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

    2015-04-17

    Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

  6. Access to information: assessment of the use of automated interaction technologies in call centers

    Directory of Open Access Journals (Sweden)

    Fernando de Souza Meirelles

    2011-01-01

    With the purpose of lowering costs and rendering the demanded information available to users with no access to the internet, service companies have adopted automated interaction technologies in their call centers, which may or may not meet the expectations of users. Based on different areas of knowledge (man-machine interaction, consumer behavior and use of IT), 13 propositions are raised and research is carried out in three parts: a focus group, a field study with users and interviews with experts. Eleven automated service characteristics which support the explanation for user satisfaction are listed, a preferences model is proposed and evidence in favor of or against each of the 13 propositions is brought in. Using balanced scorecard concepts, a managerial assessment model is proposed for the use of automated call center technology. In future works, the propositions may become verifiable hypotheses through conclusive empirical research.

  7. Bayesian stratified sampling to assess corpus utility

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Scovel, C.; Thomas, T.; Hall, S.

    1998-12-01

    This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
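    The corpus-wide estimate described can be sketched with a uniform Beta(1,1) prior per stratum: each stratum contributes its posterior mean proportion, weighted by stratum size. The stratum sizes and counts below are invented for illustration, not the paper's data:

```python
def stratified_posterior_mean(strata):
    """strata: list of (stratum_size, n_sampled, n_relevant).
    With a Beta(1,1) prior, a stratum's posterior mean proportion is
    (k + 1) / (n + 2); the corpus estimate is the size-weighted average."""
    total = sum(size for size, _, _ in strata)
    return sum(size / total * (k + 1) / (n + 2)
               for size, n, k in strata)

# Hypothetical strata of a 45,820-document corpus
strata = [(30000, 120, 110),   # long documents: mostly "real"
          (15820, 80, 20)]     # short documents: mostly boilerplate
print(round(stratified_posterior_mean(strata), 3))  # → 0.684
```

Stratifying pays off when the strata differ sharply in their proportion of relevant documents, which is how the authors shrink the sampling uncertainty with only 200 evaluated documents.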

  8. Automation impact study of Army training management 2: Extension of sampling and collection of installation resource data

    Energy Technology Data Exchange (ETDEWEB)

    Sanquist, T.F.; McCallum, M.C.; Hunt, P.S.; Slavich, A.L.; Underwood, J.A.; Toquam, J.L.; Seaver, D.A.

    1989-05-01

    This automation impact study of Army training management (TM) was performed for the Army Development and Employment Agency (ADEA) and the Combined Arms Training Activity (CATA) by the Battelle Human Affairs Research Centers and the Pacific Northwest Laboratory. The primary objective of the study was to provide the Army with information concerning the potential costs and savings associated with automating the TM process. This study expands the sample of units surveyed in Phase I of the automation impact effort (Sanquist et al., 1988), and presents data concerning installation resource management in relation to TM. The structured interview employed in Phase I was adapted to a self-administered survey. The data collected were compatible with that of Phase I, and both were combined for analysis. Three US sites, one reserve division, one National Guard division, and one unit in the active component outside the continental US (OCONUS) (referred to in this report as forward deployed) were surveyed. The total sample size was 459, of which 337 respondents contributed the most detailed data. 20 figs., 62 tabs.

  9. Personal gravimetric dust sampling and risk assessment.

    CSIR Research Space (South Africa)

    Unsted, AD

    1996-03-01

    installation. The first underground site was a highly mechanized, shallow mine, the second site was a shallow mine with conventional mini-longwalls and the third site was a deep mechanized mine. The fourth site was an assay and sample preparation laboratory...

  10. Diagnostic Hearing Assessment in Schools: Validity and Time Efficiency of Automated Audiometry.

    Science.gov (United States)

    Mahomed-Asmail, Faheema; Swanepoel, De Wet; Eikelboom, Robert H

    2016-01-01

    Poor follow-up compliance typically undermines the efficacy of school-based hearing screening programs. Onsite diagnostic audiometry with automation may reduce false positives and ensure directed referrals. To investigate the validity and time efficiency of automated diagnostic air- and bone-conduction audiometry for children in a natural school environment following hearing screening. A within-subject repeated measures design was employed to compare air- and bone-conduction pure-tone thresholds (0.5-4 kHz), measured by manual and automated pure-tone audiometry. Sixty-two children, 25 males and 37 females, with an average age of 8 yr (standard deviation [SD] = 0.92; range = 6-10 yr) were recruited for this study. The participants included 30 children who failed a hearing screening and 32 children who passed a hearing screening. Threshold comparisons were made for air- and bone-conduction thresholds across ears tested with manual and automated audiometry. To avoid a floor effect, thresholds of 15 dB HL were excluded from analyses. The Wilcoxon signed-rank test was used to compare threshold correspondence for manual and automated thresholds, and the paired samples t-test was used to compare test time. Statistical significance was set as p ≤ 0.05. 85.7% of air-conduction thresholds and 44.6% of bone-conduction thresholds corresponded within the normal range (15 dB HL) for manual and automated audiometry. Both manual and automated air- and bone-conduction thresholds exceeded 15 dB HL in 9.9% and 34.0% of thresholds, respectively. For these thresholds, average absolute differences for air- and bone-conduction thresholds were 6.3 (SD = 8.3) and 2.2 dB (SD = 3.6), and they corresponded within 10 dB across frequencies in 87.7% and 100.0% of cases, respectively. There was no significant difference between manual and automated air- and bone-conduction thresholds across frequencies. Using onsite automated diagnostic audiometry...

  11. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subject...

  12. Lab on valve-multisyringe flow injection system (LOV-MSFIA) for fully automated uranium determination in environmental samples.

    Science.gov (United States)

    Avivar, Jessica; Ferrer, Laura; Casas, Montserrat; Cerdà, Víctor

    2011-06-15

    The hyphenation of lab-on-valve (LOV) and multisyringe flow injection analysis (MSFIA), coupled to a long path length liquid waveguide capillary cell (LWCC), allows the spectrophotometric determination of uranium in different types of environmental sample matrices, without any manual pre-treatment, and achieving high selectivity and sensitivity levels. On-line separation and preconcentration of uranium is carried out by means of UTEVA resin. The potential of the LOV-MSFIA makes possible the full automation of the system by the in-line regeneration of the column. After elution, uranium(VI) is spectrophotometrically detected after reaction with arsenazo-III. The determination of levels of uranium present in environmental samples is required in order to establish an environmental control. Thus, we propose a rapid, cheap and fully automated method to determine uranium(VI) in environmental samples. The limit of detection reached is 1.9 ng of uranium; depending on the preconcentrated volume, this corresponds to ppt levels (10.3 ng L(-1)). Different water sample matrices (seawater, well water, freshwater, tap water and mineral water) and a phosphogypsum sample (with natural uranium content) were satisfactorily analyzed. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Assessing Biological Samples with Scanning Probes

    Science.gov (United States)

    Engel, A.

    Scanning probe microscopes raster-scan an atomic-scale sensor across an object. The scanning transmission electron microscope (STEM) uses an electron beam focused to a few Å and measures the electron scattering power of the irradiated column of sample matter. Not only does the STEM create dark-field images of superb clarity, but it also delivers the mass of single protein complexes within a range of 100 kDa to 100 MDa. The STEM appears to be the tool of choice to achieve high-throughput visual proteomics of single cells. In contrast, atomically sharp tips sample the object surface in the scanning tunneling microscope as well as in the atomic force microscope (AFM). Because the AFM can be operated on samples submerged in a physiological salt solution, biomacromolecules can be observed at work. Recent experiments provided new insights into the organization of different native biological membranes and allowed molecular interaction forces that determine protein folds and ligand binding to be measured.

  14. Video and accelerometer-based motion analysis for automated surgical skills assessment.

    Science.gov (United States)

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan

    2018-03-01

    Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to the existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
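    The approximate-entropy feature named above quantifies the regularity of a time series: regular motion yields values near zero, erratic motion yields larger values. A compact, pure-Python version of the standard ApEn definition (the study's actual m and r parameters are not given here, so the ones below are illustrative):

```python
import math

def approx_entropy(u, m, r):
    """Approximate entropy ApEn(m, r) of sequence u: phi(m) - phi(m+1),
    where phi(m) averages the log-frequency of length-m template matches
    within tolerance r (Chebyshev distance, self-matches included)."""
    def phi(m):
        n = len(u) - m + 1
        templates = [u[i:i + m] for i in range(n)]
        freqs = [sum(1 for b in templates
                     if max(abs(x - y) for x, y in zip(a, b)) <= r) / n
                 for a in templates]
        return sum(math.log(f) for f in freqs) / n
    return phi(m) - phi(m + 1)

# A perfectly regular (alternating) series has near-zero ApEn
print(round(approx_entropy([1, 2] * 5, m=2, r=0.5), 3))  # → 0.006
```

In the paper's setting, such values would be computed over accelerometer traces of suturing motion and fed to a classifier as skill features.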

  15. Activity-based Sustainability Assessment of Highly Automated Manufacturing

    DEFF Research Database (Denmark)

    Rödger, Jan-Markus; Bey, Niki; Alting, Leo

    Sustainability of technology is a multifaceted endeavor, and a main requirement from industry is to make it a profitable business case with clearly defined targets. To achieve that, a new assessment framework and applicable method [1] is presented which has been developed closely with industry. It uses a top-down decision-making process known from financial target setting for each cost center and the well-known life-cycle perspective according to ISO 14040 [2] in Sustainability Assessment. Thereby it is possible to allocate absolute environmental thresholds of functionalities (e.g. "transportation") down to the smallest production units by using activity-based target setting in a consistent way, to lower risks in the planning phase of products and production.

  16. Gaia: automated quality assessment of protein structure models.

    Science.gov (United States)

    Kota, Pradeep; Ding, Feng; Ramachandran, Srinivas; Dokholyan, Nikolay V

    2011-08-15

    Increasing use of structural modeling for understanding structure-function relationships in proteins has led to the need to ensure that the protein models being used are of acceptable quality. The quality of a given protein structure can be assessed by comparing various intrinsic structural properties of the protein to those observed in high-resolution protein structures. In this study, we present tools to compare a given structure to high-resolution crystal structures. We assess packing by calculating the total void volume, the percentage of unsatisfied hydrogen bonds, the number of steric clashes and the scaling of the accessible surface area. We assess covalent geometry by determining bond lengths, angles, dihedrals and rotamers. The statistical parameters for the above measures, obtained from high-resolution crystal structures, enable us to provide a quality score that points to specific areas where a given protein structural model needs improvement. We provide these tools, which appraise protein structures, in the form of a web server, Gaia (http://chiron.dokhlab.org). Gaia evaluates the packing and covalent geometry of a given protein structure and provides quantitative comparison of the given structure to high-resolution crystal structures. dokh@unc.edu. Supplementary data are available at Bioinformatics online.

  17. Comparison between manual and automated techniques for assessment of data from dynamic antral scintigraphy

    International Nuclear Information System (INIS)

    Misiara, Gustavo P.; Troncon, Luiz E.A.; Secaf, Marie; Moraes, Eder R.

    2008-01-01

    This work aimed at determining whether data from dynamic antral scintigraphy (DAS) yielded by a simple, manual technique are as accurate as those generated by a conventional automated technique (fast Fourier transform) for assessing gastric contractility. Seventy-one stretches (4 min) of 'activity versus time' curves obtained by DAS from 10 healthy volunteers and 11 functional dyspepsia patients, after ingesting a liquid meal (320 ml, 437 kcal) labeled with technetium-99m (99mTc)-phytate, were independently analyzed by manual and automated techniques. Data obtained by both techniques for the frequency of antral contractions were similar. Contraction amplitude determined by the manual technique was significantly higher than that estimated by the automated method, in both patients and controls. The contraction frequency 30 min post-meal was significantly lower in patients than in controls, which was correctly shown by both techniques. A manual technique using ordinary resources of the gamma camera workstation, despite yielding higher figures for the amplitude of gastric contractions, is as accurate as the conventional automated technique of DAS analysis. These findings may favor a more intensive use of DAS coupled to gastric emptying studies, which would provide a more comprehensive assessment of gastric motor function in disease. (author)

  18. Automated Cognitive Health Assessment Using Smart Home Monitoring of Complex Tasks.

    Science.gov (United States)

    Dawadi, Prafulla N; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2013-11-01

    One of the many services that intelligent systems can provide is the automated assessment of resident well-being. We hypothesize that the functional health of individuals, or ability of individuals to perform activities independently without assistance, can be estimated by tracking their activities using smart home technologies. In this paper, we introduce a machine learning-based method for assessing activity quality in smart homes. To validate our approach we quantify activity quality for 179 volunteer participants who performed a complex, interweaved set of activities in our smart home apartment. We observed a statistically significant correlation (r=0.79) between automated assessment of task quality and direct observation scores. Using machine learning techniques to predict the cognitive health of the participants based on task quality is accomplished with an AUC value of 0.64. We believe that this capability is an important step in understanding everyday functional health of individuals in their home environments.

  19. The automated sample preparation system MixMaster for investigation of volatile organic compounds with mid-infrared evanescent wave spectroscopy.

    Science.gov (United States)

    Vogt, F; Karlowatz, M; Jakusch, M; Mizaikoff, B

    2003-04-01

    For efficient development, assessment, and calibration of new chemical analyzers, a large number of independently prepared samples of target analytes is necessary. Whereas mixing units for gas analysis are readily available, there is a lack of instrumentation for accurate preparation of liquid samples containing volatile organic compounds (VOCs). Manual preparation of liquid samples containing VOCs at trace concentration levels is a particularly challenging and time-consuming task. Furthermore, regularly scheduled calibration of sensors and analyzer systems demands computer-controlled automated sample preparation systems. In this paper we present a novel liquid mixing device enabling extensive measurement series with a focus on volatile organic compounds, facilitating analysis of water polluted by traces of volatile hydrocarbons. After discussing the mixing system and control software, first results obtained by coupling with an FT-IR spectrometer are reported. Properties of the mixing system are assessed by mid-infrared attenuated total reflection (ATR) spectroscopy of methanol-acetone mixtures and by investigation of multicomponent samples containing volatile hydrocarbons such as 1,2,4-trichlorobenzene and tetrachloroethylene. The obtained ATR spectra are evaluated by principal component regression (PCR) algorithms. It is demonstrated that the presented sample mixing device provides reliable multicomponent mixtures with sufficient accuracy and reproducibility at trace concentration levels.
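    The ATR spectra here are evaluated with principal component regression: project the (highly collinear) absorbance channels onto a few principal components, then regress concentration on the component scores. A minimal PCR via SVD, using NumPy, might look like the following; the two-channel "spectra" and component count are placeholders, not the paper's calibration data:

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal component regression: center, project onto the leading
    principal components, regress in score space, map coefficients back."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T          # loadings (channels x components)
    T = Xc @ V                       # scores   (samples  x components)
    b = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]
    coef = V @ b                     # back to original channel space
    return lambda X_new: (X_new - x_mean) @ coef + y_mean

# Hypothetical two-channel absorbances and analyte concentrations
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.1]])
y = np.array([0.0, 2.0, 4.0, 6.1])
predict = pcr_fit(X, y, n_components=2)
```

With n_components equal to the effective rank, PCR reduces to ordinary least squares; choosing fewer components regularizes against the collinear absorbance channels typical of ATR spectra.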

  20. Automated left heart chamber volumetric assessment using three-dimensional echocardiography in Chinese adolescents

    Directory of Open Access Journals (Sweden)

    Xiu-Xia Luo

    2017-10-01

    Background: Several studies have reported the accuracy and reproducibility of HeartModel for automated determination of three-dimensional echocardiography (3DE)-derived left heart volumes and left ventricular (LV) ejection fraction (LVEF) in adult patients. However, it remains unclear whether this automated adaptive analytics algorithm, derived from a ‘training’ population, can encompass adequate echo images in Chinese adolescents. Objectives: The aim of our study was to explore the accuracy of HeartModel in adolescents compared with expert manual three-dimensional (3D) echocardiography. Methods: Fifty-three Chinese adolescent subjects with or without heart disease underwent 3D echocardiographic imaging with an EPIQ system (Philips). 3D cardiac volumes and LVEF obtained with the automated HeartModel program were compared with manual 3D echocardiographic measurements by an experienced echocardiographer. Results: There was strong correlation between HeartModel and expert manual 3DE measurements (r = 0.875–0.965, all P < 0.001). Automated LV and left atrial (LA) volumes were slightly overestimated when compared to expert manual measurements, while LVEF showed no significant differences from the manual method. Importantly, the intra- and inter-observer variability of the automated 3D echocardiographic model was relatively low (<1%), surpassing the manual approach (3.5–17.4%), yet requiring significantly less analyzing time (20 ± 7 vs 177 ± 30 s, P < 0.001). Conclusion: Simultaneous quantification of left heart volumes and LVEF with the automated HeartModel program is rapid, accurate and reproducible in a Chinese adolescent cohort. Therefore, it has the potential to bring 3D echocardiographic assessment of left heart chamber volumes and function into busy pediatric practice.

  1. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique

    International Nuclear Information System (INIS)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia; Lopez, Yon; Urquizo, Rafael

    2014-01-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation process of the neutron activation analysis technique. The device is formed by an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples to be sent, using the pneumatic transfer system, from the laboratory to the irradiation position. The system is operated by a control switchboard that sends and returns capsules at a variable preset time and by two different routes, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called an 'exchange valve', was designed for changing travel paths (pipelines), allowing the irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device is described and presented in this article. (authors).

  2. Operator functional state assessment for adaptive automation implementation

    Science.gov (United States)

    Wilson, Glenn F.

    2005-05-01

    Mission success in military operations depends upon optimal functioning of all system components, including the human operator. The cognitive demands of current systems can exceed the capabilities of the human operator. In some situations, such as Unmanned Combat Air Vehicle (UCAV) operations, one operator may be required to supervise several vehicles simultaneously. The functional state of the human operator is not currently considered in the overall system assessment. It has been assumed that the operator could "manage" any situation given a well-designed system. However, with the requirement to remotely monitor several vehicles simultaneously during combat comes the possibility of cognitive overload, which increases the probability of committing errors. We have developed on-line measures of operator functional state using psychophysiological measures. These measures provide estimates of how well an operator can deal with the current task demands. When the operator is cognitively overloaded, the system may be able to implement adaptive aiding procedures, reducing the task demands on the operator and thereby improving mission success. We have demonstrated correct assessment of the functional state of the operator with accuracies of 95% or better. Psychophysiological measures were used with classifiers such as artificial neural networks. In one study, adaptive aiding was applied when the classifier determined operator overload. The aiding resulted in significantly improved performance.

  3. Towards Automated and Objective Assessment of Fabric Pilling

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2014-10-01

    Pilling is a complex property of textile fabrics, representing, for the final user, a non-desired feature to be controlled and measured by companies working in the textile industry. Traditionally, pilling is assessed by visually comparing fabrics with reference to a set of standard images, thus often resulting in inconsistent quality control. A number of methods using machine vision have been proposed all over the world, with almost all sharing the idea that pilling can be assessed by determining the number of pills or the area occupied by the pills on the fabric surface. In the present work a different approach is proposed: instead of determining the number of pills, a machine vision-based procedure is devised with the aim of extracting a number of parameters characterizing the fabric. These are then used to train an artificial neural network to automatically grade the fabrics in terms of pilling. Tested against a set of differently pilled fabrics, the method shows its effectiveness.

  4. Automated Image Sampling and Classification Can Be Used to Explore Perceived Naturalness of Urban Spaces.

    Directory of Open Access Journals (Sweden)

    Roger Hyam

    The psychological restorative effects of exposure to nature are well established and extend to the mere viewing of images of nature. A previous study has shown that the Perceived Naturalness (PN) of images correlates with their restorative value. This study tests whether it is possible to detect the degree of PN of images using an image classifier. It takes images that have been scored by humans for PN (including a subset that have been assessed for restorative value) and passes them through the Google Vision API image classification service. The resulting labels are assigned to broad semantic classes to create a Calculated Semantic Naturalness (CSN) metric for each image. It was found that CSN correlates with PN. CSN was then calculated for a geospatial sampling of Google Street View images across the city of Edinburgh. CSN was found to correlate with PN in this sample also, indicating the technique may be useful in large-scale studies. Because CSN correlates with PN, which correlates with restorativeness, it is suggested that CSN or a similar measure may be useful in automatically detecting restorative images and locations. In an exploratory aside, CSN was not found to correlate with an indicator of socioeconomic deprivation.
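    The CSN step, assigning classifier labels to broad semantic classes and scoring each image by the weight of "natural" labels, can be sketched as follows. The class list and the label/score pairs are invented for illustration; the study used actual Google Vision API output:

```python
# Hypothetical set of semantic classes treated as "natural"
NATURAL = {"tree", "plant", "grass", "sky", "water", "flower"}

def calculated_semantic_naturalness(labels):
    """labels: list of (label, confidence) pairs from an image classifier.
    CSN here = share of total label confidence falling on natural classes."""
    total = sum(score for _, score in labels)
    natural = sum(score for label, score in labels if label in NATURAL)
    return natural / total if total else 0.0

# Invented classifier output for one street-view image
score = calculated_semantic_naturalness(
    [("tree", 0.9), ("building", 0.6), ("grass", 0.5)])
```

Computing this score over a geospatial sample of street-view images and correlating it with human PN ratings is essentially the study's pipeline.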

  5. Automating data analysis for two-dimensional gas chromatography/time-of-flight mass spectrometry non-targeted analysis of comparative samples.

    Science.gov (United States)

    Titaley, Ivan A; Ogba, O Maduka; Chibwe, Leah; Hoh, Eunha; Cheong, Paul H-Y; Simonich, Staci L Massey

    2018-03-16

    Non-targeted analysis of environmental samples, using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC/ToF-MS), poses significant data analysis challenges due to the large number of possible analytes. Non-targeted data analysis of complex mixtures is prone to human bias and is laborious, particularly for comparative environmental samples such as contaminated soil pre- and post-bioremediation. To address this research bottleneck, we developed OCTpy, a Python™ script that acts as a data reduction filter to automate GC × GC/ToF-MS data analysis from LECO ® ChromaTOF ® software and facilitates selection of analytes of interest based on peak area comparison between comparative samples. We used data from polycyclic aromatic hydrocarbon (PAH) contaminated soil, pre- and post-bioremediation, to assess the effectiveness of OCTpy in facilitating the selection of analytes that have formed or degraded following treatment. Using datasets from the soil extracts pre- and post-bioremediation, OCTpy selected, on average, 18% of the initial suggested analytes generated by the LECO ® ChromaTOF ® software Statistical Compare feature. Based on this list, 63-100% of the candidate analytes identified by a highly trained individual were also selected by OCTpy. This process was accomplished in several minutes per sample, whereas manual data analysis took several hours per sample. OCTpy automates the analysis of complex mixtures of comparative samples, reduces the potential for human error during heavy data handling and decreases data analysis time by at least tenfold. Copyright © 2018 Elsevier B.V. All rights reserved.
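    OCTpy's selection step, keeping only analytes whose peak areas differ materially between comparative samples (formed or degraded after treatment), can be approximated as below. The function name, fold threshold, floor value and peak tables are illustrative, not OCTpy's actual interface:

```python
def select_changed_analytes(pre, post, fold=3.0, floor=1.0):
    """pre/post: dicts mapping analyte name -> peak area in the pre- and
    post-treatment samples. Keep analytes whose area ratio rose or fell
    by at least `fold`; `floor` avoids division by zero for absent peaks."""
    changed = {}
    for name in sorted(set(pre) | set(post)):
        a = max(pre.get(name, 0.0), floor)
        b = max(post.get(name, 0.0), floor)
        ratio = b / a
        if ratio >= fold or ratio <= 1.0 / fold:
            changed[name] = ratio
    return changed

# Invented peak tables for soil extracts pre- and post-bioremediation
pre = {"PAH1": 100.0, "PAH2": 10.0, "x": 50.0}
post = {"PAH1": 10.0, "PAH2": 11.0, "x": 400.0}
changed = select_changed_analytes(pre, post)
print(sorted(changed))  # → ['PAH1', 'x']
```

A filter of this shape is what turns thousands of ChromaTOF-suggested analytes into the short candidate list a human then reviews.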

  6. Automated retroillumination photography analysis for objective assessment of Fuchs Corneal Dystrophy severity

    Science.gov (United States)

    Eghrari, Allen O.; Mumtaz, Aisha A.; Garrett, Brian; Rezaei, Mahsa; Akhavan, Mina S.; Riazuddin, S. Amer; Gottsch, John D.

    2016-01-01

    Purpose: Retroillumination photography analysis (RPA) is an objective tool for assessment of the number and distribution of guttae in eyes affected with Fuchs Corneal Dystrophy (FCD). Current protocols include manual processing of images; here we assess the validity and interrater reliability of automated analysis across various levels of FCD severity. Methods: Retroillumination photographs of 97 FCD-affected corneas were acquired and total counts of guttae previously summated manually. For each cornea, a single image was loaded into ImageJ software. We reduced color variability and subtracted background noise. Reflection of light from each gutta was identified as a local area of maximum intensity and counted automatically. The noise tolerance level was titrated for each cornea by examining a small region of each image with automated overlay to ensure appropriate coverage of individual guttae. We tested interrater reliability of automated counts of guttae across a spectrum of clinical and educational experience. Results: A set of 97 retroillumination photographs was analyzed. Clinical severity as measured by a modified Krachmer scale ranged from a severity level of 1 to 5 in the set of analyzed corneas. Automated counts by an ophthalmologist correlated strongly with Krachmer grading (R2=0.79) and manual counts (R2=0.88). The intraclass correlation coefficient demonstrated strong correlation, at 0.924 (95% CI, 0.870-0.958) among cases analyzed by three students, and 0.869 (95% CI, 0.797-0.918) among cases for which images were analyzed by an ophthalmologist and two students. Conclusions: Automated RPA allows for grading of FCD severity with high resolution across a spectrum of disease severity. PMID:27811565
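    The counting step, each gutta's reflection found as a local intensity maximum above a titrated noise tolerance, can be sketched on a plain 2-D array. This is a stand-in for ImageJ's "Find Maxima", with a made-up image, not the study's pipeline:

```python
def count_local_maxima(img, threshold):
    """Count interior pixels strictly brighter than all 8 neighbours and
    above the noise-tolerance threshold."""
    h, w = len(img), len(img[0])
    count = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = img[i][j]
            if v <= threshold:
                continue
            neighbours = [img[i + di][j + dj]
                          for di in (-1, 0, 1) for dj in (-1, 0, 1)
                          if (di, dj) != (0, 0)]
            if all(v > n for n in neighbours):
                count += 1
    return count

# Toy 5x5 "retroillumination image" with two bright guttae reflections
img = [[0] * 5 for _ in range(5)]
img[1][1], img[3][3] = 10, 8
print(count_local_maxima(img, threshold=5))  # → 2
```

Titrating `threshold` per cornea mirrors the study's per-image noise-tolerance adjustment: too low over-counts noise, too high misses dim guttae.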

  7. EXPERIMENTS TOWARDS DETERMINING BEST TRAINING SAMPLE SIZE FOR AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Sunil Kumar C

    2014-01-01

    Full Text Available With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, there aren’t many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. Our experiments are designed to cater to our primary goal of identifying the optimum training sample size so as to obtain optimum auto scoring. Besides the technical overview and the experiment design, the paper also covers the challenges and benefits of the system, and discusses interdisciplinary areas for future research on this topic.

  8. A First Step for the Automation of Fugl-Meyer Assessment Scale for Stroke Subjects in Upper Limb Physical Neurorehabilitation.

    Science.gov (United States)

    Villán-Villán, Mailin A; Pérez-Rodríguez, Rodrigo; Gómez, Cristina; Opisso, Eloy; Tormos, Jose; Medina, Josep; Gómez, Enrique J

    2015-01-01

    This paper proposes a first approach for the automation of the Fugl-Meyer assessment scale used in physical neurorehabilitation. The main goal of this research is to automatically estimate an objective measurement for five Fugl-Meyer scale items related to the assessment of the upper limb motion. An objective score has been calculated for 7 patients. Obtained results indicate that the automation of the scale can be a useful tool for the objective assessment of upper limb motion of stroke survivors.

  9. High-frequency, long-duration water sampling in acid mine drainage studies: a short review of current methods and recent advances in automated water samplers

    Science.gov (United States)

    Chapin, Thomas

    2015-01-01

    Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.

  10. The role of optical flow in automated quality assessment of full-motion video

    Science.gov (United States)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines, and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.

  11. Donor disc attachment assessment with intraoperative spectral optical coherence tomography during descemet stripping automated endothelial keratoplasty

    Directory of Open Access Journals (Sweden)

    Edward Wylegala

    2013-01-01

    Full Text Available Optical coherence tomography has already been proven to be useful for pre- and post-surgical anterior eye segment assessment, especially in lamellar keratoplasty procedures. There is no evidence for the intraoperative usefulness of optical coherence tomography (OCT). We present a case report of intraoperative donor disc attachment assessment with spectral-domain optical coherence tomography in a case of Descemet stripping automated endothelial keratoplasty (DSAEK) surgery combined with corneal incisions. The effectiveness of the performed corneal stab incisions was visualized directly by OCT scan analysis. OCT-assisted DSAEK allows assessment of the accuracy of the Descemet stripping and donor disc attachment.

  12. Feasibility studies of safety assessment methods for programmable automation systems. Final report of the AVV project

    International Nuclear Information System (INIS)

    Haapanen, P.; Maskuniitty, M.; Pulkkinen, U.; Heikkinen, J.; Korhonen, J.; Tuulari, E.

    1995-10-01

    Feasibility studies of two different groups of methodologies for the safety assessment of programmable automation systems have been executed at the Technical Research Centre of Finland (VTT). The studies concerned dynamic testing methods and the fault tree (FT) and failure mode and effects analysis (FMEA) methods. In order to gain real experience in the application of these methods, experimental testing of two realistic pilot systems was executed and an FT/FMEA analysis of a programmable safety function was accomplished. The purpose of the studies was not to assess the object systems, but to gain experience in the application of the methods and to assess their potentials and development needs. (46 refs., 21 figs.)

  13. Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback.

    Science.gov (United States)

    van der Krieke, Lian; Blaauw, Frank J; Emerencia, Ando C; Schenk, Hendrika M; Slaets, Joris P J; Bos, Elisabeth H; de Jonge, Peter; Jeronimus, Bertus F

    Recent developments in research and mobile health enable a quantitative idiographic approach in health research. The present study investigates the potential of an electronic diary crowdsourcing study in the Netherlands for (1) large-scale automated self-assessment for individual-based health promotion and (2) enabling research at both the between-persons and within-persons level. To illustrate the latter, we examined between-persons and within-persons associations between somatic symptoms and quality of life. A website provided the general Dutch population access to a 30-day (3 times a day) diary study assessing 43 items related to health and well-being, which gave participants personalized feedback. Associations between somatic symptoms and quality of life were examined with a linear mixed model. A total of 629 participants completed 28,430 assessments, with a mean (SD) of 45 (32) assessments per participant. Most participants (n = 517 [82%]) were women and 531 (84%) were highly educated. Almost 40% of the participants (n = 247) completed enough assessments (t = 68) to generate personalized feedback including temporal dynamics between well-being, health behavior, and emotions. Substantial between-person variability was found in the within-person association between somatic symptoms and quality of life. We successfully built an application for automated diary assessments and personalized feedback. The application was used by a sample of mainly highly educated women, which suggests that the potential of our intensive diary assessment method for large-scale health promotion is limited. However, a rich data set was collected that allows for group-level and idiographic analyses, which can shed light on etiological processes and may contribute to the development of empirically based health promotion solutions.

  14. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity for 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  15. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh Shaari, Syirrazie Bin Che; Azman, Azraf B. [Technical Support Division, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia); Salim, Nazaratul Ashifa Bt. Abdullah [Division of Waste and Environmental Technology, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia); Ismail, Nadiah Binti [Fakulti Kejuruteraan Elektrik, UiTM Pulau Pinang, 13500 Permatang Pauh, Pulau Pinang (Malaysia)

    2015-04-29

    Neutron Activation Analysis (NAA) has been established in Nuclear Malaysia since the 1980s. Most of the established procedures were carried out manually, including sample registration. The samples were recorded manually in a logbook and given an ID number. Then all samples, standards, SRMs and blanks were recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project on ‘Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)’. The objective of the project is to create PC-based data entry software for the sample preparation stage. This is an effective method to replace the redundant manual data entries that need to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software was developed using National Instruments LabVIEW 8.6.

  16. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  17. A fully automated Drosophila olfactory classical conditioning and testing system for behavioral learning and memory assessment.

    Science.gov (United States)

    Jiang, Hui; Hanna, Eriny; Gatto, Cheryl L; Page, Terry L; Bhuva, Bharat; Broadie, Kendal

    2016-03-01

    Aversive olfactory classical conditioning has been the standard method to assess Drosophila learning and memory behavior for decades, yet training and testing are conducted manually under exceedingly labor-intensive conditions. To overcome this severe limitation, a fully automated, inexpensive system has been developed, which allows accurate and efficient Pavlovian associative learning/memory analyses for high-throughput pharmacological and genetic studies. The automated system employs a linear actuator coupled to an odorant T-maze with airflow-mediated transfer of animals between training and testing stages. Odorant, airflow and electrical shock delivery are automatically administered and monitored during training trials. Control software allows operator-input variables to define parameters of Drosophila learning, short-term memory and long-term memory assays. The approach allows accurate learning/memory determinations with operational fail-safes. Automated learning indices (immediately post-training) and memory indices (after 24h) are comparable to traditional manual experiments, while minimizing experimenter involvement. The automated system provides vast improvements over labor-intensive manual approaches with no experimenter involvement required during either training or testing phases. It provides quality control tracking of airflow rates, odorant delivery and electrical shock treatments, and an expanded platform for high-throughput studies of combinational drug tests and genetic screens. The design uses inexpensive hardware and software for a total cost of ∼$500US, making it affordable to a wide range of investigators. This study demonstrates the design, construction and testing of a fully automated Drosophila olfactory classical association apparatus to provide low-labor, high-fidelity, quality-monitored, high-throughput and inexpensive learning and memory behavioral assays. Copyright © 2015 Elsevier B.V. All rights reserved.
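The learning and memory indices mentioned above are typically computed as a normalized avoidance score from the T-maze choice counts. The sketch below uses a common scoring convention for such assays, not necessarily the exact formula implemented in this apparatus:

```python
def performance_index(n_avoiding_cs_plus, n_choosing_cs_plus):
    """Performance index for a T-maze olfactory conditioning trial:
    (flies avoiding the shock-paired odour - flies choosing it) / total,
    scaled to a percentage. 100 = perfect learning, 0 = chance."""
    total = n_avoiding_cs_plus + n_choosing_cs_plus
    if total == 0:
        raise ValueError("no flies scored in this trial")
    return 100.0 * (n_avoiding_cs_plus - n_choosing_cs_plus) / total

# Hypothetical trial: 80 flies avoid the shock-paired odour, 20 approach it.
print(performance_index(80, 20))  # → 60.0
```

Averaging such indices immediately after training gives a learning index; repeating after 24 h gives a memory index, as in the abstract.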

  18. A conceptual model of the automated credibility assessment of the volunteered geographic information

    International Nuclear Information System (INIS)

    Idris, N H; Jackson, M J; Ishak, M H I

    2014-01-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support the development of the Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assessing these data is to rely on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches for automatically assessing the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. Two main components are proposed for assessment in the conceptual model: metadata and data. The metadata component comprises indicators for the hosting websites and the sources of the data/information. The data component comprises indicators to assess absolute and relative data positioning, and attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess both components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess correctness and consistency in the data component, we suggest a matching validation approach using emerging technologies from Linked Data infrastructures, together with third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality of data contributed by citizen web providers.

  19. Client-server architecture applied to system automation radioactivity sampling in atmosphere

    Science.gov (United States)

    Hubbard, C. W.; McKinnon, A. D.

    1997-06-01

    Control software for an automated particulate air sampler is described. The software is divided into a number of small, cooperating server processes, each of which is responsible for the control of a particular device or subsystem. For each process, an effort was made to isolate the details of the underlying device or subsystem from the server interface. This made it possible to change the hardware without making changes to any of the server's client processes. A single supervisor process is responsible for overall system control. The design of the control algorithm was facilitated by employing a state machine model. Such a model is easy to study, easy to modify, and provides a clear understanding of the control mechanism to programmers and non-programmers alike. A state machine library was developed which greatly eased the task of implementing the design and ensured that the control algorithm detailed by the state machine model was the same algorithm that was actually employed.
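A table-driven state machine of the kind the authors describe can be sketched in a few lines; transitions map a (state, event) pair to a next state and an action. The states, events and actions below are hypothetical stand-ins for the sampler's actual control states:

```python
class StateMachine:
    """Minimal table-driven state machine: transitions map
    (state, event) -> (next_state, action)."""

    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): (next_state, action)}

    def handle(self, event):
        """Fire an event: run the transition's action, move to the next state."""
        try:
            next_state, action = self.transitions[(self.state, event)]
        except KeyError:
            raise ValueError(f"event {event!r} not valid in state {self.state!r}")
        if action:
            action()
        self.state = next_state
        return self.state

# Hypothetical supervisor for the air sampler: idle -> sampling -> measuring.
log = []
sampler = StateMachine("IDLE", {
    ("IDLE", "start"):       ("SAMPLING",  lambda: log.append("pump on")),
    ("SAMPLING", "timeout"): ("MEASURING", lambda: log.append("pump off")),
    ("MEASURING", "done"):   ("IDLE",      lambda: log.append("report sent")),
})
sampler.handle("start")
sampler.handle("timeout")
print(sampler.state, log)  # → MEASURING ['pump on', 'pump off']
```

Because the transition table is plain data, it can be reviewed by non-programmers, which is exactly the clarity benefit the abstract attributes to the state machine model.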

  20. Automated method for simultaneous lead and strontium isotopic analysis applied to rainwater samples and airborne particulate filters (PM10).

    Science.gov (United States)

    Beltrán, Blanca; Avivar, Jessica; Mola, Montserrat; Ferrer, Laura; Cerdà, Víctor; Leal, Luz O

    2013-09-03

    A new automated, sensitive, and fast system for the simultaneous online isolation and preconcentration of lead and strontium by sorption on a microcolumn packed with Sr-resin using an inductively coupled plasma mass spectrometry (ICP-MS) detector was developed, hyphenating lab-on-valve (LOV) and multisyringe flow injection analysis (MSFIA). Pb and Sr are directly retained on the sorbent column and eluted with a solution of 0.05 mol L(-1) ammonium oxalate. The detection limits achieved were 0.04 ng for lead and 0.03 ng for strontium. Mass calibration curves were used since the proposed system allows the use of different sample volumes for preconcentration. Mass linear working ranges were between 0.13 and 50 ng and 0.1 and 50 ng for lead and strontium, respectively. The repeatability of the method, expressed as RSD, was 2.1% and 2.7% for Pb and Sr, respectively. Environmental samples such as rainwater and airborne particulate (PM10) filters as well as a certified reference material SLRS-4 (river water) were satisfactorily analyzed obtaining recoveries between 90 and 110% for both elements. The main features of the LOV-MSFIA-ICP-MS system proposed are the capability to renew solid phase extraction at will in a fully automated way, the remarkable stability of the column which can be reused up to 160 times, and the potential to perform isotopic analysis.
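The mass calibration curves mentioned above (signal plotted against preconcentrated analyte mass, so that different preconcentration volumes can share one curve) amount to a linear fit and its inversion. A minimal sketch with invented standards, not the paper's data:

```python
import numpy as np

def mass_calibration(masses_ng, signals):
    """Fit a linear mass calibration curve (signal = a * mass + b) and
    return a function mapping a measured signal back to analyte mass (ng)."""
    a, b = np.polyfit(masses_ng, signals, 1)
    return lambda signal: (signal - b) / a

# Hypothetical Pb standards spanning the stated 0.13-50 ng working range.
to_mass = mass_calibration([0.13, 1.0, 5.0, 10.0, 50.0],
                           [0.26, 2.0, 10.0, 20.0, 100.0])
print(round(to_mass(50.0), 2))  # → 25.0
```

Because the curve is expressed in mass rather than concentration, a sample preconcentrated from any volume can be quantified with the same fit, which is the flexibility the abstract highlights.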

  1. Feasibility of Commercially Available, Fully Automated Hepatic CT Volumetry for Assessing Both Total and Territorial Liver Volumes in Liver Transplantation

    International Nuclear Information System (INIS)

    Shin, Cheong Il; Kim, Se Hyung; Rhim, Jung Hyo; Yi, Nam Joon; Suh, Kyung Suk; Lee, Jeong Min; Han, Joon Koo; Choi, Byung Ihn

    2013-01-01

    To assess the feasibility of commercially available, fully automated hepatic CT volumetry for measuring both total and territorial liver volumes by comparing it with interactive manual volumetry and measured ex-vivo liver volume. For the assessment of total and territorial liver volume, portal phase CT images of 77 recipients and 107 donors who donated the right hemiliver were used. Liver volume was measured using both the fully automated and interactive manual methods with the Advanced Liver Analysis software. The quality of the automated segmentation was graded on a 4-point scale. Grading was performed by two radiologists in consensus. For the cases with excellent-to-good quality, the accuracy of automated volumetry was compared with interactive manual volumetry and the measured ex-vivo liver volume, which was converted from weight, using the analysis of variance test and Pearson or Spearman correlation tests. Processing times for the automated and interactive manual methods were also compared. Excellent-to-good quality of automated segmentation for the total liver and right hemiliver was achieved in 57.1% (44/77) and 17.8% (19/107), respectively. For both total and right hemiliver volumes, there were no significant differences among automated, manual, and ex-vivo volumes except between the automated volume and the manual volume of the total liver (p = 0.011). There were good correlations between the automated volume and the ex-vivo liver volume (γ = 0.637 for the total liver and γ = 0.767 for the right hemiliver). Both correlation coefficients were higher than those obtained with the manual method. Fully automated volumetry required significantly less time than the interactive manual method (total liver: 48.6 sec vs. 53.2 sec; right hemiliver: 182 sec vs. 244.5 sec). Fully automated hepatic CT volumetry is feasible and time-efficient for total liver volume measurement. However, its usefulness for territorial liver volumetry needs to be improved.

  2. a Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    Science.gov (United States)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause -bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation; and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. High proficiency is associated with a high degree of simultaneity and

  3. A semi-automated method for bone age assessment using cervical vertebral maturation.

    Science.gov (United States)

    Baptista, Roberto S; Quaglio, Camila L; Mourad, Laila M E H; Hummel, Anderson D; Caetano, Cesar Augusto C; Ortolani, Cristina Lúcia F; Pisa, Ivan T

    2012-07-01

    To propose a semi-automated method for pattern classification to predict individuals' stage of growth based on morphologic characteristics that are described in the modified cervical vertebral maturation (CVM) method of Baccetti et al. A total of 188 lateral cephalograms were collected, digitized, evaluated manually, and grouped into cervical stages by two expert examiners. Landmarks were located on each image and measured. Three pattern classifiers based on the Naïve Bayes algorithm were built and assessed using a software program. The classifier with the greatest accuracy according to the weighted kappa test was considered best. The best classifier showed a weighted kappa coefficient of 0.861 ± 0.020. If an adjacent estimated pre-stage or post-stage value was taken to be acceptable, the classifier would show a weighted kappa coefficient of 0.992 ± 0.019. Results from this study show that the proposed semi-automated pattern classification method can help orthodontists identify the stage of CVM. However, additional studies are needed before this semi-automated classification method for CVM assessment can be implemented in clinical practice.
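A Naïve Bayes classifier over landmark measurements, the classifier family used in the study, can be sketched from scratch under a Gaussian assumption. The two-feature "stages" below are invented toy data, not the authors' cephalometric landmarks:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Tiny Gaussian Naive Bayes: per-class feature means/variances plus a
    class prior, prediction by maximum log-posterior."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats = {}
        for label, rows in groups.items():
            cols = list(zip(*rows))
            means = [sum(c) / len(c) for c in cols]
            # Small epsilon keeps variances non-zero for degenerate features.
            vars_ = [sum((v - m) ** 2 for v in c) / len(c) + 1e-9
                     for c, m in zip(cols, means)]
            self.stats[label] = (len(rows) / len(X), means, vars_)
        return self

    def predict(self, x):
        def log_post(label):
            prior, means, vars_ = self.stats[label]
            ll = math.log(prior)
            for v, m, s2 in zip(x, means, vars_):
                ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            return ll
        return max(self.stats, key=log_post)

# Two toy cervical stages separated by a concavity-depth-like feature.
X = [[0.1, 1.0], [0.2, 1.1], [0.8, 2.0], [0.9, 2.1]]
y = ["CS1", "CS1", "CS2", "CS2"]
clf = GaussianNB().fit(X, y)
print(clf.predict([0.15, 1.05]))  # → CS1
```

The "adjacent stage acceptable" scoring in the abstract corresponds to counting a prediction one stage away from the expert label as correct.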

  4. Automated left ventricle segmentation in late gadolinium-enhanced MRI for objective myocardial scar assessment.

    Science.gov (United States)

    Tao, Qian; Piers, Sebastiaan R D; Lamb, Hildo J; van der Geest, Rob J

    2015-08-01

    To develop and validate an objective and reproducible left ventricle (LV) segmentation method for late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI), which can facilitate accurate myocardial scar assessment. A cohort of 25 ischemic patients and 25 nonischemic patients was included. A four-step algorithm was proposed: first, the Cine-MRI and LGE-MRI volumes were globally registered; second, the registered Cine-MRI contours were fitted to each LGE-MRI slice via the constructed contour image; third, the fitting was optimized over the full LGE-MRI stack; finally, the contours were refined by taking into account patient-specific scar patterns. The automated LV segmentation results were compared with manual segmentation from two experienced observers. The accuracy of automated segmentation, expressed as the average contour distance to manual segmentation, was 0.82 ± 0.19 pixels, of the same order as the interobserver difference between manual results (0.90 ± 0.26 pixels), but with lower variability (0.60 ± 0.37 pixels, P < 0.05). Automated segmentation further demonstrated higher consistency than manual segmentation (Pearson correlation 0.97 vs. 0.84). An automated LV segmentation method for LGE-MRI was developed, providing high segmentation accuracy and lower interobserver variability compared to fully manual image analysis. The method facilitates objective assessment of myocardial scar. © 2014 Wiley Periodicals, Inc.
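The "average contour distance" accuracy metric can be illustrated as a symmetric mean nearest-point distance between two contour point sets. This is one plausible reading of the metric, not necessarily the authors' exact definition, and the contours below are toy data:

```python
import numpy as np

def mean_contour_distance(contour_a, contour_b):
    """Symmetric mean nearest-point distance between two contours given as
    N x 2 arrays of pixel coordinates."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    # Pairwise distance matrix between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    # Average of a->b and b->a nearest-neighbour distances.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

square = [[0, 0], [0, 1], [1, 0], [1, 1]]
shifted = [[0.5, 0], [0.5, 1], [1.5, 0], [1.5, 1]]
print(mean_contour_distance(square, shifted))  # → 0.5
```

Comparing such distances between automated-vs-manual and manual-vs-manual pairs gives the accuracy and interobserver figures reported in the abstract.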

  5. A method to establish seismic noise baselines for automated station assessment

    Science.gov (United States)

    McNamara, D.E.; Hutt, C.R.; Gee, L.S.; Benz, H.M.; Buland, R.P.

    2009-01-01

    We present a method for quantifying station noise baselines and characterizing the spectral shape of out-of-nominal noise sources. Our intent is to automate this method in order to ensure that only the highest-quality data are used in rapid earthquake products at NEIC. In addition, the station noise baselines provide a valuable tool to support the quality control of GSN and ANSS backbone data and metadata. The procedures addressed here are currently in development at the NEIC, and work is underway to understand how quickly changes from nominal can be observed and used within the NEIC processing framework. The spectral methods and software used to compute station baselines and described herein (PQLX) can be useful to both permanent and portable seismic station operators. Applications include: general seismic station and data quality control (QC), evaluation of instrument responses, assessment of near real-time communication system performance, characterization of site cultural noise conditions, and evaluation of sensor vault design, as well as assessment of gross network capabilities (McNamara et al. 2005). Future PQLX development plans include incorporating station baselines for automated QC methods and automating station status report generation and notification based on user-defined QC parameters. The PQLX software is available through the USGS (http://earthquake.usgs.gov/research/software/pqlx.php) and IRIS (http://www.iris.edu/software/pqlx/).
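The baseline idea (per-frequency statistics over many power spectral density estimates, with new data flagged when it leaves the envelope) can be sketched as follows. The percentile thresholds, array shapes and synthetic PSDs are illustrative assumptions, not PQLX's actual parameters:

```python
import numpy as np

def noise_baseline(psd_stack, low=10, high=90):
    """Per-frequency percentile envelope over many PSD estimates, in the
    spirit of PDF-based station baselines. `psd_stack` is (n_segments,
    n_freqs) in dB; returns (low, median, high) curves."""
    psd = np.asarray(psd_stack, dtype=float)
    return (np.percentile(psd, low, axis=0),
            np.percentile(psd, 50, axis=0),
            np.percentile(psd, high, axis=0))

def out_of_nominal(psd, lo_curve, hi_curve):
    """Fraction of frequency bins where a new PSD leaves the baseline band."""
    outside = (psd < lo_curve) | (psd > hi_curve)
    return float(np.mean(outside))

# Synthetic quiet station: 500 hourly PSDs, 64 frequency bins, ~-140 dB.
rng = np.random.default_rng(0)
stack = -140 + rng.normal(0, 2, size=(500, 64))
lo, med, hi = noise_baseline(stack)
noisy = med + 10  # a 10 dB out-of-nominal noise source
print(out_of_nominal(noisy, lo, hi))  # → 1.0
```

An automated QC pass could alert operators whenever the out-of-nominal fraction for a station exceeds a user-defined threshold, matching the report-and-notify plans described above.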

  6. Sampling efficiency for species composition assessments using the ...

    African Journals Online (AJOL)

    A pilot survey to determine the sampling efficiency of the wheel-point method, using the nearest plant method, to assess species composition (using replicate similarity related to sampling intensity, and total sampling time) was conducted on three plot sizes (20 x 20m, 30 x 30m, 40 x 40m) at two sites in a semi-arid savanna.

  7. The development of an automated sentence generator for the assessment of reading speed

    Directory of Open Access Journals (Sweden)

    Legge Gordon E

    2008-03-01

    Full Text Available Reading speed is an important outcome measure for many studies in neuroscience and psychology. Conventional reading speed tests have a limited corpus of sentences and usually require observers to read sentences aloud. Here we describe an automated sentence generator which can create over 100,000 unique sentences, scored using a true/false response. We propose that an estimate of the minimum exposure time required for observers to categorise the truth of such sentences is a good alternative to reading speed measures, as it guarantees comprehension of the printed material. Removing one word from a sentence reduces performance to chance, indicating minimal redundancy. Reading speed assessed using rapid serial visual presentation (RSVP) of these sentences is not statistically different from that measured using MNREAD sentences. The automated sentence generator would be useful for measuring reading speed with a button-press response (such as within MRI scanners) and for studies requiring many repeated measures of reading speed.
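The generator's core idea (compose sentences from word lists so that each sentence's truth value is known by construction, yielding a large unique corpus scored by a true/false response) can be sketched as follows. The tiny word lists are invented, not the authors' corpus:

```python
import random

def make_sentence(rng):
    """Generate a short sentence with a known truth value: pair a subject
    with its correct verb (true) or with another subject's verb (false)."""
    animals = {"dogs": "bark", "cats": "meow", "cows": "moo"}
    subject = rng.choice(sorted(animals))
    if rng.random() < 0.5:
        return f"{subject} {animals[subject]}", True
    wrong = rng.choice([v for k, v in animals.items() if k != subject])
    return f"{subject} {wrong}", False

rng = random.Random(42)
for _ in range(3):
    sentence, truth = make_sentence(rng)
    print(sentence, truth)
```

Scaling the word lists up multiplies the number of unique sentences combinatorially, which is how a generator of this kind can reach the 100,000+ sentences the abstract reports.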

  8. Interdisciplinary development of manual and automated product usability assessments for older adults with dementia: lessons learned.

    Science.gov (United States)

    Boger, Jennifer; Taati, Babak; Mihailidis, Alex

    2016-10-01

    The changes in cognitive abilities that accompany dementia can make it difficult to use everyday products that are required to complete activities of daily living. Products that are inherently more usable for people with dementia could facilitate independent activity completion, thus reducing the need for caregiver assistance. The objectives of this research were to: (1) gain an understanding of how water tap design impacted tap usability and (2) create an automated computerized tool that could assess tap usability. Twenty-seven older adults, whose cognitive status ranged from intact to advanced dementia, completed 1309 trials on five tap designs. Data were analyzed manually to investigate tap usability and were also used to develop an automated usability analysis tool. Researchers collaborated to modify existing techniques and to create novel ones to accomplish both goals. This paper presents lessons learned through the course of this research, which could be applicable to the development of other usability studies, automated vision-based assessments and assistive technologies for cognitively impaired older adults. Collaborative interdisciplinary teamwork, which included older adult participants with dementia, was key to enabling the innovative advances that achieved the project's research goals. Implications for Rehabilitation Products that are implicitly familiar and usable by older adults could foster independent activity completion, potentially reducing reliance on a caregiver. The computer-based automated tool can significantly reduce the time and effort required to perform product usability analysis, making this type of analysis more feasible. Interdisciplinary collaboration can result in a more holistic understanding of assistive technology research challenges and enable innovative solutions.

  9. A Framework to Automate Assessment of Upper-Limb Motor Function Impairment: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    Paul Otten

    2015-08-01

    Full Text Available Standard upper-limb motor function impairment assessments, such as the Fugl-Meyer Assessment (FMA, are a critical aspect of rehabilitation after neurological disorders. These assessments typically take a long time (about 30 min for the FMA) for a clinician to perform on a patient, which is a severe burden in a clinical environment. In this paper, we propose a framework for automating upper-limb motor assessments that uses low-cost sensors to collect movement data. The sensor data is then processed through a machine learning algorithm to determine a score for a patient’s upper-limb functionality. To demonstrate the feasibility of the proposed approach, we implemented a system based on the proposed framework that can automate most of the FMA. Our experiment shows that the system provides similar FMA scores to clinician scores, and reduces the time spent evaluating each patient by 82%. Moreover, the proposed framework can be used to implement customized tests or tests specified in other existing standard assessment methods.

  10. Automated spectrometer interface for measurement of short half-life samples for neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lapolli, André L.; Secco, Marcello; Genezini, Frederico A.; Zahn, Guilherme S.; Moreira, Edson G., E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this paper, a source positioning system was developed, based on an HPGe detector coupled to a Canberra DAS 1000 data acquisition system and Canberra's GENIE2K software and libraries. The system is composed of a step motor coupled to an Arduino Uno microcontroller, which is programmed using C language to allow for a source-detector distance between 0.3 and 20 cm - both components are coupled to a PC using the USB interface. In order to allow automated data acquisition, two additional pieces of software were developed. The first one, a Human-Machine Interface (HMI) programmed in Visual Basic 6, allows the programming and monitoring of the data acquisition process, and the other, in REXX language, controls the data acquisition process in the background. The HMI is user-friendly and versatile, so that even rather complex data acquisition processes may be easily programmed. When the experiment scheme is saved, two files are created and used by the REXX code to control the acquisition process so that the data acquisition is automatically stopped and saved after a user-defined time, then the source is repositioned and data acquisition is cleared and restarted. While in the present stage the system only offers three distinct source positions, finer source-position adjusting is under development. In its present configuration the system has been tested for stability and repeatability in all three positions, with excellent performance. (author)
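
    The stop/save/reposition/clear/restart cycle described above is easy to express as a schedule over callbacks. The sketch below is a hypothetical Python analogue of the HMI/REXX control logic; the four callbacks stand in for GENIE2K acquisition and motor-control operations and are not real API calls.

```python
def run_schedule(positions_cm, count_time_s, move_to, clear, acquire, save):
    """Automated acquisition cycle in the spirit of the HMI/REXX pair:
    for each programmed source position, reposition, clear the previous
    spectrum, count for a user-defined time, then save. The callbacks
    are hypothetical stand-ins for GENIE2K / motor operations."""
    for pos in positions_cm:
        move_to(pos)           # reposition the source
        clear()                # clear the previous spectrum
        acquire(count_time_s)  # count for the programmed time
        save(pos)              # store the spectrum for this position

log = []
run_schedule([0.3, 10.0], 60,
             move_to=lambda p: log.append(f"move {p}"),
             clear=lambda: log.append("clear"),
             acquire=lambda t: log.append(f"count {t}"),
             save=lambda p: log.append(f"save {p}"))
```

    Recording the callbacks in a log, as above, is also a convenient way to verify the ordering of the cycle without hardware attached.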

  11. Automated Sample Preparation Platform for Mass Spectrometry-Based Plasma Proteomics and Biomarker Discovery

    Directory of Open Access Journals (Sweden)

    Vilém Guryča

    2014-03-01

    Full Text Available The identification of novel biomarkers from human plasma remains a critical need in order to develop and monitor drug therapies for nearly all disease areas. The discovery of novel plasma biomarkers is, however, significantly hampered by the complexity and dynamic range of proteins within plasma, as well as the inherent variability in composition from patient to patient. In addition, it is widely accepted that most soluble plasma biomarkers for diseases such as cancer will be represented by tissue leakage products, circulating in plasma at low levels. It is therefore necessary to find approaches with the prerequisite level of sensitivity in such a complex biological matrix. Strategies for fractionating the plasma proteome have been suggested, but improvements in sensitivity are often negated by the resultant process variability. Here we describe an approach using multidimensional chromatography and on-line protein derivatization, which allows for higher sensitivity, whilst minimizing the process variability. In order to evaluate this automated process fully, we demonstrate three levels of processing and compare sensitivity, throughput and reproducibility. We demonstrate that high sensitivity analysis of the human plasma proteome is possible down to the low ng/mL or even high pg/mL level with a high degree of technical reproducibility.

  12. Automated spectrometer interface for measurement of short half-life samples for neutron activation analysis

    International Nuclear Information System (INIS)

    Lapolli, André L.; Secco, Marcello; Genezini, Frederico A.; Zahn, Guilherme S.; Moreira, Edson G.

    2017-01-01

    In this paper, a source positioning system was developed, based on an HPGe detector coupled to a Canberra DAS 1000 data acquisition system and Canberra's GENIE2K software and libraries. The system is composed of a step motor coupled to an Arduino Uno microcontroller, which is programmed using C language to allow for a source-detector distance between 0.3 and 20 cm - both components are coupled to a PC using the USB interface. In order to allow automated data acquisition, two additional pieces of software were developed. The first one, a Human-Machine Interface (HMI) programmed in Visual Basic 6, allows the programming and monitoring of the data acquisition process, and the other, in REXX language, controls the data acquisition process in the background. The HMI is user-friendly and versatile, so that even rather complex data acquisition processes may be easily programmed. When the experiment scheme is saved, two files are created and used by the REXX code to control the acquisition process so that the data acquisition is automatically stopped and saved after a user-defined time, then the source is repositioned and data acquisition is cleared and restarted. While in the present stage the system only offers three distinct source positions, finer source-position adjusting is under development. In its present configuration the system has been tested for stability and repeatability in all three positions, with excellent performance. (author)

  13. Automated multi-dimensional liquid chromatography : sample preparation and identification of peptides from human blood filtrate

    NARCIS (Netherlands)

    Machtejevas, Egidijus; John, Harald; Wagner, Knut; Standker, Ludger; Marko-Varga, Gyorgy; Georg Forssmann, Wolf; Bischoff, Rainer; K. Unger, Klaus

    2004-01-01

    A comprehensive on-line sample clean-up with an integrated two-dimensional HPLC system was developed for the analysis of natural peptides. Samples comprised of endogenous peptides with molecular weights up to 20 kDa were generated from human hemofiltrate (HF) obtained from patients with chronic

  14. Solid recovered fuels in the cement industry--semi-automated sample preparation unit as a means for facilitated practical application.

    Science.gov (United States)

    Aldrian, Alexia; Sarc, Renato; Pomberger, Roland; Lorber, Karl E; Sipple, Ernst-Michael

    2016-03-01

    One of the challenges for the cement industry is the quality assurance of alternative fuel (e.g., solid recovered fuel, SRF) in co-incineration plants--especially for inhomogeneous alternative fuels with large particle sizes (d95⩾100 mm), which will gain even more importance in the substitution of conventional fuels due to low production costs. Existing standards for sampling and sample preparation do not cover the challenges resulting from these kinds of materials. A possible approach to ensure quality monitoring is shown in the present contribution. For this, a specially manufactured, automated comminution and sample divider device was installed at a cement plant in Rohožnik. In order to prove its practical suitability with methods according to current standards, the sampling and sample preparation process were validated for alternative fuel with a grain size >30 mm (i.e., d95=approximately 100 mm), so-called 'Hotdisc SRF'. Therefore, series of samples were taken and analysed. A comparison of the analysis results with the yearly average values obtained through a reference investigation route showed good accordance. Further investigations during the validation process also showed that segregation or enrichment of material throughout the comminution plant does not occur. The results also demonstrate that compliance with legal standards regarding the minimum sample amount is not sufficient for inhomogeneous and coarse particle size alternative fuels. Instead, higher sample amounts after the first particle size reduction step are strongly recommended in order to gain a representative laboratory sample. © The Author(s) 2016.

  15. SU-E-I-94: Automated Image Quality Assessment of Radiographic Systems Using An Anthropomorphic Phantom

    International Nuclear Information System (INIS)

    Wells, J; Wilson, J; Zhang, Y; Samei, E; Ravin, Carl E.

    2014-01-01

    Purpose: In a large, academic medical center, consistent radiographic imaging performance is difficult to routinely monitor and maintain, especially for a fleet consisting of multiple vendors, models, software versions, and numerous imaging protocols. Thus, an automated image quality control methodology has been implemented using routine image quality assessment with a physical, stylized anthropomorphic chest phantom. Methods: The “Duke” Phantom (Digital Phantom 07-646, Supertech, Elkhart, IN) was imaged twice on each of 13 radiographic units from a variety of vendors at 13 primary care clinics. The first acquisition used the clinical PA chest protocol to acquire the post-processed “FOR PRESENTATION” image. The second image was acquired without an antiscatter grid followed by collection of the “FOR PROCESSING” image. Manual CNR measurements were made from the largest and thickest contrast-detail inserts in the lung, heart, and abdominal regions of the phantom in each image. An automated image registration algorithm was used to estimate the CNR of the same insert using similar ROIs. Automated measurements were then compared to the manual measurements. Results: Automatic and manual CNR measurements obtained from “FOR PRESENTATION” images had average percent differences of 0.42%±5.18%, −3.44%±4.85%, and 1.04%±3.15% in the lung, heart, and abdominal regions, respectively; measurements obtained from “FOR PROCESSING” images had average percent differences of -0.63%±6.66%, −0.97%±3.92%, and −0.53%±4.18%, respectively. The maximum absolute difference in CNR was 15.78%, 10.89%, and 8.73% in the respective regions. In addition to CNR assessment of the largest and thickest contrast-detail inserts, the automated method also provided CNR estimates for all 75 contrast-detail inserts in each phantom image. Conclusion: Automated analysis of a radiographic phantom has been shown to be a fast, robust, and objective means for assessing radiographic
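
    The quantity compared manually and automatically in this study is the contrast-to-noise ratio of an insert against its background. A minimal sketch using one common CNR definition (insert/background mean difference over background standard deviation; the exact definition used in the abstract is not stated, so this is an assumption):

```python
import statistics

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio between a contrast-detail insert ROI
    and an adjacent background ROI, each given as a flat sequence of
    pixel values."""
    contrast = abs(statistics.mean(insert_roi) - statistics.mean(background_roi))
    noise = statistics.pstdev(background_roi)
    return contrast / noise

value = cnr([110, 112, 108, 110], [100, 102, 98, 100])
```

    The automated method's advantage is not the arithmetic but the image registration that places equivalent ROIs reproducibly on every phantom image.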

  16. Automated Microfluidic Droplet-Based Sample Chopper for Detection of Small Fluorescence Differences Using Lock-In Analysis.

    Science.gov (United States)

    Negou, Jean T; Avila, L Adriana; Li, Xiangpeng; Hagos, Tesfagebriel M; Easley, Christopher J

    2017-06-06

    Fluorescence is widely used for small-volume analysis and is a primary tool for on-chip detection in microfluidic devices, yet additional expertise, more elaborate optics, and phase-locked detectors are needed for ultrasensitive measurements. Recently, we designed a microfluidic analog to an optical beam chopper (μChopper) that alternated formation of picoliter volume sample and reference droplets. Without complex optics, the device negated large signal drifts (1/f noise), allowing absorbance detection in a mere 27 μm optical path. Here, we extend the μChopper concept to fluorescence detection with standard wide-field microscope optics. Precision of droplet control in the μChopper was improved by automation with pneumatic valves, allowing fluorescence measurements to be strictly phase locked at 0.04 Hz bandwidth to droplets generated at 3.50 Hz. A detection limit of 12 pM fluorescein was achieved when sampling 20 droplets, and as few as 310 zeptomoles (3.1 × 10⁻¹⁹ mol) were detectable in single droplets (8.8 nL). When applied to free fatty acid (FFA) uptake in 3T3-L1 adipocytes, this μChopper permitted single-cell FFA uptake rates to be quantified at 3.5 ± 0.2 × 10⁻¹⁵ mol cell⁻¹ for the first time. Additionally, homogeneous immunoassays in droplets exhibited insulin detection limits of 9.3 nM or 190 amol (1.9 × 10⁻¹⁶ mol). The combination of this novel, automated μChopper with lock-in detection provides a high-performance platform for detecting small differences with standard fluorescence optics, particularly in situations where sample volume is limited. The technique should be simple to implement into a variety of other droplet fluidics devices.
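
    The lock-in principle behind the μChopper is standard: multiply the detected signal by quadrature references at the chopping frequency and average, so that slow drift (1/f noise) averages away while the chopped component survives. A self-contained numerical sketch (the signal values are synthetic, not the paper's data):

```python
import math

def lock_in_amplitude(signal, fs, f_ref):
    """Digital lock-in: mix the signal with quadrature references at
    the chopping frequency and average over the record, recovering the
    amplitude of the component at f_ref while rejecting slow drift."""
    n = len(signal)
    i = sum(s * math.sin(2 * math.pi * f_ref * k / fs) for k, s in enumerate(signal))
    q = sum(s * math.cos(2 * math.pi * f_ref * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(i, q) / n

fs, f_ref = 100.0, 3.5  # sampling rate and chopping rate in Hz
n = 2000                # 20 s of data, an integer number of chopping cycles
sig = [0.5                                              # background level
       + 0.02 * math.sin(2 * math.pi * f_ref * k / fs)  # chopped fluorescence
       + 0.1 * k / n                                    # slow drift
       for k in range(n)]
amp = lock_in_amplitude(sig, fs, f_ref)  # close to the true amplitude 0.02
```

    Averaging over an integer number of chopping cycles is what makes the background and drift terms cancel almost exactly; this mirrors phase-locking the detection bandwidth (0.04 Hz) far below the droplet rate (3.50 Hz).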

  17. Assessing V and V Processes for Automation with Respect to Vulnerabilities to Loss of Airplane State Awareness

    Science.gov (United States)

    Whitlow, Stephen; Wilkinson, Chris; Hamblin, Chris

    2014-01-01

    Automation has contributed substantially to the sustained improvement of aviation safety by minimizing the physical workload of the pilot and increasing operational efficiency. Nevertheless, in complex and highly automated aircraft, automation also has unintended consequences. As systems become more complex and the authority and autonomy (A&A) of the automation increases, human operators become relegated to the role of a system supervisor or administrator, a passive role not conducive to maintaining engagement and airplane state awareness (ASA). The consequence is that flight crews can often come to over-rely on the automation, become less engaged in the human-machine interaction, and lose awareness of the automation mode under which the aircraft is operating. Likewise, the complexity of the system and automation modes may lead to poor understanding of the interaction between a mode of automation and a particular system configuration or phase of flight. These and other examples of mode confusion often lead to mismanaging the aircraft's energy state or the aircraft deviating from the intended flight path. This report examines methods for assessing whether, and how, operational constructs properly assign authority and autonomy in a safe and coordinated manner, with particular emphasis on assuring adequate airplane state awareness by the flight crew and air traffic controllers in off-nominal and/or complex situations.

  18. Automated radioanalytical system incorporating microwave-assisted sample preparation, chemical separation, and online radiometric detection for the monitoring of total 99Tc in nuclear waste processing streams.

    Science.gov (United States)

    Egorov, Oleg B; O'Hara, Matthew J; Grate, Jay W

    2012-04-03

    An automated fluidic instrument is described that rapidly determines the total (99)Tc content of aged nuclear waste samples, where the matrix is chemically and radiologically complex and the existing speciation of the (99)Tc is variable. The monitor links microwave-assisted sample preparation with an automated anion exchange column separation and detection using a flow-through solid scintillator detector. The sample preparation steps acidify the sample, decompose organics, and convert all Tc species to the pertechnetate anion. The column-based anion exchange procedure separates the pertechnetate from the complex sample matrix, so that radiometric detection can provide accurate measurement of (99)Tc. We developed a preprogrammed spike addition procedure to automatically determine matrix-matched calibration. The overall measurement efficiency that is determined simultaneously provides a self-diagnostic parameter for the radiochemical separation and overall instrument function. Continuous, automated operation was demonstrated over the course of 54 h, which resulted in the analysis of 215 samples plus 54 hourly spike-addition samples, with consistent overall measurement efficiency for the operation of the monitor. A sample can be processed and measured automatically in just 12.5 min with a detection limit of 23.5 Bq/mL of (99)Tc in low activity waste (0.495 mL sample volume), with better than 10% RSD precision at concentrations above the quantification limit. This rapid automated analysis method was developed to support nuclear waste processing operations planned for the Hanford nuclear site.
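
    The spike-addition calibration logic follows standard-addition arithmetic: the extra counts produced by a known added activity give a matrix-matched counts-per-Bq efficiency, which then converts sample counts into an activity concentration. The count and activity values below are hypothetical; only the 0.495 mL sample volume comes from the abstract.

```python
def measurement_efficiency(sample_counts, spiked_counts, spike_bq):
    """Overall efficiency (counts per Bq) from a spike-addition pair:
    the extra counts produced by the known added activity calibrate
    the whole separation/detection chain in a matrix-matched way."""
    return (spiked_counts - sample_counts) / spike_bq

def activity_bq_per_ml(sample_counts, efficiency, volume_ml=0.495):
    """(99)Tc activity concentration of the sample, using the
    efficiency determined from the spike-addition run."""
    return sample_counts / efficiency / volume_ml

eff = measurement_efficiency(sample_counts=500, spiked_counts=1500, spike_bq=100.0)
conc = activity_bq_per_ml(500, eff)
```

    Because the efficiency is re-measured hourly, a drop in its value doubles as the self-diagnostic parameter mentioned in the abstract.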

  19. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
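
    The two reported metrics come from the standard confusion-matrix definitions, with "inadequate image" as the positive class. A small sketch (the counts are made up for illustration):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity for inadequate-image detection:
    tp/fn are inadequate images found/missed, tn/fp are adequate
    images kept/wrongly rejected."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts from a 200-image validation subset.
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=91, fp=9)
```

    For a screening module like this, high sensitivity matters most: a missed inadequate image contaminates the downstream morphometric data, whereas a false positive merely discards a usable image.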

  20. Accelerated Evaluation of Automated Vehicles Safety in Lane-Change Scenarios Based on Importance Sampling Techniques.

    Science.gov (United States)

    Zhao, Ding; Lam, Henry; Peng, Huei; Bao, Shan; LeBlanc, David J; Nobukawa, Kazutoshi; Pan, Christopher S

    2017-03-01

    Automated vehicles (AVs) must be thoroughly evaluated before their release and deployment. A widely used evaluation approach is the Naturalistic-Field Operational Test (N-FOT), which tests prototype vehicles directly on the public roads. Due to the low exposure to safety-critical scenarios, N-FOTs are time consuming and expensive to conduct. In this paper, we propose an accelerated evaluation approach for AVs. The results can be used to generate motions of the other primary vehicles to accelerate the verification of AVs in simulations and controlled experiments. Frontal collision due to unsafe cut-ins is the target crash type of this paper. Human-controlled vehicles making unsafe lane changes are modeled as the primary disturbance to AVs based on data collected by the University of Michigan Safety Pilot Model Deployment Program. The cut-in scenarios are generated based on skewed statistics of collected human driver behaviors, which generate risky testing scenarios while preserving the statistical information so that the safety benefits of AVs in nonaccelerated cases can be accurately estimated. The cross-entropy method is used to recursively search for the optimal skewing parameters. The frequencies of the occurrences of conflicts, crashes, and injuries are estimated for a modeled AV, and the achieved acceleration rate is around 2000 to 20 000. In other words, in the accelerated simulations, driving for 1000 miles will expose the AV to challenging scenarios that would take about 2 to 20 million miles of real-world driving to encounter. This technique thus has the potential to greatly reduce the development and validation time for AVs.
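
    The core of the acceleration is importance sampling: draw scenarios from a skewed (risky) distribution, then reweight each outcome by the likelihood ratio so the estimate remains unbiased under the nominal statistics. A toy one-dimensional analogue (Gaussian tail probability instead of cut-in parameters; everything here is illustrative, not the paper's model):

```python
import math
import random

def rare_event_probability(threshold, n, mu_skew):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1):
    draw from the skewed proposal N(mu_skew, 1) so the rare event occurs
    often, then reweight each hit by the likelihood ratio f(x)/g(x)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(mu_skew, 1.0)
        if x > threshold:
            # ratio of the nominal N(0,1) density to the proposal density
            total += math.exp(-x * x / 2 + (x - mu_skew) ** 2 / 2)
    return total / n

random.seed(1)
est = rare_event_probability(threshold=4.0, n=20000, mu_skew=4.0)
# crude Monte Carlo would need millions of draws to see even a few such events
```

    The cross-entropy method mentioned in the abstract is essentially an automated search for a good `mu_skew`: it iteratively tilts the proposal toward the parameters that generate the rare event most informatively.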

  1. Effects of sampling rate on automated fatigue recognition in surface EMG signals

    Directory of Open Access Journals (Sweden)

    Kahl Lorenz

    2015-09-01

    Full Text Available This study investigated the effects different sampling rates may produce on the quality of muscle fatigue detection algorithms. sEMG signals were obtained from isometric contractions of the arm. Subsampled signals at technically relevant sampling rates were computationally deduced from the original recordings. The investigation covered the spectrum-based fatigue recognition methods of mean frequency, median frequency and spectral moment ratio, as well as the sample entropy and the fuzzy approximate entropy. The resulting fatigue indices were evaluated with respect to noise and separability of different load levels. We concluded that the spectral moment ratio provides the best results in fatigue detection over a wide range of sampling rates.
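
    Two of the spectral indices can be sketched directly from their standard definitions on a discrete power spectrum. The order -1 over order 5 moment ratio follows the commonly used Dimitrov-style index; the abstract does not give the exact moment orders used, so k=5 is an assumption.

```python
def mean_frequency(freqs, power):
    """Spectral centroid of an sEMG power spectrum; it shifts toward
    lower frequencies as the muscle fatigues."""
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def spectral_moment_ratio(freqs, power, k=5):
    """Ratio of the order -1 to the order k spectral moment. It grows
    steeply as power migrates to low frequencies, one reason this
    class of index separated load levels well in the study."""
    m_neg1 = sum(p / f for f, p in zip(freqs, power))
    m_k = sum(p * f ** k for f, p in zip(freqs, power))
    return m_neg1 / m_k

mf = mean_frequency([50, 100, 150], [1, 2, 1])  # toy 3-bin spectrum
```

    Because the high-order moment weights the upper end of the spectrum, the index is sensitive to the Nyquist limit, which is exactly why subsampling-rate effects are worth quantifying.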

  2. Rapid DNA analysis for automated processing and interpretation of low DNA content samples.

    Science.gov (United States)

    Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F

    2016-01-01

    Casework samples with low DNA content submitted for short tandem repeat (STR) analysis include those resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle, or brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment, requires transport time, and involves complex procedures. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swab and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types.
The technology broadens the range of sample

  3. Iterative User Interface Design for Automated Sequential Organ Failure Assessment Score Calculator in Sepsis Detection.

    Science.gov (United States)

    Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly

    2017-05-18

    The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (SD) time to complete a manual SOFA score calculation was 61.6 (33) s. Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.
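
    The clerical work being automated is the aggregation of six organ-system sub-scores from EMR values. One component can be sketched from the standard published SOFA thresholds; a full calculator would sum six such components and handle missing data, which this sketch omits.

```python
def sofa_coagulation(platelets_k_per_ul):
    """Coagulation component of the SOFA score (platelet count in
    units of 10^3/uL), per the standard thresholds. The full score
    sums six such organ-system components (0-4 each)."""
    if platelets_k_per_ul < 20:
        return 4
    if platelets_k_per_ul < 50:
        return 3
    if platelets_k_per_ul < 100:
        return 2
    if platelets_k_per_ul < 150:
        return 1
    return 0

component = sofa_coagulation(80)  # a platelet count of 80 scores 2 points
```

    Automating even one lookup like this removes a chart-retrieval step; multiplied across six organ systems and repeated recalculations, it accounts for most of the ~62 s of manual effort measured in the time-motion analysis.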

  4. Accuracy of liver lesion assessment using automated measurement and segmentation software in biphasic multislice CT (MSCT)

    International Nuclear Information System (INIS)

    Puesken, M.; Juergens, K.U.; Edenfeld, A.; Buerke, B.; Seifarth, H.; Beyer, F.; Heindel, W.; Wessling, J.; Suehling, M.; Osada, N.

    2009-01-01

    Purpose: To assess the accuracy of liver lesion measurement using automated measurement and segmentation software depending on the vascularization level. Materials and Methods: Arterial and portal venous phase multislice CT (MSCT) was performed for 58 patients. 94 liver lesions were evaluated and classified according to vascularity (hypervascular: 13 hepatocellular carcinomas, 20 hemangiomas; hypovascular: 31 metastases, 3 lymphomas, 4 abscesses; liquid: 23 cysts). The RECIST diameter and volume were obtained using automated measurement and segmentation software and compared to corresponding measurements derived visually by two experienced radiologists as a reference standard. Statistical analysis was performed using the Wilcoxon test and concordance correlation coefficients. Results: Automated measurements revealed no significant difference between the arterial and portal venous phase in hypovascular (mean RECIST diameter: 31.4 vs. 30.2 mm; p = 0.65; κ = 0.875) and liquid lesions (20.4 vs. 20.1 mm; p = 0.1; κ = 0.996). The RECIST diameter and volume of hypervascular lesions were significantly underestimated in the portal venous phase as compared to the arterial phase (30.3 vs. 26.9 mm, p = 0.007, κ = 0.834; 10.7 vs. 7.9 ml, p = 0.0045, κ = 0.752). Automated measurements for hypovascular and liquid lesions in the arterial and portal venous phase were concordant to the reference standard. Hypervascular lesion measurements were in line with the reference standard for the arterial phase (30.3 vs. 32.2 mm, p = 0.66, κ = 0.754), but revealed a significant difference for the portal venous phase (26.9 vs. 32.1 mm; p = 0.041; κ = 0.606). (orig.)

  5. High-throughput automated microfluidic sample preparation for accurate microbial genomics.

    Science.gov (United States)

    Kim, Soohong; De Jonghe, Joachim; Kulesa, Anthony B; Feldman, David; Vatanen, Tommi; Bhattacharyya, Roby P; Berdy, Brittany; Gomez, James; Nolan, Jill; Epstein, Slava; Blainey, Paul C

    2017-01-27

    Low-cost shotgun DNA sequencing is transforming the microbial sciences. Sequencing instruments are so effective that sample preparation is now the key limiting factor. Here, we introduce a microfluidic sample preparation platform that integrates the key steps of cells-to-sequencing-library sample preparation for up to 96 samples and reduces DNA input requirements 100-fold while maintaining or improving data quality. The general-purpose microarchitecture we demonstrate supports workflows with arbitrary numbers of reaction and clean-up or capture steps. By reducing the sample quantity requirements, we enabled low-input (∼10,000 cells) whole-genome shotgun (WGS) sequencing of Mycobacterium tuberculosis and soil micro-colonies with superior results. We also leveraged the enhanced throughput to sequence ∼400 clinical Pseudomonas aeruginosa libraries and demonstrate excellent single-nucleotide polymorphism detection performance that explained phenotypically observed antibiotic resistance. Fully-integrated lab-on-chip sample preparation overcomes technical barriers to enable broader deployment of genomics across many basic research and translational applications.

  6. Development of an automated method for determination of thorium in soil samples and aerosols

    International Nuclear Information System (INIS)

    Stuart, J.E.; Robertson, R.

    1986-09-01

    Methodology for determining trace thorium levels in a variety of sample types was further developed. Thorium in filtered water samples is concentrated by ferric hydroxide precipitation followed by dissolution and co-precipitation with lanthanum fluoride. Aerosols on glass fibre, cellulose ester, or teflon filters and solid soil and sediment samples are acid digested. Subsequently thorium is concentrated by lanthanum fluoride co-precipitation. Chemical separation and measurement is then done on a Technicon AA11-C autoanalyzer, using solvent extraction into thenoyltrifluoroacetone in kerosene followed by back extraction into 2 N HNO3, and colourometric measurement of the thorium arsenazo III complex. Chemical yields are determined by the addition of thorium-234 tracer using gamma-ray spectrometry. The sensitivities of the methods for water, aerosol and solid samples are approximately 1.0 μg/L, 0.5 μg/g and 1.0 μg/g respectively. At thorium levels about ten times the detection limit, accuracy is estimated to be ± 10% for liquids and aerosols and ± 15% for solid samples, and precision ± 5% for all samples

  7. Automated Assessment of Patients' Self-Narratives for Posttraumatic Stress Disorder Screening Using Natural Language Processing and Text Mining.

    Science.gov (United States)

    He, Qiwei; Veldkamp, Bernard P; Glas, Cees A W; de Vries, Theo

    2017-03-01

    Patients' narratives about traumatic experiences and symptoms are useful in clinical screening and diagnostic procedures. In this study, we presented an automated assessment system to screen patients for posttraumatic stress disorder via a natural language processing and text-mining approach. Four machine-learning algorithms - decision tree, naive Bayes, support vector machine, and an alternative classification approach called the product score model - were used in combination with n-gram representation models to identify patterns between verbal features in self-narratives and psychiatric diagnoses. With our sample, the product score model with unigrams attained the highest prediction accuracy when compared with practitioners' diagnoses. The addition of multigrams contributed most to balancing the metrics of sensitivity and specificity. This article also demonstrates that text mining is a promising approach for analyzing patients' self-expression behavior, thus helping clinicians identify potential patients from an early stage.
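
    A unigram product score classifier multiplies per-class word probabilities across a narrative (equivalently, sums their logs) and picks the higher-scoring class. The minimal sketch below uses Laplace smoothing and tiny made-up training narratives; it is a conceptual stand-in, not the authors' trained model.

```python
import math
from collections import Counter

def train_unigram(docs):
    """Laplace-smoothed unigram log-probability function for one class."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words
    return lambda w: math.log((counts[w] + 1) / (total + vocab))

def classify(text, model_pos, model_neg):
    """Score a self-narrative under both class language models (sum of
    log unigram probabilities = log of the product score) and return
    the higher-scoring class."""
    s_pos = sum(model_pos(w) for w in text.split())
    s_neg = sum(model_neg(w) for w in text.split())
    return "ptsd" if s_pos > s_neg else "control"

pos = train_unigram(["nightmares flashbacks avoid crowds", "startled flashbacks sleep"])
neg = train_unigram(["slept well calm weekend", "calm walk relaxed"])
label = classify("flashbacks and nightmares", pos, neg)
```

    Extending the token stream with bigrams and trigrams (the "multigrams" of the abstract) is a drop-in change to the tokenization, which is how the sensitivity/specificity balance was improved.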

  8. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
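
    The 'basic' two-stage scheme can be simulated compactly: score half of the Welfare Quality sample size, stop early if the prevalence estimate is clearly on one side of the cutoff, otherwise score the rest. The cutoff, margin, and herd below are illustrative; the paper's exact stopping rule and Welfare Quality sample sizes are not reproduced here.

```python
import random

def two_stage_classify(herd, full_n, cutoff, margin):
    """Two-stage sequential scheme in miniature: herd is a list of 0/1
    lameness indicators. Score half the sample first; stop early when
    the estimated prevalence is clearly below (pass) or above (fail)
    the cutoff, else score the second half and decide on the full
    sample. Returns (classification, animals_scored)."""
    drawn = random.sample(herd, full_n)
    half = full_n // 2
    p1 = sum(drawn[:half]) / half
    if p1 <= cutoff - margin:
        return "pass", half
    if p1 >= cutoff + margin:
        return "fail", half
    p_full = sum(drawn) / full_n
    return ("fail" if p_full >= cutoff else "pass"), full_n

random.seed(7)
herd = [1] * 3 + [0] * 97  # a 100-cow herd with 3% lameness prevalence
result, n_scored = two_stage_classify(herd, full_n=40, cutoff=0.15, margin=0.05)
# a clearly low-prevalence herd usually passes after scoring only 20 animals
```

    Running this many times over herds with known prevalence, as the paper does with 100 000 simulations per scheme, yields the accuracy-versus-average-sample-size trade-off.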

  9. The T-lock: automated compensation of radio-frequency induced sample heating

    International Nuclear Information System (INIS)

    Hiller, Sebastian; Arthanari, Haribabu; Wagner, Gerhard

    2009-01-01

    Modern high-field NMR spectrometers can stabilize the nominal sample temperature at a precision of less than 0.1 K. However, the actual sample temperature may differ from the nominal value by several degrees because the sample heating caused by high-power radio frequency pulses is not readily detected by the temperature sensors. Without correction, transfer of chemical shifts between different experiments causes problems in the data analysis. In principle, the temperature differences can be corrected by manual procedures, but this is cumbersome and not fully reliable. Here, we introduce the concept of a 'T-lock', which automatically maintains the sample at the same reference temperature over the course of different NMR experiments. The T-lock works by continuously measuring the resonance frequency of a suitable spin and simultaneously adjusting the temperature control, thus locking the sample temperature at the reference value. For three different nuclei, ¹³C, ¹⁷O and ³¹P in the compounds alanine, water, and phosphate, respectively, the T-lock accuracy was found to be <0.1 K. The use of dummy scan periods with variable lengths allows a reliable establishment of the thermal equilibrium before the acquisition of an experiment starts.
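
    The T-lock's feedback principle (measure a temperature-sensitive resonance, convert its shift into a temperature error, and correct the regulator) can be sketched as a simple proportional loop. The reference frequency, temperature coefficient, and gain below are invented for illustration:

```python
REF_TEMP = 298.0  # reference sample temperature in K (illustrative value)

def t_lock_step(temp, ref_freq_hz, hz_per_kelvin, gain):
    # Measure the temperature-sensitive reference resonance at the
    # current (unknown to the controller) sample temperature.
    measured = ref_freq_hz + hz_per_kelvin * (temp - REF_TEMP)
    # Convert the frequency offset back into a temperature error...
    error_k = (measured - ref_freq_hz) / hz_per_kelvin
    # ...and feed a fraction of it back into the temperature control.
    return temp - gain * error_k

temp = 301.5  # RF heating has pushed the sample 3.5 K above the reference
for _ in range(20):
    temp = t_lock_step(temp, ref_freq_hz=500.0e6, hz_per_kelvin=-10.0, gain=0.5)
print(f"locked temperature: {temp:.3f} K")
```

    With a gain below 1 the temperature error shrinks geometrically each iteration, which is the sense in which the lock "maintains" the reference value across experiments.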

  10. Low-Cost 3D Printers Enable High-Quality and Automated Sample Preparation and Molecular Detection.

    Directory of Open Access Journals (Sweden)

    Kamfai Chan

    Full Text Available Most molecular diagnostic assays require upfront sample preparation steps to isolate the target's nucleic acids, followed by its amplification and detection using various nucleic acid amplification techniques. Because molecular diagnostic methods are generally rather difficult to perform manually without highly trained users, automated and integrated systems are highly desirable but too costly for use at point-of-care or low-resource settings. Here, we showcase the development of a low-cost and rapid nucleic acid isolation and amplification platform by modifying entry-level 3D printers that cost between $400 and $750. Our modifications consisted of replacing the extruder with a tip-comb attachment that houses magnets to conduct magnetic particle-based nucleic acid extraction. We then programmed the 3D printer to conduct motions that can perform high-quality extraction protocols. Up to 12 samples can be processed simultaneously in under 13 minutes and the efficiency of nucleic acid isolation matches well against gold-standard spin-column-based extraction technology. Additionally, we used the 3D printer's heated bed to supply heat to perform water bath-based polymerase chain reactions (PCRs). Using another attachment to hold PCR tubes, the 3D printer was programmed to automate the process of shuttling PCR tubes between water baths. By eliminating the temperature ramping needed in most commercial thermal cyclers, the run time of a 35-cycle PCR protocol was shortened by 33%. This article demonstrates that for applications in resource-limited settings, expensive nucleic acid extraction devices and thermal cyclers that are used in many central laboratories can be potentially replaced by a device modified from inexpensive entry-level 3D printers.
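
    The tube-shuttling idea lends itself to a short generator sketch: standard G-code moves (G0) and dwells (G4) carry the tube holder between two bath positions once per cycle. The coordinates and dwell times below are hypothetical, not the published protocol:

```python
def pcr_shuttle_gcode(cycles, denature_pos, anneal_pos, denature_s, anneal_s):
    # Build a G-code program that shuttles a PCR tube holder between two
    # water baths on the printer bed. Positions (mm) and dwell times (s)
    # are assumptions chosen for illustration.
    lines = ["G28 ; home all axes"]
    for _ in range(cycles):
        lines.append(f"G0 X{denature_pos[0]} Y{denature_pos[1]} ; to denaturation bath")
        lines.append(f"G4 S{denature_s} ; hold for denaturation")
        lines.append(f"G0 X{anneal_pos[0]} Y{anneal_pos[1]} ; to annealing/extension bath")
        lines.append(f"G4 S{anneal_s} ; hold for annealing and extension")
    return "\n".join(lines)

program = pcr_shuttle_gcode(35, (20, 50), (120, 50), 10, 30)
print(program.splitlines()[0])
print(f"{len(program.splitlines())} G-code lines for 35 cycles")
```

    Because the baths sit at fixed temperatures, the "ramping" of a conventional cycler is replaced by travel time between baths, which is where the quoted 33% run-time saving comes from.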

  11. Low-Cost 3D Printers Enable High-Quality and Automated Sample Preparation and Molecular Detection.

    Science.gov (United States)

    Chan, Kamfai; Coen, Mauricio; Hardick, Justin; Gaydos, Charlotte A; Wong, Kah-Yat; Smith, Clayton; Wilson, Scott A; Vayugundla, Siva Praneeth; Wong, Season

    2016-01-01

    Most molecular diagnostic assays require upfront sample preparation steps to isolate the target's nucleic acids, followed by its amplification and detection using various nucleic acid amplification techniques. Because molecular diagnostic methods are generally rather difficult to perform manually without highly trained users, automated and integrated systems are highly desirable but too costly for use at point-of-care or low-resource settings. Here, we showcase the development of a low-cost and rapid nucleic acid isolation and amplification platform by modifying entry-level 3D printers that cost between $400 and $750. Our modifications consisted of replacing the extruder with a tip-comb attachment that houses magnets to conduct magnetic particle-based nucleic acid extraction. We then programmed the 3D printer to conduct motions that can perform high-quality extraction protocols. Up to 12 samples can be processed simultaneously in under 13 minutes and the efficiency of nucleic acid isolation matches well against gold-standard spin-column-based extraction technology. Additionally, we used the 3D printer's heated bed to supply heat to perform water bath-based polymerase chain reactions (PCRs). Using another attachment to hold PCR tubes, the 3D printer was programmed to automate the process of shuttling PCR tubes between water baths. By eliminating the temperature ramping needed in most commercial thermal cyclers, the run time of a 35-cycle PCR protocol was shortened by 33%. This article demonstrates that for applications in resource-limited settings, expensive nucleic acid extraction devices and thermal cyclers that are used in many central laboratories can be potentially replaced by a device modified from inexpensive entry-level 3D printers.

  12. Automation of the radiation measuring facilities for samples in health physics - MA 9

    International Nuclear Information System (INIS)

    Martini, M.

    1980-12-01

    Routine radiation measurements of samples are performed by the HMI health physics department by means of test stations for individual samples and for multiple samples (using a sample-changing device). The basic device of these test stations is a SCALER/TIMER system (BF 22/25, BERTHOLD Corp.). This measuring facility has been extended by a CAMAC instrumentation which incorporates an autonomous CAMAC processor (CAPRO-1, INCAA B.V.) for monitoring and automatic control of the system. The programming language is BASIC. A DECwriter (LA 34) is used for user interaction and for printing the measurement results. This report describes the features of this system and presents some examples of the dialogue with the system and the printout of data. (orig.) [de

  13. IntelliCages and automated assessment of learning in group-housed mice

    Science.gov (United States)

    Puścian, Alicja; Knapska, Ewelina

    2014-11-01

    IntelliCage is a fully automated, computer-controlled system which can be used for long-term monitoring of the behavior of group-housed mice. Using standardized experimental protocols, we can assess cognitive abilities and behavioral flexibility in appetitively and aversively motivated tasks, as well as measure social influences on the subjects' learning. We have also identified groups of neurons within the amygdala that are specifically activated by appetitively and aversively motivated learning, whose function we plan to investigate optogenetically in the future.

  14. Assessing mouse behaviour throughout the light/dark cycle using automated in-cage analysis tools.

    Science.gov (United States)

    Bains, Rasneer S; Wells, Sara; Sillito, Rowland R; Armstrong, J Douglas; Cater, Heather L; Banks, Gareth; Nolan, Patrick M

    2018-04-15

    An important factor in reducing variability in mouse test outcomes has been to develop assays that can be used for continuous automated home cage assessment. Our experience has shown that this has been most evidenced in long-term assessment of wheel-running activity in mice. Historically, wheel-running in mice and other rodents has been used as a robust assay to determine, with precision, the inherent period of circadian rhythms in mice. Furthermore, this assay has been instrumental in dissecting the molecular genetic basis of mammalian circadian rhythms. In teasing out the elements of this test that have determined its robustness - automated assessment of an unforced behaviour in the home cage over long time intervals - we and others have been investigating whether similar test apparatus could be used to accurately discriminate differences in distinct behavioural parameters in mice. Firstly, using these systems, we explored behaviours in a number of mouse inbred strains to determine whether we could extract biologically meaningful differences. Secondly, we tested a number of relevant mutant lines to determine how discriminative these parameters were. Our findings show that, when compared to conventional out-of-cage phenotyping, a far deeper understanding of mouse mutant phenotype can be established by monitoring behaviour in the home cage over one or more light:dark cycles. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
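
    The long-term home-cage signal described here, e.g. hourly wheel-running counts, permits straightforward period estimation. A minimal sketch using autocorrelation follows; the hourly binning, lag window, and synthetic activity trace are assumptions for illustration:

```python
import math

def dominant_period(activity, min_lag, max_lag):
    # Estimate the circadian period as the lag (in samples) that
    # maximizes the autocorrelation of a mean-centered activity trace.
    mean = sum(activity) / len(activity)
    centered = [a - mean for a in activity]
    def autocorr(lag):
        return sum(centered[i] * centered[i + lag]
                   for i in range(len(centered) - lag))
    return max(range(min_lag, max_lag + 1), key=autocorr)

# Ten days of synthetic hourly wheel counts with a 24 h rhythm
activity = [max(0.0, math.sin(2 * math.pi * t / 24)) for t in range(240)]
print(f"estimated period: {dominant_period(activity, 20, 28)} h")
```

    Actograms and chi-square or Lomb-Scargle periodograms are the conventional tools; the autocorrelation peak shown here captures the same idea in a few lines.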

  15. Automated on-line liquid–liquid extraction system for temporal mass spectrometric analysis of dynamic samples

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Kai-Ta; Liu, Pei-Han [Department of Applied Chemistry, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China); Urban, Pawel L. [Department of Applied Chemistry, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China); Institute of Molecular Science, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China)

    2015-09-24

    Most real samples cannot be infused directly into mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid–liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053–2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h⁻¹). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. - Highlights: • Mass spectrometric analysis normally requires sample preparation. • Liquid–liquid extraction can isolate analytes from complex matrices. • The proposed system automates

  16. Automated on-line liquid–liquid extraction system for temporal mass spectrometric analysis of dynamic samples

    International Nuclear Information System (INIS)

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L.

    2015-01-01

    Most real samples cannot be infused directly into mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid–liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053–2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h⁻¹). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. - Highlights: • Mass spectrometric analysis normally requires sample preparation. • Liquid–liquid extraction can isolate analytes from complex matrices. • The proposed system automates the
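
    Fitting the dissolution profiles with an exponential, as the authors do, can be sketched by linearizing f(t) = 1 − exp(−kt). The fitting route shown here (log-linear least squares through the origin, on a noise-free synthetic profile) is an assumption, since the abstract only states that exponential functions were fitted:

```python
import math

def fit_release_rate(times_h, fractions):
    # Linearize f(t) = 1 - exp(-k t) as ln(1 - f) = -k t and take the
    # least-squares slope through the origin. A minimal sketch of
    # exponential profile fitting, not the authors' exact procedure.
    xs = [t for t, f in zip(times_h, fractions) if f < 1.0]
    ys = [math.log(1.0 - f) for t, f in zip(times_h, fractions) if f < 1.0]
    return -sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic profile generated with k = 0.43 1/h (the paper's ibuprofen value)
k_true = 0.43
times = [0.5 * i for i in range(1, 21)]  # 0.5 h to 10 h sampling
profile = [1.0 - math.exp(-k_true * t) for t in times]
print(f"fitted k = {fit_release_rate(times, profile):.3f} 1/h")
```

    On real extraction-MS data one would fit the measured intensities directly (e.g. nonlinear least squares) rather than log-transforming, which amplifies noise near complete release.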

  17. Portable automation of static chamber sample collection for quantifying soil gas flux

    Science.gov (United States)

    The collection of soil gas flux using the static chamber method is labor intensive. The number of chambers that can be sampled in a given time period is limited by the spacing between chambers and the availability of trained research technicians. However, the static chamber method can limit spatial ...

  18. [Automated serial diagnosis of donor blood samples. Ergonomic and economic organization structure].

    Science.gov (United States)

    Stoll, T; Fischer-Fröhlich, C L; Mayer, G; Hanfland, P

    1990-01-01

    A comprehensive computer-aided administration-system for blood-donors is presented. Ciphered informations of barcode-labels allow the automatic and nevertheless selective pipetting of samples by pipetting-robots. Self-acting analysis-results are transferred to a host-computer in order to actualize a donor data-base.

  19. Automated sample-processing and titration system for determining uranium in nuclear materials

    International Nuclear Information System (INIS)

    Harrar, J.E.; Boyle, W.G.; Breshears, J.D.; Pomernacki, C.L.; Brand, H.R.; Kray, A.M.; Sherry, R.J.; Pastrone, J.A.

    1977-01-01

    The system is designed for accurate, precise, and selective determination of 10 to 180 mg of uranium in 2 to 12 cm³ of solution. Samples, standards, and their solutions are handled on a weight basis. These weights, together with their appropriate identification numbers, are stored in computer memory and are used automatically in the assay calculations after each titration. The measurement technique (controlled-current coulometry) is based on the Davies-Gray and New Brunswick Laboratory method, in which U(VI) is reduced to U(IV) in strong H₃PO₄, followed by titration of the U(IV) with electrogenerated V(V). Solution pretreatment and titration are automatic. The analyzer is able to process 44 samples per loading of the sample changer, at a rate of 4 to 9 samples per hour. The system includes a comprehensive fault-monitoring system that detects analytical errors, guards against abnormal conditions which might cause errors, and prevents unsafe operation. A detailed description of the system, information on the reliability of the component subsystems, and a summary of its evaluation by the New Brunswick Laboratory are presented
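
    The assay calculation behind controlled-current coulometry follows Faraday's law: the uranium mass is proportional to the charge passed during the titration. A minimal sketch (the example current and time are invented, and the real instrument additionally applies the stored weight-based corrections per sample):

```python
FARADAY = 96485.33    # Faraday constant, C/mol
MOLAR_MASS_U = 238.03  # g/mol (natural uranium, approximate)

def uranium_mass_mg(current_a, time_s, electrons=2):
    # m = Q * M / (n * F). In the Davies-Gray chemistry, U(IV) is oxidized
    # back to U(VI) by electrogenerated V(V): a two-electron change per
    # uranium atom, hence electrons=2.
    charge_c = current_a * time_s
    moles = charge_c / (electrons * FARADAY)
    return moles * MOLAR_MASS_U * 1000.0

# Example: a 100 mA generating current flowing for 810.7 s
print(f"{uranium_mass_mg(0.100, 810.7):.1f} mg U")
```

    A full titration to ~100 mg of uranium at 100 mA thus takes on the order of 15 minutes, consistent with the quoted throughput of 4 to 9 samples per hour.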

  20. An improved automated type-based method for area assessment of wound surface.

    Science.gov (United States)

    Qi, Xin; Ding, Lian; Huang, Wenjian; Wen, Bing; Guo, Xiaohui; Zhang, Jue

    2017-01-01

    Accurate and precise wound measurements are a critical component of wound detection and assessment. Digital cameras are convenient and objective tools that are being increasingly used worldwide to assist with wound measurements and assessments. However, heterogeneous wounds and poor lighting conditions continue to be obstacles to wound area recognition. This study, therefore, provides an improved automated type-based wound area assessment method that is robust to lighting conditions and can distinguish between different wound tissues based on wound colors. The results of both laboratory and clinical applications of the proposed method show excellent consistency with manual area measurements. The proposed technology is expected to provide wound care specialists with more clinical information about heterogeneous wounds, thereby enabling prospective cost savings for therapy and treatment. © 2016 by the Wound Healing Society.

  1. Sorbent Tube Sampling and an Automated Thermal Desorption System for Halocarbon Analysis

    Directory of Open Access Journals (Sweden)

    Md. Anwar Hossain Khan

    2009-01-01

    Full Text Available Development and deployment of the analytical system ATD-GC-ECD has been established to monitor a suite of halogenated compounds found in the atmosphere at trace concentrations. The instrument has been used to monitor urban background emission flux levels in Bristol, UK as well as Yellowstone National Park, USA and an indoor rain forest (Wild Walk@Bristol, UK). The newly established sorbent tube sampling system is small and easily portable and has been used for large volume sample collection from remote areas. Automated Thermal Desorption (ATD) provides routine atmospheric measurements without cryogenic pre-concentration. The instrument provides good precision: the detection limit was ≤3 pptv for the species of interest and the reproducibility was within 4% for all of the selected halocarbons. The results from two field experiments have also provided insight about natural missing sources of some ozone-depleting halocarbons.

  2. Toward automated assessment of health Web page quality using the DISCERN instrument.

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael

    2017-05-01

    As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN
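
    The contrast between the overall DISCERN score and a features-based metric can be made concrete: two rating vectors can share the same sum while disagreeing on individual criteria. The scoring functions below are an illustrative stand-in for the paper's definitions, not a reproduction of them:

```python
def overall_score(ratings):
    # Traditional DISCERN-style summary: sum of the per-criterion ratings.
    return sum(ratings)

def features_based_score(predicted, reference):
    # Per-criterion agreement between a classifier's ratings and the
    # consensus reference: the fraction of criteria rated correctly.
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)

reference = [5, 4, 2, 3, 5]   # consensus ratings for five criteria
predicted = [5, 2, 4, 3, 5]   # wrong on two criteria, same overall sum
print(overall_score(predicted), overall_score(reference))
print(f"features-based agreement: {features_based_score(predicted, reference):.2f}")
```

    Both vectors score 19 overall, yet the classifier is wrong on two of five criteria, which is why a summed score is a poor metric for evaluating per-criterion ML performance.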

  3. Development of an Automated and Sensitive Microfluidic Device for Capturing and Characterizing Circulating Tumor Cells (CTCs) from Clinical Blood Samples.

    Directory of Open Access Journals (Sweden)

    Priya Gogoi

    Full Text Available Current analysis of circulating tumor cells (CTCs) is hindered by sub-optimal sensitivity and specificity of devices or assays as well as lack of capability of characterization of CTCs with clinical biomarkers. Here, we validate a novel technology to enrich and characterize CTCs from blood samples of patients with metastatic breast, prostate and colorectal cancers using a microfluidic chip which is processed by an automated staining and scanning system from sample preparation to image processing. The Celsee system allowed for the detection of CTCs with apparent high sensitivity and specificity (94% sensitivity and 100% specificity). Moreover, the system facilitated rapid capture of CTCs from blood samples and also allowed for downstream characterization of the captured cells by immunohistochemistry, DNA and mRNA fluorescence in-situ hybridization (FISH). In a subset of patients with prostate cancer we compared the technology with an FDA-approved CTC device, CellSearch, and found a higher degree of sensitivity with the Celsee instrument. In conclusion, the integrated Celsee system represents a promising CTC technology for enumeration and molecular characterization.

  4. Analysis of polycyclic aromatic hydrocarbons in soil: minimizing sample pretreatment using automated Soxhlet with ethyl acetate as extraction solvent.

    Science.gov (United States)

    Szolar, Oliver H J; Rost, Helmut; Braun, Rudolf; Loibner, Andreas P

    2002-05-15

    A simplified sample pretreatment method for industrially PAH-contaminated soils applying automated Soxhlet (Soxtherm) with ethyl acetate as extraction solvent is presented. Laborious pretreatment steps such as drying of samples, cleanup of crude extracts, and solvent exchange were allowed to be bypassed without notable performance impact. Moisture of the soil samples did not significantly influence recoveries of PAHs at a wide range of water content for the newly developed method. However, the opposite was true for the standard procedure using the more apolar 1:1 (v/v) n-hexane/acetone solvent mixture including postextraction treatments recommended by the U.S. EPA. Moreover, ethyl acetate crude extracts did not appreciably affect the chromatographic performance (HPLC-(3D)FLD), which was confirmed by a comparison of the purity of PAH spectra from both pretreatment methods. Up to 20% (v/v) in acetonitrile, ethyl acetate proved to be fully compatible with the mobile phase of the HPLC whereas the same concentration of n-hexane/acetone in acetonitrile resulted in significant retention time shifts. The newly developed pretreatment method was applied to three historically contaminated soils from different sources with extraction efficiencies not being significantly different compared to the standard procedure. Finally, the certified reference soil CRM 524 was subjected to the simplified procedure resulting in quantitative recoveries (>92%) for all PAHs analyzed.

  5. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    Science.gov (United States)

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user-friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, with a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol in 100 ml water samples. Reverse-phase SPE is performed with octadecyl-silica sorbent and elution is done with 200 µl of methanol 50% v/v. The eluent is diluted with de-ionized water to lower the amount of methanol. After manually preparing the SPE column, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured using a commercial enzyme-linked immunosorbent assay (ELISA). 100-fold pre-concentration is achieved and the methanol content is only 10% v/v. Full recoveries of the molecule were achieved with 1 ng/L spiked de-ionized and synthetic sea water samples.
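
    The 100-fold enrichment quoted above follows directly from the volumes: 100 ml of sample ends up in 1 ml of diluted eluate (200 µl of 50% methanol diluted 5-fold to reach 10% v/v methanol; the 5-fold factor is inferred from 50% → 10%, not stated in the abstract):

```python
def preconcentration_factor(sample_ml, eluate_ul, dilution_factor):
    # Enrichment achieved by SPE: sample volume over final extract volume.
    final_ml = (eluate_ul / 1000.0) * dilution_factor
    return sample_ml / final_ml

factor = preconcentration_factor(100, 200, 5)
print(f"{factor:.0f}-fold pre-concentration")
```

    The same arithmetic shows the trade-off: diluting to immunoassay-compatible solvent levels directly reduces the achievable enrichment.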

  6. An Automated Quiet Sleep Detection Approach in Preterm Infants as a Gateway to Assess Brain Maturation.

    Science.gov (United States)

    Dereymaeker, Anneleen; Pillay, Kirubin; Vervisch, Jan; Van Huffel, Sabine; Naulaers, Gunnar; Jansen, Katrien; De Vos, Maarten

    2017-09-01

    Sleep state development in preterm neonates can provide crucial information regarding functional brain maturation and give insight into neurological well being. However, visual labeling of sleep stages from EEG requires expertise and is very time consuming, prompting the need for an automated procedure. We present a robust method for automated detection of preterm sleep from EEG, over a wide postmenstrual age ([Formula: see text] age) range, focusing first on Quiet Sleep (QS) as an initial marker for sleep assessment. Our algorithm, CLuster-based Adaptive Sleep Staging (CLASS), detects QS if it remains relatively more discontinuous than non-QS over PMA. CLASS was optimized on a training set of 34 recordings aged 27-42 weeks PMA, and performance then assessed on a distinct test set of 55 recordings of the same age range. Results were compared to visual QS labeling from two independent raters (with inter-rater agreement [Formula: see text]), using Sensitivity, Specificity, Detection Factor ([Formula: see text] of visual QS periods correctly detected by CLASS) and Misclassification Factor ([Formula: see text] of CLASS-detected QS periods that are misclassified). CLASS performance proved optimal across recordings at 31-38 weeks (median [Formula: see text], median MF 0-0.25, median Sensitivity 0.93-1.0, and median Specificity 0.80-0.91 across this age range), with minimal misclassifications at 35-36 weeks (median [Formula: see text]). To illustrate the potential of CLASS in facilitating clinical research, normal maturational trends over PMA were derived from CLASS-estimated QS periods, visual QS estimates, and nonstate specific periods (containing QS and non-QS) in the EEG recording. CLASS QS trends agreed with those from visual QS, with both showing stronger correlations than nonstate specific trends. This highlights the benefit of automated QS detection for exploring brain maturation.
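
    Sensitivity and specificity for a detector like CLASS can be computed from labelled data. The sketch below scores individual epochs against visual labels, a simplification of the paper's period-based Detection and Misclassification Factors; the label vectors are invented:

```python
def staging_metrics(visual, detected):
    # Epoch-wise agreement between visual QS labels and automated
    # detections (1 = QS, 0 = non-QS).
    tp = sum(v and d for v, d in zip(visual, detected))
    tn = sum((not v) and (not d) for v, d in zip(visual, detected))
    fp = sum((not v) and d for v, d in zip(visual, detected))
    fn = sum(v and (not d) for v, d in zip(visual, detected))
    return tp / (tp + fn), tn / (tn + fp)

visual   = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1]  # two visual QS periods
detected = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1]  # one epoch missed
sens, spec = staging_metrics(visual, detected)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

    Period-level factors would additionally ask what fraction of each visual QS period is covered by a detection, and what fraction of each detected period falls outside visual QS.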

  7. Assessment of the relative error in the automation task by sessile drop method

    Directory of Open Access Journals (Sweden)

    T. О. Levitskaya

    2015-11-01

    Full Text Available Assessment of the relative error in automating the sessile drop method. Further development of the sessile drop method is directly related to new techniques and specially developed algorithms enabling automatic computer calculation of surface properties. Improving the method's mathematical apparatus, transforming the drop-contour equation into a form suitable for computation, automating the drop-surface calculation, and analysing the relative errors in the calculation of surface tension are all relevant and important for experimental determinations. The relative error of the surface tension measurement, as well as the error caused by the drop's ellipticity in plan, were determined as part of automating the sessile drop method. It should be noted that if the drop's maximum diameter (l) is large, or if the ratio of l to the drop height above the equatorial diameter (h) is large, the relative error in measuring surface tension by the sessile drop method depends little on the equatorial diameter or on the ellipticity of the drop. In this case, the accuracy of determining the surface tension varies from 1.0 to 0.5%. At smaller values the drop's ellipticity begins to affect the relative error of the surface tension (from 1.2 to 0.8%), but in this case the ellipticity itself is smaller. Therefore, in subsequent experiments, we used larger drops. On the basis of this assessment of the relative error in determining the liquid surface tension by the sessile drop method, tables have been compiled showing the measurement accuracy of the drop parameters (h and l) required to achieve the overall relative error. Previously, the surface tension was calculated with a relative error in the range of 2-3%
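
    The error analysis described here amounts to propagating the measurement uncertainties of l and h into the surface tension. A numerical sketch using first-order finite differences follows; the placeholder dependence σ(l, h) = l²/h is invented for illustration and is not the sessile-drop working equation:

```python
def relative_error(f, l, h, dl, dh):
    # Propagate uncertainties dl, dh of the drop's maximum diameter l and
    # its height above the equator h into the relative error of f(l, h),
    # via first-order finite-difference partial derivatives.
    eps = 1e-6
    df_dl = (f(l + eps, h) - f(l - eps, h)) / (2 * eps)
    df_dh = (f(l, h + eps) - f(l, h - eps)) / (2 * eps)
    df = ((df_dl * dl) ** 2 + (df_dh * dh) ** 2) ** 0.5
    return df / f(l, h)

def surface_tension(l, h):
    # Placeholder dependence for illustration only -- NOT the real
    # sessile-drop working equation.
    return l * l / h

err = relative_error(surface_tension, 6.0, 2.0, 0.03, 0.01)
print(f"relative error: {100 * err:.1f}%")
```

    Tabulating this propagated error over a grid of (l, h) values is essentially how the accuracy-requirement tables mentioned in the abstract can be produced.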

  8. Automating Flood Hazard Mapping Methods for Near Real-time Storm Surge Inundation and Vulnerability Assessment

    Science.gov (United States)

    Weigel, A. M.; Griffin, R.; Gallagher, D.

    2015-12-01

    Storm surge has enough destructive power to damage buildings and infrastructure, erode beaches, and threaten human life across large geographic areas, hence posing the greatest threat of all the hurricane hazards. The United States Gulf of Mexico has proven vulnerable to hurricanes as it has been hit by some of the most destructive hurricanes on record. With projected rises in sea level and increases in hurricane activity, there is a need to better understand the associated risks for disaster mitigation, preparedness, and response. GIS has become a critical tool in enhancing disaster planning, risk assessment, and emergency response by communicating spatial information through a multi-layer approach. However, there is a need for a near real-time method of identifying areas with a high risk of being impacted by storm surge. Research was conducted alongside Baron, a private industry weather enterprise, to facilitate automated modeling and visualization of storm surge inundation and vulnerability on a near real-time basis. This research successfully automated current flood hazard mapping techniques using a GIS framework written in a Python programming environment, and displayed resulting data through an Application Program Interface (API). Data used for this methodology included high resolution topography, NOAA Probabilistic Surge model outputs parsed from Rich Site Summary (RSS) feeds, and the NOAA Census tract level Social Vulnerability Index (SoVI). The development process required extensive data processing and management to provide high resolution visualizations of potential flooding and population vulnerability in a timely manner. The accuracy of the developed methodology was assessed using Hurricane Isaac as a case study, which through a USGS and NOAA partnership, contained ample data for statistical analysis. 
This research successfully created a fully automated, near real-time method for mapping high resolution storm surge inundation and vulnerability for the
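
    At its simplest, mapping surge inundation from a DEM is a threshold ("bathtub") operation; real pipelines like the one described also parse the probabilistic-surge RSS feeds and enforce hydrologic connectivity. A toy elevation grid illustrates the core step (all values are invented):

```python
def inundation_mask(dem, surge_height_m):
    # Flag DEM cells at or below the forecast surge height as inundated.
    # A bathtub-model simplification: no connectivity to the coast is
    # checked, so isolated low-lying cells are also flagged.
    return [[1 if z <= surge_height_m else 0 for z in row] for row in dem]

dem = [
    [0.2, 0.8, 1.5],
    [0.5, 1.2, 2.4],
    [1.1, 2.0, 3.1],
]  # cell elevations in metres (synthetic)
mask = inundation_mask(dem, 1.2)
flooded = sum(map(sum, mask))
print(f"{flooded} of 9 cells inundated at 1.2 m surge")
```

    Intersecting such a mask with census-tract vulnerability layers (e.g. SoVI) is then a zonal-statistics step, which is the kind of operation the automated GIS framework chains together.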

  9. Assessment for Operator Confidence in Automated Space Situational Awareness and Satellite Control Systems

    Science.gov (United States)

    Gorman, J.; Voshell, M.; Sliva, A.

    2016-09-01

    The United States is highly dependent on space resources to support military, government, commercial, and research activities. Satellites operate at great distances, observation capacity is limited, and operator actions and observations can be significantly delayed. Safe operations require support systems that provide situational understanding, enhance decision making, and facilitate collaboration between human operators and system automation both in-the-loop and on-the-loop. Joint cognitive systems engineering (JCSE) provides a rich set of methods for analyzing and informing the design of complex systems that include both human decision-makers and autonomous elements as coordinating teammates. While JCSE-based systems can enhance a system analyst's understanding of both existing and new system processes, JCSE activities typically occur outside of traditional systems engineering (SE) methods, providing sparse guidance about how systems should be implemented. In contrast, the Joint Directors of Laboratories (JDL) information fusion model and extensions, such as the Dual Node Network (DNN) technical architecture, provide the means to divide and conquer such engineering and implementation complexity, but are loosely coupled to specialized organizational contexts and needs. We previously described how Dual Node Decision Wheels (DNDW) extend the DNN to integrate JCSE analysis and design with the practicalities of system engineering and implementation using the DNN. Insights from Rasmussen's JCSE Decision Ladders align system implementation with organizational structures and processes. In the current work, we present a novel approach to assessing system performance based on patterns occurring in operational decisions that are documented by JCSE processes as traces in a decision ladder. 
In this way, system assessment is closely tied not just to system design, but the design of the joint cognitive system that includes human operators, decision-makers, information systems, and

  10. Advancing haemostasis automation--successful implementation of robotic centrifugation and sample processing in a tertiary service hospital.

    Science.gov (United States)

    Sédille-Mostafaie, Nazanin; Engler, Hanna; Lutz, Susanne; Korte, Wolfgang

    2013-06-01

    Laboratories today face increasing pressure to automate operations due to increasing workloads and the need to reduce expenditure. Few studies to date have focussed on the laboratory automation of preanalytical coagulation specimen processing. In the present study, we examined whether a clinical chemistry automation protocol meets the preanalytical requirements for coagulation analyses. During the implementation of laboratory automation, we began to operate a pre- and postanalytical automation system. The preanalytical unit processes blood specimens for chemistry, immunology and coagulation by automated specimen processing. As the production of platelet-poor plasma is highly dependent on optimal centrifugation, we examined specimen handling under different centrifugation conditions in order to produce optimal platelet-poor plasma specimens. To this end, manually processed models centrifuged at 1500 g for 5 and 20 min were compared to an automated centrifugation model at 3000 g for 7 min. For analytical assays that are performed frequently enough to be targets for full automation, Passing-Bablok regression analysis showed close agreement between the different centrifugation methods, with correlation coefficients between 0.98 and 0.99 and a bias between -5% and +6%. For seldom-performed assays that do not mandate full automation, Passing-Bablok regression analysis showed acceptable to poor agreement between the different centrifugation methods. A full automation solution is suitable and can be recommended for frequent haemostasis testing.
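
    The Passing-Bablok analysis used above is a non-parametric regression for comparing two measurement methods. A minimal sketch of the estimator (the shifted median of all pairwise slopes), run on hypothetical data rather than the study's measurements, might look like:

    ```python
    import numpy as np

    def passing_bablok(x, y):
        """Passing-Bablok regression for method comparison.

        Returns (slope, intercept): the slope is the shifted median of all
        pairwise slopes; the intercept is the median of y - slope * x.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        slopes = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                dx, dy = x[j] - x[i], y[j] - y[i]
                if dx != 0 and dy / dx != -1:   # slopes of exactly -1 are discarded
                    slopes.append(dy / dx)
        slopes = np.sort(slopes)
        k = int(np.sum(slopes < -1))            # offset compensating negative slopes
        m = len(slopes)
        if m % 2:
            slope = slopes[(m + 1) // 2 + k - 1]
        else:
            slope = 0.5 * (slopes[m // 2 + k - 1] + slopes[m // 2 + k])
        intercept = np.median(y - slope * x)
        return slope, intercept

    # Exact linear data recovers slope 2 and intercept 1
    slope, intercept = passing_bablok([1, 2, 3, 4, 5], [3, 5, 7, 9, 11])
    ```

    A slope near 1 and an intercept near 0 indicate the absence of proportional and constant bias between the two centrifugation methods, which is how the agreement above is read.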

  11. Assessing Exhaustiveness of Stochastic Sampling for Integrative Modeling of Macromolecular Structures.

    Science.gov (United States)

    Viswanath, Shruthi; Chemmama, Ilan E; Cimermancic, Peter; Sali, Andrej

    2017-12-05

    Modeling of macromolecular structures involves structural sampling guided by a scoring function, resulting in an ensemble of good-scoring models. By necessity, the sampling is often stochastic, and must be exhaustive at a precision sufficient for accurate modeling and assessment of model uncertainty. Therefore, the very first step in analyzing the ensemble is an estimation of the highest precision at which the sampling is exhaustive. Here, we present an objective and automated method for this task. As a proxy for sampling exhaustiveness, we evaluate whether two independently and stochastically generated sets of models are sufficiently similar. The protocol includes testing 1) convergence of the model score, 2) whether model scores for the two samples were drawn from the same parent distribution, 3) whether each structural cluster includes models from each sample proportionally to its size, and 4) whether there is sufficient structural similarity between the two model samples in each cluster. The evaluation also provides the sampling precision, defined as the smallest clustering threshold that satisfies the third, most stringent test. We validate the protocol with the aid of enumerated good-scoring models for five illustrative cases of binary protein complexes. Passing the proposed four tests is necessary, but not sufficient for thorough sampling. The protocol is general in nature and can be applied to the stochastic sampling of any set of models, not just structural models. In addition, the tests can be used to stop stochastic sampling as soon as exhaustiveness at desired precision is reached, thereby improving sampling efficiency; they may also help in selecting a model representation that is sufficiently detailed to be informative, yet also sufficiently coarse for sampling to be exhaustive. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
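
    Test 2 of the protocol, checking whether the scores of two independent runs come from the same parent distribution, is naturally expressed as a two-sample Kolmogorov-Smirnov test. A sketch on simulated scores (assuming SciPy is available; the data are hypothetical, not from the paper):

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    # Hypothetical model scores from two independent stochastic sampling runs
    scores_run1 = rng.normal(loc=-120.0, scale=5.0, size=500)
    scores_run2 = rng.normal(loc=-120.0, scale=5.0, size=500)

    # Test 2: were the two score sets drawn from the same parent distribution?
    stat, p_value = ks_2samp(scores_run1, scores_run2)
    same_parent = p_value > 0.05   # failing to reject is consistent with exhaustiveness
    ```

    Passing this test is necessary but not sufficient; the structural tests (3 and 4) then compare the clustered models themselves, not just their scores.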

  12. Reducing the cost of semi-automated in-gel tryptic digestion and GeLC sample preparation for high-throughput proteomics.

    Science.gov (United States)

    Ruelcke, Jayde E; Loo, Dorothy; Hill, Michelle M

    2016-10-21

    Peptide generation by trypsin digestion is typically the first step in mass spectrometry-based proteomics experiments, including 'bottom-up' discovery and targeted proteomics using multiple reaction monitoring. Manual tryptic digest and the subsequent clean-up steps can add variability even before the sample reaches the analytical platform. While specialized filter plates and tips have been designed for automated sample processing, the specialty reagents required may not be accessible or feasible due to their high cost. Here, we report a lower-cost semi-automated protocol for in-gel digestion and GeLC using standard 96-well microplates. Further cost savings were realized by re-using reagent tips with optimized sample ordering. To evaluate the methodology, we compared a simple mixture of 7 proteins and a complex cell-lysate sample. The results across three replicates showed that our semi-automated protocol had performance equal to or better than a manual in-gel digestion with respect to replicate variability and level of contamination. In this paper, we also provide the Agilent Bravo method file, which can be adapted to other liquid handlers. The simplicity, reproducibility, and cost-effectiveness of our semi-automated protocol make it ideal for routine in-gel and GeLC sample preparations, as well as high throughput processing of large clinical sample cohorts. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Development of a methodology for automated assessment of the quality of digitized images in mammography

    International Nuclear Information System (INIS)

    Santana, Priscila do Carmo

    2010-01-01

    The process of evaluating the quality of radiographic images in general, and mammography images in particular, can be made much more accurate, practical and fast with the help of computer analysis tools. The purpose of this study is to develop a computational methodology to automate the process of assessing the quality of mammography images through digital image processing (DIP) techniques, using an existing image processing environment (ImageJ). With the application of DIP techniques it was possible to extract geometric and radiometric characteristics of the evaluated images. The evaluated parameters include spatial resolution, high-contrast detail, low-contrast threshold, linear detail of low contrast, tumor masses, contrast ratio and background optical density. The results obtained by this method were compared with the results of the visual evaluations performed by the Health Surveillance of Minas Gerais. This comparison demonstrated that the automated methodology is a promising alternative for reducing or eliminating the subjectivity present in the visual assessment methodology currently in use. (author)

  14. A novel, fully-automated, chemiluminescent assay for the detection of 1,25-dihydroxyvitamin D in biological samples.

    Science.gov (United States)

    Valcour, Andre; Zierold, Claudia; Podgorski, Angela L; Olson, Gregory T; Wall, John V; DeLuca, Hector F; Bonelli, Fabrizio

    2016-11-01

    1,25-Dihydroxyvitamin D (1,25-(OH)2D), the hormonal form of vitamin D, is difficult to measure because of its low circulating levels (pg/mL) and its similarity to more abundant metabolites. Here, a fully-automated chemiluminescent assay that accurately and precisely measures 1,25-(OH)2D is described. The novel 1,25-(OH)2D assay was conceived based on four pillars: (1) the vitamin D receptor's (VDR) ligand binding domain (LBD) as a capture molecule; (2) reaction conditions wherein 1,25-(OH)2D favors binding to the LBD over the vitamin D binding protein; (3) exploitation of the liganded LBD's conformational change; (4) a monoclonal antibody specific to the liganded LBD. This specific, conformational, sandwich approach, unique for automated measurement of haptens, is superior to more cumbersome, conventional competitive formats. Accuracy of the 1,25-(OH)2D assay was corroborated by its alignment against LC-MS/MS, with fitted Deming regression equations of y = 0.98x + 1.93 (r = 0.92) and y = 1.07x + 3.77 (r = 0.94) for different methods from Endocrine Sciences, Laboratory Corporation of America® and the University of Washington, respectively. Good analytical precision was manifested by its low estimated limit of quantitation (1.57 pg/mL), average intra-assay imprecision (3.5% CV; range 1.1-4.7%), and average inter-assay imprecision (4.5% CV; range 3.4-7.2%). Expected and measured recovery values were congruent (93.4% mean). The novel 1,25-(OH)2D method exhibited excellent correlation with well-validated LC-MS/MS assays from two laboratories. Significantly, its 65 min turnaround time is quicker, and its sample volume (75 μL) smaller, than current methods. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
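
    Deming regression, used above to align the assay with LC-MS/MS, fits a line while allowing measurement error in both methods. A minimal sketch (the error-variance ratio delta = 1 is an assumption, and the demo data are illustrative, not the study's):

    ```python
    import numpy as np

    def deming(x, y, delta=1.0):
        """Deming regression: errors-in-both-variables fit for method comparison.

        delta is the ratio of y-error variance to x-error variance
        (delta = 1.0 assumes both methods are equally noisy).
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        xbar, ybar = x.mean(), y.mean()
        sxx = np.sum((x - xbar) ** 2)
        syy = np.sum((y - ybar) ** 2)
        sxy = np.sum((x - xbar) * (y - ybar))
        slope = (syy - delta * sxx +
                 np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
        intercept = ybar - slope * xbar
        return slope, intercept

    # Exact linear data y = 2x + 3 recovers slope 2, intercept 3
    s, b = deming([1, 2, 3, 4], [5, 7, 9, 11])
    ```

    Unlike ordinary least squares, the fitted slope is not biased toward zero by noise in the reference (LC-MS/MS) measurements, which is why Deming fits are conventional in assay comparisons.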

  15. Sampling theory and automated simulations for vertical sections, applied to human brain.

    Science.gov (United States)

    Cruz-Orive, L M; Gelšvartas, J; Roberts, N

    2014-02-01

    In recent years, there have been substantial developments in both magnetic resonance imaging techniques and automatic image analysis software. The purpose of this paper is to develop stereological image sampling theory (i.e. unbiased sampling rules) that can be used by image analysts for estimating geometric quantities such as surface area and volume, and to illustrate its implementation. The methods will ideally be applied automatically on segmented, properly sampled 2D images - although convenient manual application is always an option - and they are of wide applicability in many disciplines. In particular, the vertical sections design to estimate surface area is described in detail and applied to estimate the area of the pial surface and of the boundary between cortex and underlying white matter (i.e. subcortical surface area). For completeness, cortical volume and mean cortical thickness are also estimated. The aforementioned surfaces were triangulated in 3D with the aid of FreeSurfer software, which provided accurate surface area measures that served as gold standards. Furthermore, software was developed to produce digitized trace curves of the triangulated target surfaces automatically from virtual sections. From such traces, a new method (called the 'lambda method') is presented to estimate surface area automatically. In addition, with the new software, intersections could be counted automatically between the relevant surface traces and a cycloid test grid for the classical design. This capability, together with the aforementioned gold standard, enabled us to thoroughly check the performance and the variability of the different estimators by Monte Carlo simulations for studying the human brain. In particular, new methods are offered to split the total error variance into the orientations, sectioning and cycloid components. The latter prediction was hitherto unavailable; one is proposed here and checked by way of simulations on a given set of digitized
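
    The classical vertical-sections design mentioned above combines intersection counts on a cycloid test grid (surface density S_V = 2·I_L) with a Cavalieri volume estimate. A schematic implementation, in which the section spacing T, the grid area per point a_p, and the cycloid length per point l_p are illustrative parameters rather than values from the paper:

    ```python
    def cavalieri_volume(points_per_section, T, a_per_point):
        """Cavalieri estimator: V = T * a_p * (total points hitting the structure)."""
        return T * a_per_point * sum(points_per_section)

    def vertical_sections_surface(intersections_per_section, points_per_section,
                                  T, a_per_point, l_per_point):
        """Surface area from vertical sections with a cycloid test grid.

        S_V = 2 * I / L (intersections per unit cycloid length within the
        reference space), scaled by the Cavalieri reference volume: S = S_V * V.
        """
        total_I = sum(intersections_per_section)
        total_P = sum(points_per_section)
        L = l_per_point * total_P          # total cycloid length in the structure
        S_V = 2.0 * total_I / L
        V = cavalieri_volume(points_per_section, T, a_per_point)
        return S_V * V

    # Illustrative numbers: 2 sections, a_p = 2 mm^2 and l_p = 4 mm per grid point
    S_est = vertical_sections_surface([10, 10], [5, 5], T=1.0,
                                      a_per_point=2.0, l_per_point=4.0)
    ```

    Because both S_V and V are estimated from the same point counts, the point total cancels and the estimator reduces to 2·T·a_p·ΣI/l_p; keeping the two factors separate mirrors how the counts are actually collected on sections.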

  16. A study of automated self-assessment in a primary care student health centre setting.

    Science.gov (United States)

    Poote, Aimee E; French, David P; Dale, Jeremy; Powell, John

    2014-04-01

    We evaluated the advice given by a prototype self-assessment triage system in a university student health centre. Students attending the health centre with a new problem used the automated self-assessment system prior to a face-to-face consultation with the general practitioner (GP). The system's rating of urgency was available to the GP, and following the consultation, the GP recorded their own rating of the urgency of the patient's presentation. Full data were available for 154 of the 207 consultations. Perfect agreement, where both the GP and the self-assessment system selected the same category of advice, occurred in 39% of consultations. The association between the GP assessment and the self-assessment rankings of urgency was low but significant (rho = 0.19, P = 0.016). The self-assessment system tended to be risk averse compared to the GP assessments, with advice for a more urgent level of care seeking being recommended in 86 consultations (56%) and less urgent advice in only 8 (5%). This difference in assessment of urgency was statistically significant. Although the self-assessment system was more risk averse than the GPs, which resulted in a high proportion of patients being triaged as needing emergency or immediate care, it successfully identified a proportion of patients who were felt by the GP to have a self-limiting condition that did not need a consultation. In its prototype form, the self-assessment system was not a replacement for clinician assessment and further refinement is necessary.

  17. Standardizing measurement, sampling and reporting for public exposure assessments

    Energy Technology Data Exchange (ETDEWEB)

    Rochedo, Elaine R.R. [Instituto de Radioprotecao e Dosimetria, Comissao Nacional de Energia Nuclear, Av. Salvador Allende s/No. CEP 22780-160 Rio de Janeiro, RJ (Brazil)], E-mail: elaine@ird.gov.br

    2008-11-15

    UNSCEAR assesses worldwide public exposure from natural and man-made sources of ionizing radiation based on information submitted to UNSCEAR by United Nations Member States and from peer reviewed scientific literature. These assessments are used as a basis for radiation protection programs of international and national regulatory and research organizations. Although UNSCEAR describes its assessment methodologies, the data are based on various monitoring approaches. In order to reduce uncertainties and improve confidence in public exposure assessments, it would be necessary to harmonize the methodologies used for sampling, measuring and reporting of environmental results.

  18. Campylobacter in Broiler Chicken and Broiler Meat in Sri Lanka: Influence of Semi-Automated vs. Wet Market Processing on Campylobacter Contamination of Broiler Neck Skin Samples

    Directory of Open Access Journals (Sweden)

    Kottawattage S. A. Kottawatta

    2017-11-01

    Full Text Available Broiler meat can become contaminated with Campylobacter of intestinal origin during processing. The present study aimed to identify the prevalence of Campylobacter in broiler flocks and meat contamination at retail shops, and to determine the influence of semi-automated and wet market processing on Campylobacter contamination of neck skin samples. Samples were collected from semi-automated plants (n = 102) and wet markets (n = 25). From each batch of broilers, pooled caecal samples and neck skin samples were tested for Campylobacter. Broiler meat purchased from retail outlets (n = 37) was also tested. The prevalence of Campylobacter-colonized broiler flocks was 67%. The contamination of meat at retail was 59%. Semi-automated and wet market processing contaminated broiler neck skins at levels of 27.4% and 48%, respectively. When Campylobacter-free broiler flocks were processed in semi-automated facilities, 15% (5/33) of neck skin samples became contaminated by the end of processing, whereas 25% (2/8) became contaminated after wet market processing. Characterization of isolates revealed a higher proportion of C. coli compared to C. jejuni. Higher proportions of isolates were resistant to important antimicrobials. This study shows the importance of Campylobacter in the poultry industry in Sri Lanka and the need for controlling antimicrobial resistance.

  19. Campylobacter in Broiler Chicken and Broiler Meat in Sri Lanka: Influence of Semi-Automated vs. Wet Market Processing on Campylobacter Contamination of Broiler Neck Skin Samples.

    Science.gov (United States)

    Kottawatta, Kottawattage S A; Van Bergen, Marcel A P; Abeynayake, Preeni; Wagenaar, Jaap A; Veldman, Kees T; Kalupahana, Ruwani S

    2017-11-29

    Broiler meat can become contaminated with Campylobacter of intestinal origin during processing. The present study aimed to identify the prevalence of Campylobacter in broiler flocks and meat contamination at retail shops, and to determine the influence of semi-automated and wet market processing on Campylobacter contamination of neck skin samples. Samples were collected from semi-automated plants (n = 102) and wet markets (n = 25). From each batch of broilers, pooled caecal samples and neck skin samples were tested for Campylobacter. Broiler meat purchased from retail outlets (n = 37) was also tested. The prevalence of Campylobacter-colonized broiler flocks was 67%. The contamination of meat at retail was 59%. Semi-automated and wet market processing contaminated broiler neck skins at levels of 27.4% and 48%, respectively. When Campylobacter-free broiler flocks were processed in semi-automated facilities, 15% (5/33) of neck skin samples became contaminated by the end of processing, whereas 25% (2/8) became contaminated after wet market processing. Characterization of isolates revealed a higher proportion of C. coli compared to C. jejuni. Higher proportions of isolates were resistant to important antimicrobials. This study shows the importance of Campylobacter in the poultry industry in Sri Lanka and the need for controlling antimicrobial resistance.

  1. The Upgrade Programme for the Structural Biology beamlines at the European Synchrotron Radiation Facility – High throughput sample evaluation and automation

    International Nuclear Information System (INIS)

    Theveneau, P; Baker, R; Barrett, R; Beteva, A; Bowler, M W; Carpentier, P; Caserotto, H; Sanctis, D de; Dobias, F; Flot, D; Guijarro, M; Giraud, T; Lentini, M; Leonard, G A; Mattenet, M; McSweeney, S M; Morawe, C; Nurizzo, D; McCarthy, A A; Nanao, M

    2013-01-01

    Automation and advances in technology are the key elements in addressing the steadily increasing complexity of Macromolecular Crystallography (MX) experiments. Much of this complexity is due to the inter- and intra-crystal heterogeneity in diffraction quality often observed for crystals of multi-component macromolecular assemblies or membrane proteins. Such heterogeneity makes high-throughput sample evaluation an important and necessary tool for increasing the chances of a successful structure determination. The introduction at the ESRF of automatic sample changers in 2005 dramatically increased the number of samples that were tested for diffraction quality. This 'first generation' of automation, coupled with advances in software aimed at optimising data collection strategies in MX, resulted in a three-fold increase in the number of crystal structures elucidated per year using data collected at the ESRF. In addition, sample evaluation can be further complemented using small angle scattering experiments on the newly constructed bioSAXS facility on BM29 and the micro-spectroscopy facility (ID29S). The construction of a second generation of automated facilities on the MASSIF (Massively Automated Sample Screening Integrated Facility) beamlines will build on these advances and should provide a paradigm shift in how MX experiments are carried out, which will benefit the entire Structural Biology community.

  2. The Influence of Rater Effects in Training Sets on the Psychometric Quality of Automated Scoring for Writing Assessments

    Science.gov (United States)

    Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark

    2018-01-01

    Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…

  3. Automated assessment of diabetic retinopathy severity using content-based image retrieval in multimodal fundus photographs.

    Science.gov (United States)

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Bekri, Lynda; Daccache, Wissam; Roux, Christian; Cochener, Béatrice

    2011-10-21

    Recent studies on diabetic retinopathy (DR) screening in fundus photographs suggest that disagreements between algorithms and clinicians are now comparable to disagreements among clinicians. The purpose of this study is to (1) determine whether this observation also holds for automated DR severity assessment algorithms, and (2) show the potential value of such algorithms in clinical practice. A dataset of 85 consecutive DR examinations (168 eyes, 1176 multimodal eye fundus photographs) was collected at Brest University Hospital (Brest, France). Two clinicians with different experience levels determined DR severity in each eye, according to the International Clinical Diabetic Retinopathy Disease Severity (ICDRS) scale. Based on Cohen's kappa (κ) measurements, the performance of clinicians at assessing DR severity was compared to the performance of state-of-the-art content-based image retrieval (CBIR) algorithms from our group. When assessing DR severity in each patient, intraobserver agreement was κ = 0.769 for the most experienced clinician. Interobserver agreement between clinicians was κ = 0.526. Interobserver agreement between the most experienced clinician and the most advanced algorithm was κ = 0.592. Moreover, the most advanced algorithm was often able to predict agreements and disagreements between clinicians. Automated DR severity assessment algorithms, trained to imitate experienced clinicians, can be used to predict when young clinicians would agree or disagree with their more experienced colleagues. Such algorithms may thus be used in clinical practice to help validate or invalidate their diagnoses. CBIR algorithms, in particular, may also be used for pooling diagnostic knowledge among peers, with applications in training and coordination of clinicians' prescriptions.
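
    Cohen's kappa, the agreement statistic used throughout the study above, corrects raw percent agreement for agreement expected by chance. A minimal sketch on hypothetical severity grades (not the study's data):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        # Chance agreement from each rater's marginal category frequencies
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical ICDRS-style grades (0-3) from two raters
    grades_a = [0, 1, 2, 2, 3, 1, 0, 2]
    grades_b = [0, 1, 2, 3, 3, 1, 1, 2]
    kappa = cohens_kappa(grades_a, grades_b)   # ≈ 0.667
    ```

    Values near 0.5-0.6, like the interobserver agreements reported above, correspond to moderate agreement; κ = 1 is perfect agreement and κ = 0 is chance-level.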

  4. Automated microfluidic sample-preparation platform for high-throughput structural investigation of proteins by small-angle X-ray scattering

    DEFF Research Database (Denmark)

    Lafleur, Josiane P.; Snakenborg, Detlef; Nielsen, Søren Skou

    2011-01-01

    A new microfluidic sample-preparation system is presented for the structural investigation of proteins using small-angle X-ray scattering (SAXS) at synchrotrons. The system includes hardware and software features for precise fluidic control, sample mixing by diffusion, automated X-ray exposure...... control, UV absorbance measurements and automated data analysis. As little as 15 l of sample is required to perform a complete analysis cycle, including sample mixing, SAXS measurement, continuous UV absorbance measurements, and cleaning of the channels and X-ray cell with buffer. The complete analysis...... cycle can be performed in less than 3 min. Bovine serum albumin was used as a model protein to characterize the mixing efficiency and sample consumption of the system. The N2 fragment of an adaptor protein (p120-RasGAP) was used to demonstrate how the device can be used to survey the structural space...

  5. Food and feed safety assessment: the importance of proper sampling.

    Science.gov (United States)

    Kuiper, Harry A; Paoletti, Claudia

    2015-01-01

    The general principles for safety and nutritional evaluation of foods and feed and the potential health risks associated with hazardous compounds are described as developed by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) and further elaborated in the European Union-funded project Safe Foods. We underline the crucial role of sampling in foods/feed safety assessment. High quality sampling should always be applied to ensure the use of adequate and representative samples as test materials for hazard identification, toxicological and nutritional characterization of identified hazards, as well as for estimating quantitative and reliable exposure levels of foods/feed or related compounds of concern for humans and animals. The importance of representative sampling is emphasized through examples of risk analyses in different areas of foods/feed production. The Theory of Sampling (TOS) is recognized as the only framework within which to ensure accuracy and precision of all sampling steps involved in the field-to-fork continuum, which is crucial to monitor foods and feed safety. Therefore, TOS must be integrated in the well-established FAO/WHO risk assessment approach in order to guarantee a transparent and correct frame for the risk assessment and decision making process.

  6. The use of automated assessments in internet-based CBT: The computer will be with you shortly

    Directory of Open Access Journals (Sweden)

    Elizabeth C. Mason

    2014-10-01

    Full Text Available There is evidence from randomized controlled trials that internet-based cognitive behavioral therapy (iCBT) is efficacious in the treatment of anxiety and depression, and recent research demonstrates the effectiveness of iCBT in routine clinical care. The aims of this study were to implement and evaluate a new pathway by which patients could access online treatment by completing an automated assessment, rather than seeing a specialist health professional. We compared iCBT treatment outcomes in patients who received an automated pre-treatment questionnaire assessment with patients who were assessed by a specialist psychiatrist prior to treatment. Participants were treated as part of routine clinical care and were therefore not randomized. The results showed that symptoms of anxiety and depression decreased significantly with iCBT, and that the mode of assessment did not affect outcome. That is, a pre-treatment assessment by a psychiatrist conferred no additional treatment benefits over an automated assessment. These findings suggest that iCBT is effective in routine care and may be implemented with an automated assessment. By providing wider access to evidence-based interventions and reducing waiting times, the use of iCBT within a stepped-care model is a cost-effective way to reduce the burden of disease caused by these common mental disorders.

  7. Development testing of the chemical analysis automation polychlorinated biphenyl standard analysis method during surface soils sampling at the David Witherspoon 1630 site

    International Nuclear Information System (INIS)

    Hunt, M.A.; Klatt, L.N.; Thompson, D.H.

    1998-02-01

    The Chemical Analysis Automation (CAA) project is developing standardized, software-driven, site-deployable robotic laboratory systems with the objective of lowering the per-sample analysis cost, decreasing sample turnaround time, and minimizing human exposure to hazardous and radioactive materials associated with DOE remediation projects. The first integrated system developed by the CAA project is designed to determine polychlorinated biphenyls (PCB) content in soil matrices. A demonstration and development testing of this system was conducted in conjunction with surface soil characterization activities at the David Witherspoon 1630 Site in Knoxville, Tennessee. The PCB system consists of five hardware standard laboratory modules (SLMs), one software SLM, the task sequence controller (TSC), and the human-computer interface (HCI). Four of the hardware SLMs included a four-channel Soxhlet extractor, a high-volume concentrator, a column cleanup, and a gas chromatograph. These SLMs performed the sample preparation and measurement steps within the total analysis protocol. The fifth hardware module was a robot that transports samples between the SLMs and delivers the required consumable supplies to the SLMs. The software SLM is an automated data interpretation module that receives raw data from the gas chromatograph SLM and analyzes the data to yield the analyte information. The TSC is a software system that provides the scheduling, management of system resources, and the coordination of all SLM activities. The HCI is a graphical user interface that presents the automated laboratory to the analyst in terms of the analytical procedures and methods. Human control of the automated laboratory is accomplished via the HCI. Sample information required for processing by the automated laboratory is entered through the HCI. Information related to the sample and the system status is presented to the analyst via graphical icons

  8. Fully automated breast density assessment from low-dose chest CT

    Science.gov (United States)

    Liu, Shuang; Margolies, Laurie R.; Xie, Yiting; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2017-03-01

    Breast cancer is the most common cancer diagnosed among US women and the second leading cause of cancer death [1]. Breast density is an independent risk factor for breast cancer and more than 25 states mandate its reporting to patients as part of the lay mammogram report [2]. Recent publications have demonstrated that breast density measured from low-dose chest CT (LDCT) correlates well with that measured from mammograms and MRIs [3,4], thereby providing valuable information for many women who have undergone LDCT but not recent mammograms. A fully automated framework for breast density assessment from LDCT is presented in this paper. The whole breast region is first segmented using an anatomy-orientated novel approach based on the propagation of muscle fronts for separating the fibroglandular tissue from the underlying muscles. The fibroglandular tissue regions are then identified from the segmented whole breast and the percentage density is calculated based on the volume ratio of the fibroglandular tissue to the local whole breast region. The breast region segmentation framework was validated with 1270 LDCT scans, with 96.1% satisfactory outcomes based on visual inspection. The density assessment was evaluated by comparing with BI-RADS density grades established by an experienced radiologist in 100 randomly selected LDCT scans of female subjects. The continuous breast density measurement was shown to be consistent with the reference subjective grading, with Spearman's rank correlation 0.91 (p-value < 0.001). After converting the continuous density to categorical grades, the automated density assessment was congruous with the radiologist's reading in 91% of cases.
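
    The percent-density step described above is a volume ratio between two segmentation masks. A schematic sketch on synthetic masks (the mask shapes, the 20% tissue fraction, and the categorical cut points are all illustrative assumptions, not the paper's method):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical binary masks standing in for the two segmentation outputs:
    # whole_breast marks the segmented breast region, fibroglandular the dense tissue.
    whole_breast = np.zeros((64, 64, 64), dtype=bool)
    whole_breast[8:56, 8:56, 8:56] = True
    fibroglandular = whole_breast & (rng.random((64, 64, 64)) < 0.2)

    # Percent density: volume ratio of fibroglandular tissue to the whole breast
    percent_density = 100.0 * fibroglandular.sum() / whole_breast.sum()

    # Map the continuous measure onto BI-RADS-like categorical grades
    # (illustrative cut points, not the clinical definition)
    cuts = [25, 50, 75]
    grade = sum(percent_density > c for c in cuts) + 1   # grades 1..4
    ```

    With voxel masks, the voxel size cancels out of the ratio, so percent density can be computed directly from voxel counts without converting to physical volumes.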

  9. Automated column liquid chromatographic determination of amoxicillin and cefadroxil in bovine serum and muscle tissue using on-line dialysis for sample preparation

    NARCIS (Netherlands)

    Snippe, N; van de Merbel, N C; Ruiter, F P; Steijger, O M; Lingeman, H; Brinkman, U A

    1994-01-01

    A fully automated method is described for the determination of amoxicillin and cefadroxil in bovine serum and muscle tissue. The method is based on the on-line combination of dialysis and solid-phase extraction for sample preparation, and column liquid chromatography with ultraviolet detection. In

  10. Automated on-line liquid-liquid extraction system for temporal mass spectrometric analysis of dynamic samples.

    Science.gov (United States)

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L

    2015-09-24

    Most real samples cannot directly be infused to mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid-liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053-2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h(-1)). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. Copyright © 2015 Elsevier B.V. All rights reserved.
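    The release-rate constants quoted above come from fitting exponential functions to the temporal extraction-MS profiles. A hedged sketch of such a fit on synthetic data (the first-order release model, parameter values, and noise level are illustrative assumptions, not the authors' exact procedure):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def release(t, c_max, k):
        # first-order release: concentration approaches c_max at rate k
        return c_max * (1.0 - np.exp(-k * t))

    # synthetic dissolution profile with k = 0.43 h^-1, as reported for ibuprofen
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 40)           # hours
    true_cmax, true_k = 2.0, 0.43        # mM, h^-1
    c = release(t, true_cmax, true_k) + rng.normal(0, 0.02, t.size)

    popt, pcov = curve_fit(release, t, c, p0=(1.0, 0.1))
    c_max_hat, k_hat = popt
    print(f"k = {k_hat:.2f} h^-1")
    ```

    The covariance matrix `pcov` returned by `curve_fit` provides the parameter uncertainty (e.g. the ±0.01 h⁻¹ on k reported in the abstract).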

  11. Assessing the accuracy of an inter-institutional automated patient-specific health problem list

    Directory of Open Access Journals (Sweden)

    Taylor Laurel

    2010-02-01

    Full Text Available Abstract Background Health problem lists are a key component of electronic health records and are instrumental in the development of decision-support systems that encourage best practices and optimal patient safety. Most health problem lists require initial clinical information to be entered manually, and few integrate information across care providers and institutions. This study assesses the accuracy of a novel approach to creating an inter-institutional automated health problem list in a computerized medical record (MOXXI) that integrates three sources of information for an individual patient: diagnostic codes from medical services claims from all treating physicians, therapeutic indications from electronic prescriptions, and single-indication drugs. Methods Data for this study were obtained from 121 general practitioners and all medical services provided for 22,248 of their patients. At the opening of a patient's file, all health problems detected through medical service utilization or single-indication drug use were flagged to the physician in the MOXXI system. Each newly arising health problem was presented as 'potential', and physicians were prompted to specify whether the health problem was valid (Y) or not (N), or whether they preferred to reassess its validity at a later time. Results A total of 263,527 health problems, representing 891 unique problems, were identified for the group of 22,248 patients. Medical services claims contributed the majority of problems identified (77%), followed by therapeutic indications from electronic prescriptions (14%) and single-indication drugs (9%). Physicians actively chose to assess 41.7% (n = 106,950) of health problems. Overall, 73% of the problems assessed were considered valid; 42% originated from medical service diagnostic codes, 11% from single-indication drugs, and 47% from prescription indications. Twelve percent of problems identified through other treating physicians were considered valid compared to 28

  12. ASSESSMENT OF PERFORMANCES OF VARIOUS MACHINE LEARNING ALGORITHMS DURING AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

    Full Text Available Automation of descriptive answer evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare their time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm for a scenario with a limited training set sample size. We evaluated the performances of Bayes, SVM, Logistic Regression, Random Forest, Decision Stump and Decision Tree algorithms. We confirmed SVM as the best-performing algorithm based on quantitative measurements of accuracy, kappa, training speed and prediction accuracy on the supplied test set.
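    The comparison above was run in LightSIDE; a rough scikit-learn analogue of benchmarking several supervised learners on a small training set is sketched below (the synthetic features stand in for extracted answer features and are purely illustrative):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    # a small, feature-based training set standing in for scored answer features
    X, y = make_classification(n_samples=120, n_features=10, random_state=0)

    models = {
        "NaiveBayes": GaussianNB(),
        "SVM": SVC(),
        "LogReg": LogisticRegression(max_iter=1000),
        "RandomForest": RandomForestClassifier(random_state=0),
        "DecisionTree": DecisionTreeClassifier(random_state=0),
    }
    # mean 5-fold cross-validated accuracy per model
    scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:12s} {s:.3f}")
    ```

    A fuller replication would also report Cohen's kappa and training time, the other criteria the authors used.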

  13. Automated high-capacity on-line extraction and bioanalysis of dried blood spot samples using liquid chromatography/high-resolution accurate mass spectrometry.

    Science.gov (United States)

    Oliveira, Regina V; Henion, Jack; Wickremsinhe, Enaksha R

    2014-11-30

    Pharmacokinetic data to support clinical development of pharmaceuticals are routinely obtained from liquid plasma samples. The plasma samples require frozen shipment and storage and are extracted off-line from the liquid chromatography/tandem mass spectrometry (LC/MS/MS) systems. In contrast, the use of dried blood spot (DBS) sampling is an attractive alternative in part due to its benefits in microsampling as well as simpler sample storage and transport. However, from a practical standpoint, sample extraction from DBS cards can be challenging as currently performed. The goal of this report was to integrate automated serial extraction of large numbers of DBS cards with on-line liquid chromatography/high-resolution accurate mass spectrometry (LC/HRAMS) bioanalysis. An automated system for direct DBS extraction coupled to LC/HRAMS was employed for the quantification of midazolam (MDZ) and α-hydroxymidazolam (α-OHMDZ) in human blood. The target analytes were directly extracted from the DBS cards onto an on-line chromatographic guard column followed by HRAMS detection. No additional sample treatment was required. The automated DBS LC/HRAMS method was developed and validated, based on the measurement at the accurate mass-to-charge ratio of the target analytes to ensure specificity for the assay. The automated DBS LC/HRAMS method analyzed a DBS sample within 2 min without the need for punching or additional off-line sample treatment. The fully automated analytical method was shown to be sensitive and selective over the concentration range of 5 to 2000 ng/mL. Intra- and inter-day precision and accuracy were less than 15% (less than 20% at the LLOQ). The validated method was successfully applied to measure MDZ and α-OHMDZ in an incurred human sample after a single 7.5 mg dose of MDZ. The direct DBS LC/HRAMS method demonstrated successful implementation of automated DBS extraction and bioanalysis for MDZ and α-OHMDZ. This approach has the potential to promote workload

  14. Automated adipose study for assessing cancerous human breast tissue using optical coherence tomography (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Yao, Xinwen; Chang, Ernest W.; Bin Amir, Syed A.; Hibshoosh, Hanina; Feldman, Sheldon; Hendon, Christine P.

    2017-02-01

    Breast cancer is the third leading cause of death in women in the United States. In human breast tissue, adipose cells are infiltrated or replaced by cancer cells during the development of a breast tumor. Therefore, an adipose map can be an indicator for identifying cancerous regions. We developed an automated classification method to generate an adipose map within the human breast. To facilitate the automated classification, we first masked the B-scans from OCT volumes by comparing the signal-to-noise ratio with a threshold. Then, each image was divided into multiple blocks of 30 pixels by 30 pixels. In each block, we extracted texture features such as local standard deviation, entropy, homogeneity, and coarseness. The features of each block were input to a probabilistic model, a relevance vector machine (RVM) trained prior to the experiment, to classify tissue types. For each block within the B-scan, the RVM identified whether the region contained adipose tissue. We calculated the adipose ratio as the number of blocks identified as adipose over the total number of blocks within the B-scan. We obtained OCT images from patients (n = 19) at Columbia Medical Center. We automatically generated adipose maps from 24 B-scans, including normal samples (n = 16) and cancerous samples (n = 8). We found that the adipose regions show an isolated pattern in cancerous tissue and a clustered pattern in normal tissue. Moreover, the adipose ratio in normal tissue (52.30 ± 29.42%) was higher than that in cancerous tissue (12.41 ± 10.07%).
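    The block-wise pipeline (tile the B-scan, extract texture features per block, classify, then take the fraction of adipose blocks) can be sketched as follows. This is not the authors' code: a simple threshold on local standard deviation stands in for the trained RVM, and all names and toy data are assumptions.

    ```python
    import numpy as np

    def block_features(img, block=30):
        """Split a B-scan into block x block tiles and compute simple
        texture features (local standard deviation and grey-level entropy)."""
        h, w = img.shape
        feats = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                tile = img[i:i + block, j:j + block]
                hist, _ = np.histogram(tile, bins=32)
                p = hist[hist > 0] / hist.sum()
                entropy = -np.sum(p * np.log2(p))
                feats.append((tile.std(), entropy))
        return np.array(feats)

    def adipose_ratio(img, classify, block=30):
        """Fraction of blocks flagged as adipose by `classify`
        (a stand-in for the trained RVM in the paper)."""
        labels = np.array([classify(f) for f in block_features(img, block)])
        return labels.mean()

    rng = np.random.default_rng(1)
    scan = rng.normal(0, 1, (90, 90))
    scan[:30, :] *= 0.1  # low-variance band mimicking adipose texture
    ratio = adipose_ratio(scan, classify=lambda f: f[0] < 0.5)  # 3 of 9 blocks
    ```

    In the paper the per-block decision is probabilistic (RVM) rather than a hard threshold, but the adipose-ratio bookkeeping is the same.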

  15. Mutanalyst, an online tool for assessing the mutational spectrum of epPCR libraries with poor sampling

    DEFF Research Database (Denmark)

    Ferla, Matteo

    2016-01-01

    , is presented, which not only automates the calculations but also estimates the errors involved. Specifically, the errors are calculated thanks to the complementarity of DNA, which means that a mutation has a complementary mutation on the other sequence. Additionally, in the case of determining the mean number...... of mutations per sequence, it does so by fitting to a Poisson distribution, which is more robust than calculating the average in light of the small sampling size. Conclusion: As a result of the added measures taking the small sample size into account, the user can better assess whether the library is satisfactory...
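    Mutanalyst's exact fitting procedure is not spelled out in this record; one common way to "fit to a Poisson distribution" rather than take a plain average is to least-squares fit the observed mutation-count frequencies to the Poisson pmf. A sketch under that assumption (function names and toy counts are invented):

    ```python
    import math
    import numpy as np
    from scipy.optimize import minimize_scalar

    def poisson_pmf(k, lam):
        # Poisson probability mass for an array of integer counts k
        return np.exp(-lam) * lam**k / np.array([math.factorial(int(i)) for i in k])

    def fit_poisson(counts):
        """Estimate the mean mutations per sequence by least-squares
        fitting observed count frequencies to the Poisson pmf."""
        ks, freq = np.unique(counts, return_counts=True)
        freq = freq / freq.sum()
        obj = lambda lam: np.sum((poisson_pmf(ks, lam) - freq) ** 2)
        return minimize_scalar(obj, bounds=(1e-6, 20), method="bounded").x

    # mutations observed per sequenced clone in a small (n = 10) sample
    counts = np.array([0, 1, 1, 2, 0, 1, 3, 2, 1, 0])
    lam_hat = fit_poisson(counts)
    ```

    With a complete, well-sampled histogram the fit and the plain average coincide; with poor sampling the distributional fit is less sensitive to a single outlying clone, which is the robustness argument made in the abstract.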

  16. a preliminary assessment of groundwater samples around a filling ...

    African Journals Online (AJOL)

    This paper is a preliminary assessment of groundwater samples around a filling station in Diobu area of Port ... 2- and relatively lower pH than the groundwater. Other authors who have carried out similar research in the same geological environment include Amojor (1986) and ...... Harikumar, P. S. and Jisha, T. S., 2010.

  17. A Preliminary Assessment of Groundwater Samples around a Filling ...

    African Journals Online (AJOL)

    This paper is a preliminary assessment of groundwater samples around a filling station in Diobu area of Port Harcourt for four years at intervals of two years with a view to determine the level of groundwater pollution. It examines the physiochemical, major ions and heavy metal aspect of groundwater quality around the study ...

  18. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  19. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    Science.gov (United States)

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  20. Quantitative Assessment of Mouse Mammary Gland Morphology Using Automated Digital Image Processing and TEB Detection.

    Science.gov (United States)

    Blacher, Silvia; Gérard, Céline; Gallez, Anne; Foidart, Jean-Michel; Noël, Agnès; Péqueux, Christel

    2016-04-01

    The assessment of rodent mammary gland morphology is widely used to study the molecular mechanisms driving breast development and to analyze the impact of various endocrine disruptors with putative pathological implications. In this work, we propose a methodology relying on fully automated digital image analysis, including image processing and quantification of the whole ductal tree as well as the terminal end buds. It allows both growth parameters and fine morphological glandular structures to be measured accurately and objectively. Mammary gland elongation was characterized by two parameters: the length and the epithelial area of the ductal tree. Ductal tree fine structure was characterized by: 1) branch end-point density, 2) branching density, and 3) branch length distribution. The proposed methodology was compared with quantification methods classically used in the literature. This procedure can be transposed to several software packages and thus be widely used by scientists studying rodent mammary gland morphology.
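    Branch end-points and branch points on a skeletonized ductal tree can be counted from local connectivity: on a one-pixel-wide binary skeleton, an end-point has exactly one 8-neighbour and a branch point has three or more. A minimal sketch of that counting step (not the authors' pipeline; the toy skeleton is invented):

    ```python
    import numpy as np

    def skeleton_points(skel):
        """Count end-points (exactly 1 neighbour) and branch points
        (3 or more neighbours) on a binary, one-pixel-wide skeleton."""
        padded = np.pad(skel.astype(int), 1)
        # 8-neighbour count for every pixel, via shifted copies
        nbrs = sum(np.roll(np.roll(padded, di, 0), dj, 1)
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))[1:-1, 1:-1]
        ends = np.logical_and(skel, nbrs == 1).sum()
        branches = np.logical_and(skel, nbrs >= 3).sum()
        return ends, branches

    # toy skeleton: a 'Y'-shaped ductal branch
    skel = np.zeros((7, 7), dtype=bool)
    skel[3, 0:4] = True                    # trunk
    skel[2, 4], skel[1, 5] = True, True    # upper branch
    skel[4, 4], skel[5, 5] = True, True    # lower branch
    ends, branches = skeleton_points(skel)  # 3 end-points, 1 branch point
    ```

    Dividing such counts by ductal-tree length or epithelial area gives densities of the kind the abstract lists (branch end-point density, branching density).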

  1. Control Performance Management in Industrial Automation Assessment, Diagnosis and Improvement of Control Loop Performance

    CERN Document Server

    Jelali, Mohieddine

    2013-01-01

    Control Performance Management in Industrial Automation provides a coherent and self-contained treatment of a group of methods and applications of burgeoning importance to the detection and solution of problems with control loops that are vital in maintaining product quality, operational safety, and efficiency of material and energy consumption in the process industries. The monograph deals with all aspects of control performance management (CPM), from controller assessment (minimum-variance-control-based and advanced methods), to detection and diagnosis of control loop problems (process non-linearities, oscillations, actuator faults), to the improvement of control performance (maintenance, re-design of loop components, automatic controller re-tuning). It provides a contribution towards the development and application of completely self-contained and automatic methodologies in the field. Moreover, within this work, many CPM tools have been developed that goes far beyond available CPM packages. Control Perform...

  2. Automated signal quality assessment of mobile phone-recorded heart sound signals.

    Science.gov (United States)

    Springer, David B; Brennan, Thomas; Ntusi, Ntobeko; Abdelrahman, Hassan Y; Zühlke, Liesl J; Mayosi, Bongani M; Tarassenko, Lionel; Clifford, Gari D

    Mobile phones, due to their audio processing capabilities, have the potential to facilitate the diagnosis of heart disease through automated auscultation. However, such a platform is likely to be used by non-experts, and hence, it is essential that such a device is able to automatically differentiate poor quality from diagnostically useful recordings since non-experts are more likely to make poor-quality recordings. This paper investigates the automated signal quality assessment of heart sound recordings performed using both mobile phone-based and commercial medical-grade electronic stethoscopes. The recordings, each 60 s long, were taken from 151 random adult individuals with varying diagnoses referred to a cardiac clinic and were professionally annotated by five experts. A mean voting procedure was used to compute a final quality label for each recording. Nine signal quality indices were defined and calculated for each recording. A logistic regression model for classifying binary quality was then trained and tested. The inter-rater agreement level for the stethoscope and mobile phone recordings was measured using Conger's kappa for multiclass sets and found to be 0.24 and 0.54, respectively. One-third of all the mobile phone-recorded phonocardiogram (PCG) signals were found to be of sufficient quality for analysis. The classifier was able to distinguish good- and poor-quality mobile phone recordings with 82.2% accuracy, and those made with the electronic stethoscope with an accuracy of 86.5%. We conclude that our classification approach provides a mechanism for substantially improving auscultation recordings by non-experts. This work is the first systematic evaluation of a PCG signal quality classification algorithm (using a separate test dataset) and assessment of the quality of PCG recordings captured by non-experts, using both a medical-grade digital stethoscope and a mobile phone.
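    The classification step described above (nine signal quality indices feeding a logistic regression model for binary quality) can be sketched with scikit-learn. The synthetic indices below are invented stand-ins; the paper's actual indices and data are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # synthetic stand-ins for the nine signal quality indices
    rng = np.random.default_rng(0)
    n = 300
    good = rng.normal(1.0, 0.3, (n, 9))   # indices for diagnostically useful PCGs
    poor = rng.normal(0.0, 0.3, (n, 9))   # indices for noisy recordings
    X = np.vstack([good, poor])
    y = np.r_[np.ones(n), np.zeros(n)]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    ```

    As in the paper, a held-out test set is essential here: reporting accuracy on the training recordings would overstate how well the classifier screens new users' recordings.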

  3. Bias Assessment of General Chemistry Analytes using Commutable Samples.

    Science.gov (United States)

    Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter

    2014-11-01

    Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years. Analytical bias may prevent this harmonisation. To determine if analytical bias is present when comparing methods, the use of commutable samples, or samples that have the same properties as the clinical samples routinely analysed, should be used as reference samples to eliminate the possibility of matrix effect. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt to determine not only country specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that of the 27 general chemistry analytes studied, 19 showed sufficiently small between method biases as to not prevent harmonisation of reference intervals. Application of evidence based approaches including the determination of analytical bias using commutable material is necessary when seeking to harmonise reference intervals.

  4. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
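    The effect of endpoint correlation on joint power can be illustrated by Monte Carlo. The sketch below uses a simplified z-test version of TOST on two correlated endpoint differences (the article derives the exact t-based power; all parameter values here are illustrative assumptions):

    ```python
    import numpy as np

    def joint_tost_power(n, margin=0.2, sd=0.25, rho=0.6, n_sim=20000, seed=0):
        """Monte Carlo power to pass TOST on BOTH endpoints (e.g. log-AUC
        and log-Cmax) when the endpoint differences are correlated
        bivariate normal with true difference zero (z-test sketch)."""
        rng = np.random.default_rng(seed)
        se = sd / np.sqrt(n)
        cov = se**2 * np.array([[1.0, rho], [rho, 1.0]])
        diffs = rng.multivariate_normal([0.0, 0.0], cov, size=n_sim)
        z = 1.645  # approximate one-sided 5% critical value
        pass_each = (diffs + margin) / se > z        # lower one-sided test
        pass_each &= (margin - diffs) / se > z       # upper one-sided test
        # joint power (both endpoints pass) and per-endpoint marginal powers
        return pass_each.all(axis=1).mean(), pass_each.mean(axis=0)

    joint, marginal = joint_tost_power(n=24)
    ```

    With positive correlation the joint power lies between the product of the marginal powers (the uncorrelated case) and their minimum, which is why the naive largest-per-endpoint sample size is conservative.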

  5. Automated Geospatial Watershed Assessment Tool (AGWA): Applications for Assessing the Impact of Urban Growth and the use of Low Impact Development Practices.

    Science.gov (United States)

    New tools and functionality have been incorporated into the Automated Geospatial Watershed Assessment Tool (AGWA) to assess the impact of urban growth and evaluate the effects of low impact development (LID) practices. AGWA (see: www.tucson.ars.ag.gov/agwa or http://www.epa.gov...

  6. Analytical and between-subject variation of thrombin generation measured by calibrated automated thrombography on plasma samples

    DEFF Research Database (Denmark)

    Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula

    2018-01-01

    BACKGROUND: The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore, investigated the analytical and between...

  7. An automated methodology for levodopa-induced dyskinesia: assessment based on gyroscope and accelerometer signals.

    Science.gov (United States)

    Tsipouras, Markos G; Tzallas, Alexandros T; Rigas, George; Tsouli, Sofia; Fotiadis, Dimitrios I; Konitsiotis, Spiros

    2012-06-01

    In this study, a methodology is presented for an automated levodopa-induced dyskinesia (LID) assessment in patients suffering from Parkinson's disease (PD) under real-life conditions. The methodology is based on the analysis of signals recorded from several accelerometers and gyroscopes, which are placed on the subjects' body while they were performing a series of standardised motor tasks as well as voluntary movements. Sixteen subjects were enrolled in the study. The recordings were analysed in order to extract several features and, based on these features, a classification technique was used for LID assessment, i.e. detection of LID symptoms and classification of their severity. The results were compared with the clinical annotation of the signals, provided by two expert neurologists. The analysis was performed related to the number and topology of sensors used; several different experimental settings were evaluated while a 10-fold stratified cross validation technique was employed in all cases. Moreover, several different classification techniques were examined. The ability of the methodology to be generalised was also evaluated using leave-one-patient-out cross validation. The sensitivity and positive predictive values (average for all LID severities) were 80.35% and 76.84%, respectively. The proposed methodology can be applied in real-life conditions since it can perform LID assessment in recordings which include various PD symptoms (such as tremor, dyskinesia and freezing of gait) of several motor tasks and random voluntary movements. Copyright © 2012 Elsevier B.V. All rights reserved.
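    The leave-one-patient-out evaluation mentioned above corresponds to grouped cross-validation, where all windows from one subject are held out together. A minimal scikit-learn sketch (synthetic features and labels; the sensor features and classifier choice in the paper are not reproduced):

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # synthetic feature windows from 16 subjects, labelled LID present/absent
    rng = np.random.default_rng(0)
    n = 320
    X = rng.normal(0, 1, (n, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # separable toy labels
    groups = np.repeat(np.arange(16), n // 16)      # one id per subject

    logo = LeaveOneGroupOut()
    scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                             X, y, cv=logo, groups=groups)
    print(scores.mean())
    ```

    Grouping by subject prevents windows from the same patient appearing in both training and test folds, which would otherwise inflate the reported sensitivity and positive predictive value.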

  8. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  9. Toward standardized quantitative image quality (IQ) assessment in computed tomography (CT): A comprehensive framework for automated and comparative IQ analysis based on ICRU Report 87.

    Science.gov (United States)

    Pahn, Gregor; Skornitzke, Stephan; Schlemmer, Hans-Peter; Kauczor, Hans-Ulrich; Stiller, Wolfram

    2016-01-01

    Based on the guidelines from "Report 87: Radiation Dose and Image-quality Assessment in Computed Tomography" of the International Commission on Radiation Units and Measurements (ICRU), a software framework for automated quantitative image quality analysis was developed and its usability for a variety of scientific questions demonstrated. The extendable framework currently implements the calculation of the recommended Fourier image quality (IQ) metrics modulation transfer function (MTF) and noise-power spectrum (NPS), and additional IQ quantities such as noise magnitude, CT number accuracy, uniformity across the field-of-view, contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) of simulated lesions for a commercially available cone-beam phantom. Sample image data were acquired with different scan and reconstruction settings on CT systems from different manufacturers. Spatial resolution is analyzed in terms of edge-spread function, line-spread-function, and MTF. 3D NPS is calculated according to ICRU Report 87, and condensed to 2D and radially averaged 1D representations. Noise magnitude, CT numbers, and uniformity of these quantities are assessed on large samples of ROIs. Low-contrast resolution (CNR, SNR) is quantitatively evaluated as a function of lesion contrast and diameter. Simultaneous automated processing of several image datasets allows for straightforward comparative assessment. The presented framework enables systematic, reproducible, automated and time-efficient quantitative IQ analysis. Consistent application of the ICRU guidelines facilitates standardization of quantitative assessment not only for routine quality assurance, but for a number of research questions, e.g. the comparison of different scanner models or acquisition protocols, and the evaluation of new technology or reconstruction methods. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
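    The NPS computation the framework implements, i.e. a 2D spectrum from an ensemble of mean-subtracted noise ROIs condensed to a radially averaged 1D profile, can be sketched as below. This is a generic illustration of the ICRU-style calculation, not the framework's code; the pixel size and white-noise ROIs are assumptions.

    ```python
    import numpy as np

    def radial_nps(noise_rois, pixel_size=0.5):
        """2D noise-power spectrum from an ensemble of noise ROIs,
        condensed to a radially averaged 1D profile."""
        rois = np.asarray(noise_rois, dtype=float)
        n_roi, ny, nx = rois.shape
        rois -= rois.mean(axis=(1, 2), keepdims=True)   # zeroth-order detrend
        dft2 = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
        nps2d = dft2.mean(axis=0) * pixel_size**2 / (nx * ny)
        # radial average around the zero-frequency bin
        yy, xx = np.indices((ny, nx))
        r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
        nps1d = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
        return nps2d, nps1d

    rng = np.random.default_rng(0)
    rois = rng.normal(0, 10, (64, 32, 32))   # white-noise ROIs, sigma = 10 HU
    nps2d, nps1d = radial_nps(rois)
    ```

    A useful sanity check is Parseval's relation: the NPS integrated over frequency should recover the pixel variance, which for white noise with sigma = 10 HU is 100 HU².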

  10. Automated extraction of DNA from blood and PCR setup using a Tecan Freedom EVO liquid handler for forensic genetic STR typing of reference samples

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Frøslev, Tobias G; Frank-Hansen, Rune

    2011-01-01

    17025 using the Qiagen MagAttract DNA Mini M48 kit (Qiagen GmbH, Hilden, Germany) from fresh whole blood and blood from deceased individuals. The workflow was simplified by returning the DNA extracts to the original tubes minimizing the risk of misplacing samples. The tubes that originally contained...... the samples were washed with MilliQ water before the return of the DNA extracts. The PCR was setup in 96-well microtiter plates. The methods were validated for the kits: AmpFlSTR Identifiler, SGM Plus and Yfiler (Applied Biosystems, Foster City, CA), GenePrint FFFL and PowerPlex Y (Promega, Madison, WI......). The automated protocols allowed for extraction and addition of PCR master mix of 96 samples within 3.5h. In conclusion, we demonstrated that (1) DNA extraction with magnetic beads and (2) PCR setup for accredited, forensic genetic short tandem repeat typing can be implemented on a simple automated liquid...

  11. Assessment of an automated surveillance system for detection of initial ventilator-associated events.

    Science.gov (United States)

    Nuckchady, Dooshanveer; Heckman, Michael G; Diehl, Nancy N; Creech, Tara; Carey, Darlene; Domnick, Robert; Hellinger, Walter C

    2015-10-01

    Surveillance for initial ventilator-associated events (VAEs) was automated and compared with nonautomated review of episodes of mechanical ventilation. Sensitivity, specificity, positive predictive value, and negative predictive value of automated surveillance were very high (>93%), and automated surveillance reduced the time spent on detection of VAEs by >90%. Copyright © 2015 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
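    The four performance figures reported above follow directly from the confusion matrix of automated flags against the manual-review reference standard. A short sketch with hypothetical counts (the study's actual counts are not given in this record):

    ```python
    def surveillance_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV of an automated
        detector against a manual-review reference standard."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # hypothetical counts: automated VAE flags vs. manual chart review
    m = surveillance_metrics(tp=28, fp=2, fn=1, tn=150)
    ```

    With counts in this range all four metrics exceed 0.93, consistent with the ">93%" summary in the abstract.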

  12. Equilibrium sampling for a thermodynamic assessment of contaminated sediments

    DEFF Research Database (Denmark)

    ) govern diffusive uptake and partitioning. Equilibrium sampling of sediment was introduced 15 years ago to measure Cfree, and it has since developed into a straightforward, precise and sensitive approach for determining Cfree and other exposure parameters that allow for thermodynamic assessment...... of the biota relative to the sediment. Furthermore, concentrations in lipid at thermodynamic equilibrium with sediment (Clip⇌Sed) can be calculated via lipid/silicone partition ratios CSil × KLip:Sil, which has been done in studies with limnic, river and marine sediments. The data can then be compared to lipid... will focus on the latest developments in equilibrium sampling concepts and methods. Further, we will explain how these approaches can provide a new basis for a thermodynamic assessment of polluted sediments.

  13. Odor assessment for sewage sludge samples 300A01002

    International Nuclear Information System (INIS)

    Cash, D.B.; Molton, P.M.

    1976-12-01

    The use of radiation as a means of detoxifying sewage sludge as an alternate to the more conventional biological digestion treatment method was studied. A combination of gamma irradiation and heat (thermoradiation) treatment is being considered. In support of this effort, Battelle's Pacific Northwest Laboratories (PNL) were requested to assess the odor change of the sewage sludge, if any, that occurs with time after the samples were subjected to the treatment conditions. The test methods and results are presented

  14. Odor assessment for sewage sludge samples 300A01002. [Thermoradiation

    Energy Technology Data Exchange (ETDEWEB)

    Cash, D.B.; Molton, P.M.

    1976-12-01

    The use of radiation as a means of detoxifying sewage sludge as an alternate to the more conventional biological digestion treatment method was studied. A combination of gamma irradiation and heat (thermoradiation) treatment is being considered. In support of this effort, Battelle's Pacific Northwest Laboratories (PNL) were requested to assess the odor change of the sewage sludge, if any, that occurs with time after the samples were subjected to the treatment conditions. The test methods and results are presented. (TFD)

  15. Automated single-trial assessment of laser-evoked potentials as an objective functional diagnostic tool for the nociceptive system.

    Science.gov (United States)

    Hatem, S M; Hu, L; Ragé, M; Gierasimowicz, A; Plaghki, L; Bouhassira, D; Attal, N; Iannetti, G D; Mouraux, A

    2012-12-01

    To assess the clinical usefulness of an automated analysis of event-related potentials (ERPs). Nociceptive laser-evoked potentials (LEPs) and non-nociceptive somatosensory electrically-evoked potentials (SEPs) were recorded in 37 patients with syringomyelia and 21 controls. LEP and SEP peak amplitudes and latencies were estimated using a single-trial automated approach based on time-frequency wavelet filtering and multiple linear regression, as well as a conventional approach based on visual inspection. The amplitudes and latencies of normal and abnormal LEP and SEP peaks were identified reliably using both approaches, with similar sensitivity and specificity. Because the automated approach provided an unbiased solution to account for average waveforms where no ERP could be identified visually, it revealed significant differences between patients and controls that were not revealed using the visual approach. The automated analysis of ERPs characterized reliably and objectively LEP and SEP waveforms in patients. The automated single-trial analysis can be used to characterize normal and abnormal ERPs with a similar sensitivity and specificity as visual inspection. While this does not justify its use in a routine clinical setting, the technique could be useful to avoid observer-dependent biases in clinical research. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  16. Automated Cognitive Health Assessment From Smart Home-Based Behavior Data.

    Science.gov (United States)

    Dawadi, Prafulla Nath; Cook, Diane Joyce; Schmitter-Edgecombe, Maureen

    2016-07-01

    Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation (r = 0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r = 0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.
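    The reported r values are plain Pearson correlations between model-predicted and clinician-provided scores; a self-contained sketch with toy numbers (not CAAB data):

```python
import numpy as np

def pearson_r(predicted, observed):
    """Pearson correlation between model-predicted and clinician scores."""
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(observed, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Toy scores for illustration only (not the study's data):
predicted = [3.1, 4.0, 2.2, 5.1, 3.8]
clinician = [3.0, 4.2, 2.5, 4.8, 4.0]
print(round(pearson_r(predicted, clinician), 2))  # -> 0.98
```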

  17. Lacunarity analysis: a promising method for the automated assessment of melanocytic naevi and melanoma.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    The early diagnosis of melanoma is critical to achieving reduced mortality and increased survival. Although clinical examination is currently the method of choice for melanocytic lesion assessment, there is a growing interest among clinicians regarding the potential diagnostic utility of computerised image analysis. Recognising that there exist significant shortcomings in currently available algorithms, we are motivated to investigate the utility of lacunarity, a simple statistical measure previously used in geology and other fields for the analysis of fractal and multi-scaled images, in the automated assessment of melanocytic naevi and melanoma. Digitised dermoscopic images of 111 benign melanocytic naevi, 99 dysplastic naevi and 102 melanomas were obtained over the period 2003 to 2008, and subject to lacunarity analysis. We found the lacunarity algorithm could accurately distinguish melanoma from benign melanocytic naevi or non-melanoma without introducing many of the limitations associated with other previously reported diagnostic algorithms. Lacunarity analysis suggests an ordering of irregularity in melanocytic lesions, and we suggest the clinical application of this ordering may have utility in the naked-eye dermoscopic diagnosis of early melanoma.

  18. Lacunarity analysis: a promising method for the automated assessment of melanocytic naevi and melanoma.

    Science.gov (United States)

    Gilmore, Stephen; Hofmann-Wellenhof, Rainer; Muir, Jim; Soyer, H Peter

    2009-10-13

    The early diagnosis of melanoma is critical to achieving reduced mortality and increased survival. Although clinical examination is currently the method of choice for melanocytic lesion assessment, there is a growing interest among clinicians regarding the potential diagnostic utility of computerised image analysis. Recognising that there exist significant shortcomings in currently available algorithms, we are motivated to investigate the utility of lacunarity, a simple statistical measure previously used in geology and other fields for the analysis of fractal and multi-scaled images, in the automated assessment of melanocytic naevi and melanoma. Digitised dermoscopic images of 111 benign melanocytic naevi, 99 dysplastic naevi and 102 melanomas were obtained over the period 2003 to 2008, and subject to lacunarity analysis. We found the lacunarity algorithm could accurately distinguish melanoma from benign melanocytic naevi or non-melanoma without introducing many of the limitations associated with other previously reported diagnostic algorithms. Lacunarity analysis suggests an ordering of irregularity in melanocytic lesions, and we suggest the clinical application of this ordering may have utility in the naked-eye dermoscopic diagnosis of early melanoma.
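    Gliding-box lacunarity, the statistic named above, is the normalized second moment of "box mass" as a window slides over the image: Λ(r) = ⟨M²⟩ / ⟨M⟩². A minimal sketch implementing the textbook definition (not the authors' code):

```python
import numpy as np

def lacunarity(img, r):
    """Gliding-box lacunarity of a binary image for box size r:
    Lambda(r) = <M^2> / <M>^2, where M is the pixel mass inside
    each r-by-r window slid over the image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    masses = np.asarray([img[i:i + r, j:j + r].sum()
                         for i in range(h - r + 1)
                         for j in range(w - r + 1)])
    return float((masses ** 2).mean() / masses.mean() ** 2)

# A uniform image has lacunarity 1; a sparse, clumped one scores higher,
# which is the "ordering of irregularity" the method exploits.
uniform = np.ones((8, 8))
print(lacunarity(uniform, 2))  # -> 1.0
```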

  19. An Automated BIM Model to Conceptually Design, Analyze, Simulate, and Assess Sustainable Building Projects

    Directory of Open Access Journals (Sweden)

    Farzad Jalaei

    2014-01-01

    Quantifying the environmental impacts and simulating the energy consumption of a building's components at the conceptual design stage are very helpful for designers needing to make decisions related to the selection of the best design alternative that would lead to a more energy efficient building. Building Information Modeling (BIM) offers designers the ability to assess different design alternatives at the conceptual stage of the project so that energy and life cycle assessment (LCA) strategies and systems are attained. This paper proposes an automated model that links BIM, LCA, energy analysis, and lighting simulation tools with green building certification systems. The implementation centers on developing plug-ins for a BIM tool that are capable of measuring the environmental impacts (EI) and embodied energy of building components. Using this method, designers will be provided with a new way to visualize and to identify the potential gain or loss of energy for the building as a whole and for each of its associated components. Furthermore, designers will be able to detect and evaluate the sustainability of the proposed buildings based on the Leadership in Energy and Environmental Design (LEED) rating system. An actual building project will be used to illustrate the workability of the proposed methodology.

  20. Incremental Sampling Methodology: Applications for Background Screening Assessments.

    Science.gov (United States)

    Pooler, Penelope S; Goodrum, Philip E; Crumbling, Deana; Stuchal, Leah D; Roberts, Stephen M

    2018-01-01

    This article presents the findings from a numerical simulation study that was conducted to evaluate the performance of alternative statistical analysis methods for background screening assessments when data sets are generated with incremental sampling methods (ISMs). A wide range of background and site conditions are represented in order to test different ISM sampling designs. Both hypothesis tests and upper tolerance limit (UTL) screening methods were implemented following U.S. Environmental Protection Agency (USEPA) guidance for specifying error rates. The simulations show that hypothesis testing using two-sample t-tests can meet standard performance criteria under a wide range of conditions, even with relatively small sample sizes. Key factors that affect the performance include unequal population variances and small absolute differences in population means. UTL methods are generally not recommended due to conceptual limitations in the technique when applied to ISM data sets from single decision units and due to insufficient power given standard statistical sample sizes from ISM. © 2017 Society for Risk Analysis.
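    The two-sample comparison evaluated above must cope with unequal population variances, one of the key factors the simulations identify; Welch's t statistic with Welch–Satterthwaite degrees of freedom handles exactly that case. A sketch with illustrative replicate values (three ISM replicates per decision unit is a common design, but these numbers are made up):

```python
import math
import statistics

def welch_t(site, background):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom: an unequal-variance comparison of ISM site means against
    background means. The statistic is compared against a t critical
    value at the error rates specified in agency guidance."""
    m1, m2 = statistics.mean(site), statistics.mean(background)
    v1, v2 = statistics.variance(site), statistics.variance(background)
    n1, n2 = len(site), len(background)
    se2 = v1 / n1 + v2 / n2          # squared standard error of the difference
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([12.1, 14.0, 13.2], [9.8, 10.5, 10.1])
print(round(t, 2), round(df, 1))  # -> 5.05 2.5
```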

  1. Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water

    OpenAIRE

    Crist, Courtney Alissa

    2016-01-01

    Sensory and consumer sciences aim to understand the influences of product acceptability and purchase decisions. The food industry measures product acceptability through hedonic testing but often does not assess implicit or qualitative response. Incorporation of qualitative research and automated facial expression analysis (AFEA) may supplement hedonic acceptability testing to provide product insights. The purpose of this research was to assess the application of AFEA and qualitative analysis ...

  2. Assessing Library Automation and Virtual Library Development in Four Academic Libraries in Oyo, Oyo State, Nigeria

    Science.gov (United States)

    Gbadamosi, Belau Olatunde

    2011-01-01

    The paper examines the level of library automation and virtual library development in four academic libraries. A validated questionnaire was used to capture the responses from academic librarians of the libraries under study. The paper discovers that none of the four academic libraries is fully automated. The libraries make use of librarians with…

  3. Automated solid-phase extraction for trace-metal analysis of seawater: sample preparation for total-reflection X-ray fluorescence measurements

    Science.gov (United States)

    Gerwinski, Wolfgang; Schmidt, Diether

    1998-08-01

    Solid-phase chromatography on silica gel columns can be used as a sample preparation technique for seawater, followed by total-reflection X-ray fluorescence analysis (TXRF). An automated extraction system (Zymark AutoTrace SPE Workstation) was studied for the analysis of blank solutions, seawater samples and certified reference materials. After replacing some stainless steel parts in the system, adequate blanks could be obtained to allow the analysis of seawater samples. Replicate analyses yielded low standard deviations and good recoveries for certified reference materials. Using a six-channel model and user-defined software, the time needed for a complete analytical run was about 100 min.

  4. An open framework for automated chemical hazard assessment based on GreenScreen for Safer Chemicals: A proof of concept.

    Science.gov (United States)

    Wehage, Kristopher; Chenhansa, Panan; Schoenung, Julie M

    2017-01-01

    GreenScreen® for Safer Chemicals is a framework for comparative chemical hazard assessment. It is the first transparent, open and publicly accessible framework of its kind, allowing manufacturers and governmental agencies to make informed decisions about the chemicals and substances used in consumer products and buildings. In the GreenScreen® benchmarking process, chemical hazards are assessed and classified based on 18 hazard endpoints from up to 30 different sources. The result is a simple numerical benchmark score and accompanying assessment report that allows users to flag chemicals of concern and identify safer alternatives. Although the screening process is straightforward, aggregating and sorting hazard data is tedious, time-consuming, and prone to human error. In light of these challenges, the present work demonstrates the usage of automation to cull chemical hazard data from publicly available internet resources, assign metadata, and perform a GreenScreen® hazard assessment using the GreenScreen® "List Translator." The automated technique, written as a module in the Python programming language, generates GreenScreen® List Translation data for over 3000 chemicals in approximately 30 s. Discussion of the potential benefits and limitations of automated techniques is provided. By embedding the library into a web-based graphical user interface, the extensibility of the library is demonstrated. The accompanying source code is made available to the hazard assessment community. Integr Environ Assess Manag 2017;13:167-176. © 2016 SETAC.
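    The aggregation step being automated — collecting hazard-list hits per chemical and keeping the most severe listing per endpoint — can be sketched as below. The data structures, severity scale, and CAS example are hypothetical stand-ins, not the authors' schema or the GreenScreen® scoring rules:

```python
# Hypothetical miniature of a list-translation step: each authoritative
# hazard list contributes (CAS number, endpoint, concern level) tuples,
# and the translator keeps the most severe listing seen per endpoint.
SEVERITY = {"low": 1, "moderate": 2, "high": 3, "very_high": 4}

def translate(listings):
    """listings: iterable of (cas, endpoint, level) tuples gathered from
    public hazard lists. Returns {cas: {endpoint: worst level seen}}."""
    out = {}
    for cas, endpoint, level in listings:
        worst = out.setdefault(cas, {})
        if SEVERITY[level] > SEVERITY.get(worst.get(endpoint), 0):
            worst[endpoint] = level
    return out

hits = translate([
    ("50-00-0", "carcinogenicity", "high"),
    ("50-00-0", "carcinogenicity", "very_high"),
    ("50-00-0", "skin_sensitization", "moderate"),
])
print(hits["50-00-0"]["carcinogenicity"])  # -> very_high
```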

  5. Automated large-volume sample stacking procedure to detect labeled peptides at picomolar concentration using capillary electrophoresis and laser-induced fluorescence detection.

    Science.gov (United States)

    Siri, Nathalie; Riolet, Pierre; Bayle, Christophe; Couderc, François

    2003-08-05

    We have developed an automated large-volume sample stacking (LVSS) procedure to detect fluorescein isothiocyanate-labeled peptides in the picomolar range. The injection duration is 10 min at 50 mbar to fill 62% of the capillary volume to the detection cell. The calculated limit of detection (S/N=3), filling 1% of the capillary volume, is 74 pM for bradykinin and 45 pM for L-enkephalin with samples diluted in water and analyzed in a 50 mM borate buffer, pH 9.2. With the automated LVSS system, the limits of detection are 7 pM for bradykinin, 3 pM for L-enkephalin and 2 pM for substance P. LVSS is shown to be quantitative from 500 to 10 pM.
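    An S/N = 3 limit of detection of the kind quoted above is commonly estimated by linear extrapolation from a measured signal-to-noise ratio at a known concentration; a sketch with made-up numbers (not the paper's measurements):

```python
def lod_sn3(conc, signal_to_noise):
    """Estimate the S/N = 3 limit of detection by linear extrapolation
    from the signal-to-noise ratio measured at a known concentration,
    assuming signal scales linearly with concentration."""
    return 3.0 * conc / signal_to_noise

# Illustrative only: if a 500 pM standard gave S/N = 200, the estimated
# LOD would be 7.5 pM.
print(lod_sn3(500, 200))  # -> 7.5
```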

  6. Automated gravimetric sample pretreatment using an industrial robot for the high-precision determination of plutonium by isotope dilution mass spectrometry.

    Science.gov (United States)

    Surugaya, Naoki; Hiyama, Toshiaki; Watahiki, Masaru

    2008-06-01

    A robotized sample-preparation method for the determination of Pu, which is recovered by extraction reprocessing of spent nuclear fuel, by isotope dilution mass spectrometry (IDMS) is described. The automated system uses a six-axis industrial robot, whose motility is very fast, accurate, and flexible, installed in a glove box. The automation of the weighing and dilution steps enables operator-unattended sample pretreatment for the high-precision analysis of Pu in aqueous solutions. Using the developed system, the Pu concentration in a HNO(3) medium was successfully determined using a set of subsequent mass spectrometric measurements. The relative uncertainty in determining the Pu concentration by IDMS using this system was estimated to be less than 0.1% (k = 2), which is equal to that expected of a talented analyst. The operation time required was the same as that for a skilled operator.
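    At the heart of IDMS is the isotope-ratio mixing equation, solved for the sample's contribution after a known spike is added; a simplified single-spike sketch with toy ratios (not the Pu procedure's values):

```python
def idms_moles(n_spike, r_spike, r_blend, r_sample):
    """Simplified single-spike isotope dilution: moles of the monitor
    isotope contributed by the sample, from A/B isotope ratios measured
    in the spike, the spiked blend, and the unspiked sample. n_spike is
    the moles of the monitor isotope added with the spike."""
    return n_spike * (r_spike - r_blend) / (r_blend - r_sample)

# Toy ratios: a spike depleted in isotope A (r = 0.01), a natural-like
# sample (r = 10), and a blend measured at r = 1.
print(f"{idms_moles(1.0e-6, 0.01, 1.0, 10.0):.2e}")  # -> 1.10e-07
```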

  7. Automated gravimetric sample pretreatment using an industrial robot for the high-precision determination of plutonium by isotope dilution mass spectrometry

    International Nuclear Information System (INIS)

    Surugaya, Naoki; Hiyama, Toshiaki; Watahiki, Masaru

    2008-01-01

    A robotized sample-preparation method for the determination of Pu, which is recovered by extraction reprocessing of spent nuclear fuel, by isotope dilution mass spectrometry (IDMS) is described. The automated system uses a six-axis industrial robot, whose motility is very fast, accurate, and flexible, installed in a glove box. The automation of the weighing and dilution steps enables operator-unattended sample pretreatment for the high-precision analysis of Pu in aqueous solutions. Using the developed system, the Pu concentration in a HNO3 medium was successfully determined using a set of subsequent mass spectrometric measurements. The relative uncertainty in determining the Pu concentration by IDMS using this system was estimated to be less than 0.1% (k = 2), which is equal to that expected of a talented analyst. The operation time required was the same as that for a skilled operator. (author)

  8. Assessing drivers' response during automated driver support system failures with non-driving tasks.

    Science.gov (United States)

    Shen, Sijun; Neyens, David M

    2017-06-01

    With the increase in automated driver support systems, drivers are shifting from operating their vehicles to supervising their automation. As a result, it is important to understand how drivers interact with these automated systems and evaluate their effect on driver responses to safety critical events. This study aimed to identify how drivers responded when experiencing a safety critical event in automated vehicles while also engaged in non-driving tasks. In total 48 participants were included in this driving simulator study with two levels of automated driving: (a) driving with no automation and (b) driving with adaptive cruise control (ACC) and lane keeping (LK) systems engaged; and also two levels of a non-driving task (a) watching a movie or (b) no non-driving task. In addition to driving performance measures, non-driving task performance and the mean glance duration for the non-driving task were compared between the two levels of automated driving. Drivers using the automated systems responded worse than those manually driving in terms of reaction time, lane departure duration, and maximum steering wheel angle to an induced lane departure event. These results also found that non-driving tasks further impaired driver responses to a safety critical event in the automated system condition. In the automated driving condition, driver responses to the safety critical events were slower, especially when engaged in a non-driving task. Traditional driver performance variables may not necessarily effectively and accurately evaluate driver responses to events when supervising autonomous vehicle systems. Thus, it is important to develop and use appropriate variables to quantify drivers' performance under these conditions. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  9. Spiked natural matrix materials as quality assessment samples

    International Nuclear Information System (INIS)

    Feiner, M.S.; Sanderson, C.G.

    1988-01-01

    The Environmental Measurements Laboratory has conducted the Quality Assessment Program since 1976 to evaluate the quality of the environmental radioactivity data, which is reported to the Department of Energy by as many as 42 commercial contractors involved in nuclear work. In this program, matrix materials of known radionuclide concentrations are distributed routinely to the contractors and the reported results are compared. The five matrices used are: soil, vegetation, animal tissue, water and filter paper. Environmental soil, vegetation and animal tissue are used, but the water and filter paper samples are prepared by spiking with known amounts of standard solutions traceable to the National Bureau of Standards. A summary of results is given to illustrate the successful operation of the program. Because of the difficulty and high cost of collecting large samples of natural matrix material and to increase the versatility of the program, an attempt was recently made to prepare the soil, vegetation and animal tissue samples with spiked solutions. A description of the preparation of these reference samples and the results of analyses are presented along with a discussion of the pitfalls and advantages of this approach. 19 refs.; 6 tabs

  10. Equilibrium sampling for a thermodynamic assessment of contaminated sediments

    DEFF Research Database (Denmark)

    Mayer, Philipp; Nørgaard Schmidt, Stine; Mäenpää, Kimmo

    … govern diffusive uptake and partitioning. Equilibrium sampling of sediment was introduced 15 years ago to measure Cfree, and it has since developed into a straightforward, precise and sensitive approach for determining Cfree and other exposure parameters that allow for thermodynamic assessment … valid equilibrium sampling (method-incorporated QA/QC). The measured equilibrium concentrations in silicone (CSil) can then be divided by silicone/water partition ratios to yield Cfree. CSil can also be compared to CSil from silicone equilibrated with biota in order to determine the equilibrium status … of the biota relative to the sediment. Furthermore, concentrations in lipid at thermodynamic equilibrium with sediment (Clip⇌Sed) can be calculated via lipid/silicone partition ratios as CSil × KLip:Sil, which has been done in studies with limnic, river and marine sediments. The data can then be compared to lipid …
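    The two conversions described — dividing CSil by a silicone/water partition ratio to obtain Cfree, and multiplying CSil by KLip:Sil to obtain the equilibrium lipid concentration — are one-step calculations; a sketch with hypothetical concentrations and partition ratios (units and values are illustrative only):

```python
def cfree_from_silicone(c_sil, k_sil_water):
    """Freely dissolved concentration Cfree from the concentration
    measured in the silicone sampler and a silicone/water partition ratio."""
    return c_sil / k_sil_water

def clip_at_equilibrium(c_sil, k_lip_sil):
    """Lipid concentration at thermodynamic equilibrium with the sediment
    (Clip = CSil x KLip:Sil)."""
    return c_sil * k_lip_sil

# Hypothetical values: 250 (ng/mL silicone), KSil:water = 1e5, KLip:Sil = 30
print(cfree_from_silicone(250.0, 1e5))   # freely dissolved concentration
print(clip_at_equilibrium(250.0, 30.0))  # -> 7500.0
```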

  11. Assessing user acceptance towards automated and conventional sink use for hand decontamination using the technology acceptance model.

    Science.gov (United States)

    Dawson, Carolyn H; Mackrill, Jamie B; Cain, Rebecca

    2017-12-01

    Hand hygiene (HH) prevents harmful contaminants spreading in settings including domestic, health care and food handling. Strategies to improve HH range from behavioural techniques through to automated sinks that ensure hand surface cleaning. This study aimed to assess user experience and acceptance towards a new automated sink, compared to a normal sink. An adapted version of the technology acceptance model (TAM) assessed each mode of handwashing. A within-subjects design enabled N = 46 participants to evaluate both sinks. Perceived Ease of Use and Satisfaction of Use were significantly lower for the automated sink than for the conventional sink. Design features including jet strength, water temperature and device affordance may improve HH technology. We provide recommendations for future HH technology development to contribute a positive user experience, relevant to technology developers, ergonomists and those involved in HH across all sectors. Practitioner Summary: The need to facilitate timely, effective hand hygiene to prevent illness has led to a rise in automated handwashing systems across different contexts. User acceptance is a key factor in system uptake. This paper applies the technology acceptance model as a means to explore and optimise the design of such systems.

  12. Automated Liquid Microjunction Surface Sampling-HPLC-MS/MS Analysis of Drugs and Metabolites in Whole-Body Thin Tissue Sections

    Energy Technology Data Exchange (ETDEWEB)

    Kertesz, Vilmos [ORNL; Van Berkel, Gary J [ORNL

    2013-01-01

    A fully automated liquid extraction-based surface sampling system utilizing a commercially available autosampler coupled to high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) detection is reported. Discrete spots selected for droplet-based sampling and automated sample queue generation for both the autosampler and MS were enabled by using in-house developed software. In addition, co-registration of spatially resolved sampling position and HPLC-MS information to generate heatmaps of compounds monitored for subsequent data analysis was also available in the software. The system was evaluated with whole-body thin tissue sections from propranolol dosed rat. The hands-free operation of the system was demonstrated by creating heatmaps of the parent drug and its hydroxypropranolol glucuronide metabolites with 1 mm resolution in the areas of interest. The sample throughput was approximately 5 min/sample defined by the time needed for chromatographic separation. The spatial distributions of both the drug and its metabolites were consistent with previous studies employing other liquid extraction-based surface sampling methodologies.

  13. Assessment of a scalp EEG-based automated seizure detection system.

    Science.gov (United States)

    Kelly, K M; Shiau, D S; Kern, R T; Chien, J H; Yang, M C K; Yandora, K A; Valeriano, J P; Halford, J J; Sackellares, J C

    2010-11-01

    The purpose of this study was to evaluate and validate an offline, automated scalp EEG-based seizure detection system and to compare its performance to commercially available seizure detection software. The test seizure detection system, IdentEvent™, was developed to enhance the efficiency of post-hoc long-term EEG review in epilepsy monitoring units. It translates multi-channel scalp EEG signals into multiple EEG descriptors and recognizes ictal EEG patterns. Detection criteria and thresholds were optimized in 47 long-term scalp EEG recordings selected for training (47 subjects, ∼3653h with 141 seizures). The detection performance of IdentEvent was evaluated using a separate test dataset consisting of 436 EEG segments obtained from 55 subjects (∼1200h with 146 seizures). Each of the test EEG segments was reviewed by three independent epileptologists and the presence or absence of seizures in each epoch was determined by majority rule. Seizure detection sensitivity and false detection rate were calculated for IdentEvent as well as for the comparable detection software (Persyst's Reveal®, version 2008.03.13, with three parameter settings). Bootstrap re-sampling was applied to establish the 95% confidence intervals of the estimates and for the performance comparison between two detection algorithms. The overall detection sensitivity of IdentEvent was 79.5% with a false detection rate (FDR) of 2 per 24h, whereas the comparison system had 80.8%, 76%, and 74% sensitivity using its three detection thresholds (perception score) with FDRs of 13, 8, and 6 per 24h, respectively. Bootstrap 95% confidence intervals of the performance difference revealed that the two detection systems had comparable detection sensitivity, but IdentEvent generated a significantly (p<0.05) smaller FDR. The study validates the performance of the IdentEvent™ seizure detection system. With comparable detection sensitivity, an improved false detection rate makes the automated seizure detection system a practical aid for post-hoc review of long-term EEG recordings.
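    The headline numbers combine per-seizure sensitivity, a false-detection rate normalized to 24 h, and bootstrap resampling for the confidence intervals; a sketch using counts back-calculated from the reported percentages (approximate, for illustration only):

```python
import random

def detection_stats(detected, total_seizures, false_detections, hours):
    """Per-seizure detection sensitivity and false-detection rate per 24 h."""
    return detected / total_seizures, false_detections / hours * 24.0

def bootstrap_ci(hits, n_boot=2000, seed=0):
    """95% bootstrap confidence interval for sensitivity from a list of
    per-seizure 0/1 hit flags, by resampling with replacement."""
    rng = random.Random(seed)
    stats = sorted(sum(rng.choices(hits, k=len(hits))) / len(hits)
                   for _ in range(n_boot))
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Approximate counts consistent with the reported figures: 116/146
# seizures detected (~79.5%) and ~100 false detections over ~1200 h
# (~2 per 24 h).
sens, fdr = detection_stats(116, 146, 100, 1200)
print(round(sens, 3), fdr)  # -> 0.795 2.0
```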

  14. Implementing and managing change: A guide for assessing information technology. [Office automation

    Energy Technology Data Exchange (ETDEWEB)

    Morell, J.A.; Gryder, R.; Fleischer, M.

    1987-08-01

    Assessing the impact of office automation (OA) requires expertise in the generic aspects of evaluation and innovation adoption, combined with specialized knowledge of OA. There is an extensive literature on the two generic subjects, but no companion literature concerning the application of the knowledge to the unique case of OA. By providing that specialized information, this report assists the implementors of OA in two ways: it shows them how to monitor implementation efforts, thus providing feedback to facilitate adoption of OA technology; and it provides guidance for measuring OA's impact on people and organizations. The report assumes an immediate impact of OA on the work groups where the technology is implemented, and a continually spreading effect from that locus of immediate use. Included in the report are discussions of: sources of data, methods of data collection, factors which affect implementation, and measures of impact. Special attention is given to measuring productivity changes that may result from the use of OA. A detailed appendix supplies a variety of examples which show how the variables discussed in the report were actually measured in applied settings.

  15. Assessing the impact of automated coding & grouping technology at St Vincent's Hospital, Sydney.

    Science.gov (United States)

    Howes, M H

    1993-12-01

    In 1992 the Hospital recognised that the existing casemix data reporting systems were too removed from individual patients to have any meaning for clinicians, analysis of the data was difficult and the processes involved in the DRG assignment were subject to considerable error. Consequently, the Hospital approved the purchase of technology that would facilitate the coding and grouping process. The impact of automated coding and grouping technology is assessed by three methods. Firstly, by looking at by-product information systems, secondly, through subjective responses by coders to a satisfaction questionnaire and, thirdly, by objectively measuring hospital activity and identified coding elements before and after implementation of the 3M technology. It was concluded that while the 3M Coding and Grouping software should not be viewed as a panacea to all coding and documentation ills, objective evidence and subjective comment from the coders indicated an improvement in data quality and more accurate DRG assignment. Development of an in-house casemix information system and a feedback mechanism between coder and clinician had been effected. The product had been used as a training tool for coders and had also proven to be a useful auditing tool. Finally, linkage with other systems and the generation of timely reports had been realised.

  16. Holistic approach for automated background EEG assessment in asphyxiated full-term infants

    Science.gov (United States)

    Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten

    2014-12-01

    Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.
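    The "discretized three-dimensional distribution of the segments' features" described above is, in effect, a normalized 3-way histogram; a sketch with random stand-in features, where the bin count and normalization are assumptions rather than the authors' settings:

```python
import numpy as np

def feature_tensor(features, bins=8):
    """Discretize per-segment features (n_segments x 3) into a normalized
    3-way occupancy tensor over a bins x bins x bins grid, of the kind
    that can be fed into tensor decomposition/classification."""
    hist, _ = np.histogramdd(features, bins=(bins, bins, bins))
    return hist / hist.sum()

rng = np.random.default_rng(0)
segments = rng.normal(size=(500, 3))  # stand-in for 3 features per EEG segment
T = feature_tensor(segments)
print(T.shape, round(float(T.sum()), 6))  # -> (8, 8, 8) 1.0
```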

  17. Automated size-specific CT dose monitoring program: Assessing variability in CT dose

    International Nuclear Information System (INIS)

    Christianson, Olav; Li Xiang; Frush, Donald; Samei, Ehsan

    2012-01-01

    Purpose: The potential health risks associated with low levels of ionizing radiation have created a movement in the radiology community to optimize computed tomography (CT) imaging protocols to use the lowest radiation dose possible without compromising the diagnostic usefulness of the images. Despite efforts to use appropriate and consistent radiation doses, studies suggest that a great deal of variability in radiation dose exists both within and between institutions for CT imaging. In this context, the authors have developed an automated size-specific radiation dose monitoring program for CT and used this program to assess variability in size-adjusted effective dose from CT imaging. Methods: The authors' radiation dose monitoring program operates on an independent Health Insurance Portability and Accountability Act (HIPAA)-compliant dosimetry server. Digital Imaging and Communications in Medicine (DICOM) routing software is used to isolate dose report screen captures and scout images for all incoming CT studies. Effective dose conversion factors (k-factors) are determined based on the protocol and optical character recognition is used to extract the CT dose index and dose-length product. The patient's thickness is obtained by applying an adaptive thresholding algorithm to the scout images and is used to calculate the size-adjusted effective dose (EDadj). The radiation dose monitoring program was used to collect data on 6351 CT studies from three scanner models (GE Lightspeed Pro 16, GE Lightspeed VCT, and GE Definition CT750 HD) and two institutions over a one-month period and to analyze the variability in EDadj between scanner models and across institutions. Results: No significant difference was found between computer measurements of patient thickness and observer measurements (p = 0.17), and the average difference between the two methods was less than 4%. Applying the size correction resulted in EDadj that differed by up to 44% from effective dose estimates that were not adjusted for patient size.
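    The pipeline described multiplies the extracted dose-length product by a protocol-specific k-factor and then applies a size correction based on patient thickness. In the sketch below, the exponential size-correction form, its coefficient, and the reference thickness are purely illustrative assumptions standing in for the study's actual conversion, which is not given here:

```python
import math

def effective_dose(dlp, k_factor):
    """Effective dose (mSv) from dose-length product (mGy*cm) and a
    protocol-specific k-factor (mSv per mGy*cm)."""
    return dlp * k_factor

def size_adjusted_dose(dlp, k_factor, thickness_cm, ref_cm=30.0, coef=0.04):
    """Hypothetical size adjustment: scale the dose estimate exponentially
    by the patient's thickness relative to a reference thickness, so that
    thinner patients receive a larger correction factor. Coefficient and
    reference are illustrative, not the study's values."""
    return effective_dose(dlp, k_factor) * math.exp(coef * (ref_cm - thickness_cm))

print(effective_dose(500, 0.015))  # 7.5 mSv with an abdomen-like k-factor
print(round(size_adjusted_dose(500, 0.015, 24.0), 2))  # thinner patient, larger value
```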

  18. Managing expectations: assessment of chemistry databases generated by automated extraction of chemical structures from patents.

    Science.gov (United States)

    Senger, Stefan; Bartek, Luca; Papadatos, George; Gaulton, Anna

    2015-12-01

    First public disclosure of new chemical entities often takes place in patents, which makes them an important source of information. However, with an ever increasing number of patent applications, manual processing and curation on such a large scale becomes even more challenging. An alternative approach better suited to this large corpus of documents is the automated extraction of chemical structures. A number of patent chemistry databases generated by the latter approach are now available, but little is known that can help to manage expectations when using them. This study aims to address this by comparing two such freely available sources, SureChEMBL and IBM SIIP (IBM Strategic Intellectual Property Insight Platform), with manually curated commercial databases. Looking at the percentage of chemical structures successfully extracted from a set of patents, with SciFinder as our reference, 59 and 51 % of the structures were also found in SureChEMBL and IBM SIIP, respectively. When performing this comparison with compounds as the starting point, i.e. establishing whether, for a list of compounds, the databases provide the links between chemical structures and the patents they appear in, we obtained similar results: SureChEMBL and IBM SIIP found 62 and 59 %, respectively, of the compound-patent pairs obtained from Reaxys. In our comparison of automatically generated vs. manually curated patent chemistry databases, the former successfully provided approximately 60 % of links between chemical structures and patents. It needs to be stressed that only a very limited number of patents and compound-patent pairs were used for our comparison. Nevertheless, our results will hopefully help to manage expectations of users of patent chemistry databases of this type, provide a useful framework for more studies like ours, and guide future developments of the workflows used for the automated extraction of chemical structures from patents. The challenges we have encountered
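    The coverage figures quoted above amount to a simple recall over compound-patent links; a minimal sketch, with compound and patent identifiers invented purely for illustration:

    ```python
    def link_recall(reference_pairs, extracted_pairs):
        """Percentage of reference (compound, patent) links that an
        automatically extracted database also contains."""
        reference_pairs = set(reference_pairs)
        found = reference_pairs & set(extracted_pairs)
        return 100.0 * len(found) / len(reference_pairs)

    # Hypothetical link sets; not real Reaxys or SureChEMBL content
    reference_pairs = {("cpd-1", "EP100"), ("cpd-2", "EP100"),
                       ("cpd-3", "EP200"), ("cpd-4", "EP300")}
    automated_pairs = {("cpd-1", "EP100"), ("cpd-3", "EP200"),
                       ("cpd-4", "EP300"), ("cpd-9", "EP900")}

    recall = link_recall(reference_pairs, automated_pairs)  # 75.0
    ```

    Extra pairs in the automated database (false or merely un-curated links) do not lower this recall figure, which is one reason the study stresses that such percentages only tell part of the story.
    
    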

  19. Assessment of Natural Gamma Emitting Radionuclides in Composite Food Samples

    International Nuclear Information System (INIS)

    Aslam, M.; Orfi, S.D.; Khan, K.; Rashid, A.; Jabbar, A.; Akhter, P.; Malik, G.M.; Jan, F.; Shafiq, M.

    2001-01-01

    The Environmental Monitoring Laboratory has also been engaged in radiometric analysis of composite food as part of environmental surveillance and monitoring of low-level radioactivity in various environmental media. Samples of cooked meals, served at the PINSTECH cafeteria, were collected and assessed for gamma emitting radionuclides. A high purity germanium (HPGe) detector coupled with a high-resolution multichannel analyser (MCA) and Genie-2000 software was used for detection, analysis and data acquisition. Radiopotassium (⁴⁰K) was the only radionuclide present in all the lunch samples. Activities ranged from 18.06±0.51 to 74.93±0.97 Bq meal⁻¹, with a cumulative average value of 42.58±0.14 Bq meal⁻¹ for the sampling period 1991-1998. Based on the cooked meals taken by a man in the cafeteria (250 lunches y⁻¹), the annual intake of ⁴⁰K was found to be 1.06×10⁴ Bq y⁻¹, which is 0.11% of the annual limit on intake (ALI) of this radionuclide as specified by the IAEA. (author)
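    The annual-intake arithmetic in the abstract can be checked directly; note that the ALI below is back-calculated from the reported 0.11 % share rather than quoted from the IAEA tables:

    ```python
    mean_activity = 42.58        # Bq per meal, cumulative average 1991-1998
    lunches_per_year = 250

    annual_intake = mean_activity * lunches_per_year   # 10645 Bq/y, i.e. ~1.06e4
    fraction_of_ali = 0.11 / 100                       # reported share of the ALI
    implied_ali = annual_intake / fraction_of_ali      # ~9.7e6 Bq, inferred
    ```

    The product reproduces the quoted 1.06×10⁴ Bq y⁻¹ to within rounding, so the reported figures are internally consistent.
    
    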

  20. A novel automated behavioral test battery assessing cognitive rigidity in two genetic mouse models of autism.

    Directory of Open Access Journals (Sweden)

    Alicja ePuścian

    2014-04-01

    Full Text Available Repetitive behaviors are a key feature of many pervasive developmental disorders, such as autism. As a heterogeneous group of symptoms, repetitive behaviors are conceptualized into two main subgroups: sensory/motor (lower-order) and cognitive rigidity (higher-order). Although lower-order repetitive behaviors are measured in mouse models in several paradigms, so far there have been no high-throughput tests directly measuring cognitive rigidity. We describe a novel approach for monitoring repetitive behaviors during reversal learning in mice in the automated IntelliCage system. During reward-motivated place preference reversal learning, designed to assess cognitive abilities of mice, visits to the previously rewarded places were recorded to measure cognitive flexibility. Thereafter, emotional flexibility was assessed by measuring conditioned fear extinction. Additionally, to look for neuronal correlates of cognitive impairments, we measured CA3-CA1 hippocampal long-term potentiation (LTP). To standardize the designed tests we used C57BL/6 and BALB/c mice, representing two genetic backgrounds, in which autism was induced by prenatal exposure to sodium valproate. We found impairments of place learning related to perseveration and no LTP impairments in C57BL/6 valproate-treated mice. In contrast, BALB/c valproate-treated mice displayed severe deficits of place learning not associated with perseverative behaviors and accompanied by hippocampal LTP impairments. Alterations of cognitive flexibility observed in C57BL/6 valproate-treated mice were related neither to a restricted exploration pattern nor to emotional flexibility. Altogether, we showed that the designed tests of cognitive performance and perseverative behaviors are efficient and highly replicable. Moreover, the results suggest that genetic background is crucial for the behavioral effects of prenatal valproate treatment.

  1. Automated quantitative analysis to assess motor function in different rat models of impaired coordination and ataxia.

    Science.gov (United States)

    Kyriakou, Elisavet I; van der Kieft, Jan G; de Heer, Raymond C; Spink, Andrew; Nguyen, Huu Phuc; Homberg, Judith R; van der Harst, Johanneke E

    2016-08-01

    An objective and automated method for assessing alterations in gait and motor coordination in different animal models is important for proper gait analysis. The CatWalk system has been used in research on pain, ischemia, arthritis, spinal cord injury and some animal models of neurodegenerative diseases. Our goals were to obtain a comprehensive gait analysis of three different rat models and to identify which motor coordination parameters are affected and are the most suitable and sensitive for describing and detecting ataxia, with a secondary focus on possible training effects. Both static and dynamic parameters showed significant differences in all three models: enriched-housed rats show higher walking and swing speed and longer stride length, ethanol-induced ataxia affects mainly the hind part of the body, and the SCA17 rats show coordination disturbances. Coordination changes were revealed only in the case of ethanol-induced ataxia and the SCA17 rat model. Although training affected some gait parameters, it did not obscure group differences when those were present. To our knowledge, a comparative gait assessment in rats with enriched housing conditions, ethanol-induced ataxia and SCA17 has not been presented before. There is no gold standard for the use of CatWalk; depending on the specific effects expected, the protocol can be adjusted. By including all sessions in the analysis, any training effect should be detectable, and the development of performance over the sessions can provide insight into effects attributed to intervention, treatment or injury. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Continuous Automated Model EvaluatiOn (CAMEO) complementing the critical assessment of structure prediction in CASP12.

    Science.gov (United States)

    Haas, Jürgen; Barbato, Alessandro; Behringer, Dario; Studer, Gabriel; Roth, Steven; Bertoni, Martino; Mostaguir, Khaled; Gumienny, Rafal; Schwede, Torsten

    2018-03-01

    Every second year, the community experiment "Critical Assessment of Techniques for Structure Prediction" (CASP) conducts an independent blind assessment of structure prediction methods, providing a framework for comparing the performance of different approaches and discussing the latest developments in the field. Yet, developers of automated computational modeling methods clearly benefit from more frequent evaluations based on larger sets of data. The "Continuous Automated Model EvaluatiOn (CAMEO)" platform complements the CASP experiment by conducting fully automated blind prediction assessments based on the weekly pre-release of sequences of those structures which are going to be published in the next release of the Protein Data Bank (PDB). CAMEO publishes weekly benchmarking results based on models collected during a 4-day prediction window, on average assessing ca. 100 targets during a time frame of 5 weeks. CAMEO benchmarking data are generated consistently for all participating methods at the same point in time, enabling developers to benchmark and cross-validate their method's performance, and to refer directly to the benchmarking results in publications. In order to facilitate server development and promote shorter release cycles, CAMEO sends a weekly email with submission statistics and low-performance warnings. Many participants in CASP have successfully employed CAMEO when preparing their methods for upcoming community experiments. CAMEO offers a variety of scores to allow benchmarking of diverse aspects of structure prediction methods. By introducing new scoring schemes, CAMEO facilitates new development in areas of active research, for example, modeling quaternary structure, complexes, or ligand binding sites. © 2017 Wiley Periodicals, Inc.

  3. Validation of a semi-automated multi-component method using protein precipitation LC-MS-MS for the analysis of whole blood samples

    DEFF Research Database (Denmark)

    Slots, Tina

    BACKGROUND: Solid phase extraction (SPE) is one of many multi-component methods, but it can be very time-consuming and labour-intensive. Protein precipitation is, on the other hand, a much simpler and faster sample pre-treatment than SPE, and protein precipitation also has the ability to cover... a wider range of components. AIM: The aim was to develop a robust semi-automated analytical method for whole blood samples based on a protein precipitation method already used in the lab (Sørensen and Hasselstrøm, 2013). The setup should improve the speed, robustness, and reliability of ante- and post...

  4. AUTOMATED ANALYSIS OF AQUEOUS SAMPLES CONTAINING PESTICIDES, ACIDIC/BASIC/NEUTRAL SEMIVOLATILES AND VOLATILE ORGANIC COMPOUNDS BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GC/MS

    Science.gov (United States)

    Data is presented on the development of a new automated system combining solid phase extraction (SPE) with GC/MS spectrometry for the single-run analysis of water samples containing a broad range of organic compounds. The system uses commercially available automated in-line 10-m...

  5. Comparison of Visual and automated assessment of Ki-67 proliferative activity and their impact on outcome in primary operable invasive ductal breast cancer

    OpenAIRE

    Mohammed, Z M A; McMillan, D C; Elsberger, B; Going, J J; Orange, C; Mallon, E; Doughty, J C; Edwards, J

    2012-01-01

    Background: Immunohistochemistry of Ki-67 protein is widely used to assess tumour proliferation, and is an established prognostic factor in breast cancer. There is interest in automating the assessment of Ki-67 labelling index (LI) with possible benefits in handling increased workload, with improved accuracy and precision. Patients and methods: Visual and automated assessment of Ki-67 LI and survival were examined in patients with primary operable invasive ductal breast cancer. Tissue microar...

  6. Assessment of Automating Safety Surveillance From Electronic Health Records: Analysis for the Quality and Safety Review System.

    Science.gov (United States)

    Fong, Allan; Adams, Katharine; Samarth, Anita; McQueen, Laura; Trivedi, Manan; Chappel, Tahleah; Grace, Erin; Terrillion, Susan; Ratwani, Raj M

    2017-06-30

    In an effort to improve and standardize the collection of adverse event data, the Agency for Healthcare Research and Quality is developing and testing a patient safety surveillance system called the Quality and Safety Review System (QSRS). Its current abstraction from medical records is performed by human coders, taking an average of 75 minutes to complete the review and abstraction tasks for one patient record. With many healthcare systems across the country adopting electronic health record (EHR) technology, there is tremendous potential for more efficient abstraction by automatically populating QSRS. In the absence of real-world testing data and models, which require a substantial investment, we provide a heuristic assessment of the feasibility of automatically populating QSRS questions from EHR data. To provide an assessment of the automation feasibility for QSRS, we first developed a heuristic framework, the Relative Abstraction Complexity Framework, to assess the relative complexity of data abstraction questions. This framework assesses the relative complexity of characteristics or features of abstraction questions that should be considered when determining the feasibility of automating QSRS. Questions are assigned a final relative complexity score (RCS) of low, medium, or high by a team of clinicians, human factors researchers, and natural language processing researchers. One hundred thirty-four QSRS questions were coded using this framework. Fifty-five questions (41%) had high RCS and would be more difficult to automate, such as "Was use of a device associated with an adverse outcome(s)?" Forty-two questions (31%) had medium RCS, such as "Were there any injuries as a result of the fall(s)?" and 37 questions (28%) had low RCS, such as "Did the patient deliver during this stay?"
These results suggest that Blood and Hospital Acquired Infections-Clostridium Difficile Infection (HAI-CDI) modules would be relatively

  7. Assessment of Pain Response in Capsaicin-Induced Dynamic Mechanical Allodynia Using a Novel and Fully Automated Brushing Device

    Directory of Open Access Journals (Sweden)

    Kristian G du Jardin

    2013-01-01

    Full Text Available BACKGROUND: Dynamic mechanical allodynia is traditionally induced by manual brushing of the skin. Brushing force and speed have been shown to influence the intensity of brush-evoked pain. There are still limited data available with respect to the optimal stroke number, length, force, angle and speed. Therefore, an automated brushing device (ABD) was developed, for which brushing angle and speed could be controlled to enable quantitative assessment of dynamic mechanical allodynia.

  8. Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils

    Science.gov (United States)

    Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David

    2016-01-01

    Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0.3 m depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from <0.01 to 0.24 m³ m⁻³. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p < 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the <0.01 m³ m⁻³ RF site (pit, 5.80 kg C m⁻²; hammer, 4.74 kg C m⁻²) and at the 0.14 m³ m⁻³ RF site (pit, 8.81 kg C m⁻²; hammer, 6.71 kg C m⁻²). The hammer corer also underestimated ρb at all sites, as did the hydraulic push corer at the 0.21 m³ m⁻³ RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.
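    The fixed-depth vs mass-based distinction above reduces to two small formulas; the numeric values below are illustrative, not the study's site data:

    ```python
    def soc_stock_fixed_depth(depth_m, bulk_density_kg_m3, c_frac, rf_vol_frac):
        """SOC stock (kg C m^-2) to a fixed depth: mass of fine earth per m^2
        (discounting rock-fragment volume, which holds negligible organic C)
        times its organic C mass fraction."""
        fine_earth_mass = depth_m * bulk_density_kg_m3 * (1.0 - rf_vol_frac)
        return fine_earth_mass * c_frac

    def soc_stock_mass_based(ref_soil_mass_kg_m2, c_frac):
        """Mass-based SOC stock: fix the fine-earth mass per unit area (e.g.
        from a quantitative pit) instead of the depth, so corer biases in
        bulk density and RF content cancel out."""
        return ref_soil_mass_kg_m2 * c_frac

    # Illustrative inputs: 0.3 m depth, bulk density 1300 kg/m^3,
    # 2 % organic C by mass, 14 % rock fragments by volume
    fixed = soc_stock_fixed_depth(0.3, 1300.0, 0.02, 0.14)  # ~6.7 kg C/m^2
    mass_based = soc_stock_mass_based(335.4, 0.02)          # same fine-earth mass
    ```

    The point of the study is visible in the first function: an underestimated `rf_vol_frac` or `bulk_density_kg_m3` propagates directly into the fixed-depth stock, whereas the mass-based form depends only on the reference soil mass and the measured C fraction.
    
    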

  9. Automation in an Addiction Treatment Research Clinic: Computerized Contingency Management, Ecological Momentary Assessment, and a Protocol Workflow System

    Science.gov (United States)

    Vahabzadeh, Massoud; Lin, Jia-Ling; Mezghanni, Mustapha; Epstein, David H.; Preston, Kenzie L.

    2009-01-01

    Issues A challenge in treatment research is the necessity of adhering to protocol and regulatory strictures while maintaining flexibility to meet patients’ treatment needs and accommodate variations among protocols. Another challenge is the acquisition of large amounts of data in an occasionally hectic environment, along with provision of seamless methods for exporting, mining, and querying the data. Approach We have automated several major functions of our outpatient treatment research clinic for studies in drug abuse and dependence. Here we describe three such specialized applications: the Automated Contingency Management (ACM) system for delivery of behavioral interventions, the Transactional Electronic Diary (TED) system for management of behavioral assessments, and the Protocol Workflow System (PWS) for computerized workflow automation and guidance of each participant’s daily clinic activities. These modules are integrated into our larger information system to enable data sharing in real time among authorized staff. Key Findings ACM and TED have each permitted us to conduct research that was not previously possible. In addition, the time to data analysis at the end of each study is substantially shorter. With the implementation of the PWS, we have been able to manage a research clinic with an 80-patient capacity having an annual average of 18,000 patient-visits and 7,300 urine collections with a research staff of five. Finally, automated data management has considerably enhanced our ability to monitor and summarize participant-safety data for research oversight. Implications and conclusion When developed in consultation with end users, automation in treatment-research clinics can enable more efficient operations, better communication among staff, and expansions in research methods. PMID:19320669

  10. PTR-ToF-MS Coupled with an Automated Sampling System and Tailored Data Analysis for Food Studies: Bioprocess Monitoring, Screening and Nose-space Analysis.

    Science.gov (United States)

    Capozzi, Vittorio; Yener, Sine; Khomenko, Iuliia; Farneti, Brian; Cappellin, Luca; Gasperi, Flavia; Scampicchio, Matteo; Biasioli, Franco

    2017-05-11

    Proton Transfer Reaction (PTR), combined with a Time-of-Flight (ToF) Mass Spectrometer (MS), is an analytical approach based on chemical ionization that belongs to the Direct-Injection Mass Spectrometric (DIMS) technologies. These techniques allow the rapid determination of volatile organic compounds (VOCs), assuring high sensitivity and accuracy. In general, PTR-MS requires neither sample preparation nor sample destruction, allowing real-time and non-invasive analysis of samples. PTR-MS is exploited in many fields, from environmental and atmospheric chemistry to medical and biological sciences. More recently, we developed a methodology based on coupling PTR-ToF-MS with an automated sampler and tailored data analysis tools, to increase the degree of automation and, consequently, to enhance the potential of the technique. This approach allowed us to monitor bioprocesses (e.g. enzymatic oxidation, alcoholic fermentation), to screen large sample sets (e.g. different origins, entire germplasms) and to analyze several experimental modes (e.g. different concentrations of a given ingredient, different intensities of a specific technological parameter) in terms of VOC content. Here, we report experimental protocols exemplifying different possible applications of our methodology: the detection of VOCs released during lactic acid fermentation of yogurt (on-line bioprocess monitoring), the monitoring of VOCs associated with different apple cultivars (large-scale screening), and the in vivo study of retronasal VOC release during coffee drinking (nosespace analysis).

  11. Accelerated solvent extraction (ASE) - a fast and automated technique with low solvent consumption for the extraction of solid samples (T12)

    International Nuclear Information System (INIS)

    Hoefler, F.

    2002-01-01

    Full text: Accelerated solvent extraction (ASE) is a modern extraction technique that significantly streamlines sample preparation. A common organic solvent, or water, is used as the extraction solvent at elevated temperature and pressure to increase extraction speed and efficiency. The entire extraction process is fully automated and performed within 15 minutes, with a solvent consumption of 18 ml for a 10 g sample. For many matrices and for a variety of solutes, ASE has proven to be equivalent or superior to sonication, Soxhlet, and reflux extraction techniques while requiring less time, solvent and labor. ASE was first applied to the extraction of environmental hazards from solid matrices. Within a very short time ASE was approved by the U.S. EPA for the extraction of BNAs, PAHs, PCBs, pesticides, herbicides, TPH, and dioxins from solid samples in method 3545. For the extraction of dioxins in particular, the extraction time with ASE is reduced to 20 minutes, compared with 18 h using Soxhlet. In food analysis ASE is used for the extraction of pesticide and mycotoxin residues from fruits and vegetables, fat determination, and the extraction of vitamins. Time-consuming and solvent-intensive methods for the extraction of additives from polymers, as well as for the extraction of marker compounds from herbal supplements, can be performed with higher efficiency using ASE. For the analysis of chemical weapons, the extraction process and sample clean-up, including derivatization, can be automated and combined with GC-MS using an online ASE-APEC-GC system. (author)

  12. Comparison of automated devices UX-2000 and SediMAX/AutionMax for urine samples screening: A multicenter Spanish study.

    Science.gov (United States)

    Sánchez-Mora, Catalina; Acevedo, Delia; Porres, Maria Amelia; Chaqués, Ana María; Zapardiel, Javier; Gallego-Cabrera, Aurelia; López, Jose María; Maesa, Jose María

    2017-08-01

    In this study we aimed to compare the UX-2000 (Sysmex Corp, Japan) and SediMAX/AutionMax (Arkray Factory Inc., Japan), fully automated analyzers, against the Fuchs-Rosenthal counting chamber, the gold standard technique for sediment analysis. Urine samples from 1454 patients from three Spanish hospitals were assessed for red and white blood cells (RBC; WBC) using three different techniques: flow cytometry, an image-based method and the Fuchs-Rosenthal counting chamber. Test strip results were subjected to concordance evaluation. Agreement was assessed by Cohen's weighted kappa for multinomial results. Sensitivity (SE) and specificity (SP) were calculated. Categorization of the results showed that the UX-2000 had higher concordance than SediMAX for WBC (0.819 vs. 0.546) and similar concordance for RBC (0.573 vs. 0.630). For RBC, the UX-2000 had higher SE (92.7% vs. 80.3%) but lower SP (77.1% vs. 87.4%), and showed both higher SE (94.3% vs. 76.7%) and higher SP (94.7% vs. 88.2%) for WBC. Inter-device test strip agreement was substantial (kappa > 0.600) for all variables except bilirubin (kappa: 0.598). Intra-device test strip agreement was similar for the UX-2000 and SediMAX with regard to RBC (kappa: 0.553 vs. 0.482) but better for the UX-2000 with regard to WBC (0.688 vs. 0.465). Both analyzers studied are acceptable for daily routine lab work, even though SediMAX is easier to use in laboratories thanks to its simpler maintenance procedure. The UX-2000 showed better concordance with the gold standard method. However, it needs some improvements, such as an image module, in order to decrease manual microscopy review of urine samples. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
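    The agreement statistics quoted above are standard and easy to reproduce. A minimal sketch with invented confusion counts (not the study's raw data); the kappa shown is the simple unweighted form, whereas the study used the weighted variant for multinomial grades:

    ```python
    def sensitivity(tp, fn):
        """True positive rate, in percent."""
        return 100.0 * tp / (tp + fn)

    def specificity(tn, fp):
        """True negative rate, in percent."""
        return 100.0 * tn / (tn + fp)

    def cohens_kappa(a, b):
        """Unweighted Cohen's kappa for two raters over the same samples:
        observed agreement corrected for agreement expected by chance."""
        n = len(a)
        cats = set(a) | set(b)
        po = sum(x == y for x, y in zip(a, b)) / n
        pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
        return (po - pe) / (1 - pe)

    # Illustrative counts: 137 chamber-positive samples, 127 flagged by the analyzer
    se = sensitivity(tp=127, fn=10)   # ~92.7 %
    sp = specificity(tn=83, fp=17)    # 83.0 %
    kappa = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0])  # 0.5
    ```

    Because kappa subtracts chance agreement, two raters who agree on 75 % of samples can still score only 0.5, as in the toy example above.
    
    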

  13. Manual versus automated streaking system in clinical microbiology laboratory: Performance evaluation of Previ Isola for blood culture and body fluid samples.

    Science.gov (United States)

    Choi, Qute; Kim, Hyun Jin; Kim, Jong Wan; Kwon, Gye Cheol; Koo, Sun Hoe

    2018-01-04

    The process of plate streaking has been automated to improve the routine workflow of clinical microbiology laboratories. Although there have been many evaluation reports on the inoculation of various body fluid samples, few evaluations have been reported for blood. In this study, we evaluated the performance of the automated inoculating system Previ Isola for various routine clinical samples, including blood. Blood culture, body fluid, and urine samples were collected. All samples were inoculated on both sheep blood agar plates (BAP) and MacConkey agar plates (MCK) using Previ Isola and the manual method. We compared the two methods in terms of culture quality and quantity, and sample processing time. To ensure objective colony counting, an enumeration reading reference was made through a preliminary experiment. A total of 377 nonduplicate samples (102 blood culture, 203 urine, 72 body fluid) were collected and inoculated. The concordance rate of quality was 100%, 97.0%, and 98.6% in blood, urine, and other body fluids, respectively. In the quantitative aspect, it was 98.0%, 97.0%, and 95.8%, respectively. The Previ Isola took a little longer than the manual method to inoculate a specimen, but hands-on time decreased dramatically; the hands-on time saved using Previ Isola was about 6 minutes per 10 samples. We demonstrated that the Previ Isola showed high concordance with the manual method for the inoculation of various body fluids, especially blood culture samples. The use of Previ Isola in clinical microbiology laboratories is expected to save considerable time and human resources. © 2018 Wiley Periodicals, Inc.

  14. Initial Assessment and Modeling Framework Development for Automated Mobility Districts: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hou, Yi [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Young, Stanley E [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Garikapati, Venu [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chen, Yuche [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhu, Lei [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-07

    Automated vehicles (AVs) are increasingly being discussed as the basis for on-demand mobility services, introducing a new paradigm in which a fleet of AVs displaces private automobiles for day-to-day travel in dense activity districts. This paper examines a concept to displace privately owned automobiles within a region containing dense activity generators (jobs, retail, entertainment, etc.), referred to as an automated mobility district (AMD). This paper reviews several such districts, including airports, college campuses, business parks, downtown urban cores, and military bases, with examples of previous attempts to meet mobility needs apart from private automobiles, some with automated technology and others with more traditional transit-based solutions. The issues and benefits of AMDs are framed from the perspective of intra-district, inter-district, and border issues, and the requirements for a modeling framework are identified to adequately reflect the breadth of the mobility, energy, and emissions impacts anticipated with AMDs.

  15. Risk Assessment on the Transition Program for Air Traffic Control Automation System Upgrade

    Directory of Open Access Journals (Sweden)

    Li Dong Bin

    2016-01-01

    Full Text Available We analyzed the safety risks of the transition program for an Air Traffic Control (ATC) automation system upgrade by using the event tree analysis method. We decomposed the process of the three transition phases and built the event trees corresponding to the three stages; we then determined the probability of success of each factor and calculated the probability of success of the ATC automation system upgrade transition. In conclusion, we illustrate the transition program's safety risk according to the results.

  16. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems to radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second of the four basic steps in RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation, a semi-automatic pipette and a fully automated pipetting station, both providing fast and accurate dispensing of the reagent or dilution of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  17. Automated quality assessment of structural magnetic resonance images in children: Comparison with visual inspection and surface-based reconstruction.

    Science.gov (United States)

    White, Tonya; Jansen, Philip R; Muetzel, Ryan L; Sudre, Gustavo; El Marroun, Hanan; Tiemeier, Henning; Qiu, Anqi; Shaw, Philip; Michael, Andrew M; Verhulst, Frank C

    2018-03-01

    Motion-related artifacts are one of the major challenges associated with pediatric neuroimaging. Recent studies have shown a relationship between visual quality ratings of T1 images and cortical reconstruction measures. Automated algorithms offer more precision in quantifying movement-related artifacts compared to visual inspection. Thus, the goal of this study was to test three different automated quality assessment algorithms for structural MRI scans. The three algorithms included a Fourier-, integral-, and a gradient-based approach, which were run on raw T1-weighted imaging data collected from four different scanners. The four cohorts included a total of 6,662 MRI scans from two waves of the Generation R Study, the NIH NHGRI Study, and the GUSTO Study. Using receiver operating characteristics with visually inspected quality ratings of the T1 images, the area under the curve (AUC) for the gradient algorithm, which performed better than either the integral or Fourier approaches, was 0.95, 0.88, and 0.82 for the Generation R, NHGRI, and GUSTO studies, respectively. For scans of poor initial quality, repeating the scan often resulted in a better quality second image. Finally, we found that even minor differences in automated quality measurements were associated with FreeSurfer-derived measures of cortical thickness and surface area, even in scans that were rated as good quality. Our findings suggest that the inclusion of automated quality assessment measures can augment visual inspection and may find use as a covariate in analyses or to identify thresholds to exclude poor quality data. © 2017 Wiley Periodicals, Inc.
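    The evaluation pipeline above pairs an image-derived quality score with an AUC against visual ratings. A toy sketch under stated assumptions: the gradient metric here is a crude mean-absolute-difference stand-in, not the paper's algorithm, and the AUC uses the Mann-Whitney rank identity:

    ```python
    def gradient_score(image):
        """Mean absolute intensity difference between neighboring pixels of a
        2-D image (list of rows). Motion blur smears edges, lowering the
        average gradient, so lower scores suggest worse scans."""
        total, count = 0.0, 0
        for row in image:                       # horizontal neighbors
            for c in range(len(row) - 1):
                total += abs(row[c + 1] - row[c])
                count += 1
        for r in range(len(image) - 1):         # vertical neighbors
            for c in range(len(image[r])):
                total += abs(image[r + 1][c] - image[r][c])
                count += 1
        return total / count

    def roc_auc(scores, labels):
        """AUC via the Mann-Whitney identity: the probability that a randomly
        chosen good scan (label 1) outscores a randomly chosen bad one."""
        pos = [s for s, l in zip(scores, labels) if l == 1]
        neg = [s for s, l in zip(scores, labels) if l == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    sharp = [[0, 10], [0, 10]]    # strong vertical edge
    blurred = [[5, 5], [5, 5]]    # uniform image: no gradient left
    auc = roc_auc([gradient_score(sharp), gradient_score(blurred)], [1, 0])
    ```

    With the edge image scoring 5.0 and the uniform image 0.0, the toy AUC is 1.0, i.e. the metric perfectly separates the two "ratings" in this trivial case.
    
    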

  18. Assessing Racial Microaggression Distress in a Diverse Sample.

    Science.gov (United States)

    Torres-Harding, Susan; Turner, Tasha

    2015-12-01

    Racial microaggressions are everyday subtle or ambiguous racially related insults, slights, mistreatment, or invalidations. Racial microaggressions are a type of perceived racism that may negatively impact the health and well-being of people of color in the United States. This study examined the reliability and validity of the Racial Microaggressions Scale distress subscales, which measure the perceived stressfulness of six types of microaggression experiences in a racially and ethnically diverse sample. These subscales exhibited acceptable to good internal consistency. The distress subscales also evidenced good convergent validity; they were positively correlated with additional measures of stressfulness due to experiencing microaggressions or everyday discrimination. When controlling for the frequency of one's exposure to microaggression incidents, some racial/ethnic group differences were found. Asian Americans reported comparatively lower distress and Latinos reported comparatively higher distress in response to Foreigner, Low-Achieving, Invisibility, and Environmental microaggressions. African Americans reported higher distress than the other groups in response to Environmental microaggressions. Results suggest that the Racial Microaggressions Scale distress subscales may aid health professionals in assessing the distress elicited by different types of microaggressions. In turn, this may facilitate diagnosis and treatment planning in order to provide multiculturally competent care for African American, Latino, and Asian American clients. © The Author(s) 2014.
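The internal-consistency figures reported for subscales like these are typically Cronbach's alpha values. As a generic, self-contained sketch of that computation (not code from the study; the item-column data layout is an assumption):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns.

    `items` is a list of equal-length lists: one list of scores per item,
    indexed by respondent. Alpha rises as items covary more strongly.
    """
    k = len(items)
    item_vars = sum(statistics.pvariance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent sum score
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Values of roughly 0.7 and above are conventionally read as "acceptable" internal consistency, 0.8 and above as "good".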

  19. Automating Clinical Score Calculation within the Electronic Health Record. A Feasibility Assessment.

    Science.gov (United States)

    Aakre, Christopher; Dziadzko, Mikhail; Keegan, Mark T; Herasevich, Vitaly

    2017-04-12

    Evidence-based clinical scores are used frequently in clinical practice, but data collection and data entry can be time consuming and hinder their use. We investigated the programmability of 168 common clinical calculators for automation within electronic health records. We manually reviewed and categorized variables from the 168 clinical calculators as being extractable from structured data, unstructured data, or both. Advanced data retrieval methods from unstructured data sources were tabulated for diagnoses, non-laboratory test results, clinical history, and examination findings. We identified 534 unique variables, of which 203/534 (37.8%) were extractable from structured data and 269/534 (50.4%) were potentially extractable using advanced techniques. Nearly half (265/534, 49.6%) of all variables were not retrievable. Only 26/168 (15.5%) of scores were completely programmable using only structured data, and 43/168 (25.6%) could potentially be programmable using widely available advanced information retrieval techniques. Scores relying on clinical examination findings or clinical judgments were most often not completely programmable. Complete automation is not possible for most clinical scores because of the high prevalence of clinical examination findings or clinical judgments; partial automation is the most that can be achieved. The effect of fully or partially automated score calculation on clinical efficiency and clinical guideline adherence requires further study.

  20. Evaluation of the falls telephone: an automated system for enduring assessment of falls

    NARCIS (Netherlands)

    Marck, M.A. van der; Overeem, S.; Klok, P.C.; Bloem, B.R.; Munneke, M.

    2011-01-01

    OBJECTIVES: To evaluate the reliability and user experiences of an automated telephone system to monitor falls during a prolonged period of time. DESIGN: Prospective cohort study. SETTING: Four neurological outpatient clinics in the Netherlands. PARTICIPANTS: One hundred nineteen community-dwelling

  1. Taking Advantage of Automated Assessment of Student-Constructed Graphs in Science

    Science.gov (United States)

    Vitale, Jonathan M.; Lai, Kevin; Linn, Marcia C.

    2015-01-01

    We present a new system for automated scoring of graph construction items that address complex science concepts, feature qualitative prompts, and support a range of possible solutions. This system utilizes analysis of spatial features (e.g., slope of a line) to evaluate potential student ideas represented within graphs. Student ideas are then…

  2. Automated Diabetic Retinopathy Image Assessment Software: Diagnostic Accuracy and Cost-Effectiveness Compared with Human Graders.

    Science.gov (United States)

    Tufail, Adnan; Rudisill, Caroline; Egan, Catherine; Kapetanakis, Venediktos V; Salas-Vega, Sebastian; Owen, Christopher G; Lee, Aaron; Louw, Vern; Anderson, John; Liew, Gerald; Bolter, Louis; Srinivas, Sowmya; Nittala, Muneeswar; Sadda, SriniVas; Taylor, Paul; Rudnicka, Alicja R

    2017-03-01

    With the increasing prevalence of diabetes, annual screening for diabetic retinopathy (DR) by expert human grading of retinal images is challenging. Automated DR image assessment systems (ARIAS) may provide clinically effective and cost-effective detection of retinopathy. We aimed to determine whether ARIAS can be safely introduced into DR screening pathways to replace human graders. Observational measurement comparison study of human graders following a national screening program for DR versus ARIAS. Retinal images from 20 258 consecutive patients attending routine annual diabetic eye screening between June 1, 2012, and November 4, 2013. Retinal images were manually graded following a standard national protocol for DR screening and were processed by 3 ARIAS: iGradingM, Retmarker, and EyeArt. Discrepancies between manual grades and ARIAS results were sent to a reading center for arbitration. Screening performance (sensitivity, false-positive rate) and diagnostic accuracy (95% confidence intervals of screening-performance measures) were determined. Economic analysis estimated the cost per appropriate screening outcome. Sensitivity point estimates (95% confidence intervals) of the ARIAS were as follows: EyeArt 94.7% (94.2%-95.2%) for any retinopathy, 93.8% (92.9%-94.6%) for referable retinopathy (human graded as either ungradable, maculopathy, preproliferative, or proliferative), 99.6% (97.0%-99.9%) for proliferative retinopathy; Retmarker 73.0% (72.0%-74.0%) for any retinopathy, 85.0% (83.6%-86.2%) for referable retinopathy, 97.9% (94.9%-99.1%) for proliferative retinopathy. iGradingM classified all images as either having disease or being ungradable. EyeArt and Retmarker saved costs compared with manual grading both as a replacement for initial human grading and as a filter prior to primary human grading, although the latter approach was less cost-effective. Retmarker and EyeArt systems achieved acceptable sensitivity for referable retinopathy when compared
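The sensitivity estimates with 95% confidence intervals quoted above are of the kind produced by a standard binomial proportion interval. As an illustrative sketch (not the study's analysis code), sensitivity with a Wilson score interval can be computed as:

```python
import math

def sensitivity_ci(tp, fn, z=1.96):
    """Sensitivity (tp / (tp + fn)) with a Wilson score 95% CI.

    tp: diseased patients flagged by the system; fn: diseased patients missed.
    """
    n = tp + fn
    p = tp / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half
```

The Wilson interval is one common choice; the paper's exact interval method is not stated in the abstract.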

  3. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  4. Assessment of helminth load in faecal samples of free range ...

    African Journals Online (AJOL)

    The helminth load in faecal samples of free-range indigenous chickens in Port Harcourt Metropolis was examined. Faecal samples were collected from 224 birds in 15 homesteads and 4 major markets (Mile 3, Mile 1, Borokiri and Eneka Village market) where poultry birds are gathered for sale. 0.2-0.5 g of faecal sample was ...

  5. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
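The core claim above, that combining a large number of random increments into one composite sample yields an analytical result representative of the true mean, can be illustrated with a small Monte Carlo sketch (hypothetical concentrations, not the validation data; all names are assumptions):

```python
import random

def composite_mean(stockpile, n_increments, rng):
    """Analytical result for one composite built from random increments."""
    return sum(rng.choice(stockpile) for _ in range(n_increments)) / n_increments

def mean_abs_error(stockpile, n_increments, trials, seed=0):
    """Average deviation of the composite result from the true stockpile mean."""
    rng = random.Random(seed)
    true_mean = sum(stockpile) / len(stockpile)
    errs = [abs(composite_mean(stockpile, n_increments, rng) - true_mean)
            for _ in range(trials)]
    return sum(errs) / trials

# Heterogeneous stockpile: mostly clean soil with a few contaminated hot spots
stockpile = [1.0] * 90 + [50.0] * 10
```

Running `mean_abs_error` with many increments per composite gives results much closer to the true mean than with few increments, which is the theoretical basis the strategy was built on.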

  6. Automation of analytical systems in power cycles

    International Nuclear Information System (INIS)

    Staub Lukas

    2008-01-01

    'Automation' is a widely used term in instrumentation and is often applied to signal exchange, PLC and SCADA systems. Common use, however, does not necessarily describe autonomous operation of analytical devices. We define an automated analytical system as a black box with an input (sample) and an output (measured value). In addition we need dedicated status lines for assessing the validity of the input to our black box and of the output for subsequent systems. We will discuss input parameters, automated analytical processes and output parameters. Further consideration will be given to signal exchange and integration into the operating routine of a power plant. Local control loops (chemical dosing) and the automation of sampling systems are not discussed here. (author)

  7. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Directory of Open Access Journals (Sweden)

    Demir Sumeyra U

    2012-12-01

    Full Text Available Abstract Background Imaging of the human microcirculation in real-time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated tool that can extract microvasculature information and quantitatively monitor changes in tissue perfusion might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. Similar, but more variable FCD
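As a rough illustration of the segmentation step described above (not the authors' algorithm), multiple level thresholding and an FCD-style density measure might be sketched as follows; the threshold values, pixel geometry, and function names are all assumptions:

```python
import numpy as np

def segment_vessels(frame, thresholds):
    """Simplified multi-level thresholding: mark a pixel as 'vessel' if it is
    darker than any candidate threshold (vessels appear dark in SDF video)."""
    frame = np.asarray(frame, dtype=float)
    mask = np.zeros(frame.shape, dtype=bool)
    for t in thresholds:
        mask |= frame < t
    return mask

def functional_capillary_density(mask, pixel_len_mm, area_mm2):
    """FCD approximated as vessel length per observed area (1/mm)."""
    return mask.sum() * pixel_len_mm / area_mm2
```

A drop in the returned FCD value between baseline and hemorrhage frames would mirror the significant decrease the study reports.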

  8. Preliminary biogeochemical assessment of EPICA LGM and Holocene ice samples

    Science.gov (United States)

    Bulat, S.; Alekhina, I.; Marie, D.; Wagenbach, D.; Raynaud, D.; Petit, J. R.

    2009-04-01

    weak signals were possible to generate, which are now under cloning. The signals were hard to reproduce because of the rather low sample volumes. More ice volume is needed to make the biosignal stronger and reproducible. In the meantime we are adjusting the PCR and, in addition, testing a DNA repair-enzyme cocktail in case of DNA damage. As a preliminary conclusion we would like to highlight the following: both Holocene and LGM ice samples (EDC99 and EDML) are very clean in terms of ultra-low biomass and ultra-low DOC content. The most basal ice of the EDC and EDML ice cores could help in assessing microbial biomass and diversity, if present, under the glacier at the ice-bedrock boundary. * The present-day consortium includes S. Bulat, I. Alekhina, P. Normand, D. Prieur, J-R. Petit and D. Raynaud (France) and E. Willerslev and J.P. Steffensen (Denmark)

  9. Laboratory automation in clinical bacteriology: what system to choose?

    Science.gov (United States)

    Croxatto, A; Prod'hom, G; Faverjon, F; Rochais, Y; Greub, G

    2016-03-01

    Automation was introduced many years ago in several diagnostic disciplines such as chemistry, haematology and molecular biology. The first laboratory automation system for clinical bacteriology was released in 2006, and it rapidly proved its value by increasing productivity, allowing a continuous increase in sample volumes despite limited budgets and personnel shortages. Today, two major manufacturers, BD Kiestra and Copan, are commercializing partial or complete laboratory automation systems for bacteriology. The laboratory automation systems are rapidly evolving to provide improved hardware and software solutions to optimize laboratory efficiency. However, the complex parameters of the laboratory and automation systems must be considered to determine the best system for each given laboratory. We address several topics on laboratory automation that may help clinical bacteriologists to understand the particularities and operative modalities of the different systems. We present (a) a comparison of the engineering and technical features of the various elements composing the two different automated systems currently available, (b) the system workflows of partial and complete laboratory automation, which define the basis for laboratory reorganization required to optimize system efficiency, (c) the concept of digital imaging and telebacteriology, (d) the connectivity of laboratory automation to the laboratory information system, (e) the general advantages and disadvantages as well as the expected impacts provided by laboratory automation and (f) the laboratory data required to conduct a workflow assessment to determine the best configuration of an automated system for the laboratory activities and specificities. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  10. Aquarius's Instrument Science Data System (ISDS) Automated to Acquire, Process, Trend Data and Produce Radiometric System Assessment Reports

    Science.gov (United States)

    2008-01-01

    The Aquarius Radiometer, a subsystem of the Aquarius Instrument, required a data acquisition ground system to support calibration and radiometer performance assessment. To support calibration and compose performance assessments, we developed an automated system which uploaded raw data to an FTP server and saved raw and processed data to a database. This paper details the overall functionality of the Aquarius Instrument Science Data System (ISDS) and the individual electrical ground support equipment (EGSE) which produced data files that were infused into the ISDS. Real-time EGSEs include an ICDS simulator, calibration GSE, a LabVIEW-controlled power supply, and a chamber data acquisition system. The ICDS simulator serves as the test conductor's primary workstation, collecting radiometer housekeeping (HK) and science data and passing commands and HK telemetry collection requests to the radiometer. The calibration GSE (Radiometer Active Test Source) provides a choice of multiple targets for external calibration of the radiometer. The power supply GSE, controlled by LabVIEW, provides real-time voltage and current monitoring of the radiometer. Finally, the chamber data acquisition system produces data reflecting chamber vacuum pressure, thermistor temperatures, AVG and watts. Each GSE system produces text-based data files every two to six minutes and automatically copies the data files to the central archiver PC. The archiver PC stores the data files, schedules automated uploads of these files to an external FTP server, and accepts requests to copy all data files to the ISDS for offline data processing and analysis. The Aquarius Radiometer ISDS contains PHP and MATLAB programs to parse, process and save all data to a MySQL database. Analysis tools (MATLAB programs) in the ISDS are capable of displaying radiometer science, telemetry and auxiliary data in near real time, as well as performing data analysis and producing automated performance assessment reports of the Aquarius

  11. Performance of automated software in the assessment of segmental left ventricular function in cardiac CT: Comparison with cardiac magnetic resonance

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Rui [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Capital Medical University, Department of Radiology, Beijing Anzhen Hospital, Beijing (China); Meinel, Felix G. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Ludwig-Maximilians-University Hospital, Institute for Clinical Radiology, Munich (Germany); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Canstein, Christian [Siemens Medical Solutions USA, Malvern, PA (United States); Spearman, James V. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); De Cecco, Carlo N. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ' ' Sapienza' ' , Departments of Radiological Sciences, Oncology and Pathology, Latina (Italy)

    2015-12-15

    To evaluate the accuracy, reliability and time-saving potential of a novel cardiac CT (CCT)-based, automated software package for the assessment of segmental left ventricular (LV) function, compared to visual and manual quantitative assessment of CCT and cardiac magnetic resonance (CMR). Forty-seven patients with suspected or known coronary artery disease (CAD) were enrolled in the study. Wall thickening was calculated. Segmental LV wall motion was automatically calculated and shown as a colour-coded polar map. Processing time for each method was recorded. Mean wall thickness in the diastolic and systolic phases on polar maps, CCT, and CMR was 9.2 ± 0.1 mm and 14.9 ± 0.2 mm, 8.9 ± 0.1 mm and 14.5 ± 0.1 mm, and 8.3 ± 0.1 mm and 13.6 ± 0.1 mm, respectively. Mean wall thickening was 68.4 ± 1.5 %, 64.8 ± 1.4 % and 67.1 ± 1.4 %, respectively. Agreement in the assessment of LV wall motion between CCT, CMR and polar maps was good. Bland-Altman plots and intraclass correlation coefficients indicated good agreement between CCT, CMR and automated polar maps for diastolic and systolic segmental wall thickness and thickening. Processing time using polar maps was significantly decreased compared with CCT and CMR. Automated evaluation of segmental LV function with polar maps provides measurements similar to manual CCT and CMR evaluation, albeit with substantially reduced analysis time. (orig.)
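The wall-thickening percentages above follow the usual definition, systolic thickness increase relative to diastolic thickness. A one-line sketch of that generic formula (not the evaluated software):

```python
def wall_thickening_pct(diastolic_mm, systolic_mm):
    """Segmental wall thickening: systolic increase relative to diastole."""
    return 100.0 * (systolic_mm - diastolic_mm) / diastolic_mm
```

For example, a segment that measures 9.0 mm in diastole and 15.0 mm in systole thickens by about 67%, which is in the range of the mean values reported above.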

  12. Machine-Learning Algorithms to Automate Morphological and Functional Assessments in 2D Echocardiography.

    Science.gov (United States)

    Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P

    2016-11-29

    Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from the physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used to develop an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed, and a majority voting method was used for conclusive predictions, with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental strain (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with the early-to-late diastolic transmitral velocity ratio, e', and strain; a subgroup analysis was performed in patients with left ventricular wall thickness > 13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity, relative to the early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggest that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
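The majority-voting step of the ensemble can be illustrated with a minimal sketch (not the study's code); the label strings and function name are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Conclusive label = the class predicted by most base models.

    `predictions` holds one label per base model, e.g. one each from an SVM,
    a random forest, and a neural network.
    """
    return Counter(predictions).most_common(1)[0][0]
```

With three base classifiers and two classes there is always a strict majority, so the vote is never ambiguous.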

  13. Assessment of Automated Analyses of Cell Migration on Flat and Nanostructured Surfaces

    DEFF Research Database (Denmark)

    Gradinaru, Cristian; Lopacinska, Joanna M.; Huth, Johannes

    2012-01-01

    Motility studies of cells often rely on computer software that analyzes time-lapse recorded movies and establishes cell trajectories fully automatically. This raises the question of reproducibility of results, since different programs could yield significantly different results of such automated ...... to different segmentation methods. Unfortunately, population averages based on such different cell populations, differ significantly in some cases. Thus, results obtained with one software package are not necessarily reproducible by other software....

  14. Remote monitoring field trial. Application to automated air sampling. Report on Task FIN-E935 of the Finnish Support Programme to IAEA Safeguards

    International Nuclear Information System (INIS)

    Poellaenen, R.; Ilander, T.; Lehtinen, J.; Leppaenen, A.; Nikkinen, M.; Toivonen, H.; Ylaetalo, S.; Smartt, H.; Garcia, R.; Martinez, R.; Glidewell, D.; Krantz, K.

    1999-01-01

    An automated air sampling station has recently been developed by the Radiation and Nuclear Safety Authority (STUK). The station is furnished with equipment that allows comprehensive remote monitoring of the station and its data. Under the Finnish Support Programme to IAEA Safeguards, STUK and Sandia National Laboratories (SNL) established a field trial to demonstrate the use of remote monitoring technologies. STUK provided means for real-time radiation monitoring and sample authentication, whereas SNL delivered means for authenticated surveillance of the equipment and its location. The field trial showed that remote monitoring can be carried out using simple means, although advanced facilities are needed for comprehensive surveillance. Authenticated measurement data could be reliably transferred from the monitoring site to the headquarters without the presence of authorized personnel at the monitoring site. The operation of the station and the remote monitoring system was reliable. (orig.)
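The report does not specify the authentication scheme used. As a generic illustration of how transferred measurement files can be authenticated, an HMAC tag could be attached at the station and verified at headquarters (all names here are assumptions, not the trial's protocol):

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> str:
    """Attach an HMAC-SHA256 tag so the receiver can verify integrity/origin."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check that the data matches the transmitted tag."""
    return hmac.compare_digest(sign(data, key), tag)
```

Any tampering with the data in transit changes the tag, so verification fails at the receiving end.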

  15. Assessment of Automated Analyses of Cell Migration on Flat and Nanostructured Surfaces

    Directory of Open Access Journals (Sweden)

    Hans A Kestler

    2012-07-01

    Full Text Available Motility studies of cells often rely on computer software that analyzes time-lapse recorded movies and establishes cell trajectories fully automatically. This raises the question of the reproducibility of results, since different programs could yield significantly different results of such automated analysis. The fact that the segmentation routines of such programs are often challenged by nanostructured surfaces makes the question more pertinent. Here we illustrate how it is possible to track cells on bright field microscopy images with image analysis routines implemented in an open-source cell tracking program, PACT (Program for Automated Cell Tracking). We compare the automated motility analysis of three cell tracking programs, PACT, Autozell, and TLA, using the same movies as input for all three programs. We find that the different programs track overlapping, but different, subsets of cells due to different segmentation methods. Unfortunately, population averages based on such different cell populations differ significantly in some cases. Thus, results obtained with one software package are not necessarily reproducible by other software.

  16. Advanced manufacturing rules check (MRC) for fully automated assessment of complex reticle designs: Part II

    Science.gov (United States)

    Straub, J. A.; Aguilar, D.; Buck, P. D.; Dawkins, D.; Gladhill, R.; Nolke, S.; Riddick, J.

    2006-10-01

    Advanced electronic design automation (EDA) tools, with their simulation, modeling, design rule checking, and optical proximity correction capabilities, have facilitated the improvement of first pass wafer yields. While the data produced by these tools may have been processed for optimal wafer manufacturing, it is possible for the same data to be far from ideal for photomask manufacturing, particularly at lithography and inspection stages, resulting in production delays and increased costs. The same EDA tools used to produce the data can be used to detect potential problems for photomask manufacturing in the data. In the previous paper, it was shown how photomask MRC is used to uncover data related problems prior to automated defect inspection. It was demonstrated how jobs which are likely to have problems at inspection could be identified and separated from those which are not. The use of photomask MRC in production was shown to reduce time lost to aborted runs and troubleshooting due to data issues. In this paper, the effectiveness of this photomask MRC program in a high volume photomask factory over the course of a year as applied to more than ten thousand jobs will be shown. Statistics on the results of the MRC runs will be presented along with the associated impact to the automated defect inspection process. Common design problems will be shown as well as their impact to mask manufacturing throughput and productivity. Finally, solutions to the most common and most severe problems will be offered and discussed.

  17. Influence of sample preparation and reliability of automated numerical refocusing in stain-free analysis of dissected tissues with quantitative phase digital holographic microscopy

    Science.gov (United States)

    Kemper, Björn; Lenz, Philipp; Bettenworth, Dominik; Krausewitz, Philipp; Domagk, Dirk; Ketelhut, Steffi

    2015-05-01

    Digital holographic microscopy (DHM) has been demonstrated to be a versatile tool for high-resolution, non-destructive quantitative phase imaging of surfaces and multi-modal, minimally invasive monitoring of living cell cultures in vitro. DHM provides quantitative monitoring of physiological processes through functional imaging and structural analysis, which, for example, gives new insight into the signalling of cellular water permeability and into cell morphology changes due to toxins and infections. Quantitative DHM phase contrast also opens prospective application fields in the analysis of dissected tissues, through stain-free imaging and the quantification of tissue density changes. We show that DHM allows imaging of different tissue layers with high contrast in unstained tissue sections. As the investigation of fixed samples represents a very important application field in pathology, we also analyzed the influence of the sample preparation. The retrieved data demonstrate that the quality of quantitative DHM phase images of dissected tissues depends strongly on the fixing method and on common staining agents. As the reconstruction in DHM is performed numerically, multi-focus imaging is achieved from a single digital hologram. We therefore evaluated the automated refocusing feature of DHM on different types of dissected tissues and found that highly reproducible holographic autofocusing can be achieved on moderately stained samples. Finally, it is demonstrated that alterations of the spatial refractive index distribution in murine and human tissue samples represent a reliable absolute parameter related to different degrees of inflammation in experimental colitis and Crohn's disease. This paves the way towards the use of DHM in digital pathology for automated histological examinations and further studies to elucidate the translational potential of quantitative phase microscopy for the clinical management of patients, e.g., with inflammatory bowel disease.
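Numerical refocusing from a single hologram is commonly done with the angular spectrum method: the reconstructed field is propagated in Fourier space by a chosen distance. A compact sketch under that assumption (not the authors' implementation; the parameters are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Numerically refocus a complex field by distance dz (angular spectrum).

    field: square complex array; dz, wavelength, dx in metres.
    Evanescent frequency components are clipped to zero propagation phase.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit modulus, propagating forward and then backward by the same distance recovers the original field, which is the basis for autofocus searches over a range of `dz` values.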

  18. chemical and microbiological assessment of surface water samples ...

    African Journals Online (AJOL)

    PROF EKWUEME

    The importance of good-quality water cannot be overemphasized: water is second only to air as a critical sustainer of life, so it is appropriate to evaluate its quality and quantity. A total of thirteen water samples were investigated in this study: nine samples from different surface water bodies, two ...

  19. Chemical and microbiological assessment of surface water samples ...

    African Journals Online (AJOL)

    The importance of good-quality water cannot be overemphasized: water is second only to air as a critical sustainer of life, so it is appropriate to evaluate its quality and quantity. A total of thirteen water samples were investigated in this study: nine samples from different surface water bodies, two ...

  20. Teaching Principles of Assessment Literacy through Teacher Work Sample Methodology

    Science.gov (United States)

    Bangert, Art; Kelting-Gibson, Lynn

    2006-01-01

    Recent accountability efforts at state and national levels highlight the importance of preparing future teachers in the skills required to produce sound classroom assessments that are capable of improving student learning through informed instruction. Stiggins (1995) suggests that the quality of classroom assessments will not improve unless teacher…

  1. Towards a fully automated lab-on-a-disc system integrating sample enrichment and detection of analytes from complex matrices

    DEFF Research Database (Denmark)

    Andreasen, Sune Zoëga

    Lab-on-a-chip (LoC) systems have been actively researched and developed for at least the last 25 years, yet the integration of an efficient and robust sample pretreatment method still proves to be a challenge in the field. This lack of sample pretreatment methods in LoC platforms prevents the technol...

  2. Automated polyvinylidene difluoride hollow fiber liquid-phase microextraction of flunitrazepam in plasma and urine samples for gas chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Cui, Shufen; Tan, Shuo; Ouyang, Gangfeng; Pawliszyn, Janusz

    2009-03-20

    A new polyvinylidene difluoride (PVDF) hollow fiber (200 microm wall thickness, 1.2 mm internal diameter, 0.2 microm pore size) was compared with two other polypropylene (PP) hollow fibers (200 and 300 microm wall thickness, 1.2 mm internal diameter, 0.2 microm pore size) in the automated hollow fiber liquid-phase microextraction (HF-LPME) of flunitrazepam (FLNZ) in biological samples. With higher porosity and better solvent compatibility, the PVDF hollow fiber showed advantages in extraction speed and operational accuracy. Parameters of the CTC autosampler program for HF-LPME in plasma and urine samples were carefully investigated to ensure accuracy and reproducibility. Several parameters influencing the efficiency of HF-LPME of FLNZ in plasma and urine samples were optimized, including the type of porous hollow fiber, organic solvent, agitation rate, extraction time, salt concentration, organic modifier, and pH. Under optimal conditions, extraction recoveries of FLNZ in plasma and urine samples were 6.5% and 83.5%, respectively, corresponding to enrichment factors of 13 in the plasma matrix and 167 in the urine matrix. Excellent sample clean-up was observed, and good linearities (r(2)=0.9979 for the plasma sample and 0.9995 for the urine sample) were obtained in the ranges of 0.1-1000 ng/mL (plasma) and 0.01-1000 ng/mL (urine). The limits of detection (S/N=3) were 0.025 ng/mL in the plasma matrix and 0.001 ng/mL in the urine matrix by gas chromatography/tandem mass spectrometry.
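
    The reported recoveries and enrichment factors are mutually consistent if the enrichment factor is computed as recovery times the donor-to-acceptor phase ratio, which the figures imply is about 200. A minimal sketch of that arithmetic (the phase ratio of 200 is an inference from the reported numbers, not a value stated in the abstract):

```python
# Enrichment factor in hollow fiber LPME:
#   EF = C_acceptor / C_donor = recovery * (V_donor / V_acceptor)
# The phase ratio of 200 below is inferred from the reported figures.

def enrichment_factor(recovery: float, phase_ratio: float) -> float:
    """Enrichment factor from fractional recovery and V_donor/V_acceptor."""
    return recovery * phase_ratio

print(enrichment_factor(0.065, 200.0))  # plasma: ~13
print(enrichment_factor(0.835, 200.0))  # urine: ~167
```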

  3. An assessment of common atmospheric particulate matter sampling ...

    African Journals Online (AJOL)

    USER

    Particulate matter (PM) was sampled using the tapered element oscillating microbalance, and inductively coupled plasma mass spectrometry (ICP-MS) and scanning electron microscopy coupled with energy-dispersive spectrometry (SEM/EDS) were used for ...

  4. Assessment of toxic heavy metal loading in topsoil samples within ...

    African Journals Online (AJOL)

    gilly

    with atomic absorption spectrophotometer (AAS) technique. Soil pH ranged from 6.65 to 8.23, sand from 55.8 to 75.0%, silt from 16.6 to 34.6%, clay from 8.43 to 13.6%, and organic matter from 0.97 to 4.84%. These properties compared with those of background samples. Rock samples (RK) showed high Fe and ...

  5. Automated ambulatory assessment of cognitive performance, environmental conditions, and motor activity during military operations

    Science.gov (United States)

    Lieberman, Harris R.; Kramer, F. Matthew; Montain, Scott J.; Niro, Philip; Young, Andrew J.

    2005-05-01

    Until recently, scientists had limited opportunities to study human cognitive performance in non-laboratory, fully ambulatory situations. Recently, advances in technology have made it possible to extend behavioral assessment to the field environment. One of the first devices to measure human behavior in the field was the wrist-worn actigraph. This device, now widely employed, can acquire minute-by-minute information on an individual's level of motor activity. Actigraphs can, with reasonable accuracy, distinguish sleep from waking, the most critical and basic aspect of human behavior. However, rapid technologic advances have provided the opportunity to collect much more information from fully ambulatory humans. Our laboratory has developed a series of wrist-worn devices, not much larger than a watch, which can assess simple and choice reaction time, vigilance, and memory. In addition, the devices can concurrently assess motor activity with much greater temporal resolution than the standard actigraph. Furthermore, they continuously monitor multiple environmental variables, including temperature, humidity, sound, and light. We have employed these monitors during training and simulated military operations to collect information that would typically be unavailable under such circumstances. In this paper we describe various versions of the vigilance monitor and how each successive version extended the capabilities of the device. Samples of data from several studies are presented, including studies conducted in harsh field environments during simulated infantry assaults, a Marine Corps Officer training course, and mechanized infantry (Stryker) operations. The monitors have been useful for documenting environmental conditions experienced by wearers, studying patterns of sleep and activity, and examining the effects of nutritional manipulations on warfighter performance.

  6. Automated low-contrast pattern recognition algorithm for magnetic resonance image quality assessment.

    Science.gov (United States)

    Ehman, Morgan O; Bao, Zhonghao; Stiving, Scott O; Kasam, Mallik; Lanners, Dianna; Peterson, Teresa; Jonsgaard, Renee; Carter, Rickey; McGee, Kiaran P

    2017-08-01

    Low contrast (LC) detectability is a common test criterion for diagnostic radiologic quality control (QC) programs. Automation of this test is desirable in order to reduce human variability and to speed up analysis. However, automation is challenging due to the complexity of the human visual perception system and the difficulty of creating algorithms that mimic its response. This paper describes the development and testing of an automated LC detection algorithm for use in the analysis of magnetic resonance (MR) images of the American College of Radiology (ACR) QC phantom. The detection algorithm includes fuzzy logic decision processes and various edge detection methods to quantify LC detectability. Algorithm performance was first evaluated using a single LC phantom MR image with the addition of incremental zero-mean Gaussian noise, resulting in a total of 200 images. A c-statistic was calculated to determine how well CNR indicated when the algorithm would detect all ten spokes. To evaluate inter-rater agreement between experienced observers and the algorithm, a blinded observer study was performed on 196 LC phantom images acquired from nine clinical MR scanners. The nine scanners included two MR manufacturers and two field strengths (1.5 T, 3.0 T). Inter-rater and algorithm-rater agreement was quantified using Krippendorff's alpha. For the Gaussian-noise-added data, CNR ranged from 0.519 to 11.7, with CNR being an excellent discriminator of algorithm performance (c-statistic = 0.9777). Reviewer scoring of the clinical phantom data resulted in an inter-rater agreement of 0.673, with the agreement between observers and algorithm equal to 0.652, both of which indicate significant agreement. This study demonstrates that the detection of LC test patterns for MR imaging QC programs can be successfully automated and that the algorithm's response can model the human visual detection system of expert MR QC readers. © 2017 American Association of Physicists in Medicine.
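
    The c-statistic reported here is the probability that a randomly chosen image on which the algorithm detected all ten spokes has a higher CNR than a randomly chosen image on which it did not. A minimal rank-based sketch with invented CNR values (not the study's data):

```python
def c_statistic(scores_pos, scores_neg):
    """Concordance (c-statistic / AUC): probability that a randomly chosen
    detected case has a higher score (CNR) than a non-detected case;
    ties count one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy CNR values (invented for illustration):
print(round(c_statistic([8.0, 9.5, 11.7], [0.5, 2.0, 8.0]), 3))  # → 0.944
```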

  7. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of the scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and of FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cytogeneticists in detecting cervical cancers.
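
    The projection and spot-detection steps can be sketched as follows; the synthetic data, structuring-element size, and threshold are illustrative assumptions, not the authors' calibrated pipeline:

```python
import numpy as np
from scipy import ndimage

# Collapse a 9-slice focal stack into one 2-D image by maximum-intensity
# projection, then isolate small bright FISH-like spots with a white
# top-hat transform (sizes and thresholds here are illustrative).
rng = np.random.default_rng(0)
stack = rng.poisson(5, size=(9, 64, 64)).astype(float)  # noisy focal slices
stack[4, 20, 30] += 100.0                               # synthetic FISH spot
stack[6, 45, 10] += 100.0                               # synthetic FISH spot

projection = stack.max(axis=0)                     # 2-D projection
tophat = ndimage.white_tophat(projection, size=5)  # suppress smooth background
n_spots = ndimage.label(tophat > 50.0)[1]          # count bright spots
print(n_spots)  # → 2
```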

  8. Needs Assessment Study in Science Education: Sample of Turkey

    OpenAIRE

    Z. Ozdilek; M. Ozkan

    2008-01-01

    A needs assessment process was conducted to determine the difficulties and requirements of a science unit, as an example of how the needs assessment process can be used in science education in Turkey. A 40-item teacher questionnaire containing four dimensions related to a chemistry unit named “Travel to the Inner Structure of Matter”, as presented in the current curriculum materials, was administered. The questionnaire was completed by 130 elementary school science teachers in order to get their views ...

  9. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    Science.gov (United States)

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum disorder, autism, or intellectual deficits, and adults and the elderly with dementia. These populations are unable to undergo a behavioral assessment and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8dB higher than the
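
    The threshold comparison reduces to a least-squares line relating the two measures. A sketch with idealized synthetic thresholds (the 7.8 dB offset is the reported average; the data points themselves are invented):

```python
import numpy as np

# Idealized data: cortical thresholds sit 7.8 dB above behavioral ones.
behavioral = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])  # dB HL
cortical = behavioral + 7.8                                   # reported offset

slope, intercept = np.polyfit(behavioral, cortical, 1)  # least-squares line
print(round(slope, 3), round(intercept, 3))  # slope ≈ 1.0, intercept ≈ 7.8
```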

  10. Assessment Study on Sensors and Automation in the Industries of the Future. Reports on Industrial Controls, Information Processing, Automation, and Robotics

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Bonnie [Adventium Labs; Boddy, Mark [Adventium Labs; Doyle, Frank [Univ. of California, Santa Barbara, CA (United States); Jamshidi, Mo [Univ. of New Mexico, Albuquerque, NM (United States); Ogunnaike, Tunde [Univ. of Delaware, Newark, DE (United States)

    2004-11-01

    This report presents the results of an expert study to identify research opportunities for Sensors & Automation, a sub-program of the U.S. Department of Energy (DOE) Industrial Technologies Program (ITP). The research opportunities are prioritized by realizable energy savings. The study encompasses the technology areas of industrial controls, information processing, automation, and robotics. These areas have been central areas of focus of many Industries of the Future (IOF) technology roadmaps. This report identifies opportunities for energy savings as a direct result of advances in these areas and also recognizes indirect means of achieving energy savings, such as product quality improvement, productivity improvement, and reduction of recycle.

  11. Evaluation of the falls telephone: an automated system for enduring assessment of falls.

    Science.gov (United States)

    van der Marck, Marjolein A; Overeem, Sebastiaan; Klok, Philomène C M; Bloem, Bastiaan R; Munneke, Marten

    2011-02-01

    To evaluate the reliability and user experiences of an automated telephone system to monitor falls during a prolonged period of time. Prospective cohort study. Four neurological outpatient clinics in the Netherlands. One hundred nineteen community-dwelling people with Parkinson's disease without dementia, because falls are common in this population. Clinical and demographic data were obtained. The Falls Telephone is a computerized telephone system through which participants can enter the number of falls during a particular period. During a follow-up of 1 to 40 weekly calls, 2,465 calls were made. In total, 173 no-fall entries and 115 fall entries were verified using personal telephone interviews. User experiences were evaluated in 90 of the 119 participants using structured telephone interviews. All no-fall entries and 78% of fall entries were confirmed to be correct. Sensitivity to detect falls was 100%, and specificity was 87%. Users regarded the Falls Telephone as a convenient tool to monitor falls. The Falls Telephone is a convenient and reliable instrument to monitor falls. The automated system has high specificity, obviating the need for time-consuming personal follow-up calls in the majority of nonfallers. As such, the Falls Telephone lends itself well to data collection in large trials with prolonged follow-up in participants with Parkinson's disease. © 2011, Copyright the Authors. Journal compilation © 2011, The American Geriatrics Society.
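
    The reported 100% sensitivity and 87% specificity follow from the verified entries if the 22% of fall entries that could not be confirmed are treated as false positives and the confirmed no-fall entries as true negatives. This reading of the abstract's figures is an assumption; the sketch below just makes the arithmetic explicit:

```python
# Confusion-matrix arithmetic behind the reported figures (interpretation
# of the abstract's counts, not its stated protocol).
tp = round(0.78 * 115)   # fall entries confirmed as real falls -> 90
fp = 115 - tp            # unconfirmed fall entries, treated as false alarms
tn = 173                 # all no-fall entries were confirmed correct
fn = 0                   # no missed falls among the verified entries

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%}")
# → sensitivity=100% specificity=87%
```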

  12. Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images

    Science.gov (United States)

    Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel

    2016-02-01

    Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
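
    The core of a single-frequency Gabor analysis can be sketched as follows; the kernel parameters and the synthetic stripe image are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np
from scipy import ndimage

# A regular sarcomere-like stripe pattern produces a strong, spatially
# uniform Gabor magnitude response at its own spatial frequency.

def gabor_kernel(freq, sigma, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * x)  # oriented along x

_, img_x = np.mgrid[0:64, 0:64]
stripes = np.sin(2 * np.pi * img_x / 8.0)  # stripe period of 8 pixels

k = gabor_kernel(freq=1 / 8.0, sigma=4.0)
resp_re = ndimage.convolve(stripes, k.real, mode="wrap")
resp_im = ndimage.convolve(stripes, k.imag, mode="wrap")
magnitude = np.hypot(resp_re, resp_im)  # response strength per pixel
print(bool(magnitude.mean() > 1.0))  # → True (matched frequency)
```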

  13. Nationwide Drinking Water Sampling Campaign for Exposure Assessments in Denmark

    Science.gov (United States)

    Voutchkova, Denitza Dimitrova; Hansen, Birgitte; Ernstsen, Vibeke; Kristiansen, Søren Munch

    2018-01-01

    Nationwide sampling campaign of treated drinking water of groundwater origin was designed and implemented in Denmark in 2013. The main purpose of the sampling was to obtain data on the spatial variation of iodine concentration and speciation in treated drinking water, which was supplied to the majority of the Danish population. This data was to be used in future exposure and epidemiologic studies. The water supply sector (83 companies, owning 144 waterworks throughout Denmark) was involved actively in the planning and implementation process, which reduced significantly the cost and duration of data collection. The dataset resulting from this collaboration covers not only iodine species (I−, IO3−, TI), but also major elements and parameters (pH, electrical conductivity, DOC, TC, TN, F−, Cl−, NO3−, SO42−, Ca2+, Mg2+, K+, Na+) and a long list of trace elements (n = 66). The water samples represent 144 waterworks abstracting about 45% of the annual Danish groundwater abstraction for drinking water purposes, which supply about 2.5 million Danes (45% of all Danish residents). This technical note presents the design, implementation, and limitations of such a sampling design in detail in order (1) to facilitate the future use of this dataset, (2) to inform future replication studies, or (3) to provide an example for other researchers. PMID:29518987

  14. an assessment of methods for sampling carabid beetles

    African Journals Online (AJOL)

    Mgina

    ground beetles. The number of samples, total number of individual carabid beetles, total number of species and species complementality for the six replicates are shown in Table 1. The total number of individuals caught using the two methods was 3025 for ground-searching methods. (mean of 4.97 individuals per one-hour.

  15. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  16. GenomEra MRSA/SA, a fully automated homogeneous PCR assay for rapid detection of Staphylococcus aureus and the marker of methicillin resistance in various sample matrixes.

    Science.gov (United States)

    Hirvonen, Jari J; Kaukoranta, Suvi-Sirkku

    2013-09-01

    The GenomEra MRSA/SA assay (Abacus Diagnostica, Turku, Finland) is the first commercial homogeneous PCR assay using thermally stable, intrinsically fluorescent time-resolved fluorometric (TRF) labels resistant to autofluorescence and other background effects. This fully automated closed tube PCR assay simultaneously detects Staphylococcus aureus specific DNA and the mecA gene within 50 min. It can be used for both screening and confirmation of methicillin-resistant and -sensitive S. aureus (MRSA and MSSA) directly in different specimen types or from preceding cultures. The assay has shown excellent performance in comparisons with other diagnostic methods in all the sample types tested. The GenomEra MRSA/SA assay provides rapid assistance for the detection of MRSA as well as invasive staphylococcal infections and helps the early targeting of antimicrobial therapy to patients with potential MRSA infection.

  17. A new assay for cytotoxic lymphocytes, based on a radioautographic readout of 111In release, suitable for rapid, semi-automated assessment of limit-dilution cultures

    International Nuclear Information System (INIS)

    Shortman, K.; Wilson, A.

    1981-01-01

    A new assay for cytotoxic T lymphocytes is described, of general application, but particularly suitable for rapid, semi-automated assessment of multiple microculture tests. Target cells are labelled with high efficiency and to high specific activity with the oxine chelate of ¹¹¹In. After a 3-4 h incubation of test cells with 5 × 10³ labelled target cells in V wells of microtitre trays, samples of the supernatant are spotted on paper (5 μl) or transferred to soft-plastic U wells (25-50 μl) and the ¹¹¹In release assessed by radioautography. Overnight exposure of X-ray film with intensifying screens at -70 °C gives an image which is an intense dark spot for maximum release, a barely visible darkening with the low spontaneous release, and a definite positive with 10% specific lysis. The degree of film darkening, which can be quantitated by microdensitometry, shows a linear relationship with cytotoxic T lymphocyte dose up to the 40% lysis level. The labelling intensity and sensitivity can be adjusted over a wide range, allowing a single batch of the short half-life isotope to serve for 2 weeks. The 96 assays from a single tray are developed simultaneously on a single small sheet of film. Many trays can be processed together, and handling is rapid if 96-channel automatic pipettors are used. The method allows rapid visual scanning for positive and negative limit-dilution cultures in cytotoxic T cell precursor frequency and specificity studies. In addition, in conjunction with an automated densitometer designed to scan microtitre trays, the method provides an efficient alternative to isotope counting in routine cytotoxic assays. (Auth.)
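
    The percent specific lysis behind the readout is the standard isotope-release quantity; a sketch with hypothetical counts (the formula is standard, the numbers are invented):

```python
# Standard isotope-release arithmetic (counts below are hypothetical):
#   % specific lysis = 100 * (experimental - spontaneous) / (maximum - spontaneous)
def specific_lysis(experimental, spontaneous, maximum):
    """Percent specific lysis from 111In release counts."""
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

print(specific_lysis(experimental=870.0, spontaneous=120.0, maximum=3120.0))  # → 25.0
```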

  18. Evaluating the potential of automated telephony systems in rural communities: Field assessment for project Lwazi of HLT Meraka

    CSIR Research Space (South Africa)

    Gumede, T

    2008-11-01

    This work evaluates the potential role of automated telephony services in improving access to important government information and services. Our interviews, focus groups and surveys revealed that an automated telephony service could greatly support current government efforts...

  19. Simultaneous analysis of organochlorinated pesticides (OCPs) and polychlorinated biphenyls (PCBs) from marine samples using automated pressurized liquid extraction (PLE) and Power Prep™ clean-up.

    Science.gov (United States)

    Helaleh, Murad I H; Al-Rashdan, Amal; Ibtisam, A

    2012-05-30

    An automated pressurized liquid extraction (PLE) method followed by Power Prep™ clean-up was developed for organochlorinated pesticide (OCP) and polychlorinated biphenyl (PCB) analysis in environmental marine samples of fish, squid, bivalves, shells, octopus and shrimp. OCPs and PCBs were simultaneously determined in a single chromatographic run using gas chromatography-mass spectrometry-negative chemical ionization (GC-MS-NCI). About 5 g of each biological marine sample was mixed with anhydrous sodium sulphate and placed in the extraction cell of the PLE system. PLE is controlled by means of a PC using DMS 6000 software. Purification of the extract was accomplished using automated Power Prep™ clean-up with a pre-packed disposable silica column (6 g) supplied by Fluid Management Systems (FMS). All OCPs and PCBs were eluted from the silica column using two types of solvent: 80 mL of hexane and a 50 mL mixture of hexane and dichloromethane (1:1). A wide variety of fish and shellfish were collected from the fish market and analyzed using this method. The total PCB concentrations were 2.53, 0.25, 0.24, 0.24, 0.17 and 1.38 ng g(-1) (w/w) for fish, squid, bivalves, shells, octopus and shrimp, respectively, and the corresponding total OCP concentrations were 30.47, 2.86, 0.92, 10.72, 5.13 and 18.39 ng g(-1) (w/w). Lipids were removed using an SX-3 Bio-Beads gel permeation chromatography (GPC) column. Analytical criteria such as recovery, reproducibility and repeatability were evaluated through a range of biological matrices. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A simple and automated method to determine macrocyclic musk fragrances in sewage sludge samples by headspace solid-phase microextraction and gas chromatography-mass spectrometry.

    Science.gov (United States)

    Vallecillos, Laura; Pocurull, Eva; Borrull, Francesc

    2013-11-01

    For the first time, headspace solid-phase microextraction (HS-SPME) has been shown to be a powerful technique to extract macrocyclic musk fragrances directly from sewage sludge. It avoids the need for additional extraction/preconcentration techniques or clean-up procedures and facilitates the automation of the method. Thus, a simple and fully automated method based on HS-SPME and GC-MS has been developed which allows the determination of eight macrocyclic musk fragrances at ng g(-1) (d.w.) levels. The optimal HS-SPME conditions were achieved when a PDMS/DVB 65 μm fibre was exposed for 45 min in the headspace of 0.25 g sewage sludge samples mixed with 0.5 mL of water, stirred at 750 rpm at 80 °C. Optimal desorption conditions were found to be 250 °C for 3 min. Method detection limits were in the low pg g(-1) range, between 10 pg g(-1) (d.w.) and 25 pg g(-1) (d.w.) depending on the target analyte. In addition, under optimized conditions, the method gave good levels of intra-day and inter-day repeatability in sewage sludge, with relative standard deviations varying from 1% to 9% and from 6% to 15%, respectively (n=5, 1000 pg g(-1) d.w.). The applicability of the method was tested with sewage sludge from three urban sewage treatment plants (STPs). The analysis revealed the presence of the macrocyclic musks studied in several samples, with concentrations ranging between below the MQL (method quantification limit) and 0.89 ng g(-1) (d.w.). Copyright © 2013 Elsevier B.V. All rights reserved.

  1. chemical and microbiological assessment of surface water samples ...

    African Journals Online (AJOL)

    PROF EKWUEME

    are to assess, ascertain and evaluate the level, degree and type of pollution that characterize the surface water resources of the Enugu area of ... implications for economic development, since people rely heavily on it for various uses such as ... surface water bodies are prone to impacts from anthropogenic activities apart from ...

  2. Standard Format for Chromatographic-polarimetric System small samples assessment

    International Nuclear Information System (INIS)

    Naranjo, S.; Fajer, V.; Fonfria, C.; Patinno, R.

    2012-01-01

    The treatment of samples containing optically active substances to be evaluated as part of the quality control of raw material entering an industrial process, and also during the modifications exerted on it to obtain the desired final composition, is still an unsolved problem for many industries. That is the case of the sugarcane industry. The difficulties are sometimes compounded because the samples to be evaluated are no bigger than one milliliter. Reductions of the gel beds in G-10 and G-50 chromatographic columns, with an inner diameter of 16 mm instead of 25, and bed heights adjustable to requirements by means of sliding stoppers to increase analytical power, were evaluated with glucose and sucrose standards in concentrations from 1 to 10 g/dL, using aliquots of 1 mL without undesirable dilutions that could affect either detection or the chromatographic profile. Assays with seaweed extracts gave good results, which are shown. The advantage of determining the concentration of a separated substance from the height of its peak is established, along with the resulting savings in time and reagents. Sample expanded uncertainty in the two systems is compared. Several programs for data acquisition, storage and processing are also presented. (Author)
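
    Reading a concentration from a peak height amounts to a linear calibration against standards. A minimal sketch with synthetic data (the detector response model and all numbers are invented for illustration):

```python
import numpy as np

# Fit a line through peak heights of synthetic sucrose standards, then
# read an unknown sample's concentration from its own peak height.
conc = np.array([1.0, 2.5, 5.0, 7.5, 10.0])  # g/dL standards
height = 12.0 * conc + 0.5                    # assumed linear detector response

slope, intercept = np.polyfit(height, conc, 1)  # inverse calibration line
unknown_height = 60.5
print(round(slope * unknown_height + intercept, 3))  # → 5.0
```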

  3. Assessing total and volatile solids in municipal solid waste samples.

    Science.gov (United States)

    Peces, M; Astals, S; Mata-Alvarez, J

    2014-01-01

    Municipal solid waste is broadly generated in everyday activities and its treatment is a global challenge. Total solids (TS) and volatile solids (VS) are typical control parameters measured in biological treatments. In this study, the TS and VS were determined using the standard methods, as well as introducing some variants: (i) the drying temperature for the TS assays was 105°C, 70°C or 50°C and (ii) the VS were determined using different heating ramps from room temperature to 550°C. TS could be determined at either 105°C or 70°C, but oven residence time was tripled at 70°C, increasing from 48 to 144 h. The VS could be determined by smouldering the sample (where the sample is burnt without a flame), which avoids the release of fumes and odours in the laboratory. However, smouldering can generate undesired pyrolysis products as a consequence of carbonization, which leads to VS being underestimated. Carbonization can be avoided by using slow heating ramps to prevent oxygen limitation. Furthermore, crushing the sample cores decreased the time to reach constant weight and decreased the potential to underestimate VS.
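
    The TS and VS determinations reduce to simple gravimetric arithmetic on dish, wet, dried, and ashed weights; a sketch with invented weights (not the study's data):

```python
# Standard-method arithmetic for total and volatile solids (weights in
# grams; the numbers below are illustrative only):
def total_solids(w_dish, w_wet, w_dry):
    """TS as % of wet weight after drying (e.g. at 105 or 70 degC)."""
    return 100.0 * (w_dry - w_dish) / (w_wet - w_dish)

def volatile_solids(w_dish, w_dry, w_ash):
    """VS as % of TS after ignition at 550 degC."""
    return 100.0 * (w_dry - w_ash) / (w_dry - w_dish)

ts = total_solids(w_dish=50.0, w_wet=150.0, w_dry=80.0)    # → 30.0
vs = volatile_solids(w_dish=50.0, w_dry=80.0, w_ash=56.0)  # → 80.0
print(ts, vs)
```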

  4. Automated system for generation of soil moisture products for agricultural drought assessment

    Science.gov (United States)

    Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are being used globally for the forecasting / early warning of drought and for monitoring drought for its prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters/indices need to be evaluated to sufficiently address the problem. It is a challenge to generate input parameters from different sources like space-based data, ground data and collateral data in short intervals of time, where there may be limitations in terms of processing power, availability of domain expertise, and specialized models & tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model is in vogue to arrive at soil moisture products, and is widely popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for the best possible automation, to fulfill the need for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for the generation of soil moisture products by facilitating users to concentrate on further enhancements and the implementation of these parameters in related areas of research, without re-discovering the established models. The emphasis of the architecture is mainly on available open source libraries for GIS and raster I/O operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of user-free operation if required, with inbuilt chain processing for the everyday generation of products at specified intervals. The operational software has inbuilt capabilities to automatically
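
    The soil water balance bucket model named above can be sketched in a few lines: each day the store gains rainfall, loses evapotranspiration, and is bounded between zero and its water-holding capacity. All parameter values below are invented for illustration, not AGWA's or the paper's:

```python
# Minimal single-layer soil water "bucket" model (values in mm; capacity,
# initial storage, and the daily series are illustrative assumptions).
def bucket_model(rain, et, capacity=100.0, initial=50.0):
    """Daily soil-moisture series from rainfall and ET series."""
    sm, series = initial, []
    for r, e in zip(rain, et):
        sm = min(max(sm + r - e, 0.0), capacity)  # bound by 0 and capacity
        series.append(sm)
    return series

print(bucket_model(rain=[0, 30, 80, 0], et=[5, 5, 5, 5]))
# → [45.0, 70.0, 100.0, 95.0]
```

The third day spills the excess above capacity rather than storing it, which is what makes the bucket sensitive to both rainfall and antecedent moisture.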

  5. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  6. A filter paper-based microdevice for low-cost, rapid, and automated DNA extraction and amplification from diverse sample types.

    Science.gov (United States)

    Gan, Wupeng; Zhuang, Bin; Zhang, Pengfei; Han, Junping; Li, Cai-Xia; Liu, Peng

    2014-10-07

    A plastic microfluidic device that integrates a filter disc as a DNA capture phase was successfully developed for low-cost, rapid and automated DNA extraction and PCR amplification from various raw samples. The microdevice was constructed by sandwiching a piece of Fusion 5 filter, as well as a PDMS (polydimethylsiloxane) membrane, between two PMMA (poly(methyl methacrylate)) layers. An automated DNA extraction from 1 μL of human whole blood can be finished on the chip in 7 minutes by sequentially aspirating NaOH, HCl, and water through the filter. The filter disc containing extracted DNA was then taken out directly for PCR. On-chip DNA purification from 0.25-1 μL of human whole blood yielded 8.1-21.8 ng of DNA, higher than those obtained using QIAamp® DNA Micro kits. To realize DNA extraction from raw samples, an additional sample loading chamber containing a filter net with an 80 μm mesh size was designed in front of the extraction chamber to accommodate sample materials. Real-world samples, including whole blood, dried blood stains on Whatman® 903 paper, dried blood stains on FTA™ cards, buccal swabs, saliva, and cigarette butts, can all be processed in the system in 8 minutes. In addition, multiplex amplification of 15 STR (short tandem repeat) loci and Sanger-based DNA sequencing of the 520 bp GJB2 gene were accomplished from the filters that contained extracted DNA from blood. To further prove the feasibility of integrating this extraction method with downstream analyses, "in situ" PCR amplifications were successfully performed in the DNA extraction chamber following DNA purification from blood and blood stains without DNA elution. Using a modified protocol to bond the PDMS and PMMA, our plastic PDMS devices withstood the PCR process without any leakage. This study represents a significant step towards the practical application of on-chip DNA extraction methods, as well as the development of fully integrated genetic analytical systems.

  7. Automated finite element updating using strain data for the lifetime reliability assessment of bridges

    International Nuclear Information System (INIS)

    Okasha, Nader M.; Frangopol, Dan M.; Orcesi, André D.

    2012-01-01

    The importance of improving the understanding of the performance of structures over their lifetime under uncertainty with information obtained from structural health monitoring (SHM) has been widely recognized. However, frameworks that efficiently integrate monitoring data into the life-cycle management of structures are yet to be developed. The objective of this paper is to propose and illustrate an approach for updating the lifetime reliability of aging bridges using monitored strain data obtained from crawl tests. It is proposed to use automated finite element model updating techniques as a tool for updating the resistance parameters of the structure. In this paper, the results from crawl tests are used to update the finite element model and, in turn, update the lifetime reliability. The original and updated lifetime reliabilities are computed using advanced computational tools. The approach is illustrated on an existing bridge.

  8. Assessing the Validity of Automated Webcrawlers as Data Collection Tools to Investigate Online Child Sexual Exploitation.

    Science.gov (United States)

    Westlake, Bryce; Bouchard, Martin; Frank, Richard

    2017-10-01

    The distribution of child sexual exploitation (CE) material has been aided by the growth of the Internet. The graphic nature and prevalence of the material have made it difficult to research and combat. Although used to study online CE distribution, automated data collection tools (e.g., webcrawlers) have yet to be shown effective at targeting only relevant data. Using CE-related image and keyword criteria, we compare networks starting from CE websites to those from similar non-CE sexuality websites and dissimilar sports websites. Our results provide evidence that (a) webcrawlers have the potential to provide valid CE data, if the appropriate criterion is selected; (b) CE distribution is still heavily image-based, suggesting images as an effective criterion; (c) CE-seeded networks are more hub-based and differ from non-CE-seeded networks on several website characteristics. Recommendations for improvements to reliable criteria selection are discussed.

  9. Impact assessment of an automated drug-dispensing system in a tertiary hospital.

    Science.gov (United States)

    de-Carvalho, Débora; Alvim-Borges, José Luiz; Toscano, Cristiana Maria

    2017-10-01

    To evaluate the costs and patient safety of a pilot implementation of an automated dispensing cabinet in a critical care unit of a private tertiary hospital in São Paulo/Brazil. This study considered pre- (January-August 2013) and post- (October 2013-October 2014) intervention periods. We considered the time and cost of personnel, number of adverse events, audit adjustments to patient bills, and urgent requests and returns of medications to the central pharmacy. Costs were evaluated based on a 5-year analytical horizon and are reported in Brazilian Reals (R$) and US dollars (USD). The observed decrease in the mean number of events reported with regard to the automated drug-dispensing system between pre- and post-implementation periods was not significant. Importantly, the numbers are small, which limits the power of the mean comparative analysis between the two periods. A reduction in work time was observed among the nurses and administrative assistants, whereas pharmacist assistants showed an increased work load that resulted in an overall 6.5 hours of work saved/day and a reduction of R$ 33,598 (USD 14,444) during the first year. The initial investment (R$ 206,065; USD 88,592) would have been paid off in 5 years considering only personnel savings. Other findings included significant reductions of audit adjustments to patient hospital bills and urgent requests and returns of medications to the central pharmacy. Evidence of the positive impact of this technology on personnel time and costs and on other outcomes of interest is important for decision making by health managers.

  10. Impact assessment of an automated drug-dispensing system in a tertiary hospital

    Directory of Open Access Journals (Sweden)

    Débora de-Carvalho

    Full Text Available OBJECTIVE: To evaluate the costs and patient safety of a pilot implementation of an automated dispensing cabinet in a critical care unit of a private tertiary hospital in São Paulo/Brazil. METHODS: This study considered pre- (January-August 2013) and post- (October 2013-October 2014) intervention periods. We considered the time and cost of personnel, number of adverse events, audit adjustments to patient bills, and urgent requests and returns of medications to the central pharmacy. Costs were evaluated based on a 5-year analytical horizon and are reported in Brazilian Reals (R$) and US dollars (USD). RESULTS: The observed decrease in the mean number of events reported with regard to the automated drug-dispensing system between pre- and post-implementation periods was not significant. Importantly, the numbers are small, which limits the power of the mean comparative analysis between the two periods. A reduction in work time was observed among the nurses and administrative assistants, whereas pharmacist assistants showed an increased work load that resulted in an overall 6.5 hours of work saved/day and a reduction of R$ 33,598 (USD 14,444) during the first year. The initial investment (R$ 206,065; USD 88,592) would have been paid off in 5 years considering only personnel savings. Other findings included significant reductions of audit adjustments to patient hospital bills and urgent requests and returns of medications to the central pharmacy. CONCLUSIONS: Evidence of the positive impact of this technology on personnel time and costs and on other outcomes of interest is important for decision making by health managers.

  11. Automated annotation and classification of BI-RADS assessment from radiology reports.

    Science.gov (United States)

    Castro, Sergio M; Tseytlin, Eugene; Medvedeva, Olga; Mitchell, Kevin; Visweswaran, Shyam; Bekhuis, Tanja; Jacobson, Rebecca S

    2017-05-01

    The Breast Imaging Reporting and Data System (BI-RADS) was developed to reduce variation in the descriptions of findings. Manual analysis of breast radiology report data is challenging but is necessary for clinical and healthcare quality assurance activities. The objective of this study is to develop a natural language processing (NLP) system for automated extraction of BI-RADS categories from breast radiology reports. We evaluated an existing rule-based NLP algorithm, and then we developed and evaluated our own method using a supervised machine learning approach. We divided the BI-RADS category extraction task into two specific tasks: (1) annotation of all BI-RADS category values within a report, (2) classification of the laterality of each BI-RADS category value. We used one algorithm for task 1 and evaluated three algorithms for task 2. Across all evaluations and model training, we used a total of 2159 radiology reports from 18 hospitals, from 2003 to 2015. Performance with the existing rule-based algorithm was not satisfactory. Conditional random fields showed a high performance for task 1, with an F-1 measure of 0.95. The rules from partial decision trees (PART) algorithm showed the best performance across classes for task 2, with a weighted F-1 measure of 0.91 for BI-RADS 0-6 and 0.93 for BI-RADS 3-5. Classification performance by class showed that performance improved for all classes from Naïve Bayes to Support Vector Machine (SVM), and again from SVM to PART. Our system is able to annotate and classify all BI-RADS mentions present in a single radiology report and can serve as the foundation for future studies that will leverage automated BI-RADS annotation to provide feedback to radiologists as part of a learning health system loop. Copyright © 2017. Published by Elsevier Inc.
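
    The weighted F-1 measure used to score task 2 is conventionally a support-weighted average of per-class F-1 scores. A minimal sketch of that metric (illustrative only, not the paper's code):

    ```python
    from collections import Counter

    def weighted_f1(y_true, y_pred):
        """Support-weighted F-1: per-class F-1 averaged with each class
        weighted by its number of true instances."""
        support = Counter(y_true)
        total = 0.0
        for c in set(y_true):
            tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
            fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
            fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
            prec = tp / (tp + fp) if tp + fp else 0.0
            rec = tp / (tp + fn) if tp + fn else 0.0
            f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
            total += support[c] * f1
        return total / len(y_true)
    ```

    Weighting by support means that a classifier's score is dominated by its performance on the common BI-RADS categories, which is why the paper also reports performance by class.
    
    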

  12. A Large-Sample Test of a Semi-Automated Clavicle Search Engine to Assist Skeletal Identification by Radiograph Comparison.

    Science.gov (United States)

    D'Alonzo, Susan S; Guyomarc'h, Pierre; Byrd, John E; Stephan, Carl N

    2017-01-01

    In 2014, a morphometric capability to search chest radiograph databases by quantified clavicle shape was published to assist skeletal identification. Here, we extend the validation tests conducted by increasing the search universe 18-fold, from 409 to 7361 individuals to determine whether there is any associated decrease in performance under these more challenging circumstances. The number of trials and analysts were also increased, respectively, from 17 to 30 skeletons, and two to four examiners. Elliptical Fourier analysis was conducted on clavicles from each skeleton by each analyst (shadowgrams trimmed from scratch in every instance) and compared to the search universe. Correctly matching individuals were found in shortlists of 10% of the sample 70% of the time. This rate is similar to, although slightly lower than, rates previously found for much smaller samples (80%). Accuracy and reliability are thereby maintained, even when the comparison system is challenged by much larger search universes. © 2016 American Academy of Forensic Sciences.

  13. Automated isotope dilution liquid chromatography-tandem mass spectrometry with on-line dilution and solid phase extraction for the measurement of cortisol in human serum sample.

    Science.gov (United States)

    Kawaguchi, Migaku; Eyama, Sakae; Takatsu, Akiko

    2014-08-05

    A candidate reference measurement procedure involving automated isotope dilution coupled with liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS) with on-line dilution and solid phase extraction (SPE) has been developed and critically evaluated. We constructed an LC-MS/MS system with on-line dilution and SPE. An isotopically labelled internal standard, cortisol-d4, was added to the serum sample. After equilibration, methanol was added to the sample and deproteination was performed. The sample was then applied to the LC-MS/MS system. The limit of detection (LOD) and limit of quantification (LOQ) were 0.2 and 1 ng g(-1), respectively. Excellent precision was obtained, with a within-day variation (RSD) of 1.9% for ID-LC-MS/MS analysis (n = 6). This method is simple, accurate, precise, and free from interference by structural analogues, and qualifies as a reference measurement procedure. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Rapid and automated on-line solid phase extraction HPLC-MS/MS with peak focusing for the determination of ochratoxin A in wine samples.

    Science.gov (United States)

    Campone, Luca; Piccinelli, Anna Lisa; Celano, Rita; Pagano, Imma; Russo, Mariateresa; Rastrelli, Luca

    2018-04-01

    This study reports a fast and automated analytical procedure based on an on-line SPE-HPLC-MS/MS method for the automatic pre-concentration, clean-up and sensitive determination of OTA in wine. The amount of OTA contained in 100 μL of sample (pH ≅ 5.5) was retained and concentrated on an Oasis MAX SPE cartridge. After a washing step to remove matrix interferents, the analyte was eluted in back-flush mode and the eluent from the SPE column was diluted through a mixing tee, using an aqueous solution, before the chromatographic separation achieved on a monolithic column. The developed method has been validated according to EU regulation N. 519/2014 and applied to the analysis of 41 red and 17 white wines. The developed method features minimal sample handling, low solvent consumption, high sample throughput and low analysis cost, and provides accurate and highly selective results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Assessing rare earth elements in quartz rich geological samples.

    Science.gov (United States)

    Santoro, A; Thoss, V; Ribeiro Guevara, S; Urgast, D; Raab, A; Mastrolitti, S; Feldmann, J

    2016-01-01

    Sodium peroxide (Na2O2) fusion coupled to Inductively Coupled Plasma Tandem Mass Spectrometry (ICP-MS/MS) measurements was used to rapidly screen quartz-rich geological samples for rare earth element (REE) content. The method's accuracy was checked against a geological reference material and Instrumental Neutron Activation Analysis (INAA) measurements. The mass-mode combinations used gave accurate results (the only exception being (157)Gd in He gas mode), with recovery of the geological reference material QLO-1 between 80% and 98% (lower values for Lu, Nd and Sm) and in general comparable to INAA measurements. Low limits of detection were achieved for all elements, generally below 10 pg g(-1), as well as measurement repeatability below 15%. Overall, the Na2O2/ICP-MS/MS method proved to be a suitable lab-based method to quickly and accurately screen rock samples originating from quartz-rich geological areas for rare earth element content; it is particularly useful for checking commercial viability. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Assessing the sampling strategy in the Northwestern Mediterranean Sea

    Science.gov (United States)

    Margirier, Félix; Testor, Pierre; Bosse, Anthony; Heslop, Emma; L'Hévéder, Blandine; Arsouze, Thomas; Houpert, Loic; Mortier, Laurent

    2017-04-01

    The deployment of numerous autonomous platforms (gliders, Argo floats, moorings), added to the repeated ship cruises in the Northwestern Mediterranean Sea, has provided considerable data coverage of the area over the past 10 years. In this study, we analyse the ability of the in-situ observations to capture the changes in the water mass properties of the Northwestern Mediterranean basin over time. Comparing the observed time series for the different regions and water masses to that of a glider simulator in the NEMO-Med12 model, we estimate both the quality of the model and the skill of the in-situ observations in reproducing the evolution of the basin properties.

  17. Maximizing the value of mobile health monitoring by avoiding redundant patient reports: prediction of depression-related symptoms and adherence problems in automated health assessment services.

    Science.gov (United States)

    Piette, John D; Sussman, Jeremy B; Pfeiffer, Paul N; Silveira, Maria J; Singh, Satinder; Lavieri, Mariel S

    2013-07-05

    Interactive voice response (IVR) calls enhance health systems' ability to identify health risk factors, thereby enabling targeted clinical follow-up. However, redundant assessments may increase patient dropout and represent a lost opportunity to collect more clinically useful data. We determined the extent to which previous IVR assessments predicted subsequent responses among patients with depression diagnoses, potentially obviating the need to repeatedly collect the same information. We also evaluated whether frequent (ie, weekly) IVR assessment attempts were significantly more predictive of patients' subsequent reports than information collected biweekly or monthly. Using data from 1050 IVR assessments for 208 patients with depression diagnoses, we examined the predictability of four IVR-reported outcomes: moderate/severe depressive symptoms (score ≥10 on the PHQ-9), fair/poor general health, poor antidepressant adherence, and days in bed due to poor mental health. We used logistic models with training and test samples to predict patients' IVR responses based on their five most recent weekly, biweekly, and monthly assessment attempts. The marginal benefit of more frequent assessments was evaluated based on Receiver Operating Characteristic (ROC) curves and statistical comparisons of the areas under the curves (AUC). Patients' reports about their depressive symptoms and perceived health status were highly predictable based on prior assessment responses. For models predicting moderate/severe depression, the AUC was 0.91 (95% CI 0.89-0.93) when assuming weekly assessment attempts and only slightly less when assuming biweekly assessments (AUC: 0.89; CI 0.87-0.91) or monthly attempts (AUC: 0.89; CI 0.86-0.91). The AUC for models predicting reports of fair/poor health status was similar when weekly assessments were compared with those occurring biweekly (P value for the difference = .11) or monthly (P = .81). Reports of medication adherence problems and days in bed were
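
    The AUC comparisons above rest on the rank interpretation of ROC AUC: the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative one. A minimal sketch of that statistic (illustrative only, not the study's code):

    ```python
    def roc_auc(labels, scores):
        """ROC AUC via the Mann-Whitney U formulation: the fraction of
        positive/negative pairs the scores rank correctly (ties count half)."""
        pos = [s for y, s in zip(labels, scores) if y]
        neg = [s for y, s in zip(labels, scores) if not y]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # a model that ranks every positive above every negative scores 1.0
    roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])  # → 1.0
    ```

    In practice the study's AUCs of about 0.89-0.91 would be estimated this way from held-out test-sample predictions, and the curves for weekly vs. biweekly models compared statistically.
    
    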

  18. A study of techniques applicable to the automated assessment of alpha and proton-induced etch pits

    International Nuclear Information System (INIS)

    Harvey, J.R.; Weeks, A.R.

    1984-01-01

    Two approaches to the automation of the read-out of chemically-etched tracks have been explored. In the first, the etch pits are filled with a scintillator and irradiated with alpha particles within a light-tight assembly. The size and number of the light pulses generated are related to the size and number of etch pits present. This technique has found application in the assessment of etch pits due to alpha particles from atmospheric radon. In the second approach, several similar techniques have been explored. These techniques have the common feature that optical systems can be developed such that light is transmitted preferentially through etch pits. A variety of techniques for illumination, detection and interpretation of the resultant information have been explored and conclusions drawn. (author)

  19. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT: A GIS-BASED HYDROLOGIC MODELING TOOL

    Science.gov (United States)

    Planning and assessment in land and water resource management are evolving toward complex, spatially explicit regional assessments. These problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and temporal scales. The extens...

  20. Automation system risk assessment; Gestao de riscos de sistemas de automacao

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Felipe; Furuta, Margarete [PricewaterhouseCoopers, Sao Paulo, SP (Brazil)

    2008-07-01

    In spite of the lessons learnt from the history of several industrial accidents and the protection measures taken by many organizations, extremely serious accidents still happen in the automation environment. If, on one hand, growing competition demands an increase in productivity, which is often only possible through more complex processes that make facilities operate at their limits, on the other hand, the control, automation and security related to these more complex processes are also more difficult to manage. The continual investigation of past accidents has resulted in the prevention of specific dangerous events in industrial facilities, but it has also brought to light the importance of actions related to the risk management process. Without doubt, the consequences of such an event can reach disastrous and unrecoverable levels, given the scope of the potential risks. Studies carried out by international entities show that inadequate risk management is the factor that contributes most to the occurrence of accidents. The initial phase of risk management consists of analysing the risks inherent to the process (e.g. determining the probability of each potential failure occurring), studying the consequences if these failures occur, defining the risk considered acceptable according to the risk appetite established by the organization, and identifying the action on the risk, which can range across decreasing, transferring, avoiding or accepting the risk. This work has as its objective to explore the aspects of implementing risk management in the oil and gas segment. The study also seeks to make explicit, based on the systematic registry of the measured items, how it is possible to evaluate the financial exposure of the risk to which a project

  1. Assessment of Sr-90 in water samples: precision and accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Nisti, Marcelo B.; Saueia, Cátia H.R.; Castilho, Bruna; Mazzilli, Barbara P., E-mail: mbnisti@ipen.br, E-mail: chsaueia@ipen.br, E-mail: bcastilho@ipen.br, E-mail: mazzilli@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The study of the dispersion of artificial radionuclides into the environment is very important for the control of nuclear waste discharges, nuclear accidents and nuclear weapons testing. The accidents at the Fukushima Daiichi and Chernobyl Nuclear Power Plants released several radionuclides into the environment by aerial deposition and liquid discharge, at various levels of radioactivity. {sup 90}Sr was among the radionuclides released. {sup 90}Sr is produced by nuclear fission and has a physical half-life of 28.79 years and a decay energy of 0.546 MeV. The aims of this study are to evaluate the precision and accuracy of three methodologies for the determination of {sup 90}Sr in water samples: Cerenkov counting, direct LSC, and LSC with radiochemical separation. The performance of the methodologies was evaluated using two scintillation counters (Quantulus and Hidex). The parameters Minimum Detectable Activity (MDA) and Figure Of Merit (FOM) were determined for each method, and the precision and accuracy were checked using {sup 90}Sr standard solutions. (author)
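
    The MDA and FOM parameters mentioned are conventionally computed from the background counts, counting efficiency and counting time. A sketch assuming Currie's detection-limit formula for MDA and the usual FOM = E²/B used to optimize LSC counting windows (illustrative; the paper's exact formulation is not given in the abstract):

    ```python
    import math

    def currie_mda(background_counts, count_time_s, efficiency, sample_mass_kg):
        """Minimum Detectable Activity (Bq/kg), assuming Currie's formula:
        detection limit L_D = 2.71 + 4.65*sqrt(B) net counts, converted to
        specific activity by efficiency, counting time and sample mass."""
        l_d = 2.71 + 4.65 * math.sqrt(background_counts)
        return l_d / (efficiency * count_time_s * sample_mass_kg)

    def figure_of_merit(efficiency_pct, background_cpm):
        """FOM = E^2 / B, commonly maximized when choosing an LSC window."""
        return efficiency_pct ** 2 / background_cpm
    ```

    Under these formulas, a lower background or a higher counting efficiency directly lowers the MDA, which is why Cerenkov counting (low efficiency, very low background) and LSC (higher efficiency, higher background) trade off differently.
    
    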

  2. Environmental Assessment of Natural Radioactivity in Soil Samples

    Directory of Open Access Journals (Sweden)

    Ryuta Hazama

    2009-07-01

    Full Text Available The environmental impacts and hazards due to the unstoppable hot mud flow by the East Java ‘LUSI’ Mud Volcano are increasing since its unexpected eruption on May 29, 2006. Analysis should be undertaken, not only to examine its impact on human health and the environment, but also to explore the potential benefits of the mud flow. One may be able to tap the mud flow as a material source for brick and cement. Recently there has been great concern about the health risks associated with exposure to natural radioactivity present in soil and building materials all over the world. In this context, measurements for natural radioactive isotopes such as the 238U and 232Th series, and 40K, in mud samples were carried out using an HPGe (High-Purity Germanium) detector to determine the re-usability of the mud. 226Ra, 232Th and 40K activity concentrations were found to be 13±1, 15±1 and 111±3 Bq/kg (1 Bq = 1 s-1), respectively, and the corresponding activity index was found to be 0.16±0.02. These values were compared with previous data and our measured accuracy was improved by a factor of nine at the maximum. Radium equivalent activity, external and internal hazard indices, and annual effective dose equivalent were also evaluated and all were found to be within acceptable limits.
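
    The radium equivalent activity and external hazard index reported above are conventionally computed with the standard UNSCEAR-based weightings. A sketch using those common formulas (an assumption, since the abstract does not state which formulation was used):

    ```python
    def radium_equivalent(a_ra, a_th, a_k):
        """Ra-eq (Bq/kg) = A_Ra + 1.43*A_Th + 0.077*A_K, the usual weighting
        that puts 226Ra, 232Th and 40K activities on a common dose scale."""
        return a_ra + 1.43 * a_th + 0.077 * a_k

    def external_hazard_index(a_ra, a_th, a_k):
        """H_ex = A_Ra/370 + A_Th/259 + A_K/4810; a material is commonly
        considered acceptable for building use when H_ex <= 1."""
        return a_ra / 370 + a_th / 259 + a_k / 4810

    # with the abstract's measured activities of 13, 15 and 111 Bq/kg:
    radium_equivalent(13, 15, 111)       # ≈ 43 Bq/kg, far below the 370 Bq/kg limit
    external_hazard_index(13, 15, 111)   # ≈ 0.12, below 1
    ```

    These low values are consistent with the abstract's conclusion that the mud is within acceptable limits for re-use as a building material.
    
    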

  3. Independent assessment of matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) sample preparation quality: Effect of sample preparation on MALDI-MS of synthetic polymers.

    Science.gov (United States)

    Kooijman, Pieter C; Kok, Sander; Honing, Maarten

    2017-02-28

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) provides detailed and in-depth information about the molecular characteristics of synthetic polymers. To obtain the most accurate results the sample preparation parameters should be chosen to suit the sample and the aim of the experiment. Because the underlying principles of MALDI are still not fully known, a priori determination of optimal sample preparation protocols is often not possible. Employing an automated sample preparation quality assessment method recently presented by us we quantified the sample preparation quality obtained using various sample preparation protocols. Six conventional matrices with and without added potassium as a cationization agent and six ionic liquid matrices (ILMs) were assessed using poly(ethylene glycol) (PEG), polytetrahydrofuran (PTHF) and poly(methyl methacrylate) (PMMA) as samples. All sample preparation protocols were scored and ranked based on predefined quality parameters and spot-to-spot repeatability. Clearly distinctive preferences were observed in matrix identity and cationization agent for PEG, PTHF and PMMA, as the addition of an excess of potassium cationization agent results in an increased score for PMMA and a contrasting matrix-dependent effect for PTHF and PEG. The addition of excess cationization agent to sample mixtures dissipates any overrepresentation of high molecular weight polymer species. Our results show reduced ionization efficiency and similar sample deposit homogeneity for all tested ILMs, compared with well-performing conventional MALDI matrices. The results published here represent a start in the unsupervised quantification of sample preparation quality for MALDI samples. This method can select the best sample preparation parameters for any synthetic polymer sample and the results can be used to formulate hypotheses on MALDI principles. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.

    Directory of Open Access Journals (Sweden)

    Colin J Torney

    Full Text Available Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per image error rates are greater than, but comparable to, two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over- or under-count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.

  5. An automated headspace solid-phase microextraction followed by gas chromatography–mass spectrometry method to determine macrocyclic musk fragrances in wastewater samples.

    Science.gov (United States)

    Vallecillos, Laura; Borrull, Francesc; Pocurull, Eva

    2013-11-01

    A fully automated method has been developed for determining eight macrocyclic musk fragrances in wastewater samples. The method is based on headspace solid-phase microextraction (HS-SPME) followed by gas chromatography–mass spectrometry (GC-MS). Five different fibres (PDMS 7 μm, PDMS 30 μm, PDMS 100 μm, PDMS/DVB 65 μm and PA 85 μm) were tested. The best conditions were achieved when a PDMS/DVB 65 μm fibre was exposed for 45 min in the headspace of 10 mL water samples at 100 °C. Method detection limits were found in the low ng L−1 range, between 0.75 and 5 ng L−1 depending on the target analytes. Moreover, under optimized conditions, the method gave good intra-day and inter-day repeatability in wastewater samples, with relative standard deviations (n = 5, 1,000 ng L−1) of less than 9 and 14%, respectively. The applicability of the method was tested with influent and effluent urban wastewater samples from different wastewater treatment plants (WWTPs). The analysis of influent urban wastewater revealed the presence of most of the target macrocyclic musks, with maximum concentrations of ambrettolide of 4.36 μg L−1 in WWTP A and 12.29 μg L−1 in WWTP B. The analysis of effluent urban wastewater showed a decrease in target analyte concentrations, with exaltone and ambrettolide being the most abundant compounds at concentrations varying between below the method quantification limit (MQL) and 2.46 μg L−1.

  6. A lab-on-a-chip system integrating tissue sample preparation and multiplex RT-qPCR for gene expression analysis in point-of-care hepatotoxicity assessment.

    Science.gov (United States)

    Lim, Geok Soon; Chang, Joseph S; Lei, Zhang; Wu, Ruige; Wang, Zhiping; Cui, Kemi; Wong, Stephen

    2015-10-21

    A truly practical lab-on-a-chip (LOC) system for point-of-care testing (POCT) hepatotoxicity assessment necessitates the embodiment of full-automation, ease-of-use and "sample-in-answer-out" diagnostic capabilities. To date, the reported microfluidic devices for POCT hepatotoxicity assessment remain rudimentary as they largely embody only semi-quantitative or single sample/gene detection capabilities. In this paper, we describe, for the first time, an integrated LOC system that is somewhat close to a practical POCT hepatotoxicity assessment device - it embodies both tissue sample preparation and multiplex real-time RT-PCR. It features semi-automation, is relatively easy to use, and has "sample-in-answer-out" capabilities for multiplex gene expression analysis. Our tissue sample preparation module incorporating both a microhomogenizer and surface-treated paramagnetic microbeads yielded high purity mRNA extracts, considerably better than manual means of extraction. A primer preloading surface treatment procedure and the single-loading inlet on our multiplex real-time RT-PCR module simplify off-chip handling procedures for ease-of-use. To demonstrate the efficacy of our LOC system for POCT hepatotoxicity assessment, we perform a preclinical animal study with the administration of cyclophosphamide, followed by gene expression analysis of two critical protein biomarkers for liver function tests, aspartate transaminase (AST) and alanine transaminase (ALT). Our experimental results depict normalized fold changes of 1.62 and 1.31 for AST and ALT, respectively, illustrating up-regulations in their expression levels and hence validating their selection as critical genes of interest. In short, we illustrate the feasibility of multiplex gene expression analysis in an integrated LOC system as a viable POCT means for hepatotoxicity assessment.

  7. Congestive heart failure information extraction framework for automated treatment performance measures assessment.

    Science.gov (United States)

    Meystre, Stéphane M; Kim, Youngjun; Gobbel, Glenn T; Matheny, Michael E; Redd, Andrew; Bray, Bruce E; Garvin, Jennifer H

    2017-04-01

    This paper describes a new congestive heart failure (CHF) treatment performance measure information extraction system, CHIEF, developed as part of the Automated Data Acquisition for Heart Failure project, a Veterans Health Administration project aiming to improve the detection of patients not receiving recommended care for CHF. CHIEF is based on the Apache Unstructured Information Management Architecture framework, and uses a combination of rules, dictionaries, and machine learning methods to extract left ventricular function mentions and values, CHF medications, and documented reasons for a patient not receiving these medications. The training and evaluation of CHIEF were based on subsets of a reference standard of various clinical notes from 1083 Veterans Health Administration patients. Domain experts manually annotated these notes to create our reference standard. Metrics used included recall, precision, and the F1-measure. In general, CHIEF extracted CHF medications with high recall (>0.990) and good precision (0.960-0.978). Mentions of Left Ventricular Ejection Fraction were also extracted with high recall (0.978-0.986) and precision (0.986-0.994), and quantitative values of Left Ventricular Ejection Fraction were found with 0.910-0.945 recall and with high precision (0.939-0.976). Reasons for not prescribing CHF medications were more difficult to extract, only reaching fair accuracy with about 0.310-0.400 recall and 0.250-0.320 precision. This study demonstrated that applying natural language processing to unlock the rich and detailed clinical information found in clinical narrative text notes makes fast and scalable quality improvement approaches possible, eventually improving management and outpatient treatment of patients suffering from CHF. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
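    The recall, precision, and F1 figures quoted above follow the standard information-extraction definitions. A minimal sketch (the true-positive/false-positive/false-negative counts below are invented for illustration, not CHIEF's actual confusion matrix):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard IE metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of extracted items that are correct
    recall = tp / (tp + fn)             # fraction of gold items that were extracted
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# hypothetical counts for a medication-extraction run
p, r, f = precision_recall_f1(tp=970, fp=30, fn=10)
```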

  8. [A simple method for assessment of RNA integrity in laser capture microdissection samples].

    Science.gov (United States)

    Tian, Ying-fang; Wei, Zhao-ming; Chen, Xin-lin; Qiu, Fen; Xiao, Xin-li; Kang, Qian-yan; Zhu, Bo-feng; Tian, Yu-mei; Zhang, Jun-feng; Liu, Yong

    2008-10-01

    To develop a simple method for assessment of RNA integrity in laser capture microdissection (LCM) samples. The total RNA were isolated from the LCM samples and the sections before and after microdissection and examined by agarose gel electrophoresis. Real-time PCR was employed to assess the RNA from LCM samples, and the quantity of RNA was theoretically estimated according to the average total RNA product in mammalian cells (10 ng/1000 cells). When the total RNA from the sections before and after microdissection was intact, the RNA from LCM samples also had good quality, and the 28S and 18S rRNAs were visualized by ethidium bromide staining. Real-time PCR also showed good RNA quality in the LCM samples. A simple method for quantitative and qualitative assessment of the RNA from LCM samples is established, which can also be applied to assessment of DNA or proteins in LCM samples.
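    The theoretical RNA estimate described above (an average total RNA yield of 10 ng per 1000 mammalian cells) is simple arithmetic. A minimal sketch, with a hypothetical microdissected cell count:

```python
NG_PER_1000_CELLS = 10.0  # average total RNA yield in mammalian cells, per the abstract

def estimate_rna_ng(cell_count):
    """Theoretical RNA yield (ng) for an LCM sample of `cell_count` cells."""
    return cell_count * NG_PER_1000_CELLS / 1000.0

# e.g. a hypothetical LCM capture of ~2,500 cells
yield_ng = estimate_rna_ng(2500)  # 25 ng
```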

  9. A neural-symbolic system for automated assessment in training simulators - A position paper

    NARCIS (Netherlands)

    Penning, H.L.H. de; Kappé, B.; Bosch, K. van den

    2009-01-01

    Performance assessment in training simulators is a complex task. It requires monitoring and interpreting the student’s behaviour in the simulator using knowledge of the training task, the environment and a lot of experience. Assessment in simulators is therefore generally done by human observers. To

  10. Design of a Screen Based Simulation for Training and Automated Assessment of Teamwork Skills

    Science.gov (United States)

    2017-08-01

    training, assessment, screen-based simulation, communication, leadership, situation monitoring, mutual support, psychological safety 16. SECURITY...feedback. Teamwork training, assessment, screen-based simulation, communication, leadership, situation monitoring, mutual support, psychological safety...themes were gathered from the interviews including introduction styles from team members, roles/structure of teams, psychological safety to encourage

  11. Automated Assessment of the Quality of Peer Reviews Using Natural Language Processing Techniques

    Science.gov (United States)

    Ramachandran, Lakshmi; Gehringer, Edward F.; Yadav, Ravi K.

    2017-01-01

    A "review" is textual feedback provided by a reviewer to the author of a submitted version. Peer reviews are used in academic publishing and in education to assess student work. While reviews are important to e-commerce sites like Amazon and e-bay, which use them to assess the quality of products and services, our work focuses on…

  12. The development of photo-transferred thermoluminescence (PTTL) technique and its application to the routine re-assessment of absorbed dose in the NRPB automated personal dosimetry system

    Science.gov (United States)

    McKinlay, A. F.; Bartlett, D. T.; Smith, P. A.

    1980-09-01

    The re-assessment of absorbed dose has been performed routinely on customer dosimeters as part of the NRPB Automated Thermoluminescence Dosimetry Service. The technique of photo-transferred thermoluminescence (PTTL), as applied to the 2 element (LiF: PTFE discs) NRPB dosimeter inserts, is described.

  13. Locoregional control of non-small cell lung cancer in relation to automated early assessment of tumor regression on cone beam computed tomography

    DEFF Research Database (Denmark)

    Brink, Carsten; Bernchou, Uffe; Bertelsen, Anders

    2014-01-01

    PURPOSE: Large interindividual variations in volume regression of non-small cell lung cancer (NSCLC) are observable on standard cone beam computed tomography (CBCT) during fractionated radiation therapy. Here, a method for automated assessment of tumor volume regression is presented and its...... therapy provides biological information on the specific tumor. This could potentially form the basis for personalized response adaptive therapy....

  14. Comparing Visually Assessed BI-RADS Breast Density and Automated Volumetric Breast Density Software : A Cross-Sectional Study in a Breast Cancer Screening Setting

    NARCIS (Netherlands)

    van der Waal, Danielle; den Heeten, Gerard J.; Pijnappel, Ruud M.; Schuur, Klaas H.; Timmers, Johanna M. H.; Verbeek, Andre L. M.; Broeders, Mireille J. M.

    2015-01-01

    Introduction The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of

  15. Comparing Visually Assessed BI-RADS Breast Density and Automated Volumetric Breast Density Software: A Cross-Sectional Study in a Breast Cancer Screening Setting

    NARCIS (Netherlands)

    Waal, D. van der; Heeten, GJ. den; Pijnappel, R.M.; Schuur, K.H.; Timmers, J.M.; Verbeek, A.L.; Broeders, M.J.

    2015-01-01

    INTRODUCTION: The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of

  16. Comparing Visually Assessed BI-RADS Breast Density and Automated Volumetric Breast Density Software: A Cross-Sectional Study in a Breast Cancer Screening Setting

    NARCIS (Netherlands)

    van der Waal, Daniëlle; den Heeten, Gerard J.; Pijnappel, Ruud M.; Schuur, Klaas H.; Timmers, Johanna M. H.; Verbeek, André L. M.; Broeders, Mireille J. M.

    2015-01-01

    The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of personalised

  17. A METHOD FOR AUTOMATED ANALYSIS OF 10 ML WATER SAMPLES CONTAINING ACIDIC, BASIC, AND NEUTRAL SEMIVOLATILE COMPOUNDS LISTED IN USEPA METHOD 8270 BY SOLID PHASE EXTRACTION COUPLED IN-LINE TO LARGE VOLUME INJECTION GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    Science.gov (United States)

    Data is presented showing the progress made towards the development of a new automated system combining solid phase extraction (SPE) with gas chromatography/mass spectrometry for the single run analysis of water samples containing a broad range of acid, base and neutral compounds...

  18. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    NARCIS (Netherlands)

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting

  19. Automated sequential injection-microcolumn approach with on-line flame atomic absorption spectrometric detection for implementing metal fractionation schemes of homogeneous and non-homogeneous solid samples of environmental interest

    DEFF Research Database (Denmark)

    Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald

    2005-01-01

    An automated sequential injection (SI) system incorporating a dual-conical microcolumn is proposed as a versatile approach for the accommodation of both single and sequential extraction schemes for metal fractionation of solid samples of environmental concern. Coupled to flame atomic absorption...

  20. SEMI–AUTOMATED ASSESSMENT OF MICROMECHANICAL PROPERTIES OF THE METAL FOAMS ON THE CELL-WALL LEVEL

    Directory of Open Access Journals (Sweden)

    Nela Krčmářová

    2016-12-01

    Full Text Available Metal foams are innovative porous materials used for a wide range of applications such as deformation energy or sound absorption, filtering, or as microbiological incubation carriers. To predict the mechanical properties of a metal foam, it is necessary to precisely describe the elasto–plastic properties of the foam at the cell-wall level. Low-load indentation is a suitable tool for this purpose. In this paper a custom-designed instrumented microindentation device was used to measure cell-wall characteristics of two different aluminium foams (ALPORAS and ALCORAS). To demonstrate the possibility of automated statistical estimation of the measured characteristics, the device had been enhanced with semi-automatic indent positioning and evaluation procedures based on a user-defined grid. Vickers hardness was measured on two samples made from ALPORAS aluminium foam and one sample made from ALCORAS aluminium foam. The average Vickers hardness of the ALPORAS foam was 24.465 HV1.019 and the average Vickers hardness of the ALCORAS foam was 36.585 HV1.019.
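    The Vickers hardness numbers above come from the standard indentation formula HV = 1.8544·F/d², with F the load in kgf and d the mean indent diagonal in mm, averaged over a grid of indents. A sketch with invented diagonals chosen to fall near the ALPORAS range (the diagonal values are assumptions, not measured data):

```python
def vickers_hv(load_kgf, mean_diagonal_mm):
    """Vickers hardness from an indent diagonal: HV = 1.8544 * F / d^2."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

# hypothetical grid of indents on one cell wall, 1.019 kgf load as in the abstract
diagonals_mm = [0.28, 0.27, 0.29]
hv_values = [vickers_hv(1.019, d) for d in diagonals_mm]
avg_hv = sum(hv_values) / len(hv_values)  # grid average, as the device reports
```

Averaging over a user-defined grid, as the semi-automatic procedure does, smooths out the large indent-to-indent scatter typical of thin foam cell walls.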

  1. Automated assessment of joint synovitis activity from medical ultrasound and power doppler examinations using image processing and machine learning methods

    Directory of Open Access Journals (Sweden)

    Rafal Cupek

    2016-11-01

    Full Text Available Objectives: Rheumatoid arthritis is the most common rheumatic disease with arthritis, causing substantial functional disability in approximately 50% of patients after 10 years. Accurate measurement of disease activity is crucial to providing adequate treatment and care to patients. The aim of this study is a computer-aided diagnostic system that supports the assessment of synovitis severity. Material and methods: This paper focuses on a computer-aided diagnostic system that was developed within a joint Polish–Norwegian research project on the automated assessment of the severity of synovitis. Semiquantitative ultrasound with power Doppler is a reliable and widely used method of assessing synovitis. Synovitis is estimated by the ultrasound examiner using a scoring system graded from 0 to 3. The activity score is estimated on the basis of the examiner's experience or standardized ultrasound atlases. The method needs trained medical personnel and the result can be affected by human error. Results: The prototype of a computer-aided diagnostic system and the algorithms essential for the analysis of ultrasonic images of finger joints are the main scientific outputs of the MEDUSA project. The MEDUSA Evaluation System prototype uses bone, skin, joint and synovitis area detectors for mutual structural-model-based evaluation of synovitis. Finally, several algorithms that support the semi-automatic or automatic detection of the bone region were prepared, as well as a system that uses a statistical data processing approach to automatically localize the regions of interest. Conclusions: Semiquantitative ultrasound with power Doppler is a reliable and widely used method of assessing synovitis. The activity score is estimated on the basis of the examiner's experience and the result can be affected by human error. In this paper we presented the MEDUSA project which is focused on a computer aided diagnostic system that supports an

  2. Using integrated environmental modeling to automate a process-based Quantitative Microbial Risk Assessment

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, an...

  3. Using Integrated Environmental Modeling to Automate a Process-Based Quantitative Microbial Risk Assessment (presentation)

    Science.gov (United States)

    Integrated Environmental Modeling (IEM) organizes multidisciplinary knowledge that explains and predicts environmental-system response to stressors. A Quantitative Microbial Risk Assessment (QMRA) is an approach integrating a range of disparate data (fate/transport, exposure, and...

  4. Towards Automating Clinical Assessments: A Survey of the Timed Up and Go (TUG)

    OpenAIRE

    Sprint, Gina; Cook, Diane; Weeks, Douglas

    2015-01-01

    Older adults often suffer from functional impairments that affect their ability to perform everyday tasks. To detect the onset and changes in abilities, healthcare professionals administer standardized assessments. Recently, technology has been utilized to complement these clinical assessments to gain a more objective and detailed view of functionality. In the clinic and at home, technology is able to provide more information about patient performance and reduce subjectivity in outcome measur...

  5. Assessing the Alcohol-BMI Relationship in a US National Sample of College Students

    Science.gov (United States)

    Barry, Adam E.; Piazza-Gardner, Anna K.; Holton, M. Kim

    2015-01-01

    Objective: This study sought to assess the body mass index (BMI)-alcohol relationship among a US national sample of college students. Design: Secondary data analysis using the Fall 2011 National College Health Assessment (NCHA). Setting: A total of 44 US higher education institutions. Methods: Participants included a national sample of college…

  6. Evaluating hydrological response to forecasted land-use change—scenario testing with the automated geospatial watershed assessment (AGWA) tool

    Science.gov (United States)

    Kepner, William G.; Semmens, Darius J.; Hernandez, Mariano; Goodrich, David C.

    2009-01-01

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions to maintain the sustainable nature of our ecosystem services now and into the future. During the past two decades, important advances in the integration of remote imagery, computer processing, and spatial-analysis technologies have been used to develop landscape information that can be integrated with hydrologic models to determine long-term change and make predictive inferences about the future. Two diverse case studies in northwest Oregon (Willamette River basin) and southeastern Arizona (San Pedro River) were examined in regard to future land use scenarios relative to their impact on surface water conditions (e.g., sediment yield and surface runoff) using hydrologic models associated with the Automated Geospatial Watershed Assessment (AGWA) tool. The base reference grid for land cover was modified in both study locations to reflect stakeholder preferences 20 to 60 yrs into the future, and the consequences of landscape change were evaluated relative to the selected future scenarios. The two studies provide examples of integrating hydrologic modeling with a scenario analysis framework to evaluate plausible future forecasts and to understand the potential impact of landscape change on ecosystem services.

  7. Mammographic Breast Density Assessment Using Automated Volumetric Software and Breast Imaging Reporting and Data System (BIRADS) Categorization by Expert Radiologists.

    Science.gov (United States)

    Damases, Christine N; Brennan, Patrick C; Mello-Thoms, Claudia; McEntee, Mark F

    2016-01-01

    To investigate agreement on mammographic breast density (MD) assessment between automated volumetric software and Breast Imaging Reporting and Data System (BIRADS) categorization by expert radiologists. Forty cases of left craniocaudal and mediolateral oblique mammograms from 20 women were used. All images had their volumetric density classified using Volpara density grade (VDG) and average volumetric breast density percentage. The same images were then classified into BIRADS categories (I-IV) by 20 American Board of Radiology examiners. The results demonstrated a moderate agreement (κ = 0.537; 95% CI = 0.234-0.699) between VDG classification and radiologists' BIRADS density assessment. Interreader agreement using BIRADS also demonstrated moderate agreement (κ = 0.565; 95% CI = 0.519-0.610), ranging from 0.328 to 0.669. Radiologists' average BIRADS score was lower than the average VDG score by 0.33, with a mean of 2.13 versus a mean VDG of 2.48 (U = -3.742). VDG showed a very strong positive correlation with BIRADS (ρ = 0.91), as did average volumetric breast density percentage (ρ = 0.94); interreader variations still exist within BIRADS. Because of the increasing importance of MD measurement in clinical management of patients, widely accepted, reproducible, and accurate measures of MD are required. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
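    The κ statistics reported in these density-agreement studies are Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's category frequencies. A short self-contained sketch (the two readers' category lists are invented for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters assigning categories to the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    # chance agreement from each rater's marginal category frequencies
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical density categories (1-4) from two readers
reader1 = [1, 2, 2, 3, 4, 2, 3, 1, 2, 3]
reader2 = [1, 2, 3, 3, 4, 2, 2, 1, 2, 3]
kappa = cohens_kappa(reader1, reader2)  # "substantial" agreement on the usual scale
```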

  8. An Automated Grass-Based Procedure to Assess the Geometrical Accuracy of the Openstreetmap Paris Road Network

    Science.gov (United States)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.

    2016-06-01

    OpenStreetMap (OSM) is the largest spatial database of the world. One of the most frequently occurring geospatial elements within this database is the road network, whose quality is crucial for applications such as routing and navigation. Several methods have been proposed for the assessment of OSM road network quality, however they are often tightly coupled to the characteristics of the authoritative dataset involved in the comparison. This makes it hard to replicate and extend these methods. This study relies on an automated procedure which was recently developed for comparing OSM with any road network dataset. It is based on three Python modules for the open source GRASS GIS software and provides measures of OSM road network spatial accuracy and completeness. Provided that the user is familiar with the authoritative dataset used, he can adjust the values of the parameters involved thanks to the flexibility of the procedure. The method is applied to assess the quality of the Paris OSM road network dataset through a comparison against the French official dataset provided by the French National Institute of Geographic and Forest Information (IGN). The results show that the Paris OSM road network has both a high completeness and spatial accuracy. It has a greater length than the IGN road network, and is found to be suitable for applications requiring spatial accuracies up to 5-6 m. Also, the results confirm the flexibility of the procedure for supporting users in carrying out their own comparisons between OSM and reference road datasets.
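    A length-based completeness measure like the one underlying this comparison can be illustrated with a minimal, GIS-free sketch. The toy coordinates and the simple total-length ratio below are assumptions for illustration; the actual GRASS modules use buffer-based matching against the reference network:

```python
from math import hypot

def polyline_length(points):
    """Planar length of a polyline given as (x, y) pairs in a projected CRS (metres)."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def length_completeness(osm_roads, ref_roads):
    """Completeness proxy: total OSM length divided by total reference length."""
    osm_len = sum(polyline_length(r) for r in osm_roads)
    ref_len = sum(polyline_length(r) for r in ref_roads)
    return osm_len / ref_len

# toy networks: OSM slightly longer than the reference, as found for Paris
osm = [[(0, 0), (100, 0)], [(0, 0), (0, 60)]]
ref = [[(0, 0), (100, 0)], [(0, 0), (0, 50)]]
ratio = length_completeness(osm, ref)  # > 1 means OSM exceeds the reference length
```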

  9. Inadequacy of manual measurements compared to automated CT volumetry in assessment of treatment response of pulmonary metastases using RECIST criteria

    International Nuclear Information System (INIS)

    Marten, Katharina; Auer, Florian; Schmidt, Stefan; Rummeny, Ernst J.; Engelke, Christoph; Kohl, Gerhard

    2006-01-01

    The purpose of this study was to compare the relative values of manual unidimensional measurements (MD) and automated volumetry (AV) for longitudinal treatment response assessment in patients with pulmonary metastases. Fifty consecutive patients with pulmonary metastases and repeat chest multidetector-row CT (median interval = 2 months) were independently assessed by two radiologists for treatment response using Response Evaluation Criteria In Solid Tumours (RECIST). Statistics included relative measurement errors (RME), intra-/interobserver correlations, limits of agreement (95% LoA), and kappa. A total of 202 metastases (median volume = 182.22 mm³; range = 3.16-5,195.13 mm³) were evaluated. RMEs were significantly higher for MD than for AV (intraobserver RME = 2.34-3.73% for MD and 0.15-0.22% for AV). The interobserver 95% LoA were -1.46 to 1.92 mm for MD and -11.17 to 9.33 mm³ for AV. There was total intra-/interobserver agreement on response using AV (κ = 1). MD intra- and interobserver agreements were 0.73-0.84 and 0.77-0.80, respectively. Of the 200 MD response ratings, 28 (14/50 patients) were discordant. Agreement using MD dropped significantly from total remission to progressive disease (P<0.05). We therefore conclude that AV allows for better reproducibility of response evaluation in pulmonary metastases and should be preferred to MD in these patients. (orig.)
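    The RECIST categories compared here are derived from the relative change in summed lesion diameters. A simplified sketch of the categorization (RECIST 1.1 additionally requires a 5 mm absolute increase for progressive disease, omitted for brevity; the example diameters are invented):

```python
def recist_response(baseline_sum_mm, followup_sum_mm):
    """Simplified RECIST target-lesion response from summed longest diameters (mm)."""
    if followup_sum_mm == 0:
        return "CR"  # complete response: all target lesions gone
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"  # partial response: >= 30% decrease
    if change >= 0.20:
        return "PD"  # progressive disease: >= 20% increase
    return "SD"      # stable disease

category = recist_response(50, 30)  # 40% shrinkage -> partial response
```

Because these thresholds sit on a continuous measurement, a few percent of measurement error near a cutoff can flip the category, which is why the higher reproducibility of AV matters for response ratings.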

  10. Assessment of social cognition in non-human primates using a network of computerized automated learning device (ALDM) test systems.

    Science.gov (United States)

    Fagot, Joël; Marzouki, Yousri; Huguet, Pascal; Gullstrand, Julie; Claidière, Nicolas

    2015-05-05

    Fagot & Paleressompoulle(1) and Fagot & Bonte(2) have published an automated learning device (ALDM) for the study of cognitive abilities of monkeys maintained in semi-free ranging conditions. Data accumulated during the last five years have consistently demonstrated the efficiency of this protocol to investigate individual/physical cognition in monkeys, and have further shown that this procedure reduces stress level during animal testing(3). This paper demonstrates that networks of ALDM can also be used to investigate different facets of social cognition and in-group expressed behaviors in monkeys, and describes three illustrative protocols developed for that purpose. The first study demonstrates how ethological assessments of social behavior and computerized assessments of cognitive performance could be integrated to investigate the effects of socially exhibited moods on the cognitive performance of individuals. The second study shows that batteries of ALDM running in parallel can provide unique information on the influence of the presence of others on task performance. Finally, the last study shows that networks of ALDM test units can also be used to study issues related to social transmission and cultural evolution. Combined together, these three studies demonstrate clearly that ALDM testing is a highly promising experimental tool for bridging the gap in the animal literature between research on individual cognition and research on social cognition.

  11. Calibration of a liquid scintillation counter to assess tritium levels in various samples

    CERN Document Server

    Al-Haddad, M N; Abu-Jarad, F A

    1999-01-01

    An LKB-Wallac 1217 Liquid Scintillation Counter (LSC) was calibrated with a newly adopted cocktail. The LSC was then used to measure tritium levels in various samples to assess the compliance of tritium levels with the recommended international levels. The counter was calibrated to measure both biological and operational samples for personnel and for an accelerator facility at KFUPM. The biological samples include the bioassay (urine), saliva, and nasal tests. The operational samples of the light ion linear accelerator include target cooling water, organic oil, fomblin oil, and smear samples. Sets of standards, which simulate the various samples, were fabricated using traceable certified tritium standards. The efficiency of the counter was obtained for each sample type, varying from 33% for smear samples down to 1.5% for organic oil samples. A quenching curve for each sample is presented. The minimum detectable activity for each sample was established. Typical tritium levels in bio...
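    The minimum detectable activity (MDA) established for each sample type is conventionally derived from the Currie detection limit, which scales with the background count and is divided out by efficiency, counting time, and sample volume. A sketch under assumed counting conditions (all numbers below are illustrative, not values from the study):

```python
from math import sqrt

def minimum_detectable_activity(bkg_counts, count_time_s, efficiency, volume_ml):
    """Currie-style MDA (Bq/mL): L_D = 2.71 + 4.65*sqrt(B) net counts above background."""
    detection_limit_counts = 2.71 + 4.65 * sqrt(bkg_counts)
    return detection_limit_counts / (efficiency * count_time_s * volume_ml)

# e.g. a hypothetical 10 min count of a 1 mL sample at the 33% smear efficiency
mda = minimum_detectable_activity(bkg_counts=120, count_time_s=600,
                                  efficiency=0.33, volume_ml=1.0)
```

The same background gives a far worse MDA at the 1.5% organic-oil efficiency, which is why per-sample-type calibration matters.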

  12. Devices used by automated milking systems are similarly accurate in estimating milk yield and in collecting a representative milk sample compared with devices used by farms with conventional milk recording

    OpenAIRE

    Kamphuis, Claudia; Dela Rue, B.; Turner, S.A.; Petch, S.

    2015-01-01

    Information on accuracy of milk-sampling devices used on farms with automated milking systems (AMS) is essential for development of milk recording protocols. The hypotheses of this study were (1) devices used by AMS units are similarly accurate in estimating milk yield and in collecting representative milk samples compared with devices used by certified milk recording providers on farms with conventional milking systems (CMS) and (2) devices used on both AMS and CMS comply with accuracy crite...

  13. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury.

    Science.gov (United States)

    Fong, Kenneth N K; Chow, Kathy Y Y; Chan, Bianca C H; Lam, Kino C K; Lee, Jeff C K; Li, Teresa H Y; Yan, Elaine W H; Wong, Asta T Y

    2010-04-30

    This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. A rehabilitation hospital and university-based teaching facilities were used as the setting. A total of 24 community-dwelling persons with acquired brain injury (ABI) participated in the study: 14 in Part I and 10 in Part II. In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group. We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real-life practice in persons with ABI.
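    The sensitivity and specificity percentages quoted above follow the usual screening-test definitions over true/false positives and negatives. A minimal sketch (the counts are invented to mirror the cash-withdrawal figures, not the study's actual tallies):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts: every impaired performance flagged, one unimpaired one misflagged
sens, spec = sensitivity_specificity(tp=6, fn=0, tn=5, fp=1)  # 100% / ~83%
```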

  14. Capturing the ineffable : Collecting, analysing, and automating web document quality assessments

    NARCIS (Netherlands)

    Ceolin, Davide; Noordegraaf, Julia; Aroyo, Lora

    2016-01-01

    Automatic estimation of the quality of Web documents is a challenging task, especially because the definition of quality heavily depends on the individuals who define it, on the context where it applies, and on the nature of the tasks at hand. Our long-term goal is to allow automatic assessment of

  15. GIS-BASED HYDROLOGIC MODELING: THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL

    Science.gov (United States)

    Planning and assessment in land and water resource management are evolving from simple, local scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial a...

  16. Comparison of Visual Assessment of Breast Density in BI-RADS 4th and 5th Editions With Automated Volumetric Measurement.

    Science.gov (United States)

    Youk, Ji Hyun; Kim, So Jung; Son, Eun Ju; Gweon, Hye Mi; Kim, Jeong-Ah

    2017-09-01

The purpose of this study was to compare visual assessments of mammographic breast density made by radiologists using the BI-RADS 4th and 5th editions in correlation with automated volumetric breast density measurements. A total of 337 consecutive full-field digital mammographic examinations with standard views were retrospectively assessed by two radiologists for mammographic breast density according to BI-RADS 4th and 5th editions. Fully automated measurement of the volume of fibroglandular tissue, total breast volume, and percentage breast density was performed with a commercially available software program. Interobserver and intraobserver agreement was assessed with kappa statistics. The distributions of breast density categories for both editions of BI-RADS were compared and correlated with volumetric data. Interobserver agreement on breast density category was moderate to substantial (κ = 0.58-0.63) with the 4th edition and substantial (κ = 0.63-0.66) with the 5th edition, without significant difference between the two editions. For intraobserver agreement between the two editions, the distributions of density category differed significantly: assessment using the BI-RADS 5th edition revealed a higher proportion of dense breasts than assessment using the 4th edition. Nevertheless, automated volumetric density assessment correlated well with visual assessment for both editions of BI-RADS.
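The interobserver κ values above are Cohen's kappa over paired category assignments. A stdlib-only sketch of the statistic (the study itself presumably used standard statistical software; the toy density ratings below are illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in set(pa) | set(pb))
    return (observed - expected) / (1 - expected)

# Toy BI-RADS density categories (a-d) from two readers:
k = cohens_kappa(list("aabbccdd"), list("aabbccdc"))
```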

  17. A Comparison of Visual Assessment and Automated Digital Image Analysis of Ki67 Labeling Index in Breast Cancer.

    Science.gov (United States)

    Zhong, Fangfang; Bi, Rui; Yu, Baohua; Yang, Fei; Yang, Wentao; Shui, Ruohong

    2016-01-01

Ki67 labeling index (LI) is critical for treatment selection and prognosis evaluation in breast cancer. Visual assessment (VA) is widely used to assess Ki67 LI but has some limitations. In this study, we compared the consistency between VA and automated digital image analysis (DIA) of Ki67 LI in breast cancer and evaluated the application value of DIA in Ki67 LI assessment. Ki67-immunostained slides of 155 cases of primary invasive breast cancer were visually assessed by five breast pathologists and analyzed by automated DIA performed by one breast pathologist. Two scoring methods, hot-spot score and average score, were used to choose scoring areas. The intra-class correlation coefficient (ICC) was used to analyze the consistency between VA and DIA, and the Wilcoxon signed-rank test was used to compare the median paired-difference between VA and DIA values. (1) ICC analysis demonstrated a perfect agreement between VA and DIA of Ki67 LI; agreement was also shown in G2-G3 and ER-positive/HER2-negative cases. The average score and hot-spot score methods both demonstrated a perfect concordance between VA and DIA of Ki67 LI. (2) All cases were classified into three groups by VA values (≤10%, 11%-30% and >30% Ki67 LI). The concordance was relatively lower in the intermediate Ki67 LI group (11%-30%) than in the high (>30%) Ki67 LI group according to both methods. (3) All cases were classified into three groups by the paired-difference (d) between VA values of the hot-spot score and average score, reflecting Ki67 staining distribution (heterogeneous or homogeneous) and reproducibility of assessment. A perfect agreement was demonstrated in all three groups, and a slightly better Ki67 LI agreement between VA and DIA was indicated in homogeneously stained slides than in heterogeneously stained ones. (4) VA values were relatively smaller than DIA values (average score: median paired-difference -3.72; hot-spot score: median paired-difference -9.12). An excellent agreement between VA and DIA was demonstrated overall.

  18. Fully automated ionic liquid-based headspace single drop microextraction coupled to GC-MS/MS to determine musk fragrances in environmental water samples.

    Science.gov (United States)

    Vallecillos, Laura; Pocurull, Eva; Borrull, Francesc

    2012-09-15

A fully automated ionic liquid-based headspace single drop microextraction (IL-HS-SDME) procedure has been developed for the first time to preconcentrate trace amounts of ten musk fragrances extensively used in personal care products (six polycyclic musks, three nitro musks and one polycyclic musk degradation product) from wastewater samples prior to analysis by gas chromatography and ion trap tandem mass spectrometry (GC-IT-MS/MS). Because of the low volatility of the ILs, a large internal diameter liner (3.4 mm i.d.) was used to improve IL evaporation. Furthermore, a piece of glass wool was introduced into the liner to prevent the IL from entering the GC column, and a guard column was used to prevent damage to the analytical column. The main factors influencing the IL-HS-SDME were optimized. For all species, the highest enrichment factors were achieved using 1 μL of 1-octyl-3-methylimidazolium hexafluorophosphate ([OMIM][PF(6)]) ionic liquid exposed in the headspace of 10 mL water samples containing 300 g L(-1) of NaCl and stirred at 750 rpm and 60 °C for 45 min. All compounds were determined by direct injection GC-IT-MS/MS with a chromatographic time of 19 min. Method detection limits were found in the low ng mL(-1) range, between 0.010 ng mL(-1) and 0.030 ng mL(-1), depending on the target analyte. Also, under optimized conditions, the method gave good levels of intra-day and inter-day repeatability in wastewater samples, with relative standard deviations varying between 3% and 6% and between 5% and 11%, respectively (n=3, 1 ng mL(-1)). The applicability of the method was tested with different wastewater samples from influent and effluent urban wastewater treatment plants (WWTPs) and one potable treatment plant (PTP). The analysis of influent urban wastewater revealed the presence of galaxolide and tonalide at concentrations between 0.29 ng mL(-1) and 2.10 ng mL(-1) and between the MQL (method quantification limit) and 0.32 ng mL(-1), respectively; while the remaining

  19. TU-H-CAMPUS-JeP3-02: Automated Dose Accumulation and Dose Accuracy Assessment for Online Or Offline Adaptive Replanning

    Energy Technology Data Exchange (ETDEWEB)

    Chen, G; Ahunbay, E; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States)

    2016-06-15

Purpose: With the introduction of high-quality treatment imaging during radiation therapy (RT) delivery, e.g., MR-Linac, online or offline adaptive replanning becomes appealing. Dose accumulation of delivered fractions, a prerequisite for adaptive replanning, can be cumbersome and inaccurate. The purpose of this work is to develop an automated process to accumulate daily doses and to assess the dose accumulation accuracy voxel by voxel for adaptive replanning. Methods: The process includes the following main steps: 1) reconstructing the daily dose for each delivered fraction with a treatment planning system (Monaco, Elekta) based on the daily images, using the machine delivery log file and accounting for patient repositioning if applicable; 2) overlaying the daily dose on the planning image based on deformable image registration (DIR) (ADMIRE, Elekta); 3) assessing voxel dose deformation accuracy based on the deformation field using predetermined criteria; and 4) outputting accumulated dose and dose-accuracy volume histograms and parameters. Daily CTs acquired using a CT-on-rails during routine CT-guided RT for sample patients with head and neck and prostate cancers were used to test the process. Results: Daily and accumulated doses (dose-volume histograms, etc.) along with their accuracies (dose-accuracy volume histograms) can be robustly generated using the proposed process. The test data for a head and neck cancer case show that the gross tumor volume decreased by 20% towards the end of the treatment course, and the parotid gland mean dose increased by 10%. Such information would trigger adaptive replanning for the subsequent fractions. The voxel-based accuracy in the accumulated dose showed that errors in accumulated dose near rigid structures were small. Conclusion: A procedure, as well as the necessary tools, to automatically accumulate daily dose and assess dose accumulation accuracy was developed and is useful for adaptive replanning. Partially supported by Elekta, Inc.
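Step 2 of the workflow, mapping each daily dose onto the planning grid via DIR and summing, can be sketched abstractly. The dictionary-based voxel mapping below is a toy stand-in for a deformation field, not the Monaco/ADMIRE pipeline:

```python
def accumulate_dose(daily_doses, voxel_maps):
    """Sum daily doses on the planning grid.
    daily_doses: list of {daily_voxel: dose}; voxel_maps: list of
    {daily_voxel: planning_voxel} mappings standing in for DIR output."""
    total = {}
    for dose, vmap in zip(daily_doses, voxel_maps):
        for v, d in dose.items():
            total[vmap[v]] = total.get(vmap[v], 0.0) + d
    return total

# Two toy 2-Gy fractions on a 3-voxel grid; in fraction 2, daily voxel 1
# deforms onto planning voxel 0 (simulating anatomical change):
f1 = {0: 2.0, 1: 2.0, 2: 2.0}
f2 = {0: 2.0, 1: 2.0, 2: 2.0}
acc = accumulate_dose([f1, f2], [{0: 0, 1: 1, 2: 2}, {0: 0, 1: 0, 2: 2}])
```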

  20. Assessing breast cancer masking risk in full field digital mammography with automated texture analysis

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J; Lillholm, Martin; Diao, Pengfei

    2015-01-01

Purpose: The goal of this work is to develop a method to assess the risk of breast cancer masking, based on image characteristics beyond breast density. Method: From the Dutch breast cancer screening program we collected 285 screen detected cancers, and 109 cancers that were screen negative and subsequently appeared as interval cancers. To obtain mammograms without cancerous tissue, we took the contralateral mammograms. We developed a novel machine learning based method called convolutional sparse autoencoder to characterize mammographic texture. The method was trained and tested on raw mammograms to determine cancer detection status in a five-fold cross validation. To assess the interaction of the texture scores with breast density, Volpara Density Grade was determined for each image. Results: We grouped women into low (VDG 1/2) versus high (VDG 3/4) dense, and low (Quartile 1/2) versus high (Q 3...

  1. Automated Neuropsychological Assessment Metrics Version 4 (ANAM4): Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2012-12-01


  2. Automated Neuropsychological assessment Metrics Version 4 (ANAM4): Select Psychometric Properties and Administration Procedures

    Science.gov (United States)

    2014-12-01

This screening included assessment for symptoms of attention deficit disorder (Conners Adult ADHD Rating Scale – Self-Report: Short version). Participants who screened positive for ADD/ADHD symptomatology (t-score ≥ 70 on the CAARS-S:S, or self-report of a prior ADD/ADHD diagnosis) or brain injury (positive or

  3. Automated breast tissue density assessment using high order regional texture descriptors in mammography

    Science.gov (United States)

    Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun

    2014-03-01

    Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
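The second- and higher-order statistics the classifier builds on are typically gray-level co-occurrence (GLCM) features. A stdlib-only sketch of one such feature (Haralick contrast) on a toy 4x4 quantized image; production pipelines would use an image-processing library:

```python
def glcm(image, dx, dy, levels):
    """Gray-level co-occurrence counts for pixel pairs at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
    return m

def contrast(m):
    """Haralick contrast: sum of p(i, j) * (i - j)^2 over the normalized GLCM."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

# Toy image with 4 gray levels; horizontal offset (dx=1, dy=0):
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
c = contrast(glcm(img, 1, 0, 4))
```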

  4. Assessing the reliability of the five minute speech sample against the Camberwell family interview in a chronic fatigue syndrome sample.

    Science.gov (United States)

    Band, Rebecca; Chadwick, Ella; Hickman, Hannah; Barrowclough, Christine; Wearden, Alison

    2016-05-01

The current study aimed to examine the reliability of the Five Minute Speech Sample (FMSS) for assessing relative Expressed Emotion (EE) compared with the Camberwell Family Interview (CFI) in a sample of relatives of adult patients with Chronic Fatigue Syndrome (CFS). 21 relatives were recruited and completed both assessments. The CFI was conducted first for all participants, with the FMSS conducted approximately one month later. Trained raters independently coded both EE measures; high levels of rating reliability were established for both measures. Comparisons were conducted for overall EE status, emotional over-involvement (EOI) and criticism. The distribution of high and low-EE was equivalent across the two measures, with the FMSS correctly classifying EE in 71% of cases (n=15). The correspondence between the FMSS and CFI ratings was found to be non-significant for all categorical variables. However, the number of critical comments made by relatives during the FMSS significantly correlated with the number of critical comments made during the CFI. The poorest correspondence between the measures was observed for the EOI dimension. The findings suggest that the FMSS may be a useful screening tool for identifying high-EE, particularly criticism, within a sample of relatives of patients with CFS. However, the two measures should not be assumed equivalent, and the CFI should be used where possible, particularly with respect to understanding EOI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
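The reported correlation between critical-comment counts on the two measures is a standard product-moment correlation. A stdlib sketch with hypothetical counts (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical critical-comment counts for 6 relatives (FMSS vs. CFI):
fmss = [0, 1, 1, 2, 3, 5]
cfi = [1, 3, 3, 5, 7, 11]
r = pearson_r(fmss, cfi)
```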

  5. The application of x-ray fluorescence and diffraction to the characterization of environmental assessment samples

    International Nuclear Information System (INIS)

    Censullo, A.C.; Briden, F.E.

    1982-01-01

Some results of tests on environmental assessment samples are reported. The utility of the J.W. Criss fundamental parameters computer program is evaluated for samples in which only one standard per element was used and where the standard matrix did not closely resemble the unknown matrix. The environmental significance of a sample depends not only on its elemental composition, but also on the species or phases which the elements comprise. X-ray powder diffraction (XRD) may be used to advantage for speciation, and multi-phase environmental assessment samples are amenable to XRD interpretation. Some results of applying the Joint Committee on Powder Diffraction Standards computer interpretation to typical environmental samples are discussed; they were shown to contribute to the characterization of the complex samples encountered in environmental assessments.

  6. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA): A GIS-BASED HYDROLOGIC MODELING TOOL FOR LANDSCAPE ASSESSMENT AND WATERSHED MANAGEMENT

    Science.gov (United States)

    The assessment of land use and land cover is an extremely important activity for contemporary land management. A large body of current literature suggests that human land-use practice is the most important factor influencing natural resource management and environmental condition...

  7. Automated modal tracking and fatigue assessment of a wind turbine based on continuous dynamic monitoring

    Directory of Open Access Journals (Sweden)

    Oliveira Gustavo

    2015-01-01

The paper describes the implementation of a dynamic monitoring system at a 2.0 MW onshore wind turbine. The system is composed of two components aimed at structural integrity and fatigue assessment. The first component enables continuous tracking of the modal characteristics of the wind turbine (natural frequency values, modal damping ratios and mode shapes) in order to detect abnormal deviations of these properties, which may be caused by the occurrence of structural damage. The second component allows estimation of the remaining fatigue lifetime of the structure based on the analysis of the measured cycles of structural vibration.

  8. An automated procedure for the assessment of white matter hyperintensities by multispectral (T1, T2, PD) MRI and an evaluation of its between-centre reproducibility based on two large community databases

    International Nuclear Information System (INIS)

    Maillard, Pauline; Delcroix, Nicolas; Crivello, Fabrice; Gicquel, Sebastien; Joliot, Marc; Tzourio-Mazoyer, Nathalie; Dufouil, Carole; Alperovitch, Annick; Tzourio, Christophe; Mazoyer, Bernard

    2008-01-01

An automated procedure for the detection, quantification, localization and statistical mapping of white matter hyperintensities (WMH) on T2-weighted magnetic resonance (MR) images is presented and validated based on the results of a between-centre reproducibility study. The first step is the identification of white matter (WM) tissue using a multispectral (T1, T2, PD) segmentation. In a second step, WMH are identified within the WM tissue by segmenting T2 images, isolating two different classes of WMH voxels: low- and high-contrast WMH voxels, respectively. The reliability of the whole procedure was assessed by applying it to the analysis of two large MR imaging databases (n = 650 and n = 710, respectively) of healthy elderly subjects matched for demographic characteristics. Average overall WMH load and spatial distribution were found to be similar in the two samples (1.81% and 1.79% of the WM volume, respectively). White matter hyperintensity load was found to be significantly associated with both age and high blood pressure, with similar effects in both samples. With specific reference to the 650-subject cohort, we also found that the WMH load provided by this automated procedure was significantly associated with visual grading of the severity of WMH, as assessed by a trained neurologist. The results show that this method is sensitive, well correlated with semi-quantitative visual rating and highly reproducible. (orig.)
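The second step, splitting WM voxels into low- and high-contrast WMH classes by T2 intensity and reporting WMH load as a percentage of WM volume, can be sketched as follows; the thresholds and intensities are illustrative, not the paper's values:

```python
def wmh_load(wm_intensities, low_thresh, high_thresh):
    """Classify WM voxels by T2 intensity into low- and high-contrast WMH
    classes and return counts plus WMH load as % of WM volume.
    Thresholds here are hypothetical, for illustration only."""
    low = sum(low_thresh <= v < high_thresh for v in wm_intensities)
    high = sum(v >= high_thresh for v in wm_intensities)
    return low, high, 100.0 * (low + high) / len(wm_intensities)

# Toy T2 intensities for 8 WM voxels:
intensities = [90, 95, 100, 105, 140, 150, 180, 200]
low, high, load_pct = wmh_load(intensities, 130, 170)
```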

  9. An automated procedure for the assessment of white matter hyperintensities by multispectral (T1, T2, PD) MRI and an evaluation of its between-centre reproducibility based on two large community databases

    Energy Technology Data Exchange (ETDEWEB)

    Maillard, Pauline; Delcroix, Nicolas; Crivello, Fabrice; Gicquel, Sebastien; Joliot, Marc; Tzourio-Mazoyer, Nathalie [GIP Cyceron, Centre d' Imagerie-Neurosciences et Applications aux Pathologies, CI-NAPS, CNRS, CEA, Universite de Caen/Universite Paris Descartes, Boulevard Becquerel, BP 5229, Caen (France); Dufouil, Carole; Alperovitch, Annick; Tzourio, Christophe [Universite Pierre et Marie Curie, INSERM U708, Neuroepidemiologie, Paris (France); Mazoyer, Bernard [GIP Cyceron, Centre d' Imagerie-Neurosciences et Applications aux Pathologies, CI-NAPS, CNRS, CEA, Universite de Caen/Universite Paris Descartes, Boulevard Becquerel, BP 5229, Caen (France); Institut Universitaire de France, Paris (France); CHU du Caen, Unite IRM, Caen (France)

    2008-01-15

    An automated procedure for the detection, quantification, localization and statistical mapping of white matter hyperintensities (WMH) on T2-weighted magnetic resonance (MR) images is presented and validated based on the results of a between-centre reproducibility study. The first step is the identification of white matter (WM) tissue using a multispectral (T1, T2, PD) segmentation. In a second step, WMH are identified within the WM tissue by segmenting T2 images, isolating two different classes of WMH voxels - low- and high-contrast WMH voxels, respectively. The reliability of the whole procedure was assessed by applying it to the analysis of two large MR imaging databases (n = 650 and n= 710, respectively) of healthy elderly subjects matched for demographic characteristics. Average overall WMH load and spatial distribution were found to be similar in the two samples, (1.81 and 1.79% of the WM volume, respectively). White matter hyperintensity load was found to be significantly associated with both age and high blood pressure, with similar effects in both samples. With specific reference to the 650 subject cohort, we also found that WMH load provided by this automated procedure was significantly associated with visual grading of the severity of WMH, as assessed by a trained neurologist. The results show that this method is sensitive, well correlated with semi-quantitative visual rating and highly reproducible. (orig.)

  10. Efficiency comparisons of fish sampling gears for a lentic ecosystem health assessments in Korea

    Directory of Open Access Journals (Sweden)

    Jeong-Ho Han

    2016-12-01

The key objective of this study was to analyze the sampling efficiency of various fish sampling gears for a lentic ecosystem health assessment. A fish survey for the lentic ecosystem health assessment model was sampled twice from each of 30 reservoirs during 2008–2012. During the study, fishes of 81 species comprising 53,792 individuals were sampled from the 30 reservoirs. A comparison of sampling gears showed that casting nets were the best sampling gear, with high species richness (69 species), whereas minnow traps were the worst gear, with low richness (16 species). Fish sampling efficiency, based on the number of individuals caught per unit effort, was best for fyke nets (28,028 individuals) and worst for minnow traps (352 individuals). When we compared trammel nets and kick nets with fyke nets and casting nets, the former were useful in terms of the number of fish individuals but not in terms of the number of fish species.
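Catch per unit effort (CPUE), the efficiency measure used above, is simply individuals caught divided by sampling effort. A sketch using the study's individual counts with assumed (hypothetical) effort units:

```python
def cpue(catches, efforts):
    """Catch per unit effort per gear: individuals caught / effort units."""
    return {gear: catches[gear] / efforts[gear] for gear in catches}

# Individual counts from the study; effort units are assumed equal here
# purely for illustration (the study's actual effort is not given):
catch_totals = {"fyke net": 28028, "minnow trap": 352}
effort_units = {"fyke net": 60, "minnow trap": 60}
rates = cpue(catch_totals, effort_units)
```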

  11. Determination of aflatoxins in food samples by automated on-line in-tube solid-phase microextraction coupled with liquid chromatography-mass spectrometry.

    Science.gov (United States)

    Nonaka, Y; Saito, K; Hanioka, N; Narimatsu, S; Kataoka, H

    2009-05-15

A simple and sensitive automated method for the determination of aflatoxins (B1, B2, G1, and G2) in nuts, cereals, dried fruits, and spices was developed, consisting of in-tube solid-phase microextraction (SPME) coupled with liquid chromatography-mass spectrometry (LC-MS). Aflatoxins were separated within 8 min by high-performance liquid chromatography using a Zorbax Eclipse XDB-C8 column with methanol/acetonitrile (60/40, v/v)-5 mM ammonium formate (45:55) as the mobile phase. Electrospray ionization conditions in the positive ion mode were optimized for MS detection of aflatoxins. The pseudo-molecular ions [M+H](+) were used to detect aflatoxins in selected ion monitoring (SIM) mode. The optimum in-tube SPME conditions were 25 draw/eject cycles of 40 microL of sample using a Supel-Q PLOT capillary column as the extraction device. The extracted aflatoxins were readily desorbed from the capillary by passage of the mobile phase, and no carryover was observed. Using the in-tube SPME LC-MS with SIM method, good linearity of the calibration curve (r>0.9994) was obtained in the concentration range of 0.05-2.0 ng/mL using aflatoxin M1 as an internal standard, and the detection limits (S/N=3) of aflatoxins were 2.1-2.8 pg/mL. The in-tube SPME method showed >23-fold higher sensitivity than the direct injection method (10 microL injection volume). The within-day and between-day precision (relative standard deviations) at a concentration of 1 ng/mL aflatoxin mixture were below 3.3% and 7.7% (n=5), respectively. This method was applied successfully to the analysis of food samples without interference peaks. The recoveries of aflatoxins spiked into nuts and cereals were >80%. Aflatoxins were detected at <10 ng/g in several commercial food samples.
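Internal-standard calibration like that described (analyte/aflatoxin-M1 peak-area ratio vs. concentration over 0.05-2.0 ng/mL) is a least-squares line fit followed by back-calculation of unknowns. A stdlib sketch on toy data, not the paper's measurements:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration: analyte/internal-standard area ratios
# at known concentrations (ng/mL), mimicking the 0.05-2.0 ng/mL range:
conc = [0.05, 0.1, 0.5, 1.0, 2.0]
ratios = [0.06, 0.11, 0.51, 1.01, 2.01]  # toy data: ratio = conc + 0.01
slope, intercept = fit_line(conc, ratios)

# Back-calculate an unknown sample from its measured ratio:
conc_of_sample = (0.76 - intercept) / slope
```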

  12. Measurement of acceleration while walking as an automated method for gait assessment in dairy cattle

    DEFF Research Database (Denmark)

    Chapinal, N.; de Passillé, A.M.; Pastell, M.

    2011-01-01

The aims were to determine whether measures of acceleration of the legs and back of dairy cows while they walk could help detect changes in gait or locomotion associated with lameness and differences in the walking surface. In 2 experiments, 12 or 24 multiparous dairy cows were fitted with five 3-dimensional accelerometers, 1 attached to each leg and 1 to the back, and acceleration data were collected while cows walked in a straight line on concrete (experiment 1) or on both concrete and rubber (experiment 2). Cows were video-recorded while walking to assess overall gait, asymmetry of the steps, and walking speed. In experiment 1, cows were selected to maximize the range of gait scores, whereas no clinically lame cows were enrolled in experiment 2. For each accelerometer location, overall acceleration was calculated as the magnitude of the 3-dimensional acceleration vector and the variance of overall
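The overall-acceleration measure described, the magnitude of the 3-dimensional acceleration vector and its variance, can be computed directly; the trace below is a toy example in which a perfectly steady gait yields near-zero variance:

```python
import math

def overall_acceleration(samples):
    """Magnitude of the 3-D acceleration vector for each (x, y, z) sample."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def variance(values):
    """Population variance of a list of values."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# Toy accelerometer trace (x, y, z in units of g); all magnitudes equal 1 g:
trace = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.8, 0.6)]
mags = overall_acceleration(trace)
var = variance(mags)
```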

  13. Survey material choices in haematology EQA: a confounding factor in automated counting performance assessment.

    Science.gov (United States)

    De la Salle, Barbara

    2017-02-15

    The complete blood count (CBC) is one of the most frequently requested tests in laboratory medicine, performed in a range of healthcare situations. The provision of an ideal assay material for external quality assessment is confounded by the fragility of the cellular components of blood, the lack of commutability of stabilised whole blood material and the lack of certified reference materials and methods to which CBC results can be traced. The choice of assay material between fresh blood, extended life assay material and fully stabilised, commercially prepared, whole blood material depends upon the scope and objectives of the EQA scheme. The introduction of new technologies in blood counting and the wider clinical application of parameters from the extended CBC will bring additional challenges for the EQA provider.

  14. Automated texture scoring for assessing breast cancer masking risk in full field digital mammography

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J; Petersen, Peter Kersten; Lillholm, Martin

    2015-01-01

PURPOSE: The goal of this work is to develop a method to identify women at high risk for having breast cancer that is easily missed in regular mammography screening. Such a method will provide a rationale for selecting women for adjunctive screening. It goes beyond current risk assessment models that are not specifically adapted to reduce the number of interval cancers. METHOD AND MATERIALS: From the Dutch breast cancer screening program we collected 109 cancers that were screen negative and subsequently appeared as interval cancers, and 327 age matched healthy controls. To obtain mammograms without signs of cancerous tissue, we took the contralateral mammograms. We developed a novel machine learning based method called convolutional sparse autoencoder (CSAE) to characterize mammographic texture. The CSAE was trained and tested on raw mammograms to separate interval cancers from controls in a five-fold cross validation.

  15. Assessing breast cancer masking risk with automated texture analysis in full field digital mammography

    DEFF Research Database (Denmark)

    Kallenberg, Michiel Gijsbertus J; Lillholm, Martin; Diao, Pengfei

    2015-01-01

PURPOSE The goal of this work is to develop a method to assess the risk of breast cancer masking, based on image characteristics beyond breast density. METHOD AND MATERIALS From the Dutch breast cancer screening program we collected 285 screen detected cancers, and 109 cancers that were screen negative and subsequently appeared as interval cancers. To obtain mammograms without cancerous tissue, we took the contralateral mammograms. We developed a novel machine learning based method called convolutional sparse autoencoder to characterize mammographic texture. The reason for focusing on mammographic texture rather than the amount of breast density is that a developing cancer may not only be masked because it is obscured; it may also be masked because its mammographic signs resemble the texture of normal tissue. The method was trained and tested on raw mammograms to determine cancer detection...

  16. Taking account of human factors for interface assessment and design in monitoring automated systems

    International Nuclear Information System (INIS)

    Musso, J.-F.; Sicard, Y.; Martin, M.

    1990-01-01

An optimum balance between control means and operator capacities is sought in computerizing Man-Machine interfaces. Observation of the diagnostic activity of populations of operators working on simulators enables design criteria to be defined that are well suited to the characteristics of the tasks with which they are confronted. This observation provides an assessment of the interfaces from the standpoint of the graphic layer, of the human behaviour induced by the machine, and of the nature of the interaction between these two systems. It requires an original approach that dialectically combines cognitive psychology and dynamic management of knowledge bases (artificial intelligence) in a critical industrial control and monitoring application. (author)

  17. Automated Assessment of Keratocyte Density in Stromal Images from the ConfoScan 4 Confocal Microscope

    Science.gov (United States)

    Bourne, William M.; Patel, Sanjay V.

    2010-01-01

Purpose. To develop a program to determine cell densities in images from the ConfoScan 4 (Nidek, Inc., Fremont, CA) confocal microscope and compare the densities with those determined in images obtained by the Tandem Scanning confocal microscope (Tandem Scanning Corp., Reston, VA). Methods. A program was developed that used image-processing routines to identify stromal cell nuclei in images from the ConfoScan 4 confocal microscope. Cell selection parameters were set to match cell densities from the program with those determined manually in 15 normal corneas of 15 volunteers. The program was tested on scans from 16 other normal volunteers and 17 volunteers 3 years after LASIK. Cell densities were compared to densities determined by manual assessment and to those in scans by the Tandem Scanning confocal microscope in the same corneas. Results. The difference in cell density between the automatic and manual assessment was −539 ± 3005 cells/mm3 (mean ± SD, P = 0.11) in the 16 test corneas. Densities estimated from the ConfoScan 4 agreed with those from the Tandem Scanning confocal microscope in all regions of the stroma except in the anterior 10%, where the ConfoScan 4 indicated a 30% lower density. Conclusions. Differences in anterior stromal cell density between the ConfoScan 4 and the Tandem Scanning confocal microscope can be explained by the different optical designs. The lower spatial resolution of the ConfoScan 4 limits its ability to resolve thin layers. The adaptation of our earlier cell-counting program to the ConfoScan 4 provides a timesaving, objective, and reproducible means of determining stromal cell densities in images from the ConfoScan 4. PMID:19892869
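Converting a nucleus count in one confocal frame to a volumetric density (cells/mm3) divides by the sampled volume. A sketch with hypothetical frame dimensions (the actual field size and optical-section depth depend on the instrument):

```python
def density_per_mm3(cell_count, frame_area_mm2, depth_mm):
    """Cells per cubic millimetre from a count in one confocal frame."""
    return cell_count / (frame_area_mm2 * depth_mm)

# Hypothetical frame: 0.46 x 0.35 mm field, 0.01 mm optical-section depth:
d = density_per_mm3(cell_count=40, frame_area_mm2=0.46 * 0.35, depth_mm=0.01)
```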

  18. Lung ventilation-perfusion imbalance in pulmonary emphysema: assessment with automated V/Q quotient SPECT.

    Science.gov (United States)

    Suga, Kazuyoshi; Kawakami, Yasuhiko; Koike, Hiroaki; Iwanaga, Hideyuki; Tokuda, Osamu; Okada, Munemasa; Matsunaga, Naofumi

    2010-05-01

    Tc-99m-Technegas-MAA single photon emission computed tomography (SPECT)-derived ventilation (V)/perfusion (Q) quotient SPECT was used to assess lung V-Q imbalance in patients with pulmonary emphysema. V/Q quotient SPECT and the V/Q profile were automatically built in 38 patients with pulmonary emphysema and 12 controls, and V/Q distribution and V/Q profile parameters were compared. V/Q distribution on V/Q quotient SPECT was correlated with low attenuation areas (LAA) on density-mask computed tomography (CT). Parameters of the V/Q profile such as the median, standard deviation (SD), kurtosis and skewness were proposed to objectively evaluate the severity of lung V-Q imbalance. In contrast to uniform V/Q distribution on V/Q quotient SPECT and a sharp peak with symmetrical V/Q distribution on the V/Q profile in controls, lung areas showing heterogeneously high or low V/Q and flattened peaks with broadened V/Q distribution were frequently seen in patients with emphysema, including lung areas with only slight LAA. V/Q distribution was also often asymmetric regardless of symmetric LAA. All the proposed parameters of the V/Q profile in the entire lungs of patients with emphysema showed large variations compared with controls; SD and kurtosis were significantly different from controls. V/Q quotient SPECT thus revealed V-Q imbalance throughout the entire lungs that was not apparent on morphologic CT in patients with emphysema. SD and kurtosis of the V/Q profile can be adequate parameters to assess the severity of lung V-Q imbalance causing gas-exchange impairment in patients with emphysema.
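The profile parameters named above (median, SD, kurtosis, skewness) are ordinary summary statistics of the voxelwise V/Q quotient distribution. A minimal sketch, with simulated distributions standing in for real SPECT data (NumPy/SciPy assumed; the 0.1 and 0.4 spreads are purely illustrative):

```python
import numpy as np
from scipy import stats

def vq_profile_parameters(vq_quotients):
    """Summarize a voxelwise V/Q quotient distribution.

    A broader (higher SD, lower kurtosis) or asymmetric (non-zero
    skewness) profile indicates greater V-Q imbalance.
    """
    vq = np.asarray(vq_quotients, dtype=float)
    return {
        "median": float(np.median(vq)),
        "sd": float(np.std(vq, ddof=1)),
        "kurtosis": float(stats.kurtosis(vq)),  # excess kurtosis (normal = 0)
        "skewness": float(stats.skew(vq)),
    }

# Simulated profiles: a sharp symmetric peak (control-like) vs. a
# broadened distribution (emphysema-like).
rng = np.random.default_rng(0)
control = rng.normal(1.0, 0.1, 10_000)
emphysema = rng.normal(1.0, 0.4, 10_000)
print(vq_profile_parameters(control)["sd"],
      vq_profile_parameters(emphysema)["sd"])
```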

  19. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  20. Automated graphic assessment of respiratory activity is superior to pulse oximetry and visual assessment for the detection of early respiratory depression during therapeutic upper endoscopy.

    Science.gov (United States)

    Vargo, John J; Zuccaro, Gregory; Dumot, John A; Conwell, Darwin L; Morrow, J Brad; Shay, Steven S

    2002-06-01

    Recommendations from the American Society of Anesthesiologists suggest that monitoring for apnea using the detection of exhaled carbon dioxide (capnography) is a useful adjunct in the assessment of ventilatory status of patients undergoing sedation and analgesia. There are no data on the utility of capnography in GI endoscopy, nor is the frequency of abnormal ventilatory activity during endoscopy known. The aims of this study were to determine the following: (1) the frequency of abnormal ventilatory activity during therapeutic upper endoscopy, (2) the sensitivity of observation and pulse oximetry in the detection of apnea or disordered respiration, and (3) whether capnography provides an improvement over accepted monitoring techniques. Forty-nine patients undergoing therapeutic upper endoscopy were monitored with standard methods including pulse oximetry, automated blood pressure measurement, and visual assessment. In addition, graphic assessment of respiratory activity with sidestream capnography was performed in all patients. Endoscopy personnel were blinded to capnography data. Episodes of apnea or disordered respiration detected by capnography were documented and compared with the occurrence of hypoxemia, hypercapnia, hypotension, and the recognition of abnormal respiratory activity by endoscopy personnel. Comparison of simultaneous respiratory rate measurements obtained by capnography and by auscultation with a pretracheal stethoscope verified that capnography was an excellent indicator of respiratory rate when compared with the reference standard (auscultation) (r = 0.967, p < 0.001). Fifty-four episodes of apnea or disordered respiration occurred in 28 patients (mean duration 70.8 seconds). Only 50% of apnea or disordered respiration episodes were eventually detected by pulse oximetry. None were detected by visual assessment (p < 0.001). Apnea/disordered respiration occurs commonly during therapeutic upper endoscopy and frequently precedes the development

  1. Toward Semi-automated Assessment of Target Volume Delineation in Radiotherapy Trials: The SCOPE 1 Pretrial Test Case

    International Nuclear Information System (INIS)

    Gwynne, Sarah; Spezi, Emiliano; Wills, Lucy; Nixon, Lisette; Hurt, Chris; Joseph, George; Evans, Mererid; Griffiths, Gareth; Crosby, Tom; Staffurth, John

    2012-01-01

    Purpose: To evaluate different conformity indices (CIs) for use in the analysis of outlining consistency within the pretrial quality assurance (Radiotherapy Trials Quality Assurance [RTTQA]) program of a multicenter chemoradiation trial of esophageal cancer and to make recommendations for their use in future trials. Methods and Materials: The National Cancer Research Institute SCOPE 1 trial is an ongoing Cancer Research UK-funded phase II/III randomized controlled trial of chemoradiation with capecitabine and cisplatin with or without cetuximab for esophageal cancer. The pretrial RTTQA program included a detailed radiotherapy protocol, an educational package, and a single mid-esophageal tumor test case that were sent to each investigator to outline. Investigator gross tumor volumes (GTVs) were received from 50 investigators in 34 UK centers, and CERR (Computational Environment for Radiotherapy Research) was used to perform an assessment of each investigator GTV against a predefined gold-standard GTV using different CIs. A new metric, the local conformity index (l-CI), that can localize areas of maximal discordance was developed. Results: The median Jaccard conformity index (JCI) was 0.69 (interquartile range, 0.62-0.70), with 14 of 50 investigators (28%) achieving a JCI of 0.7 or greater. The median geographical miss index was 0.09 (interquartile range, 0.06-0.16), and the mean discordance index was 0.27 (95% confidence interval, 0.25-0.30). The l-CI was highest in the middle section of the volume, where the tumor was bulky and more easily definable, and identified 4 slices where fewer than 20% of investigators achieved an l-CI of 0.7 or greater. Conclusions: The available CIs analyze different aspects of a gold standard–observer variation, with JCI being the most useful as a single metric. Additional information is provided by the l-CI and can focus the efforts of the RTTQA team in these areas, possibly leading to semi-automated outlining assessment.
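The Jaccard conformity index reported above can be computed directly from two binary outline masks; the local conformity index (l-CI) applies the same ratio slice by slice. A minimal sketch with toy 1-D masks (not trial data):

```python
import numpy as np

def jaccard_ci(gold, observer):
    """Jaccard conformity index between two binary volume masks:
    |A intersect B| / |A union B|; 1.0 means identical outlines."""
    a = np.asarray(gold, dtype=bool)
    b = np.asarray(observer, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both empty: vacuous agreement
    return np.logical_and(a, b).sum() / union

# Toy "volumes": the observer misses two gold voxels and adds one.
gold = np.array([0, 1, 1, 1, 1, 1, 0, 0], dtype=bool)
observer = np.array([0, 0, 1, 1, 1, 0, 1, 0], dtype=bool)
print(jaccard_ci(gold, observer))  # 3 shared voxels / 6 in the union = 0.5
```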

  2. Forest resource inventory assessment in Gunung Rara Forest Reserve, Sabah, using stratified field sampling

    Science.gov (United States)

    Kamaruzaman Jusoff

    2000-01-01

    The objective of this paper is to assess the current timber volume by stratified sampling on a proposed plantation area. The study area is located in Gunung Rara Forest Reserve in the district of Tawau, Sabah, Malaysia.

  3. Automated Soil Physical Parameter Assessment Using Smartphone and Digital Camera Imagery

    Directory of Open Access Journals (Sweden)

    Matt Aitkenhead

    2016-12-01

    Full Text Available Here we present work on using different types of soil profile imagery (topsoil profiles captured with a smartphone camera and full-profile images captured with a conventional digital camera) to estimate the structure, texture and drainage of the soil. The method is adapted from earlier work on developing smartphone apps for estimating topsoil organic matter content in Scotland and uses an existing visual soil structure assessment approach. Colour and image texture information was extracted from the imagery. This information was linked, using geolocation information derived from the smartphone GPS system or from field notes, with existing collections of topography, land cover, soil and climate data for Scotland. A neural network model was developed that was capable of estimating soil structure (on a five-point scale), soil texture (sand, silt, clay), bulk density, pH and drainage category using this information. The model is sufficiently accurate to provide estimates of these parameters from soils in the field. We discuss potential improvements to the approach and plans to integrate the model into a set of smartphone apps for estimating health and fertility indicators for Scottish soils.

  4. Automated In-Home Fall Risk Assessment and Detection Sensor System for Elders.

    Science.gov (United States)

    Rantz, Marilyn; Skubic, Marjorie; Abbott, Carmen; Galambos, Colleen; Popescu, Mihail; Keller, James; Stone, Erik; Back, Jessie; Miller, Steven J; Petroski, Gregory F

    2015-06-01

    Falls are a major problem for elderly people, leading to injury, disability, and even death. An unobtrusive, in-home sensor system that continuously monitors older adults for fall risk and detects falls could revolutionize fall prevention and care. A fall risk and detection system was developed and installed in the apartments of 19 older adults at a senior living facility. The system includes pulse-Doppler radar, a Microsoft Kinect, and 2 web cameras. To collect data for comparison with sensor data and for algorithm development, stunt actors performed falls in participants' apartments each month for 2 years and participants completed fall risk assessments (FRAs) using clinically valid, standardized instruments. The FRAs were scored by clinicians and recorded by the sensing modalities. Participants' gait parameters were measured as they walked on a GAITRite mat. These data were used as ground-truth, objective data in algorithm development and for comparison with radar- and Kinect-generated variables. All FRAs are highly correlated. Alerts for detected falls are being sent to clinicians, providing faster responses to urgent situations. The in-home FRA and detection system has the potential to help older adults remain independent, maintain functional ability, and live at home longer. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Automated assessment of symptom severity changes during deep brain stimulation (DBS) therapy for Parkinson's disease.

    Science.gov (United States)

    Angeles, Paolo; Tai, Yen; Pavese, Nicola; Wilson, Samuel; Vaidyanathan, Ravi

    2017-07-01

    Deep brain stimulation (DBS) is currently being used as a treatment for symptoms of Parkinson's disease (PD). Tracking symptom severity progression and deciding the optimal stimulation parameters for people with PD is extremely difficult. This study presents a sensor system that can quantify the three cardinal motor symptoms of PD - rigidity, bradykinesia and tremor. The first phase of this study assesses whether data recorded from the system during physical examinations can be correlated with clinicians' severity scores using supervised machine learning (ML) models. The second phase concludes whether the sensor system can distinguish differences before and after DBS optimisation by a clinician when Unified Parkinson's Disease Rating Scale (UPDRS) scores did not change. An average accuracy of 90.9% was achieved by the best ML models in the first phase, when correlating sensor data to clinicians' scores. In the second phase of the study, the sensor system was able to pick up discernible differences before and after DBS optimisation sessions in instances where UPDRS scores did not change.

  6. An automated model for rooftop PV systems assessment in ArcGIS using LIDAR

    Directory of Open Access Journals (Sweden)

    Mesude Bayrakci Boz

    2015-08-01

    Full Text Available As photovoltaic (PV) systems have become less expensive, building rooftops have come to be attractive for local power production. Identifying rooftops suitable for solar energy systems over large geographic areas is needed for cities to obtain more accurate assessments of production potential and likely patterns of development. This paper presents a new method for extracting roof segments and locating suitable areas for PV systems using Light Detection and Ranging (LIDAR) data and building footprints. Rooftop segments are created using seven slope (tilt) classes, five aspect (azimuth) classes and 6 different building types. Moreover, direct beam shading caused by nearby objects and the surrounding terrain is taken into account on a monthly basis. Finally, the method is implemented as an ArcGIS model in ModelBuilder and a tool is created. In order to show its validity, the method is applied to the city of Philadelphia, PA, USA, with the criteria of slope, aspect, shading and area used to locate suitable areas for PV system installation. The results show that 33.7% of the building footprint area and 48.6% of the identified rooftop segments are suitable for PV systems. Overall, this study provides a replicable model using commercial software that is capable of extracting individual roof segments with more detailed criteria across an urban area.
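The suitability screening described (slope, aspect, shading, area) can be sketched as a simple per-segment filter. All thresholds and field names below are hypothetical illustrations, not the criteria used in the paper:

```python
def pv_suitable(segment, min_area_m2=10.0, max_slope_deg=45.0,
                max_monthly_shading=0.2):
    """Hypothetical suitability test for one extracted roof segment.

    A segment qualifies if it is large enough, not too steep, roughly
    south-facing (northern hemisphere: aspect between east and west
    through south), and not heavily shaded in any month.
    """
    south_facing = 90.0 <= segment["aspect_deg"] <= 270.0
    return (segment["area_m2"] >= min_area_m2
            and segment["slope_deg"] <= max_slope_deg
            and south_facing
            and max(segment["monthly_shading"]) <= max_monthly_shading)

segments = [
    {"area_m2": 25.0, "slope_deg": 30.0, "aspect_deg": 180.0,
     "monthly_shading": [0.05] * 12},              # qualifies
    {"area_m2": 6.0, "slope_deg": 30.0, "aspect_deg": 180.0,
     "monthly_shading": [0.05] * 12},              # too small
    {"area_m2": 25.0, "slope_deg": 30.0, "aspect_deg": 0.0,
     "monthly_shading": [0.05] * 12},              # north-facing
]
print(sum(pv_suitable(s) for s in segments))  # 1
```

In the actual workflow these checks run as geoprocessing steps over LIDAR-derived rasters rather than per-record predicates, but the logic of intersecting the criteria is the same.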

  7. 296-B-5 Stack monitoring and sampling system annual system assessment report

    International Nuclear Information System (INIS)

    Ridge, T.M.

    1995-02-01

    The B Plant Administration Manual requires an annual system assessment to evaluate and report the present condition of the sampling and monitoring system associated with Stack 296-B-5 at B Plant. The sampling and monitoring system associated with stack 296-B-5 is functional and performing satisfactorily. This document is an annual assessment report of the systems associated with the 296-B-5 stack

  8. Assessment of the Current Level of Automation in the Manufacture of Fuel Cell Systems for Combined Heat and Power Applications

    Energy Technology Data Exchange (ETDEWEB)

    Ulsh, M.; Wheeler, D.; Protopappas, P.

    2011-08-01

    The U.S. Department of Energy (DOE) is interested in supporting manufacturing research and development (R&D) for fuel cell systems in the 10-1,000 kilowatt (kW) power range relevant to stationary and distributed combined heat and power applications, with the intent to reduce manufacturing costs and increase production throughput. To assist in future decision-making, DOE requested that the National Renewable Energy Laboratory (NREL) provide a baseline understanding of the current levels of adoption of automation in manufacturing processes and flow, as well as of continuous processes. NREL identified and visited or interviewed key manufacturers, universities, and laboratories relevant to the study using a standard questionnaire. The questionnaire covered the current level of vertical integration, the importance of quality control developments for automation, the current level of automation and source of automation design, critical balance of plant issues, potential for continuous cell manufacturing, key manufacturing steps or processes that would benefit from DOE support for manufacturing R&D, the potential for cell or stack design changes to support automation, and the relationship between production volume and decisions on automation.

  9. Usability of a virtual reality environment simulating an automated teller machine for assessing and training persons with acquired brain injury

    Directory of Open Access Journals (Sweden)

    Li Teresa HY

    2010-04-01

    Full Text Available Abstract Objective This study aimed to examine the usability of a newly designed virtual reality (VR) environment simulating the operation of an automated teller machine (ATM) for assessment and training. Design Part I involved evaluation of the sensitivity and specificity of a non-immersive VR program simulating an ATM (VR-ATM). Part II consisted of a clinical trial providing baseline and post-intervention outcome assessments. Setting A rehabilitation hospital and university-based teaching facilities were used as the setting. Participants A total of 24 persons in the community with acquired brain injury (ABI) - 14 in Part I and 10 in Part II - made up the participants in the study. Interventions In Part I, participants were randomized to receive instruction in either an "early" or a "late" VR-ATM program and were assessed using both the VR program and a real ATM. In Part II, participants were assigned in matched pairs to either VR training or computer-assisted instruction (CAI) teaching programs for six 1-hour sessions over a three-week period. Outcome Measures Two behavioral checklists based on activity analysis of cash withdrawals and money transfers using a real ATM were used to measure average reaction time, percentage of incorrect responses, level of cues required, and time spent as generated by the VR system; also used was the Neurobehavioral Cognitive Status Examination. Results The sensitivity of the VR-ATM was 100% for cash withdrawals and 83.3% for money transfers, and the specificity was 83% and 75%, respectively. For cash withdrawals, the average reaction time of the VR group was significantly shorter than that of the CAI group (p = 0.021). We found no significant differences in average reaction time or accuracy between groups for money transfers, although we did note positive improvement for the VR-ATM group.
Conclusion We found the VR-ATM to be usable as a valid assessment and training tool for relearning the use of ATMs prior to real

  10. Contamination rates of three urine-sampling methods to assess bacteriuria in pregnant women

    NARCIS (Netherlands)

    Schneeberger, Caroline; van den Heuvel, Edwin R.; Erwich, Jan Jaap H. M.; Stolk, Ronald P.; Visser, Caroline E.; Geerlings, Suzanne E.

    2013-01-01

    To estimate and compare contamination rates of three different urine-sampling methods in pregnant women to assess bacteriuria. In this cross-sectional study, 113 pregnant women collected three different midstream urine samples consecutively: morning (first void); midstream (void without further

  11. Contamination Rates of Three Urine-Sampling Methods to Assess Bacteriuria in Pregnant Women

    NARCIS (Netherlands)

    Schneeberger, Caroline; van den Heuvel, Edwin R.; Erwich, Jan Jaap H. M.; Stolk, Ronald P.; Visser, Caroline E.; Geerlings, Suzanne E.

    OBJECTIVE: To estimate and compare contamination rates of three different urine-sampling methods in pregnant women to assess bacteriuria. METHODS: In this cross-sectional study, 113 pregnant women collected three different midstream urine samples consecutively: morning (first void); midstream (void

  12. Using Language Sample Analysis to Assess Spoken Language Production in Adolescents

    Science.gov (United States)

    Miller, Jon F.; Andriacchi, Karen; Nockerts, Ann

    2016-01-01

    Purpose: This tutorial discusses the importance of language sample analysis and how Systematic Analysis of Language Transcripts (SALT) software can be used to simplify the process and effectively assess the spoken language production of adolescents. Method: Over the past 30 years, thousands of language samples have been collected from typical…

  13. Assessment of the pro-inflammatory activity of water sampled from ...

    African Journals Online (AJOL)

    2014-03-20

    Assessment of the pro-inflammatory activity of water sampled from major water treatment facilities ... Although these procedures have been used to assess the human health-related quality of water from ...

  14. Using bioavailability to assess contaminated sediment risk: Passive sampling and Pore Water Remedial Guidelines (PWRGs)

    Science.gov (United States)

    Hosted by the Contaminated Sediment Forum, this half-day course will introduce the RPM to the use of passive samplers to assess bioavailability and in ecological risk assessment. Passive sampling devices (PSD) are a technology with growing acceptance for measuring porewater conce...

  15. Analysis of nitrosamines in water by automated SPE and isotope dilution GC/HRMS Occurrence in the different steps of a drinking water treatment plant, and in chlorinated samples from a reservoir and a sewage treatment plant effluent.

    Science.gov (United States)

    Planas, Carles; Palacios, Oscar; Ventura, Francesc; Rivera, Josep; Caixach, Josep

    2008-08-15

    A method based on automated solid-phase extraction (SPE) and isotope dilution gas chromatography/high resolution mass spectrometry (GC/HRMS) has been developed for the analysis of nine nitrosamines in water samples. The combination of automated SPE and GC/HRMS for the analysis of nitrosamines has not been reported previously. The method's advantages are the selectivity and sensitivity of GC/HRMS analysis and the high efficiency of automated SPE with coconut charcoal EPA 521 cartridges. Low method detection limits (MDLs) were achieved, along with a simpler procedure and less dependence on the operator than methods based on manual SPE. Quality requirements for isotope dilution-based methods, such as trueness (80-120%) and method precision, were met for most analysed nitrosamines. Nineteen water samples (16 samples from a drinking water treatment plant (DWTP), 2 chlorinated samples from a sewage treatment plant (STP) effluent, and 1 chlorinated sample from a reservoir) were analysed. Concentrations of nitrosamines in the STP effluent were 309.4 and 730.2 ng/L, being higher when higher doses of chlorine were applied. N-Nitrosodimethylamine (NDMA) and N-nitrosodiethylamine (NDEA) were the main compounds identified in the STP effluent, and NDEA was detected above 200 ng/L, the regulatory level for NDMA in effluents in Ontario (Canada). Lower concentrations of nitrosamines were found in the reservoir (20.3 ng/L) and in the DWTP samples (n.d.-28.6 ng/L). NDMA and NDEA were respectively found in the reservoir and in treated and highly chlorinated DWTP samples at concentrations above 10 ng/L (a guide value established in different countries). The highest concentrations of nitrosamines were found after chlorination and ozonation processes (ozonated, treated and highly chlorinated water) in DWTP samples.
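Isotope dilution quantification, in its generic form, derives the analyte concentration from the ratio of native to isotopically labelled internal-standard peak areas, scaled by the spiked mass and a relative response factor from calibration. A minimal sketch with hypothetical numbers (the actual spike levels and response factors are not given in the abstract):

```python
def isotope_dilution_conc(area_native, area_labelled, spike_ng,
                          sample_volume_l, rrf=1.0):
    """Generic isotope-dilution quantification:

        conc (ng/L) = (A_native / A_labelled) * (spike mass / RRF) / volume

    where RRF is the relative response factor of the native vs. the
    labelled analyte, determined from calibration standards.
    """
    mass_ng = (area_native / area_labelled) * spike_ng / rrf
    return mass_ng / sample_volume_l

# Hypothetical example: a 0.5 L sample spiked with 10 ng of a labelled
# analogue; native/labelled area ratio of 0.8; RRF of 1.0.
print(isotope_dilution_conc(8.0e5, 1.0e6, 10.0, 0.5))  # 16.0 ng/L
```

Because the labelled standard experiences the same extraction losses as the native analyte, the area ratio is largely insensitive to SPE recovery, which is what makes the automated-SPE workflow robust.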

  16. Identification of Psychometric Characteristics of Sample of Vocal Behaviour and Interaction Assessment Record Forms

    OpenAIRE

    Aksoy, Veysel; Diken, İbrahim H.

    2016-01-01

    The purpose of this research is to study the Sample of Vocal Behaviour Record Form (SVBRF), included in the third edition of the Autism Screening Instrument for Educational Planning-3 (ASIEP-3), and the psychometric properties of the trucked forms for the informal assessment tools of the Autism Screening Instrument for Educational Planning-3 (ASIEP-3). The rationale for the research is the lack of standardised informal assessment tools to use for the educational assessment of children with Autism Spectrum D...

  17. Comparison of sampling methods for the assessment of indoor microbial exposure

    DEFF Research Database (Denmark)

    Frankel, M; Timm, Michael; Hansen, E W

    2012-01-01

    regarding their assessment of microbial exposures, including culturable fungi and bacteria, endotoxin, as well as the total inflammatory potential (TIP) of dust samples from Danish homes. The Gesamtstaubprobenahme (GSP) filter sampler and BioSampler were used for sampling of airborne dust, whereas the dust...... fall collector (DFC), the electrostatic dust fall collector (EDC), and vacuum cleaner were used for sampling of settled dust. The GSP assessed significantly higher microbial levels than the BioSampler, yet measurements from both samplers correlated significantly. Considerably higher levels of fungi...... with those from GSP. Settled dust from the EDC was most representative of airborne dust and may thus be considered as a surrogate for the assessment of indoor airborne microbial exposure. PRACTICAL IMPLICATIONS: Significant discrepancies between sampling methods regarding indoor microbial exposures have been...

  18. The use of importance sampling in a trial assessment to obtain converged estimates of radiological risk

    International Nuclear Information System (INIS)

    Johnson, K.; Lucas, R.

    1986-12-01

    In developing a methodology for assessing potential sites for the disposal of radioactive wastes, the Department of the Environment has conducted a series of trial assessment exercises. In order to produce converged estimates of radiological risk using the SYVAC A/C simulation system an efficient sampling procedure is required. Previous work has demonstrated that importance sampling can substantially increase sampling efficiency. This study used importance sampling to produce converged estimates of risk for the first DoE trial assessment. Four major nuclide chains were analysed. In each case importance sampling produced converged risk estimates with between 10 and 170 times fewer runs of the SYVAC A/C model. This increase in sampling efficiency can reduce the total elapsed time required to obtain a converged estimate of risk from one nuclide chain by a factor of 20. The results of this study suggest that the use of importance sampling could reduce the elapsed time required to perform a risk assessment of a potential site by a factor of ten. (author)
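The efficiency gain reported is typical of importance sampling for rare, high-consequence outcomes: sample from a distribution concentrated on the region that dominates the risk, then reweight each run by the likelihood ratio of the true to the sampling density. A self-contained illustration on a toy rare-event problem (a standard-normal tail probability, not the SYVAC A/C model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Rare-event probability under a standard normal "parameter" X:
# p = P(X > 4), roughly 3.2e-5. Crude Monte Carlo needs millions of
# runs to see any events at all.
n = 10_000
threshold = 4.0

# Crude Monte Carlo: almost every sample is wasted.
x = rng.standard_normal(n)
crude = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1), so about half the
# samples land in the rare region, and reweight by the likelihood
# ratio f(y)/g(y) = exp(-threshold*y + threshold**2 / 2).
y = rng.normal(threshold, 1.0, n)
weights = np.exp(-threshold * y + threshold**2 / 2)
is_estimate = np.mean((y > threshold) * weights)

print(crude, is_estimate)  # IS converges near 3.2e-5 with only 1e4 runs
```

The same idea, applied to the input parameters that drive high-dose realisations, is what lets a converged risk estimate emerge from far fewer model runs.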

  19. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual effort in library routines, including collection, storage, administration, processing, preservation and communication.

  20. Assessment of fully-automated atlas-based segmentation of novel oral mucosal surface organ-at-risk.

    Science.gov (United States)

    Dean, Jamie A; Welsh, Liam C; McQuaid, Dualta; Wong, Kee H; Aleksic, Aleksandar; Dunne, Emma; Islam, Mohammad R; Patel, Anushka; Patel, Priyanka; Petkar, Imran; Phillips, Iain; Sham, Jackie; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Gulliford, Sarah L; Nutting, Christopher M

    2016-04-01

    Current oral mucositis normal tissue complication probability models, based on the dose distribution to the oral cavity volume, have suboptimal predictive power. Improving the delineation of the oral mucosa is likely to improve these models, but is resource intensive. We developed and evaluated fully-automated atlas-based segmentation (ABS) of a novel delineation technique for the oral mucosal surfaces. An atlas of mucosal surface contours (MSC) consisting of 46 patients was developed. It was applied to an independent test cohort of 10 patients for whom manual segmentation of MSC structures, by three different clinicians, and conventional outlining of oral cavity contours (OCC), by an additional clinician, were also performed. Geometric comparisons were made using the dice similarity coefficient (DSC), validation index (VI) and Hausdorff distance (HD). Dosimetric comparisons were carried out using dose-volume histograms. The median differences in the DSC and HD between automated-manual comparisons and manual-manual comparisons were small and non-significant (-0.024; p=0.33 and -0.5; p=0.88, respectively). The median VI was 0.086. The maximum normalised volume difference between automated and manual MSC structures across all of the dose levels, averaged over the test cohort, was 8%. This difference reached approximately 28% when comparing automated MSC and OCC structures. Fully-automated ABS of MSC is suitable for use in radiotherapy dose-response modelling. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
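The dice similarity coefficient and the normalised volume difference used in these geometric and dosimetric comparisons can be illustrated on toy masks (real comparisons run on 3-D contour volumes; the 2-D arrays below are only a sketch):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|); 1.0 for identical segmentations."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty
    return 2.0 * np.logical_and(a, b).sum() / denom

def normalised_volume_difference(auto, manual):
    """|V_auto - V_manual| / V_manual, as a fraction."""
    auto = np.asarray(auto, dtype=bool)
    manual = np.asarray(manual, dtype=bool)
    return abs(int(auto.sum()) - int(manual.sum())) / manual.sum()

# Toy 8x8 "slices": a 4x4 manual contour and an automated contour
# shifted down by one row (same volume, imperfect overlap).
manual = np.zeros((8, 8), dtype=bool)
manual[2:6, 2:6] = True
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 2:6] = True

print(dice(auto, manual))                          # 2*12/(16+16) = 0.75
print(normalised_volume_difference(auto, manual))  # equal volumes -> 0.0
```

The example also shows why both metrics are reported: a shifted contour can have a perfect volume match yet a clearly degraded overlap.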

  1. Campylobacter in Broiler Chicken and Broiler Meat in Sri Lanka : Influence of Semi-Automated vs. Wet Market Processing on Campylobacter Contamination of Broiler Neck Skin Samples

    NARCIS (Netherlands)

    Kottawatta, Kottawattage S A; van Bergen, Marcel A P; Abeynayake, Preeni; Wagenaar, Jaap A; Veldman, Kees T; Kalupahana, Ruwani S

    2017-01-01

    Broiler meat can become contaminated with Campylobacter of intestinal origin during processing. The present study aimed to identify the prevalence of Campylobacter in broiler flocks and meat contamination at retail shops, and determine the influence of semi-automated and wet market processing on

  2. Automated home cage assessment shows behavioral changes in a transgenic mouse model of spinocerebellar ataxia type 17.

    Science.gov (United States)

    Portal, Esteban; Riess, Olaf; Nguyen, Huu Phuc

    2013-08-01

    Spinocerebellar Ataxia type 17 (SCA17) is an autosomal dominantly inherited, neurodegenerative disease characterized by ataxia, involuntary movements, and dementia. A novel SCA17 mouse model having a 71 polyglutamine repeat expansion in the TATA-binding protein (TBP) has shown an age-related motor deficit in a classic motor test, yet concomitant weight increase might be a confounding factor for this measurement. In this study we used an automated home cage system to test several motor readouts for this same model to confirm pathological behavior results and evaluate the benefits of the automated home cage in behavior phenotyping. Our results confirm motor deficits in the Tbp/Q71 mice and present previously unrecognized behavioral characteristics obtained from the automated home cage, indicating its use for high-throughput screening and testing, e.g. of therapeutic compounds. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Assessment of the accuracy of portion size reports using computer-based food photographs aids in the development of an automated self-administered 24-hour recall.

    Science.gov (United States)

    Subar, Amy F; Crafts, Jennifer; Zimmerman, Thea Palmer; Wilson, Michael; Mittl, Beth; Islam, Noemi G; McNutt, Suzanne; Potischman, Nancy; Buday, Richard; Hull, Stephen G; Baranowski, Tom; Guenther, Patricia M; Willis, Gordon; Tapia, Ramsey; Thompson, Frances E

    2010-01-01

    To assess the accuracy of portion-size estimates and participant preferences using various presentations of digital images. Two observational feeding studies were conducted. In both, each participant selected and consumed foods for breakfast and lunch, buffet style, serving themselves portions of nine foods representing five forms (eg, amorphous, pieces). Serving containers were weighed unobtrusively before and after selection, as was plate waste. The next day, participants used a computer software program to select photographs representing portion sizes of foods consumed the previous day. Preference information was also collected. In Study 1 (n=29), participants were presented with four different types of images (aerial photographs, angled photographs, images of mounds, and household measures) and two types of screen presentations (simultaneous images vs an empty plate that filled with images of food portions when clicked). In Study 2 (n=20), images were presented in two ways that varied by size (large vs small) and number (4 vs 8). Convenience sample of volunteers of varying background in an office setting. Repeated-measures analysis of variance of absolute differences between actual and reported portion sizes by presentation methods. Accuracy results were largely not statistically significant, indicating that no one image type was most accurate. However, the use of eight vs four images was more accurate. Strong participant preferences supported presenting simultaneous vs sequential images. These findings support the use of aerial photographs in the automated self-administered 24-hour recall. For some food forms, images of mounds or household measures are as accurate as images of food and, therefore, are a cost-effective alternative to photographs of foods. Copyright 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.

  4. An integrative pharmacological approach to radio telemetry and blood sampling in pharmaceutical drug discovery and safety assessment

    Directory of Open Access Journals (Sweden)

    Kamendi Harriet W

    2011-01-01

Background A successful integration of the automated blood sampling (ABS) and telemetry (ABST) system is described. The new ABST system facilitates concomitant collection of physiological variables with blood and urine samples for determination of drug concentrations and other biochemical measures in the same rat without handling artifact. Method Integration was achieved by designing a 13-inch circular receiving antenna that operates as a plug-in replacement for the existing pair of DSI's orthogonal antennas and is compatible with the rotating cage and open floor design of the BASi Culex® ABS system. The circular receiving antenna's electrical configuration consists of a pair of electrically orthogonal half-toroids that reinforce reception of a dipole transmitter operating within the coil's interior while reducing both external noise pickup and interference from other adjacent dipole transmitters. Results For validation, measured baclofen concentration (ABST vs. satellite, μM: 69.6 ± 23.8 vs. 76.6 ± 19.5, p = NS) and mean arterial pressure (ABST vs. traditional DSI telemetry, mm Hg: 150 ± 5 vs. 147 ± 4, p = NS) were quantitatively and qualitatively similar between rats housed in the ABST system and traditional home cage approaches. Conclusion The ABST system offers unique advantages over traditional between-group study paradigms, including improved data quality and significantly reduced animal use. The superior within-group model facilitates assessment of multiple physiological and biochemical responses to test compounds in the same animal. The ABST also provides opportunities to evaluate temporal relations between parameters and to investigate anomalous outlier events, because drug concentrations and physiological and biochemical measures for each animal are available for comparison.

  5. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

An importance sampling method based on adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of the importance sampling density by support vector density estimation. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples than conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analyses required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
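The core idea, sampling from a density concentrated in the failure region and reweighting each sample by the likelihood ratio, can be sketched for a simple linear limit state. This is a generic illustration of importance sampling, not the paper's support-vector or adaptive-Metropolis machinery; the limit state g(x) = β − x and the importance density N(β, 1) are illustrative choices:

```python
import math
import random

def failure_probability_is(beta, n=20000, seed=1):
    """Estimate P(g(X) < 0) for g(x) = beta - x with X ~ N(0, 1),
    sampling from an importance density N(beta, 1) centered on the
    design point and weighting by the likelihood ratio f(x)/h(x)."""
    rng = random.Random(seed)

    def phi(x, mu=0.0):
        # Normal pdf with unit variance, mean mu
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)            # draw from importance density h
        if beta - x < 0:                    # failure indicator g(x) < 0
            total += phi(x) / phi(x, beta)  # likelihood-ratio weight f/h
    return total / n
```

Because roughly half of the importance samples land in the failure region, a few thousand samples suffice where crude Monte Carlo would need millions for a failure probability near Φ(−β) ≈ 1.35 × 10⁻³ at β = 3.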

  6. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, grants no. OI-173011, TR-37002 and III-43007]
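The underlying calculation, inferring a required sample size from the relative variability of a pilot sample, can be sketched with a normal approximation. The relative-error target, z value, and pilot data below are illustrative assumptions, not the study's figures:

```python
import math
import statistics

def required_sample_size(values, rel_error=0.1, z=1.96):
    """Minimum n so the sample mean falls within rel_error of the
    population mean with ~95% confidence (normal approximation)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)     # sample standard deviation
    cv = sd / mean                    # coefficient of variation
    return math.ceil((z * cv / rel_error) ** 2)

# Hypothetical terpene contents (%) measured on a 10-tree pilot sample:
pilot = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.4, 6.2, 5.0, 4.7]
```

Halving the tolerated relative error quadruples the required sample size, which is why fixing the lower limit of n against the observed variability matters for cost efficiency.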

  7. Assessment of sampling and analytical uncertainty of trace element contents in arable field soils.

    Science.gov (United States)

    Buczko, Uwe; Kuchenbuch, Rolf O; Ubelhör, Walter; Nätscher, Ludwig

    2012-07-01

Assessment of trace element contents in soils is required in Germany (and other countries) before sewage sludge application on arable soils. The reliability of measured element contents is affected by measurement uncertainty, which consists of components due to (1) sampling, (2) laboratory repeatability (intra-lab) and (3) reproducibility (between-lab). A complete characterization of average trace element contents in field soils should encompass the uncertainty of all these components. The objectives of this study were to elucidate the magnitude and relative proportions of uncertainty components for the metals As, B, Cd, Co, Cr, Mo, Ni, Pb, Tl and Zn in three arable fields of different field-scale heterogeneity, based on a collaborative trial (CT, standardized procedure) and two sampling proficiency tests (PT, individual sampling procedures). To obtain reference values and estimates of field-scale heterogeneity, a detailed reference sampling was conducted. Components of uncertainty (sampling person, sampling repetition, laboratory) were estimated by variance component analysis, whereas reproducibility uncertainty was estimated using results from numerous laboratory proficiency tests. Sampling uncertainty in general increased with field-scale heterogeneity; however, total uncertainty was mostly dominated by (total) laboratory uncertainty. Reproducibility analytical uncertainty was on average about three times higher than repeatability uncertainty. Therefore, analysis within one single laboratory and, for heterogeneous fields, a reduction of sampling uncertainty (for instance, by larger numbers of sample increments and/or a denser coverage of the field area) would be most effective in reducing total uncertainty. On the other hand, when only intra-laboratory analytical uncertainty was considered, total sampling uncertainty on average prevailed over analytical uncertainty by a factor of 2. Both sampling and laboratory repeatability uncertainty were highly variable.
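The uncertainty budget described here, a sampling component plus intra- and between-lab analytical components combined in quadrature for independent sources, can be sketched as follows. The component magnitudes are illustrative, chosen only to mirror the reported ~3:1 reproducibility-to-repeatability ratio:

```python
import math

def combined_uncertainty(u_sampling, u_repeat, u_between_lab):
    """Combine independent standard uncertainty components in quadrature.

    Returns (analytical uncertainty, total measurement uncertainty),
    following the sampling / repeatability / reproducibility breakdown.
    """
    u_analytical = math.sqrt(u_repeat ** 2 + u_between_lab ** 2)
    u_total = math.sqrt(u_sampling ** 2 + u_analytical ** 2)
    return u_analytical, u_total

# Illustrative magnitudes (e.g. mg/kg): reproducibility ~3x repeatability
u_a, u_t = combined_uncertainty(u_sampling=2.0, u_repeat=1.0, u_between_lab=3.0)
```

With these numbers the analytical term dominates the budget, matching the study's conclusion that single-laboratory analysis reduces total uncertainty more than extra sampling effort would.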

  8. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  9. A content validated questionnaire for assessment of self reported venous blood sampling practices

    Directory of Open Access Journals (Sweden)

    Bölenius Karin

    2012-01-01

Background Venous blood sampling is a common procedure in health care. It is strictly regulated by national and international guidelines. Deviations from guidelines due to human mistakes can cause patient harm. Validated questionnaires for health care personnel can be used to assess preventable "near misses", i.e. potential errors and nonconformities during venous blood sampling practices that could develop into adverse events. However, no validated questionnaire that assesses nonconformities in venous blood sampling has previously been presented. The aim was to test a recently developed questionnaire on self-reported venous blood sampling practices for validity and reliability. Findings We developed a questionnaire to assess deviations from best practices during venous blood sampling. The questionnaire contained questions about patient identification, test request management, test tube labeling, test tube handling, information search procedures and frequencies of error reporting. For content validity, the questionnaire was reviewed by experts on questionnaires and venous blood sampling. For reliability, test-retest statistics were used on the questionnaire answered twice. The final venous blood sampling questionnaire included 19 questions, of which 9 had in total 34 underlying items. It was found to have content validity. The test-retest analysis demonstrated that the items were generally stable. In total, 82% of the items fulfilled the reliability acceptance criteria. Conclusions The questionnaire can be used for assessment of "near miss" practices that could jeopardize patient safety, and offers several benefits over assessing rare adverse events only. The higher frequency of "near miss" practices allows quantitative analysis of the effect of corrective interventions and benchmarking of preanalytical quality not only at the laboratory/hospital level but also at the health care unit/hospital ward level.
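The test-retest check amounts to comparing answers to the same items on two occasions and counting items whose agreement meets an acceptance criterion. A minimal percent-agreement sketch (the responses and the 80% threshold below are hypothetical, not the study's criteria):

```python
def item_agreement(test, retest):
    """Fraction of respondents giving the same answer on both occasions."""
    return sum(a == b for a, b in zip(test, retest)) / len(test)

def fraction_of_stable_items(items_test, items_retest, threshold=0.8):
    """Share of items whose test-retest agreement meets the threshold,
    analogous to reporting '82% of items fulfilled the criteria'."""
    scores = [item_agreement(t, r) for t, r in zip(items_test, items_retest)]
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical answers from 5 respondents to 4 items, asked twice:
t1 = [["y", "y", "n", "y", "n"], ["a", "b", "a", "a", "b"],
      ["1", "2", "1", "1", "1"], ["y", "n", "y", "y", "y"]]
t2 = [["y", "y", "n", "y", "y"], ["a", "b", "b", "a", "b"],
      ["2", "1", "1", "2", "1"], ["y", "n", "y", "y", "y"]]
```

Simple percent agreement is shown for brevity; chance-corrected statistics such as Cohen's kappa are common alternatives for this kind of stability analysis.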

  10. Automated assessments of circumferential strain from cine CMR correlate with LVEF declines in cancer patients early after receipt of cardio-toxic chemotherapy.

    Science.gov (United States)

    Jolly, Marie-Pierre; Jordan, Jennifer H; Meléndez, Giselle C; McNeal, Gary R; D'Agostino, Ralph B; Hundley, W Gregory

    2017-08-02

    In patients with cancer receiving potentially cardio-toxic chemotherapy, measurements of left ventricular (LV) circumferential or longitudinal strain are often used clinically to identify myocardial dysfunction. Using a new software algorithm, we sought to determine in individuals receiving treatment for cancer the association between automated assessments of LV mean mid-wall circumferential strain and conventional measures of LV ejection fraction (EF) both obtained from cardiovascular magnetic resonance (CMR) cine balanced steady-state free-precession (bSSFP) white-blood acquisitions. Before and 3 months after initiating treatment with potentially cardio-toxic chemotherapy, 72 individuals (aged 54 ± 14 years with breast cancer [39%], lymphoma [49%], or sarcoma [12%]) underwent serial CMR cine bSSFP assessments of LV volumes and EF, and mean mid-wall circumferential strain determined from these same cine images as well as from additional tagged CMR images. On the cine images, assessments of strain were obtained using the newly developed deformation-based segmentation algorithm. Assessments of LV volumes/EF from the cine images and strain from tagged CMR were accomplished using commercially available software. All measures were analyzed in a blinded fashion independent of one another. Acceptable measures for the automated assessments of mean mid-wall circumferential strain from the cine images were obtained in 142 of 144 visits (98.6%) with an overall analysis time averaging 6:47 ± 1:06 min. The results from these