WorldWideScience

Sample records for automated video analysis

  1. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful summary of video content in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of videos have inherent structure, such that some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the video compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect that our approach can also be extended to structured video in other domains.

  2. A Modular Approach for Automating Video Analysis

    OpenAIRE

    Nadarajan, Gayathri; Renouf, Arnaud

    2007-01-01

    Automating the steps involved in video processing has yet to be tackled with much success by vision developers and knowledge engineers. This is due to the difficulty of formulating vision problems and their solutions in a generalised manner. In this collaborative work, we introduce a modular approach that utilises ontologies to capture the goals, domain description and capabilities for performing video analysis. This modularisation is tested on real-world videos from an...

  3. Automated Large-Scale Shoreline Variability Analysis From Video

    Science.gov (United States)

    Pearre, N. S.

    2006-12-01

    Land-based video has been used to quantify changes in nearshore conditions for over twenty years. By combining the ability to track rapid, short-term shoreline change and changes associated with longer term or seasonal processes, video has proved to be a cost-effective and versatile tool for coastal science. Previous video-based studies of shoreline change have typically examined the position of the shoreline along a small number of cross-shore lines as a proxy for the continuous coast. The goal of this study is twofold: (1) to further develop automated shoreline extraction algorithms for continuous shorelines, and (2) to track the evolution of a nourishment project at Rehoboth Beach, DE that was concluded in June 2005. Seven cameras are situated approximately 30 meters above mean sea level and 70 meters from the shoreline. Time exposure and variance images are captured hourly during daylight and transferred to a local processing computer. After correcting for lens distortion and geo-rectifying to a shore-normal coordinate system, the images are merged to form a composite planform image of 6 km of coast. Automated extraction algorithms establish shoreline and breaker positions throughout a tidal cycle on a daily basis. Short- and long-term variability in the daily shoreline will be characterized using empirical orthogonal function (EOF) analysis. Periodic sediment volume information will be extracted by incorporating the results of monthly ground-based LIDAR surveys and by correlating the hourly shorelines to the corresponding tide level under conditions with minimal wave activity. The Delaware coast in the area downdrift of the nourishment site is intermittently interrupted by short groins. An Even/Odd analysis of the shoreline response around these groins will be performed. The impact of groins on the sediment volume transport along the coast during periods of accretive and erosive conditions will be discussed. [This work is being supported by DNREC and the
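
    As a rough illustration of the EOF step, a minimal sketch of how daily shorelines might be decomposed with a singular value decomposition; the (days x alongshore positions) data layout, function names, and synthetic input are assumptions for illustration, not details from the abstract:

```python
import numpy as np

def shoreline_eofs(shorelines):
    """EOF analysis of a (days x alongshore positions) shoreline matrix.

    Each row holds the cross-shore shoreline position sampled at fixed
    alongshore locations on one day. Returns the spatial modes, their
    temporal amplitudes, and the variance fraction per mode."""
    anomaly = shorelines - shorelines.mean(axis=0)  # remove the mean shoreline
    u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
    return vt, u * s, s**2 / np.sum(s**2)

# Synthetic stand-in: 120 days x 600 alongshore samples (e.g. 10 m spacing).
rng = np.random.default_rng(0)
data = rng.normal(size=(120, 600)).cumsum(axis=0) * 0.01
modes, amplitudes, variance = shoreline_eofs(data)
print("variance explained by first 3 modes:", variance[:3].round(3))
```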

  4. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Directory of Open Access Journals (Sweden)

    Demir Sumeyra U

    2012-12-01

    Background: Imaging of the human microcirculation in real-time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated system tool that can extract microvasculature information and monitor changes in tissue perfusion quantitatively might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods: The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results: Sublingual microcirculatory videos are recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings are analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm are compared for both healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. Similar, but more variable FCD
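
    A minimal sketch of the pixel-by-pixel differencing idea used above to flag vessels with flow; the threshold values, array shapes, and the density proxy at the end are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def active_vessel_mask(frames, diff_threshold=10.0, active_fraction=0.2):
    """Flag pixels whose intensity fluctuates across frames, i.e. vessels
    and capillaries with flow, in a motion-stabilized grayscale stack.

    frames: (n_frames, h, w). A pixel counts as active when its
    frame-to-frame difference exceeds diff_threshold in at least
    active_fraction of the frame pairs (both values are assumed)."""
    stack = np.asarray(frames, dtype=np.float32)
    diffs = np.abs(np.diff(stack, axis=0))        # pixel-by-pixel differences
    moving = (diffs > diff_threshold).mean(axis=0)
    return moving >= active_fraction

# Crude density proxy: fraction of the image covered by active vessels.
frames = np.random.randint(0, 255, size=(50, 240, 320))
mask = active_vessel_mask(frames)
print(f"active-pixel fraction: {mask.mean():.4f}")
```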

  5. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Institute of Scientific and Technical Information of China (English)

    Ryan DECKER; Joseph DONINI; William GARDNER; Jobin JOHN; Walter KOENIG

    2016-01-01

    This paper describes an approach to identify epicyclic and tricyclic motion during projectile flight caused by mass asymmetries in spin-stabilized projectiles. Flight video was captured following projectile launch of several M110A2E1 155 mm artillery projectiles. These videos were then analyzed using the automated flight video analysis method to attain their initial position and orientation histories. Examination of the pitch and yaw histories clearly indicates that in addition to epicyclic motion’s nutation and precession oscillations, an even faster wobble amplitude is present during each spin revolution, even though some of the amplitudes of the oscillation are smaller than 0.02 degree. The results are compared to a sequence of shots where little appreciable mass asymmetries were present, and only nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product of inertia measurements of the asymmetric projectiles.
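
    A small sketch of how the oscillation frequencies could be read off a measured pitch history with an FFT; the synthetic precession/nutation/wobble frequencies below are invented for illustration and are not the paper's values:

```python
import numpy as np

def motion_frequencies(pitch_deg, frame_rate_hz):
    """Amplitude spectrum of a pitch history: peaks should appear at the
    precession and nutation frequencies, plus a faster wobble component
    for mass-asymmetric projectiles."""
    y = pitch_deg - np.mean(pitch_deg)
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / frame_rate_hz)
    return freqs, spectrum

# Synthetic history sampled at 2000 frames/s; the 5/20/250 Hz components
# are invented for illustration only.
t = np.arange(0, 1.0, 1.0 / 2000.0)
pitch = (1.0 * np.sin(2 * np.pi * 5 * t)
         + 0.5 * np.sin(2 * np.pi * 20 * t)
         + 0.02 * np.sin(2 * np.pi * 250 * t))
freqs, spec = motion_frequencies(pitch, 2000.0)
print("strongest components (Hz):", sorted(freqs[np.argsort(spec)[-3:]]))
```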

  6. Use of automated video analysis for the evaluation of bicycle movement and interaction

    Science.gov (United States)

    Twaddle, Heather; Schendzielorz, Tobias; Fakler, Oliver; Amini, Sasan

    2014-03-01

    With the purpose of developing valid models of microscopic bicycle behavior, a large quantity of video data is collected at three busy urban intersections in Munich, Germany. Due to the volume of data, manual processing is infeasible and an automated or semi-automated analysis method must be implemented. The open source software "Traffic Intelligence" is used and extended to analyze the collected video data with regard to research questions concerning the tactical behavior of bicyclists. In a first step, the feature detection parameters, the tracking parameters and the object grouping parameters are calibrated, making it possible to accurately track and group the objects at intersections used by large volumes of motor vehicles, bicycles and pedestrians. The resulting parameters for the three intersections are presented. A methodology for the classification of road users as cars, bicycles or pedestrians is presented and evaluated. This is achieved by making hypotheses about whether features belong to cars or to bicycles and pedestrians, and using grouping parameters specified for that road user group to cluster the features into objects. These objects are then classified based on their dynamic characteristics. A classification structure for the maneuvers of different road users is presented and future applications are discussed.

  7. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measure non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews each subject following one of two scripted scenarios: in one, the actor shows minimal engagement with the subject; the second includes active listening by the doctor and attentiveness to the subject. We analyze the cross correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, that has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.
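
    A minimal sketch of cross-correlating a motion-energy signal between two people; splitting the frame into fixed left/right person regions is a simplifying assumption of this sketch, not the authors' method:

```python
import numpy as np

def motion_energy(frames, split_col):
    """Per-frame motion-energy proxy for each person, using the left and
    right halves of a fixed camera view as crude person regions."""
    stack = np.asarray(frames, dtype=np.float32)
    diffs = np.diff(stack, axis=0) ** 2           # squared intensity change
    return (diffs[:, :, :split_col].sum(axis=(1, 2)),
            diffs[:, :, split_col:].sum(axis=(1, 2)))

def lagged_cross_correlation(a, b, max_lag=30):
    """Normalized cross-correlation at integer frame lags; an asymmetric
    peak (higher values at positive than at negative lags) suggests that
    one partner tends to follow the other."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return {lag: float(np.mean(a[max(0, lag):len(a) + min(0, lag)] *
                               b[max(0, -lag):len(b) + min(0, -lag)]))
            for lag in range(-max_lag, max_lag + 1)}
```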

  8. Quantitative analysis of spider locomotion employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    The locomotor activity of adult specimens of the wolf spider Pardosa amentata was measured in an open-field setup, using computer-automated colour object video tracking. The x,y coordinates of the animal in the digitized image of the test arena were recorded three times per second during four con...

  9. Towards an automated analysis of video-microscopy images of fungal morphogenesis

    Directory of Open Access Journals (Sweden)

    Diéguez-Uribeondo, Javier

    2005-06-01

    Fungal morphogenesis is an exciting field of cell biology, and several mathematical models have been developed to describe it. These models require experimental evidence for corroboration and, therefore, there is a continuous search for new microscopy and image analysis techniques. In this work, we have used a Canny-edge-detector based technique to automate the generation of hyphal profiles and the calculation of morphogenetic parameters such as diameter, elongation rates and hyphoid fitness. The results show that the data obtained with this technique are similar to published data generated with manual tracing techniques carried out on the same species or genus. Thus, we show that the application of an edge-detector based technique to hyphal growth represents an efficient and accurate method to study hyphal morphogenesis. This represents the first step towards an automated analysis of video-microscopy images of fungal morphogenesis.
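
    A short sketch of Canny-based profile extraction with a pixel-unit diameter estimate; the blur kernel and hysteresis thresholds are assumed values, and real measurements would need spatial calibration:

```python
import cv2
import numpy as np

def hyphal_profile(image_gray, low=50, high=150):
    """Canny-based outline of a roughly horizontal hypha plus a pixel-unit
    diameter estimate (top-to-bottom edge extent averaged over columns)."""
    blurred = cv2.GaussianBlur(image_gray, (5, 5), 0)
    edges = cv2.Canny(blurred, low, high)
    widths = []
    for x in range(edges.shape[1]):
        rows = np.flatnonzero(edges[:, x])
        if rows.size >= 2:                    # column crosses both edges
            widths.append(rows[-1] - rows[0])
    diameter_px = float(np.mean(widths)) if widths else 0.0
    return edges, diameter_px
```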

  10. Artificial Video for Video Analysis

    Science.gov (United States)

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  11. Pitch and Yaw Trajectory Measurement Comparison Between Automated Video Analysis and Onboard Sensor Data Analysis Techniques

    Science.gov (United States)

    2013-09-01

    are differenced from the corresponding elevation and azimuth components of the projectile's velocity vector, and corrected for the azimuth plane... analyzed, the measured pitch angle was accurate to within a small fraction of a degree for each frame (9). These results validated the projectile... video cameras. For this experiment that rate was 2000 frames/s. After the heading vector estimates are generated by each method, they are differenced

  12. Automated high-speed video analysis of the bubble dynamics in subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Maurus, Reinhold; Ilchenko, Volodymyr; Sattelmayer, Thomas [Technische Univ. Muenchen, Lehrstuhl fuer Thermodynamik, Garching (Germany)

    2004-04-01

    Subcooled flow boiling is a commonly applied technique for achieving efficient heat transfer. In the study, an experimental investigation in the nucleate boiling regime was performed for water circulating in a closed loop at atmospheric pressure. The test section consists of a rectangular channel with a one-side heated copper strip and very good optical access. For the optical observation of the bubble behaviour, high-speed cinematography is used. Automated image processing and analysis algorithms developed by the authors were applied for a wide range of mass flow rates and heat fluxes in order to extract characteristic length and time scales of the bubbly layer during the boiling process. Using this methodology, a huge number of bubble cycles could be analysed. The structure of the developed algorithms for the detection of the bubble diameter, the bubble lifetime, the lifetime after the detachment process and the waiting time between two bubble cycles is described. Subsequently, the results from using these automated procedures are presented. A remarkable novelty is the presentation of all results as distribution functions. This is of physical importance because the commonly applied spatial and temporal averaging leads to a loss of information and, moreover, to an unjustified deterministic view of the boiling process, which exhibits in reality a very wide spread of bubble sizes and characteristic times. The results show that the mass flux dominates the temporal bubble behaviour. An increase of the liquid mass flux reveals a strong decrease of the bubble lifetime and waiting time. In contrast, the variation of the heat flux has a much smaller impact. It is shown in addition that the investigation of the bubble history using automated algorithms delivers novel information with respect to the bubble lift-off probability. (Author)

  13. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis where animal movement tracking (by frame subtraction) is accompanied by species identification from animals' outlines by Fourier Descriptors and Standard K-Nearest Neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analyzed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results are discussed in view of the technological bottleneck that currently constrains the exploration of the deep sea.
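
    A compact sketch of the two recognition ingredients named above, Fourier descriptors of an outline and a K-nearest-neighbours vote; the normalization choices and parameter values are assumptions:

```python
import numpy as np

def fourier_descriptors(outline_xy, n_coeffs=16):
    """Shape signature of a closed outline extracted from a video frame.

    outline_xy: (n_points, 2) boundary samples. The contour is treated as
    a complex signal; magnitudes of low-frequency Fourier coefficients are
    divided by the first harmonic, removing scale (and, via magnitudes,
    rotation and starting-point dependence)."""
    z = outline_xy[:, 0] + 1j * outline_xy[:, 1]
    coeffs = np.fft.fft(z - z.mean())
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / mags[0]

def knn_classify(train_X, train_y, query, k=3):
    # Plain k-nearest-neighbours majority vote in descriptor space.
    dists = np.linalg.norm(train_X - query, axis=1)
    votes = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```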

  14. Video analysis platform

    OpenAIRE

    FLORES, Pablo; Arias, Pablo; Lecumberry, Federico; Pardo, Álvaro

    2006-01-01

    In this article we present the Video Analysis Platform (VAP) which is an open source software framework for video analysis, processing and description. The main goals of VAP are: to provide a multiplatform system which allows the easy implementation of video algorithms, provide structures and algorithms for the segmentation of video data in its different levels of abstraction: shots, frames, objects, regions, etc, permit the generation and comparison of MPEG7-like descriptors, and develop tes...

  15. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  16. Customizing Multiprocessor Implementation of an Automated Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Morteza Biglari-Abhari

    2006-09-01

    This paper reports on the development of an automated embedded video surveillance system using two customized embedded RISC processors. The application is partitioned into object tracking and video stream encoding subsystems. The real-time object tracker is able to detect and track moving objects in video images of scenes taken by stationary cameras. It is based on the block-matching algorithm. The video stream encoding involves the optimization of an International Telecommunications Union (ITU-T) H.263 baseline video encoder for quarter common intermediate format (QCIF) and common intermediate format (CIF) resolution images. The two subsystems running on two processor cores were integrated and a simple protocol was added to realize the automated video surveillance system. The experimental results show that the system is capable of detecting, tracking, and encoding QCIF and CIF resolution images with object movements in them in real-time. With its low cycle count, low transistor count, and low power consumption requirements, the system is ideal for deployment in remote locations.
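
    A minimal sketch of the block-matching step at the core of such trackers, as an exhaustive sum-of-absolute-differences search; the block and search-window sizes are typical assumed values:

```python
import numpy as np

def block_match(prev_frame, next_frame, top, left, block=16, search=8):
    """Motion vector of one block by exhaustive SAD search: slide the
    reference block over a +/- search window in the next frame and keep
    the displacement with the smallest sum of absolute differences."""
    ref = prev_frame[top:top + block, left:left + block].astype(np.int32)
    best_sad, best = np.inf, (0, 0)
    h, w = next_frame.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue
            cand = next_frame[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best  # (dy, dx) displacement in pixels
```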

  17. Automated sea floor extraction from underwater video

    Science.gov (United States)

    Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike

    2016-05-01

    Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create image optical and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of location and orientation of the image frame in mosaic. Image registration improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information. A disparity map was created using the parallax from overlapping viewpoints of a given area and the relative height data was utilized to create a three-dimensional, textured elevation map.
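
    A small sketch of OCR by two-dimensional correlation against a known character set, here via OpenCV's normalized template matching; the correlation threshold is assumed, and a real implementation would add non-maximum suppression to avoid duplicate hits per glyph:

```python
import cv2
import numpy as np

def read_overlay(metadata_strip, templates, min_score=0.8):
    """Recover burned-in metadata characters by normalized 2-D correlation
    against a known character set.

    metadata_strip: grayscale image of the on-screen text row.
    templates: dict mapping each character to its grayscale glyph image."""
    hits = []
    for ch, tpl in templates.items():
        score = cv2.matchTemplate(metadata_strip, tpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(score >= min_score)
        hits.extend((x, ch) for x in xs)
    hits.sort()  # left-to-right reading order
    return "".join(ch for _, ch in hits)
```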

  18. Optical tracking of embryonic vertebrates behavioural responses using automated time-resolved video-microscopy system

    Science.gov (United States)

    Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald

    2016-12-01

    The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity on embryonic stages of fish exposed to the test chemical. The current standard, similar to most traditional methods for evaluating aquatic toxicity, provides, however, little understanding of effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction can occur at sub-lethal concentrations well below the LC10. Behavioral studies can, therefore, provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking the behavioral responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video-microscopy. We have employed miniaturized CMOS cameras to perform high definition video recording and analysis of the earliest vertebrate behavioral responses. The main objective was to develop biocompatible embryo positioning structures that were suitable for high-throughput imaging, as well as video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.

  19. Effects of the pyrethroid insecticide Cypermethrin on the locomotor activity of the wolf spider Pardosa amentata: quantitative analysis employing computer-automated video tracking.

    Science.gov (United States)

    Baatrup, E; Bayley, M

    1993-10-01

    Wildlife in areas surrounding arable land is almost inevitably exposed to pesticide spray. Even at doses far below the lethal level, this presents a threat to vulnerable species. The widely used pyrethroid insecticides, including Cypermethrin, are known for their direct effect on the locomotor apparatus of animals, inducing varying degrees of paresis. Quantitative measurements of the voluntary locomotion of animals express an integrated response to changes in biochemical and physiological processes. In the present study, the effect of Cypermethrin on the voluntary locomotion of the wolf spider Pardosa amentata was quantified in an open field setup, using computer-automated video tracking. Each spider was recorded for 24 hr prior to pesticide exposure. After topical application of 4.6 ng of Cypermethrin, the animal was recorded for a further 48 hr. Finally, after 9 days of recovery, the spider was tracked for 24 hr. Initially, Cypermethrin induced an almost instant paralysis of the hind legs and a lack of coordination in movement seen in the jagged and circular track appearance. This phase culminated in total quiescence, lasting approximately 12 hr in males and 24-48 hr in females. Following paresis, the effects of Cypermethrin were evident in reduced path length, average velocity, and maximum velocity and an increase in the time spent in quiescence. Also, the pyrethroid disrupted the consistent distributions of walking velocity and periods of quiescence seen prior to pesticide application. Our results suggest that normal locomotion had returned 9 days after Cypermethrin application, but that recovery of high velocities was still incomplete.
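
    A minimal sketch of the locomotion measures named above (path length, average and maximum velocity, time in quiescence) computed from tracked coordinates; the sampling rate matches the three-per-second recording described in these studies, while the quiescence speed cutoff is an assumed tuning value:

```python
import numpy as np

def locomotion_metrics(xy, fps=3.0, quiescence_speed=0.1):
    """Summary statistics from tracked (x, y) coordinates.

    xy: (n_samples, 2) positions sampled at fps (three per second here);
    quiescence_speed is an assumed cutoff in position units per second."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    speed = steps * fps  # instantaneous speed per sample interval
    return {
        "path_length": float(steps.sum()),
        "average_velocity": float(speed.mean()),
        "maximum_velocity": float(speed.max()),
        "quiescence_fraction": float(np.mean(speed < quiescence_speed)),
    }
```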

  20. Toy Trucks in Video Analysis

    DEFF Research Database (Denmark)

    2015-01-01

    Video field studies of people who could be potential users are widespread in design projects. How to analyse such video is, however, often challenging, as it is time consuming and requires a trained eye to unlock experiential knowledge in people's practices. In our work with industrialists, we have discovered that using scale-models like toy trucks has a strongly encouraging effect on developers/designers to collaboratively make sense of field videos. In our analysis of such scale-model sessions, we found some quite fundamental patterns of how participants utilise objects; the participants build shared...

  1. Contaminant analysis automation demonstration proposal

    Energy Technology Data Exchange (ETDEWEB)

    Dodson, M.G.; Schur, A.; Heubach, J.G.

    1993-10-01

    The nation-wide and global need for environmental restoration and waste remediation (ER&WR) presents significant challenges to the analytical chemistry laboratory. The expansion of ER&WR programs forces an increase in the volume of samples processed and the demand for analysis data. To handle this expanding volume, productivity must be increased. However, the need for significantly increased productivity confronts a contaminant analysis process which is costly in time, labor, equipment, and safety protection. Laboratory automation offers a cost-effective approach to meeting current and future contaminant analytical laboratory needs. The proposed demonstration will present a proof-of-concept automated laboratory conducting varied sample preparations. This automated process also highlights a graphical user interface that provides supervisory control and monitoring of the automated process. The demonstration provides affirming answers to the following questions about laboratory automation: Can preparation of contaminants be successfully automated? Can a full-scale working proof-of-concept automated laboratory be developed that is capable of preparing contaminant and hazardous chemical samples? Can the automated processes be seamlessly integrated and controlled? Can the automated laboratory be customized through readily convertible design? And can automated sample preparation concepts be extended to the other phases of the sample analysis process? To fully reap the benefits of automation, four human factors areas should be studied and the outputs used to increase the efficiency of laboratory automation. These areas include: (1) laboratory configuration, (2) procedures, (3) receptacles and fixtures, and (4) the human-computer interface for the full automated system and complex laboratory information management systems.

  2. Automated Identification and Reconstruction of YouTube Video Access

    Directory of Open Access Journals (Sweden)

    Jonathan Patterson

    2012-06-01

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user's interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.
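
    A minimal sketch of one artefact-parsing step such a prototype might perform, recovering 11-character video IDs from watch-page URLs found in recovered artefact text; the regular expression is an illustrative assumption, not the paper's code:

```python
import re

# Matches watch-page URLs as they may appear in cache or history artefacts.
WATCH_URL = re.compile(r"youtube\.com/watch\?[^\"'\s]*v=([\w-]{11})")

def extract_video_ids(artefact_text):
    """Return the distinct 11-character YouTube video IDs referenced in a
    blob of recovered browser-artefact text, in order of first appearance."""
    seen, ids = set(), []
    for match in WATCH_URL.finditer(artefact_text):
        vid = match.group(1)
        if vid not in seen:
            seen.add(vid)
            ids.append(vid)
    return ids

print(extract_video_ids("Visited: https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=1s"))
```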

  3. Automated sugar analysis

    Directory of Open Access Journals (Sweden)

    Tadeu Alcides MARQUES

    2016-03-01

    Sugarcane monosaccharides are reducing sugars, and classical analytical methodologies (Lane-Eynon, Benedict, complexometric-EDTA, Luff-Schoorl, Musson-Walker, Somogyi-Nelson) are based on reducing copper ions in alkaline solutions. In Brazil, certain factories use Lane-Eynon, others use the equipment referred to as “REDUTEC”, and additional factories analyze reducing sugars based on a mathematical model. The objective of this paper is to understand the relationship between variations in millivolts, mass and levels of reducing sugars during the analysis process. Another objective is to generate an automatic model for this process. The work herein uses the equipment referred to as “REDUTEC”, a digital balance, a peristaltic pump, a digital camcorder, math programs and graphics programs. We conclude that the millivolts, mass and levels of reducing sugars exhibit a good mathematical correlation, and the mathematical model generated was benchmarked to low-concentration reducing sugars (<0.3%). Using the model created herein, reducing sugar analyses can be automated using the new equipment.

  4. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, the designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real-time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  5. Automation of the social interaction test by a video-tracking system: behavioural effects of repeated phencyclidine treatment.

    Science.gov (United States)

    Sams-Dodd, F

    1995-07-01

    The social interaction test is a valuable behavioural model for testing anxiolytic and neuroleptic drugs. The test quantifies the level of social behaviour between pairs of rats and it is usually based on manual analysis of behaviour. Advances in computer technology have made it possible to track the movements of pairs of rats in an arena, and the present paper describes the automation of the social interaction test by the commercial video-tracking programme, the EthoVision system. The ability of the automated system to correctly measure the social behaviour of rats is demonstrated by determining a dose-response relationship in the social interaction test for phencyclidine, a psychotomimetic drug that reduces social behaviour between pairs of rats. These data are subsequently analysed by the manual and automated data-acquisition methods and the results are compared. The study shows that the automated data-acquisition method best describes the behavioural effects of phencyclidine in the social interaction test by the locomotor activity of the rats, how much time the rats spend in different sections of the testing arena, and the level of social behaviour. Correlation analysis of the results from the manual and automated data-acquisition methods shows that the social behaviour measured by the automated system corresponds correctly to the social behaviour measured by the manual analysis. The present study has shown that the automated data-acquisition method can quantify locomotor activity, how rats use a testing arena and the level of social behaviour between rats in the social interaction test. The system cannot distinguish between social and aggressive behaviours, and therefore the rats should be tested in an unfamiliar arena to reduce territorial behaviour. Taking this limitation into consideration, the social interaction test can be automated by this computer-based video-tracking system and can be used as a routine test for quantifying the effects of drugs on the

  6. The Comparative study of Automated Face replacement Techniques for Video

    Directory of Open Access Journals (Sweden)

    Harmesh Sanghvi

    2014-03-01

    For entertainment purposes, a computerized special effect referred to as "morphing" has attracted huge attention, and face replacement is one of the most interesting tasks. Face replacement in video is a useful application in the amusement and special effect industries. Though various techniques for face replacement have been developed for single images and are generally applied in animation and morphing, there are few mechanisms that extend these techniques to handle videos automatically. Automatic face replacement in video is not only a fascinating application, but a challenging problem. For face replacement in video, the frame-by-frame manipulation process using software is often time consuming and labor-intensive. Hence, this paper compares several recent automatic face replacement techniques for video to understand the problems to be solved and the shortcomings and benefits of each.

  7. Video Analysis and Modeling in Physics Education

    Science.gov (United States)

    Brown, Doug

    2008-03-01

    The Tracker video analysis program allows users to overlay simple dynamical models on a video clip. Video modeling offers advantages over both traditional video analysis and animation-only modeling. In traditional video analysis, for example, students measure "g" by tracking a dropped or tossed ball, constructing a position or velocity vs. time graph, and interpreting the graphs to obtain initial conditions and acceleration. In video modeling, by contrast, the students interactively construct theoretical force expressions and define initial conditions for a dynamical particle model that synchs with and draws itself on the video. The behavior of the model is thus compared directly with that of the real-world motion. Tracker uses the Open Source Physics code library so sophisticated models are possible. I will demonstrate and compare video modeling with video analysis and I will discuss the advantages of video modeling over animation-only modeling. The Tracker video analysis program is available at: http://www.cabrillo.edu/~dbrown/tracker/.

  8. Video-Based Motion Analysis

    Science.gov (United States)

    French, Paul; Peterson, Joel; Arrighi, Julie

    2005-04-01

    Video-based motion analysis has recently become very popular in introductory physics classes. This paper outlines general recommendations regarding equipment and software; videography issues such as scaling, shutter speed, lighting, background, and camera distance; as well as other methodological aspects. Also described are the measurement and modeling of the gravitational, drag, and Magnus forces on 1) a spherical projectile undergoing one-dimensional motion and 2) a spinning spherical projectile undergoing motion within a plane. Measurement and correction methods are devised for four common, major sources of error: parallax, lens distortion, discretization, and improper scaling.

  9. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment.

    Science.gov (United States)

    Conklin, Emily E; Lee, Kathyann L; Schlabach, Sadie A; Woods, Ian G

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs.

  10. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment

    Science.gov (United States)

    Conklin, Emily E.; Lee, Kathyann L.; Schlabach, Sadie A.; Woods, Ian G.

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior on adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs. PMID:26240518

  11. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is a comprehensive and well-established open source software package that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
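
    A small sketch of how FFmpeg's command line interface can be scripted for batch transcoding of a collection; the codec and quality settings are ordinary ffmpeg options chosen for illustration:

```python
import pathlib
import subprocess

def transcode_collection(src_dir, dst_dir):
    """Batch-normalize a mixed-format video collection to H.264/AAC MP4.

    Flags used are standard ffmpeg options: -i input, -c:v/-c:a codec
    selection, -crf quality, -y overwrite without prompting."""
    dst = pathlib.Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for video in sorted(pathlib.Path(src_dir).glob("*")):
        out = dst / (video.stem + ".mp4")
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(video),
             "-c:v", "libx264", "-crf", "23", "-c:a", "aac", str(out)],
            check=True)
```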

  12. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Science.gov (United States)

    Cordelières, Fabrice P; Petit, Valérie; Kumasaka, Mayuko; Debeir, Olivier; Letort, Véronique; Gallagher, Stuart J; Larue, Lionel

    2013-01-01

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate or an inappropriate acquisition of migratory capacities can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major benefit of iTrack4U is standardization and the absence of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.
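
    A minimal sketch of mean-shift tracking of a single cell using OpenCV's built-in mean-shift; using inverted intensity as the weight image is a simplifying assumption of this sketch and not iTrack4U's actual combined mean-shift design:

```python
import cv2

def track_cell(frames, init_window):
    """Follow one cell across grayscale uint8 frames with mean-shift.

    init_window: (x, y, w, h) box around the cell in the first frame.
    Returns the (x, y) window centre per frame."""
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window, centres = init_window, []
    for frame in frames:
        # Assumption: phase-contrast cells are darker than background, so
        # inverted intensity serves as a crude weight image for mean-shift.
        weight = 255 - frame
        _, window = cv2.meanShift(weight, window, criteria)
        x, y, w, h = window
        centres.append((x + w / 2.0, y + h / 2.0))
    return centres
```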

  13. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Directory of Open Access Journals (Sweden)

    Fabrice P Cordelières

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate or an inappropriate acquisition of migratory capacities can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major benefit of iTrack4U is standardization and the absence of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.

  14. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2005-01-01

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many) scenarios.

  15. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  16. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Breakers are part of electric power system equipment whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, about their failures, testing and repairs, advanced software developments, and a specific automated information system (AIS). A new AIS with the AISV logo was developed at the department "Reliability of power equipment" of AzRDSI of Energy. The main features of AISV are: to provide database security and accuracy; to carry out systematic control of breaker conformity with operating conditions; to estimate individual reliability values and their changes for a given combination of characteristics; and to provide the personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for its realization.

  17. Automated Analysis of Corpora Callosa

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.

    2003-01-01

    This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study shows that fully automated analysis and segmentation of the corpus callosum are feasible.

  18. Automated Sentiment Analysis

    Science.gov (United States)

    2009-06-01

    Sentiment Analysis? Deep philosophical questions could be raised about the nature of sentiment. It is not exactly an emotion – one can choose to... and syntactic analysis easier. It also forestalls misunderstanding; sentences likely to be misclassified (because of unusual style, sarcasm, etc.)... has no emotional significance. We focus on supervised learning for this prototype, though we can alter our program to perform unsupervised learning

  19. Proposal of a method for automatic video summary (Propuesta de un método para el resumen automático de video)

    Directory of Open Access Journals (Sweden)

    Yendrys Blanco Rosabal

    2012-09-01

    Automatic video summarization, within digital image processing, currently a very active field of research, is one of the tools that automatically creates a short version of a video composed of a subset of key frames that should contain as much as possible of the information in the original video. This work aims at developing a simple method that allows automatic video summarization using statistical methods, such as histogram processing, and demonstrates the creation of a video summary.
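
    A minimal sketch of histogram-based key-frame selection in the spirit described above; the bin count and change threshold are assumed tuning values:

```python
import cv2
import numpy as np

def key_frames(video_path, threshold=0.4):
    """Pick key frames wherever the grayscale histogram changes sharply
    relative to the last kept frame."""
    cap = cv2.VideoCapture(video_path)
    kept, last_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if last_hist is None or np.abs(hist - last_hist).sum() > threshold:
            kept.append(index)   # frame differs enough: keep it
            last_hist = hist
        index += 1
    cap.release()
    return kept
```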

  20. Software for automated classification of probe-based confocal laser endomicroscopy videos of colorectal polyps

    Institute of Scientific and Technical Information of China (English)

    Barbara André; Tom Vercauteren; Anna M Buchner; Murli Krishna; Nicholas Ayache; Michael B Wallace

    2012-01-01

    AIM: To support probe-based confocal laser endomicroscopy (pCLE) diagnosis by designing software for the automated classification of colonic polyps. METHODS: Intravenous fluorescein pCLE imaging of colorectal lesions was performed on patients undergoing screening and surveillance colonoscopies, followed by polypectomies. All resected specimens were reviewed by a reference gastrointestinal pathologist blinded to pCLE information. Histopathology was used as the criterion standard for the differentiation between neoplastic and non-neoplastic lesions. The pCLE video sequences, recorded for each polyp, were analyzed offline by 2 expert endoscopists who were blinded to the endoscopic characteristics and histopathology. These pCLE videos, along with their histopathology diagnosis, were used to train the automated classification software, which is a content-based image retrieval technique followed by k-nearest neighbor classification. The performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists was compared with that of automated pCLE software classification. All evaluations were performed using leave-one-patient-out cross-validation to avoid bias. RESULTS: Colorectal lesions (135) were imaged in 71 patients. Based on histopathology, 93 of these 135 lesions were neoplastic and 42 were non-neoplastic. The study found no statistical significance for the difference between the performance of automated pCLE software classification (accuracy 89.6%, sensitivity 92.5%, specificity 83.3%, using leave-one-patient-out cross-validation) and the performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists (accuracy 89.6%, sensitivity 91.4%, specificity 85.7%). There was very low power (< 6%) to detect the observed differences. The 95% confidence intervals for equivalence testing were: -0.073 to 0.073 for accuracy, -0.068 to 0.089 for sensitivity and -0.18 to 0.13 for specificity. The classification software proposed in this study is
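
    A small sketch of leave-one-patient-out cross-validation around a plain k-nearest-neighbor classifier; the feature representation and k are assumptions, since the paper's retrieval-based features are not reproduced here:

```python
import numpy as np

def leave_one_patient_out(features, labels, patient_ids, k=5):
    """Leave-one-patient-out accuracy for a k-NN video classifier: all
    videos of the held-out patient are excluded from the training set,
    which avoids the bias of same-patient neighbours."""
    correct = 0
    for pid in np.unique(patient_ids):
        train = patient_ids != pid
        for x, y in zip(features[~train], labels[~train]):
            dists = np.linalg.norm(features[train] - x, axis=1)
            votes = labels[train][np.argsort(dists)[:k]]
            values, counts = np.unique(votes, return_counts=True)
            correct += int(values[np.argmax(counts)] == y)
    return correct / len(labels)
```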

  1. An Automated Algorithm for Approximation of Temporal Video Data Using Linear Bézier Fitting

    Directory of Open Access Journals (Sweden)

    Murtaza Ali Khan

    2010-05-01

    This paper presents an efficient method for approximation of temporal video data using linear Bézier fitting. For a given sequence of frames, the proposed method estimates the intensity variations of each pixel in the temporal dimension using linear Bézier fitting in Euclidean space. Fitting of each segment ensures an upper bound of specified mean squared error. A break-and-fit criterion is employed to minimize the number of segments required to fit the data. The proposed method is well suited for lossy compression of temporal video data and automates the fitting process of each pixel. Experimental results show that the proposed method yields good results both in terms of objective and subjective quality measurement parameters without causing any blocking artifacts.
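
    A minimal sketch of the break-and-fit idea for one pixel's temporal signal, using least-squares lines (a linear Bézier segment traces the straight line between its two control points); the greedy growth strategy and variable names are illustrative choices:

```python
import numpy as np

def break_and_fit(signal, mse_bound):
    """Greedy break-and-fit segmentation of one pixel's intensity curve.

    Each segment is grown frame by frame and closed just before the
    least-squares line over it would violate the MSE bound, which keeps
    the segment count low while guaranteeing the per-segment error."""
    n = len(signal)
    breakpoints, start, end = [0], 0, 2
    while end <= n:
        t = np.arange(start, end)
        slope, intercept = np.polyfit(t, signal[start:end], 1)
        mse = np.mean((signal[start:end] - (slope * t + intercept)) ** 2)
        if mse > mse_bound:
            breakpoints.append(end - 1)  # close segment one frame earlier
            start = end - 1
            end = start + 2
        else:
            end += 1
    if breakpoints[-1] != n - 1:
        breakpoints.append(n - 1)
    return breakpoints  # frame indices of segment endpoints
```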

  2. Automating Commercial Video Game Development using Computational Intelligence

    Directory of Open Access Journals (Sweden)

    Tse G. Tan

    2011-01-01

    Problem statement: The retail sales of computer and video games have grown enormously during the last few years, not just in the United States (US), but all over the world. This is the reason a lot of game developers and academic researchers have focused on game-related technologies, such as graphics, audio, physics and Artificial Intelligence (AI), with the goal of creating newer and more fun games. In recent years, there has been an increasing interest in game AI for producing intelligent game objects and characters that can carry out their tasks autonomously. Approach: The aim of this study is an attempt to create an autonomous intelligent controller to play the game with no human intervention. Our approach is to use a simple but powerful evolutionary algorithm called Evolution Strategies (ES) to evolve the connection weights and biases of feed-forward Artificial Neural Networks (ANN) and to examine its learning ability through computational experiments in a non-deterministic and dynamic environment, namely the well-known arcade game Ms. Pac-man. The resulting algorithm is referred to as an Evolution Strategies Neural Network, or ESNet. Results: The comparison of ESNet with two random systems, Random Direction (RandDir) and Random Neural Network (RandNet), yields promising results. The contribution of this work also focuses on the comparison between ESNet variants with different mutation probabilities. The results show that ESNet with a high mutation probability records high mean scores compared to the mean scores of RandDir, RandNet and ESNet with a low probability. Conclusion: Overall, the proposed algorithm has a very good performance with a high probability of automatically generating successful game AI controllers for the video game.
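
    A compact sketch of the core loop, a (mu + lambda) evolution strategy mutating flat weight vectors for a small fixed-topology network; the layer sizes and the fitness stand-in (a game-score function to be supplied by the caller) are assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(weights, inputs, hidden=8, outputs=4):
    """Tiny fixed-topology feed-forward net; the flat weight vector is
    unpacked into one hidden layer and one output layer."""
    n_in = inputs.size
    w1 = weights[:n_in * hidden].reshape(n_in, hidden)
    b1 = weights[n_in * hidden:n_in * hidden + hidden]
    off = n_in * hidden + hidden
    w2 = weights[off:off + hidden * outputs].reshape(hidden, outputs)
    b2 = weights[off + hidden * outputs:]
    h = np.tanh(inputs @ w1 + b1)
    return h @ w2 + b2  # action scores, e.g. four movement directions

def evolve(fitness, n_weights, mu=5, lam=20, sigma=0.1, generations=50):
    """(mu + lambda) ES over flat weight vectors. `fitness` would be, e.g.,
    the average game score achieved by a controller using the weights."""
    pop = rng.normal(0, 1, size=(mu, n_weights))
    for _ in range(generations):
        children = np.repeat(pop, lam // mu, axis=0)
        children += rng.normal(0, sigma, size=children.shape)  # mutation
        everyone = np.vstack([pop, children])
        scores = np.array([fitness(w) for w in everyone])
        pop = everyone[np.argsort(scores)[-mu:]]  # keep the best mu
    return pop[-1]
```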

  3. Fully automated (operational) modal analysis

    Science.gov (United States)

    Reynders, Edwin; Houbrechts, Jeroen; De Roeck, Guido

    2012-05-01

    Modal parameter estimation requires a lot of user interaction, especially when parametric system identification methods are used and the modes are selected in a stabilization diagram. In this paper, a fully automated, generally applicable three-stage clustering approach is developed for interpreting such a diagram. It does not require any user-specified parameter or threshold value, and it can be used in an experimental, operational, and combined vibration testing context and with any parametric system identification algorithm. The three stages of the algorithm correspond to the three stages in a manual analysis: setting stabilization thresholds for clearing out the diagram, detecting columns of stable modes, and selecting a representative mode from each column. An extensive validation study illustrates the accuracy and robustness of this automation strategy.

  4. Glyph-Based Video Visualization for Semen Analysis

    KAUST Repository

    Duffy, Brian

    2015-08-01

    © 2013 IEEE. The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.

  5. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  6. A High End Building Automation and Online Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    Iyer Adith Nagarajan

    2015-02-01

    Full Text Available This paper deals with the design and implementation of a building automation and security system which facilitates a healthy, flexible, comfortable and secure environment for the residents. The design incorporates a SIRC (Sony Infrared Remote Control) protocol based infrared remote controller for the wireless operation and control of electrical appliances. Alternatively, the appliances are monitored and controlled via a laptop using a GUI (Graphical User Interface) application built in C#. Apart from automation, this paper also focuses on indoor security. Multiple PIR (Pyroelectric Infrared) sensors are placed within the area under surveillance to detect any intruder. A web camera used to record the video footage is mounted on the shaft of a servo motor to enable angular motion. Depending on which sensor has detected the motion, the ARM7 LPC2148 microcontroller provides appropriate PWM pulses to drive the servo motor, thus adjusting the position and orientation of the camera precisely. OpenCV libraries are used to record a video feed of 5 seconds at 30 frames per second (fps). Video frames are embedded with a date and time stamp. The recorded video is compressed, saved to a predefined directory (for backup) and also uploaded to a specific remote location over the internet using Google Drive for instant access. The entire security system is automatic and does not need any human intervention.
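
    The recording step maps naturally onto the OpenCV API. The hedged sketch below grabs roughly 5 seconds at 30 fps, stamps each frame with the date and time, and writes the clip to disk; the camera index, codec, and file name are illustrative assumptions, and the Google Drive upload is left out.

    ```python
    # Sketch: record ~5 s at 30 fps with a date/time stamp on each frame.
    import cv2
    import datetime

    cap = cv2.VideoCapture(0)                        # assumed camera index
    fourcc = cv2.VideoWriter_fourcc(*"XVID")         # assumed codec
    out = cv2.VideoWriter("intruder_backup.avi", fourcc, 30.0, (640, 480))
    for _ in range(150):                             # 5 s x 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        cv2.putText(frame, stamp, (10, 470), cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (255, 255, 255), 2)         # embed date/time stamp
        out.write(frame)
    cap.release()
    out.release()
    ```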

  7. Automated analysis of complex data

    Science.gov (United States)

    St. Amant, Robert; Cohen, Paul R.

    1994-01-01

    We have examined some of the issues involved in automating exploratory data analysis, in particular the tradeoff between control and opportunism. We have proposed an opportunistic planning solution for this tradeoff, and we have implemented a prototype, Igor, to test the approach. Our experience in developing Igor was surprisingly smooth. In contrast to earlier versions that relied on rule representation, it was straightforward to increment Igor's knowledge base without causing the search space to explode. The planning representation appears to be both general and powerful, with high level strategic knowledge provided by goals and plans, and the hooks for domain-specific knowledge are provided by monitors and focusing heuristics.

  8. Video micro analysis in music therapy research

    DEFF Research Database (Denmark)

    Holck, Ulla; Oldfield, Amelia; Plahl, Christine

    2004-01-01

    Three music therapy researchers from three different countries who have recently completed their PhD theses will each briefly discuss the role of video analysis in their investigations. All three of these research projects have involved music therapy work with children, some of whom were on the autistic spectrum. Brief video clips will be shown and workshop participants will be invited to use different micro analysis approaches to record information from the video recordings. Through this process the participants will explore some of the advantages and disadvantages of quantitative and qualitative approaches to data collection. In addition, participants will be encouraged to reflect on what types of knowledge can be gained from video analyses and to explore the general relevance of video analysis in music therapy research.

  9. AN HMM BASED ANALYSIS FRAMEWORK FOR SEMANTIC VIDEO EVENTS

    Institute of Scientific and Technical Information of China (English)

    You Junyong; Liu Guizhong; Zhang Yaxin

    2007-01-01

    Semantic video analysis plays an important role in the field of machine intelligence and pattern recognition. In this paper, based on the Hidden Markov Model (HMM), a semantic recognition framework for compressed videos is proposed to analyze video events according to six low-level features. After a detailed analysis of video events, the pattern of global motion and five features of the foreground (the principal parts of videos) are employed as the observations of the Hidden Markov Model to classify events in videos. The applications of the proposed framework to some video event detections demonstrate its promise for semantic video analysis.
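
    A minimal sketch of this kind of HMM-based event recognition, assuming the low-level features have already been extracted from the compressed stream: train one Gaussian HMM per event class and label a new sequence by the model with the highest log-likelihood. It uses the hmmlearn package; the class names, feature dimension, and synthetic training data are placeholders.

    ```python
    # Sketch: one Gaussian HMM per event class, classification by likelihood.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    classes = ["goal", "foul"]                       # illustrative event classes
    models = {}
    for c, shift in zip(classes, (0.0, 2.0)):
        seqs = [rng.normal(shift, 1.0, size=(40, 6)) for _ in range(10)]
        X = np.vstack(seqs)                          # concatenated feature sequences
        models[c] = hmm.GaussianHMM(n_components=3, n_iter=50).fit(
            X, lengths=[len(s) for s in seqs])

    test = rng.normal(2.0, 1.0, size=(40, 6))        # unseen feature sequence
    print(max(classes, key=lambda c: models[c].score(test)))
    ```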

  10. Automated pipelines for spectroscopic analysis

    Science.gov (United States)

    Allende Prieto, C.

    2016-09-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10 % of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1 %. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking an ever increasing relevance, and the concept is applying to many more areas, from targeting to analysis. In this paper, I provide a quick overview of recent, ongoing, and upcoming spectroscopic surveys, and the strategies adopted in their automated analysis pipelines.

  11. Automated Video Detection of Epileptic Convulsion Slowing as a Precursor for Post-Seizure Neuronal Collapse.

    Science.gov (United States)

    Kalitzin, Stiliyan N; Bauer, Prisca R; Lamberts, Robert J; Velis, Demetrios N; Thijs, Roland D; Lopes Da Silva, Fernando H

    2016-12-01

    Automated monitoring and alerting for adverse events in people with epilepsy can provide higher security and quality of life for those who suffer from this debilitating condition. Recently, we found a relation between clonic slowing at the end of a convulsive seizure (CS) and the occurrence and duration of a subsequent period of postictal generalized EEG suppression (PGES). Prolonged periods of PGES can be predicted by the amount of progressive increase of interclonic intervals (ICIs) during the seizure. The purpose of the present study is to develop an automated, remote video sensing-based algorithm for real-time detection of significant clonic slowing that can be used to alert for PGES. This may help prevent sudden unexpected death in epilepsy (SUDEP). The technique is based on our previously published optical flow video sequence processing paradigm that was applied for automated detection of major motor seizures. Here, we introduce an integral Radon-like transformation on the time-frequency wavelet spectrum to detect log-linear frequency changes during the seizure. We validate the automated detection and quantification of the ICI increase by comparison to the results from manually processed electroencephalography (EEG) traces as "gold standard". We studied 48 cases of convulsive seizures for which synchronized EEG-video recordings were available. In most cases, the spectral ridges obtained from Gabor-wavelet transformations of the optical flow group velocities were in close proximity to the ICI traces detected manually from EEG data during the seizure. The quantification of the slowing-down effect measured by the dominant angle in the Radon transformed spectrum was significantly correlated with the exponential ICI increase factors obtained from manual detection. If this effect is validated as a reliable precursor of PGES periods that lead to or increase the probability of SUDEP, the proposed method would provide an efficient alerting device.
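
    The detection idea can be sketched as follows: a clonic rhythm whose rate decays exponentially traces a straight ridge in a (time, log-frequency) image, and a Radon projection over candidate angles picks out the ridge's dominant angle. The sketch below substitutes a plain spectrogram for the Gabor-wavelet spectrum and uses illustrative parameters throughout, so it is a caricature of the approach rather than the authors' algorithm.

    ```python
    # Caricature of the slowing detector: log-frequency spectrum + Radon angle.
    import numpy as np
    from scipy.signal import spectrogram
    from skimage.transform import radon

    fs = 25.0                                     # sampling rate of the motion signal
    t = np.arange(0, 60, 1 / fs)
    rate = 4.0 * np.exp(-0.02 * t)                # clonic rate slowing exponentially
    sig = np.sin(2 * np.pi * np.cumsum(rate) / fs)

    f, tt, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=112)
    f_log = np.geomspace(f[1], f[-1], 64)         # log-frequency axis
    img = np.stack([np.interp(f_log, f[1:], S[1:, j])
                    for j in range(S.shape[1])], axis=1)
    img = np.log1p(img / img.max())

    theta = np.linspace(60.0, 120.0, 121)         # candidate ridge angles (deg)
    sino = radon(img, theta=theta, circle=False)
    dominant = theta[np.argmax(sino.var(axis=0))] # sharpest projection = ridge angle
    print("dominant angle (deg):", dominant)
    ```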

  12. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  13. Automated segmentation and tracking of non-rigid objects in time-lapse microscopy videos of polymorphonuclear neutrophils.

    Science.gov (United States)

    Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-02-01

    Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of different steps, starting from single-cell tracking based on a nearest-neighbor approach, detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets indicating a high accuracy for connecting the detected cells between different time points.
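
    Two of the core steps named above, detection from temporal image variation and nearest-neighbor linking, can be sketched compactly. Cluster splitting and the graph-theoretic tracklet merging are omitted, and all thresholds are illustrative assumptions.

    ```python
    # Sketch: frame-differencing detection plus greedy nearest-neighbor linking.
    import numpy as np
    from scipy import ndimage

    def detect(prev, curr, thresh=25, min_area=20):
        """Return centroids of sufficiently large moving regions."""
        diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
        labels, n = ndimage.label(diff)
        centers = ndimage.center_of_mass(diff, labels, range(1, n + 1))
        sizes = ndimage.sum(diff, labels, range(1, n + 1))
        return [c for c, s in zip(centers, sizes) if s >= min_area]

    def link(tracks, detections, max_dist=15):
        """Extend each track with its nearest new detection, if close enough."""
        for track in tracks:
            if not detections:
                break
            d = [np.hypot(track[-1][0] - p[0], track[-1][1] - p[1])
                 for p in detections]
            i = int(np.argmin(d))
            if d[i] <= max_dist:
                track.append(detections.pop(i))
        tracks.extend([[p] for p in detections])  # unmatched points start new tracks
        return tracks
    ```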

  14. Power analysis in flexible automation

    Science.gov (United States)

    Titus, Nathan A.

    1992-12-01

    The performance of an automation or robotic device can be measured in terms of its power efficiency. Screw theory is used to mathematically define the task instantaneously with two screws. The task wrench defines the effect of the device on its environment, and the task twist describes the motion of the device. The tasks can be separated into three task types: kinetic, manipulative, and reactive. Efficiency metrics are developed for each task type. The output power is strictly a function of the task screws, while device input power is shown to be a function of the task, the device Jacobian, and the actuator type. Expressions for input power are developed for two common types of actuators, DC servometers and hydraulic actuators. Simple examples are used to illustrate how power analysis can be used for task/workspace planning, actuator selection, device configuration design, and redundancy resolution.
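
    The instantaneous output power follows directly from the two task screws: it is the reciprocal product of the task wrench (force and torque) with the task twist (linear and angular velocity), P = F·v + τ·ω. A worked example with an assumed measured actuator input power:

    ```python
    # Worked sketch: output power as the wrench-twist product; all values assumed.
    import numpy as np

    wrench = np.array([10.0, 0.0, 5.0, 0.0, 0.2, 0.0])  # [Fx,Fy,Fz,Tx,Ty,Tz] in N and N*m
    twist = np.array([0.1, 0.0, 0.05, 0.0, 1.0, 0.0])   # [vx,vy,vz,wx,wy,wz] in m/s and rad/s

    p_out = wrench @ twist                               # P = F.v + tau.omega, in W
    p_in = 2.5                                           # assumed actuator input power, W
    print(f"output {p_out:.2f} W, efficiency {p_out / p_in:.1%}")
    ```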

  15. AN AUTOMATED ALGORITHM FOR APPROXIMATION OF TEMPORAL VIDEO DATA USING LINEAR BEZIER FITTING

    Directory of Open Access Journals (Sweden)

    Murtaza Ali Khan

    2010-05-01

    Full Text Available This paper presents an efficient method for approximation of temporal video data using linear Bezier fitting. For a given sequence of frames, the proposed method estimates the intensity variations of each pixel in temporal dimension using linear Bezier fitting in Euclidean space. Fitting of each segment ensures upper bound of specified mean squared error. Break and fit criteria is employed to minimize the number of segments required to fit the data. The proposed method is well suitable for lossy compression of temporal video data and automates the fitting process of each pixel. Experimental results show that the proposed method yields good results both in terms of objective and subjective quality measurement parameters without causing any blocking artifacts.

  16. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  17. An Ethnographic Approach to Video Analysis

    DEFF Research Database (Denmark)

    Holck, Ulla

    2007-01-01

    European Music Therapy Congress, June 16-20, 2004 Jyväskylä, Finland. P. 1094-1110. eBook available at MusicTherapyToday.com Vol.6. Issue 4 (November 2005). Holck, U. (2007). An Ethnographic Descriptive Approach to Video Micro Analysis. In: T. Wosch & T. Wigram (Eds.) Microanalysis in music therapy...

  18. Video Game Characters. Theory and Analysis

    Directory of Open Access Journals (Sweden)

    Felix Schröter

    2014-06-01

    Full Text Available This essay develops a method for the analysis of video game characters based on a theoretical understanding of their medium-specific representation and the mental processes involved in their intersubjective construction by video game players. We propose to distinguish, first, between narration, simulation, and communication as three modes of representation particularly salient for contemporary video games and the characters they represent, second, between narrative, ludic, and social experience as three ways in which players perceive video game characters and their representations, and, third, between three dimensions of video game characters as ‘intersubjective constructs’, which usually are to be analyzed not only as fictional beings with certain diegetic properties but also as game pieces with certain ludic properties and, in those cases in which they function as avatars in the social space of a multiplayer game, as representations of other players. Having established these basic distinctions, we proceed to analyze their realization and interrelation by reference to the character of Martin Walker from the third-person shooter Spec Ops: The Line (Yager Development 2012), the highly customizable player-controlled characters from the role-playing game The Elder Scrolls V: Skyrim (Bethesda 2011), and the complex multidimensional characters in the massively multiplayer online role-playing game Star Wars: The Old Republic (BioWare 2011-2014).

  19. Automated Pipelines for Spectroscopic Analysis

    CERN Document Server

    Prieto, Carlos Allende

    2016-01-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking an ever increasing relevance, and the concept is applying to many more areas, from targeting to analysis. In this paper, I provide a quick overvie...

  20. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
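
    One core computation behind such a mosaicking service can be sketched with standard OpenCV calls: estimate a homography between two overlapping frames from matched ORB features with RANSAC, then warp one frame into the other's coordinate system. The file names are placeholders, a real mosaic would accumulate many frames and blend seams, and this is not MDA's implementation.

    ```python
    # Sketch: pairwise frame alignment for mosaicking via ORB + RANSAC homography.
    import cv2
    import numpy as np

    a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
    b = cv2.imread("frame_010.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(a, None)
    kb, db = orb.detectAndCompute(b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # a -> b mapping
    mosaic = cv2.warpPerspective(a, H, (b.shape[1] * 2, b.shape[0]))
    mosaic[:, :b.shape[1]] = np.maximum(mosaic[:, :b.shape[1]], b)
    cv2.imwrite("mosaic.png", mosaic)
    ```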

  1. Automated detection of feeding strikes by larval fish using continuous high-speed digital video: a novel method to extract quantitative data from fast, sparse kinematic events.

    Science.gov (United States)

    Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi

    2016-06-01

    Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors.

  2. Feasibility Analysis of Crane Automation

    Institute of Scientific and Technical Information of China (English)

    DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing

    2006-01-01

    This paper summarizes the modeling methods and the open-loop and closed-loop control techniques of various forms of cranes worldwide, and discusses their feasibilities and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applied modeling methods and feasible control techniques and demonstrate the feasibility of crane automation.

  3. Semi-automated detection of fractional shortening in zebrafish embryo heart videos

    Directory of Open Access Journals (Sweden)

    Nasrat Sara

    2016-09-01

    Full Text Available Quantifying cardiac functions in model organisms like embryonic zebrafish is of high importance in small molecule screens for new therapeutic compounds. One relevant cardiac parameter is the fractional shortening (FS). A method for semi-automatic quantification of FS in video recordings of zebrafish embryo hearts is presented. The software provides automated visual information about the end-systolic and end-diastolic stages of the heart by displaying corresponding colored lines in a Motion-mode display. After manually marking the ventricle diameters in frames of end-systolic and end-diastolic stages, the FS is calculated. The software was evaluated by comparing the results of the determination of FS with results obtained from another established method. Correlations of 0.96 < r < 0.99 between the two methods were found, indicating that the new software provides comparable results for the determination of the FS.
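
    The quantity computed in the final step is the standard fractional shortening, FS = (EDD - ESD) / EDD, from the manually marked end-diastolic (EDD) and end-systolic (ESD) ventricle diameters, usually reported as a percentage. A one-function sketch:

    ```python
    # Fractional shortening from the two marked ventricle diameters.
    def fractional_shortening(edd_um: float, esd_um: float) -> float:
        """Fractional shortening in percent from diameters in micrometers."""
        return (edd_um - esd_um) / edd_um * 100.0

    print(fractional_shortening(edd_um=250.0, esd_um=180.0))  # -> 28.0
    ```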

  4. Automated analysis of 3D echocardiography

    NARCIS (Netherlands)

    Stralen, Marijn van

    2009-01-01

    In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming and is associated with inter-observer and inter-institutional variability. Methods for reconstruction o

  5. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...

  6. Automated Technology for Verificiation and Analysis

    DEFF Research Database (Denmark)

    This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of-the-art research on theoretical and practical aspects of automated analysis, verification, and synthesis. Among 74 research papers and 10 tool papers submitted to ATVA 2009, the Program Committee accepted 23 as regular papers and 3 as tool papers. In all, 33 experts from 17 countries worked hard to make sure...

  7. Automation of the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    A study is reported of the feasibility of using a multi-jointed general-purpose robot for the automated analysis of moisture, volatile matter, ash and total post-combustion sulfur in coal and coke. The results obtained with an automated system are compared with those of conventional manual methods. The design of the robot hand and the safety measures provided are now both fully satisfactory, and the analytic values obtained exhibit little scatter. It is concluded that the use of this robot system results in a better working environment and in considerable labour saving. Applications to other tasks are under development.

  8. Automated assessment of Pavlovian conditioned freezing and shock reactivity in mice using the VideoFreeze system

    Directory of Open Access Journals (Sweden)

    Stephan G Anagnostaras

    2010-09-01

    Full Text Available The Pavlovian conditioned freezing paradigm has become a prominent mouse and rat model of learning and memory, as well as of pathological fear. Due to its efficiency, reproducibility, and well-defined neurobiology, the paradigm has become widely adopted in large-scale genetic and pharmacological screens. However, one major shortcoming of the use of freezing behavior has been that it has required the use of tedious hand scoring, or a variety of proprietary automated methods that are often poorly validated or difficult to obtain and implement. Here we report an extensive validation of the Video Freeze system in mice, a turn-key all-inclusive system for fear conditioning in small animals. Using digital video and near-infrared lighting, the system achieved outstanding performance in scoring both freezing and movement. Given the large-scale adoption of the conditioned freezing paradigm, we encourage similar validation of other automated systems for scoring freezing, or other behaviors.

  9. Video Analysis and Repackaging for Distance Education

    CERN Document Server

    Ram, A Ranjith

    2012-01-01

    This book presents various video processing methodologies that are useful for distance education. The motivation is to devise new multimedia technologies that are suitable for better representation of instructional videos by exploiting the temporal redundancies present in the original video. This solves many of the issues related to the memory and bandwidth limitation of lecture videos. The various methods described in the book focus on a key-frame based approach which is used to time shrink, repackage and retarget instructional videos. All the methods need a preprocessing step of shot detection.

  10. Web Video Event Recognition by Semantic Analysis From Ubiquitous Documents.

    Science.gov (United States)

    Yu, Litao; Yang, Yang; Huang, Zi; Wang, Peng; Song, Jingkuan; Shen, Heng Tao

    2016-12-01

    In recent years, the task of event recognition from videos has attracted increasing interest in multimedia area. While most of the existing research was mainly focused on exploring visual cues to handle relatively small-granular events, it is difficult to directly analyze video content without any prior knowledge. Therefore, synthesizing both the visual and semantic analysis is a natural way for video event understanding. In this paper, we study the problem of Web video event recognition, where Web videos often describe large-granular events and carry limited textual information. Key challenges include how to accurately represent event semantics from incomplete textual information and how to effectively explore the correlation between visual and textual cues for video event understanding. We propose a novel framework to perform complex event recognition from Web videos. In order to compensate the insufficient expressive power of visual cues, we construct an event knowledge base by deeply mining semantic information from ubiquitous Web documents. This event knowledge base is capable of describing each event with comprehensive semantics. By utilizing this base, the textual cues for a video can be significantly enriched. Furthermore, we introduce a two-view adaptive regression model, which explores the intrinsic correlation between the visual and textual cues of the videos to learn reliable classifiers. Extensive experiments on two real-world video data sets show the effectiveness of our proposed framework and prove that the event knowledge base indeed helps improve the performance of Web video event recognition.

  11. Experiments and video analysis in classical mechanics

    CERN Document Server

    de Jesus, Vitor L B

    2017-01-01

    This book is an experimental physics textbook on classical mechanics focusing on the development of experimental skills by means of discussion of different aspects of the experimental setup and the assessment of common issues such as accuracy and graphical representation. The most important topics of an experimental physics course on mechanics are covered and the main concepts are explored in detail. Each chapter didactically connects the experiment and the theoretical models available to explain it. Real data from the proposed experiments are presented and a clear discussion over the theoretical models is given. Special attention is also dedicated to the experimental uncertainty of measurements and graphical representation of the results. In many of the experiments, the application of video analysis is proposed and compared with traditional methods.

  12. Descriptive analysis of YouTube music therapy videos.

    Science.gov (United States)

    Gooding, Lori F; Gregory, Dianne

    2011-01-01

    The purpose of this study was to conduct a descriptive analysis of music therapy-related videos on YouTube. Preliminary searches using the keywords music therapy, music therapy session, and "music therapy session" resulted in listings of 5000, 767, and 59 videos respectively. The narrowed down listing of 59 videos was divided between two investigators and reviewed in order to determine their relationship to actual music therapy practice. A total of 32 videos were determined to be depictions of music therapy sessions. These videos were analyzed using a 16-item investigator-created rubric that examined both video specific information and therapy specific information. Results of the analysis indicated that audio and visual quality was adequate, while narrative descriptions and identification information were ineffective in the majority of the videos. The top 5 videos (based on the highest number of viewings in the sample) were selected for further analysis in order to investigate demonstration of the Professional Level of Practice Competencies set forth in the American Music Therapy Association (AMTA) Professional Competencies (AMTA, 2008). Four of the five videos met basic competency criteria, with the quality of the fifth video precluding evaluation of content. Of particular interest is the fact that none of the videos included credentialing information. Results of this study suggest the need to consider ways to ensure accurate dissemination of music therapy-related information in the YouTube environment, ethical standards when posting music therapy session videos, and the possibility of creating AMTA standards for posting music therapy related video.

  13. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  14. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  15. Real-time video-image analysis

    Science.gov (United States)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    Digitizer and storage system allow rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially-available, charge-injection, solid-state TV cameras as sensors. It can continuously update its memory with each frame of video signal, or it can hold given frame in memory. In either mode, it generates composite video output signal representing digitized image in memory.

  16. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit an inherent long-range dependency, that is, a fractal property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. From the multifractal spectra of the frame-size video traces it was shown that a higher compression ratio produces broader and less regular MF spectra, indicating a more multifractal nature and the existence of additive components in the video traces. Considering individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frames on the whole MF spectrum. Since compressed video occupies a major part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a previously derived MF spectrum of an observed signal it is possible to recognize and extract parts of the signal which are characterized by particular values of multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.

  17. Two video analysis applications using foreground/background segmentation

    OpenAIRE

    Zivkovic, Z.; Petkovic, M; Mierlo, van, B.C.; Keulen, van, H.; Heijden, van der, RW Rob; Jonker, W.; Rijnierse, E.

    2003-01-01

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with the domain knowledge are often enough to extract the needed high-level semantics from the video material. In this paper we present two automatic systems for video analysis and indexing. In both sy...

  18. An Analysis of a Video Game

    Science.gov (United States)

    Allain, Rhett; Williams, Richard

    2009-01-01

    Suppose we had a brand new world to study--a world that possibly works with a different set of principles, a non-Newtonian world. Maybe this world is Newtonian, maybe it isn't. This world exists in video games, and it is open for exploration. Most video games try to incorporate realistic physics, but sometimes this does not happen. The obvious…

  19. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  20. Failure modes and effects analysis automation

    Science.gov (United States)

    Kamhieh, Cynthia H.; Cutts, Dannie E.; Purves, R. Byron

    1988-01-01

    A failure modes and effects analysis (FMEA) assistant was implemented as a knowledge based system and will be used during design of the Space Station to aid engineers in performing the complex task of tracking failures throughout the entire design effort. The three major directions in which automation was pursued were the clerical components of the FMEA process, the knowledge acquisition aspects of FMEA, and the failure propagation/analysis portions of the FMEA task. The system is accessible to design, safety, and reliability engineers at single-user workstations and, although not designed to replace conventional FMEA, it is expected to decrease by many man-years the time required to perform the analysis.

  1. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    Science.gov (United States)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  2. Proximate analysis by automated thermogravimetry

    Energy Technology Data Exchange (ETDEWEB)

    Elder, J.P.

    1983-05-01

    A study has been made of the use of the Perkin-Elmer thermogravimetric instrument TGS-2, under the control of the System 4 microprocessor for the automatic proximate analysis of solid fossil fuels and related matter. The programs developed are simple to operate, and do not require detailed temperature calibration of the instrumental system. They have been tested with coals of varying rank, biomass samples and Devonian oil shales all of which were of special importance to the State of Kentucky. Precise, accurate data conforming to ASTM specifications were obtained. The simplicity of the technique suggests that it may complement the classical ASTM method and could be used when this latter procedure cannot be employed. However, its adoption as a standardized method must await the development of statistical data resulting from interlaboratory testing on a variety of fossil fuels. (9 refs.)

  3. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.; Browning, Nigel D.

    2015-09-23

    Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occur during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time-consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video including the frame quality, intra-texture and predicted texture bits, forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
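
    The statistics-mining step can be sketched as a log parse plus a simple outlier score. Two-pass encoder logs from avconv/ffmpeg record per-frame key:value fields (e.g., quality q, intra and predicted texture bits itex/ptex, motion-vector bits mv), though the exact field names and the log file name vary by version and are assumptions here, as is the z-score event heuristic.

    ```python
    # Hedged sketch: mine one per-frame statistic from a first-pass encoder log
    # and flag frames whose value jumps, as candidate events.
    import re
    import numpy as np

    def parse_passlog(path, key="ptex"):
        """Collect one per-frame key:value statistic from a two-pass log."""
        vals = []
        with open(path) as fh:
            for line in fh:
                m = re.search(rf"\b{key}:(-?\d+(?:\.\d+)?)", line)
                if m:
                    vals.append(float(m.group(1)))
        return np.array(vals)

    ptex = parse_passlog("av2pass-0.log")            # assumed log file name
    z = np.abs(ptex - ptex.mean()) / ptex.std()      # per-frame outlier score
    print("candidate event frames:", np.flatnonzero(z > 3.0))
    ```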

  4. Nonlinear Dynamic Analysis of MPEG-4 Video Traffic

    Institute of Scientific and Technical Information of China (English)

    GE Fei; CAO Yang; WANG Yuan-ni

    2005-01-01

    The main research motive is to analyze and verify the inherent nonlinear character of MPEG-4 video. The power spectral density estimation of the video traffic describes its 1/f^β and periodic characteristics. Principal component analysis of the reconstructed space dimensions shows that only several principal components can represent all dimensions. The correlation dimension analysis proves its fractal characteristic. To accurately compute the largest Lyapunov exponent, the video traffic is divided into many parts, and the largest Lyapunov exponent spectrum is calculated separately for each using the small-data-sets method. The largest Lyapunov exponent spectrum shows that there exists abundant nonlinear chaos in MPEG-4 video traffic. The conclusion can be made that MPEG-4 video traffic has complex nonlinear behavior and can be characterized by its power spectral density, principal components, correlation dimension and largest Lyapunov exponent, besides its common statistics.

  5. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    We propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  6. Flux-P: Automating Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Birgitta E. Ebert

    2012-11-01

    Full Text Available Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly and have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Exemplarily based on the FiatFlux software, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.

  7. Protocol for Data Collection and Analysis Applied to Automated Facial Expression Analysis Technology and Temporal Analysis for Sensory Evaluation.

    Science.gov (United States)

    Crist, Courtney A; Duncan, Susan E; Gallagher, Daniel L

    2016-08-26

    We demonstrate a method for capturing emotional response to beverages and liquefied foods in a sensory evaluation laboratory using automated facial expression analysis (AFEA) software. Additionally, we demonstrate a method for extracting relevant emotional data output and plotting the emotional response of a population over a specified time frame. By time pairing each participant's treatment response to a control stimulus (baseline), the overall emotional response over time and across multiple participants can be quantified. AFEA is a prospective analytical tool for assessing unbiased response to food and beverages. At present, most research has mainly focused on beverages. Methodologies and analyses have not yet been standardized for the application of AFEA to beverages and foods; however, a consistent standard methodology is needed. Optimizing video capture procedures and resulting video quality aids in a successful collection of emotional response to foods. Furthermore, the methodology of data analysis is novel for extracting the pertinent data relevant to the emotional response. The combinations of video capture optimization and data analysis will aid in standardizing the protocol for automated facial expression analysis and interpretation of emotional response data.
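
    The time-pairing analysis reduces to aligning each participant's treatment and baseline time series and differencing them. A minimal pandas sketch follows, with illustrative column names rather than the AFEA software's actual export format.

    ```python
    # Sketch: pair treatment emotion scores with the same participant's baseline.
    import pandas as pd

    baseline = pd.DataFrame({"t": [0.0, 0.5, 1.0], "happy": [0.10, 0.12, 0.11]})
    treatment = pd.DataFrame({"t": [0.0, 0.5, 1.0], "happy": [0.30, 0.42, 0.38]})

    paired = treatment.merge(baseline, on="t", suffixes=("_trt", "_base"))
    paired["happy_delta"] = paired["happy_trt"] - paired["happy_base"]
    print(paired[["t", "happy_delta"]])      # emotional response over time
    ```

    Averaging these per-participant deltas frame by frame then gives the population response curve described in the record.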

  8. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

    Full Text Available The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  9. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  10. NEW TECHNIQUES USED IN AUTOMATED TEXT ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. Istrate

    2010-12-01

    Full Text Available Automated analysis of natural language texts is one of the most important knowledge discovery tasks for any organization. According to Gartner Group, almost 90% of knowledge available at an organization today is dispersed throughout piles of documents buried within unstructured text. Analyzing huge volumes of textual information is often involved in making informed and correct business decisions. Traditional analysis methods based on statistics fail to help in processing unstructured texts, and society is in search of new technologies for text analysis. There exist a variety of approaches to the analysis of natural language texts, but most of them do not provide results that could be successfully applied in practice. This article concentrates on recent ideas and practical implementations in this area.

  11. Correlation structure analysis for distributed video compression over wireless video sensor networks

    Science.gov (United States)

    He, Zhihai; Chen, Xi

    2006-01-01

    From the information-theoretic perspective, as stated by the Wyner-Ziv theorem, the distributed source encoder doesn't need any knowledge about its side information to achieve the R-D performance limit. However, from the system design and performance analysis perspective, correlation modeling plays an important role in the analysis, control, and optimization of the R-D behavior of Wyner-Ziv video coding. In this work, we observe that videos captured from a wireless video sensor network (WVSN) are uniquely correlated under the multi-view geometry. We propose to utilize this computer vision principle, as well as other existing information which is already available or can be easily obtained from the encoder, to estimate the source correlation structure. The source correlation determines the R-D behavior of the Wyner-Ziv encoder and provides useful information for rate control and performance optimization of the Wyner-Ziv encoder.

  12. Synthesis and analysis of three-dimensional video information

    Science.gov (United States)

    Katys, P. G.; Katys, Georgy P.

    2005-02-01

    The principles of design, the basis of operation, and the characteristics of systems for the synthesis and analysis of three-dimensional (3D) visual information are analyzed. The first part of the paper considers the present state of development of 3D video information synthesis and reproduction systems, such as stereoscopic, auto-stereoscopic, and holographic systems. The second part considers the principles of machine-vision systems that realize the analysis of 3D video information.

  13. Feature point tracking and trajectory analysis for video imaging in cell biology.

    Science.gov (United States)

    Sbalzarini, I F; Koumoutsakos, P

    2005-08-01

    This paper presents a computationally efficient, two-dimensional feature point tracking algorithm for the automated detection and quantitative analysis of particle trajectories as recorded by video imaging in cell biology. The tracking process requires no a priori mathematical modeling of the motion; it is self-initializing, discriminates spurious detections, and can handle temporary occlusion as well as particle appearance and disappearance from the image region. The efficiency of the algorithm is validated on synthetic video data, where it is compared to existing methods and its accuracy and precision are assessed for a wide range of signal-to-noise ratios. The algorithm is well suited for video imaging in cell biology relying on low-intensity fluorescence microscopy. Its applicability is demonstrated in three case studies involving transport of low-density lipoproteins in endosomes, motion of fluorescently labeled Adenovirus-2 particles along microtubules, and tracking of quantum dots on the plasma membrane of live cells. The present automated tracking process enables the quantification of dispersive processes in cell biology using techniques such as moment scaling spectra.
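
    The linking step of such a tracker can be sketched as a greedy nearest-neighbour assignment between detections in consecutive frames, gated by a maximum displacement. This toy version (plain NumPy, invented parameter values) illustrates the linking idea only; the published algorithm additionally handles occlusion gaps and particle appearance and disappearance.

        import numpy as np

        def link(prev_pts, next_pts, max_disp=5.0):
            """Return (i, j) pairs linking prev_pts[i] to next_pts[j]."""
            prev_pts = np.asarray(prev_pts, dtype=float)
            next_pts = np.asarray(next_pts, dtype=float)
            # Squared displacement between every candidate pair of detections.
            cost = ((prev_pts[:, None, :] - next_pts[None, :, :]) ** 2).sum(-1)
            pairs = sorted(np.ndindex(cost.shape), key=lambda ij: cost[ij])
            links, used_i, used_j = [], set(), set()
            for i, j in pairs:
                if i in used_i or j in used_j or cost[i, j] > max_disp ** 2:
                    continue
                links.append((i, j))
                used_i.add(i)
                used_j.add(j)
            return links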

  14. An Automated Solar Synoptic Analysis Software System

    Science.gov (United States)

    Hong, S.; Lee, S.; Oh, S.; Kim, J.; Lee, J.; Kim, Y.; Lee, J.; Moon, Y.; Lee, D.

    2012-12-01

    We have developed an automated software system for identifying solar active regions, filament channels, and coronal holes, which are the three major solar sources of space weather. Space weather forecasters at the NOAA Space Weather Prediction Center produce solar synoptic drawings on a daily basis to predict solar activities, i.e., solar flares, filament eruptions, high-speed solar wind streams, and co-rotating interaction regions, as well as their possible effects on the Earth. In an attempt to emulate this process in a fully automated and consistent way, we developed a software system named ASSA (Automated Solar Synoptic Analysis). When identifying solar active regions, ASSA uses high-resolution SDO HMI intensitygrams and magnetograms as inputs and provides the McIntosh classification and Mt. Wilson magnetic classification of each active region by applying appropriate image processing techniques such as thresholding, morphology extraction, and region growing. At the same time, it also extracts morphological and physical properties of active regions in a quantitative way for the short-term prediction of flares and CMEs. When identifying filament channels and coronal holes, images of the Global H-alpha Network and SDO AIA 193 are used for morphological identification, and SDO HMI magnetograms for quantitative verification. The output results of ASSA are routinely checked and validated against NOAA's daily SRS (Solar Region Summary) and UCOHO (URSIgram code for coronal hole information). A couple of preliminary scientific results are to be presented using available output results. ASSA will be deployed at the Korean Space Weather Center and serve its customers in operational status by the end of 2012.
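
    The image-processing chain named in the abstract (thresholding, morphology, connected-region extraction) can be illustrated on a magnetogram array as below. This is a simplified sketch with illustrative threshold values, not ASSA's operational settings.

        import numpy as np
        from scipy import ndimage

        def detect_active_regions(magnetogram, field_threshold=100.0, min_pixels=50):
            """Label candidate active regions in a 2-D magnetic field map (gauss)."""
            strong = np.abs(magnetogram) > field_threshold          # thresholding
            strong = ndimage.binary_opening(strong, iterations=2)   # remove specks
            strong = ndimage.binary_closing(strong, iterations=2)   # merge nearby kernels
            labels, n = ndimage.label(strong)                       # connected regions
            sizes = ndimage.sum(strong, labels, index=range(1, n + 1))
            keep_ids = np.nonzero(sizes >= min_pixels)[0] + 1       # size cut-off
            return ndimage.label(np.isin(labels, keep_ids))[0]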

  15. Forensic analysis of video steganography tools

    Directory of Open Access Journals (Sweden)

    Thomas Sloan

    2015-05-01

    Steganography is the art and science of concealing information in such a way that only the sender and intended recipient of a message are aware of its presence. Digital steganography has been used in the past on a variety of media including executable files, audio, text, games and, notably, images. Additionally, there is increasing research interest in the use of video as a medium for steganography, due to its pervasive nature and diverse embedding capabilities. In this work, we examine the embedding algorithms and other security characteristics of several video steganography tools. We show that all of them feature basic and severe security weaknesses. This is potentially a very serious threat to the security, privacy and anonymity of their users. It is important to highlight that most steganography users have perfectly legal and ethical reasons to employ it. Some common scenarios would include citizens in oppressive regimes whose freedom of speech is compromised, people trying to avoid massive surveillance or censorship, political activists, whistle blowers, journalists, etc. As a result of our findings, we strongly recommend ceasing any use of these tools and removing any contents that may have been hidden, and any carriers stored, exchanged and/or uploaded online. For many of these tools, carrier files will be trivial to detect, potentially compromising any hidden data and the parties involved in the communication. We finish this work by presenting our steganalytic results, which highlight a very poor current state of the art in practical video steganography tools. There is unfortunately a complete lack of secure and publicly available tools, and even commercial tools offer very poor security. We therefore encourage the steganography community to work towards the development of more secure and accessible video steganography tools, and to make them available to the general public. The results presented in this work can also be seen as a useful

  16. Validation of a Video Analysis Software Package for Quantifying Movement Velocity in Resistance Exercises.

    Science.gov (United States)

    Sañudo, Borja; Rueda, David; Pozo-Cruz, Borja Del; de Hoyo, Moisés; Carrasco, Luis

    2016-10-01

    Sañudo, B, Rueda, D, del Pozo-Cruz, B, de Hoyo, M, and Carrasco, L. Validation of a video analysis software package for quantifying movement velocity in resistance exercises. J Strength Cond Res 30(10): 2934-2941, 2016-The aim of this study was to establish the validity of a video analysis software package in measuring mean propulsive velocity (MPV) and the maximal velocity during bench press. Twenty-one healthy males (21 ± 1 year) with weight training experience were recruited, and the MPV and the maximal velocity of the concentric phase (Vmax) were compared with a linear position transducer system during a standard bench press exercise. Participants performed a 1 repetition maximum test using the supine bench press exercise. The testing procedures involved the simultaneous assessment of bench press propulsive velocity using 2 kinematic (linear position transducer and semi-automated tracking software) systems. High Pearson's correlation coefficients for MPV and Vmax between both devices (r = 0.473 to 0.993) were observed. The intraclass correlation coefficients for barbell velocity data and the kinematic data obtained from video analysis were high (>0.79). In addition, the low coefficients of variation indicate that measurements had low variability. Finally, Bland-Altman plots with the limits of agreement of the MPV and Vmax with different loads showed a negative trend, which indicated that the video analysis had higher values than the linear transducer. In conclusion, this study has demonstrated that the software used for the video analysis was an easy-to-use and cost-effective tool with a very high degree of concurrent validity. This software can be used to evaluate changes in velocity of training load in resistance training, which may be important for the prescription and monitoring of training programmes.
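
    For context, mean propulsive velocity and Vmax can be derived from digitized barbell positions along these lines, using the common definition of the propulsive phase (acceleration >= -g). This is a sketch under assumed inputs (vertical positions in metres and a known frame rate), not the validated software.

        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def bar_velocity_metrics(y, fps):
            """y: vertical barbell positions (m) over one concentric rep; fps: frame rate."""
            v = np.gradient(y) * fps      # finite-difference velocity, m/s
            a = np.gradient(v) * fps      # acceleration, m/s^2
            propulsive = a >= -G          # bar not yet in free deceleration
            mpv = float(v[propulsive].mean())
            vmax = float(v.max())
            return mpv, vmax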

  17. Strategic Analysis of a Video Compression Software Project

    OpenAIRE

    Bai, Chun Jung Rosalind

    2008-01-01

    The objective of this project is to develop a strategic recommendation for market entry of the Client's new software product based on a breakthrough predictive-decoding technology. The analysis examines the videoconferencing market and reveals that there is strong demand for software products that can reduce delays in interactive video communications while maintaining reasonable video quality. The evaluation of the key external competitive forces suggests that the market has low intensity o...

  18. Video Analysis in Multi-Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Key, Everett Kiusan [Univ. of Washington, Seattle, WA (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra Lu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warren, Will [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-27

    This project was performed by a recent high school graduate at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  19. Automated Scanning Electron Microscopy Analysis of Sampled Aerosol

    DEFF Research Database (Denmark)

    Bluhme, Anders Brostrøm; Kling, Kirsten; Mølhave, Kristian

    development of an automated software-based analysis of aerosols using Scanning Electron Microscopy (SEM) and Scanning Transmission Electron Microscopy (STEM) coupled with Energy-Dispersive X-ray Spectroscopy (EDS). The automated analysis will be capable of providing both detailed physical and chemical single...

  20. Sensitivity Analysis of Automated Ice Edge Detection

    Science.gov (United States)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by the national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images from February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes of very low ice concentration or open water.

  1. Management issues in automated audit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.

    1994-03-01

    This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.

  2. ASteCA - Automated Stellar Cluster Analysis

    CERN Document Server

    Perren, Gabriel I; Piatti, Andrés E

    2014-01-01

    We present ASteCA (Automated Stellar Cluster Analysis), a suite of tools designed to fully automate the standard tests applied to stellar clusters in order to determine their basic parameters. The set of functions included in the code makes use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing, through a statistical estimator, its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with their uncertainties.

  3. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real-time automatic scene classifier within content-based video retrieval. In our envisioned approach, end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  4. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    2010-01-01

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional fac

  5. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more com

  6. Development of automated conjunctival hyperemia analysis software.

    Science.gov (United States)

    Sumi, Tamaki; Yoneda, Tsuyoshi; Fukuda, Ken; Hoshikawa, Yasuhiro; Kobayashi, Masahiko; Yanagi, Masahide; Kiuchi, Yoshiaki; Yasumitsu-Lovell, Kahoko; Fukushima, Atsuki

    2013-11-01

    Conjunctival hyperemia is observed in a variety of ocular inflammatory conditions. The evaluation of hyperemia is indispensable for the treatment of patients with ocular inflammation. However, the major methods currently available for evaluation are based on nonquantitative and subjective methods. Therefore, we developed novel software to evaluate bulbar hyperemia quantitatively and objectively. First, we investigated whether the histamine-induced hyperemia of guinea pigs could be quantified by image analysis. Bulbar conjunctival images were taken by means of a digital camera, followed by the binarization of the images and the selection of regions of interest (ROIs) for evaluation. The ROIs were evaluated by counting the number of absolute pixel values. Pixel values peaked significantly 1 minute after histamine challenge was performed and were still increased after 5 minutes. Second, we applied the same method to antigen (ovalbumin)-induced hyperemia of sensitized guinea pigs, acquiring similar results except for the substantial upregulation in the first 5 minutes after challenge. Finally, we analyzed human bulbar hyperemia using the new software we developed especially for human usage. The new software allows the automatic calculation of pixel values once the ROIs have been selected. In our clinical trials, the percentage of blood vessel coverage of ROIs was significantly higher in the images of hyperemia caused by allergic conjunctival diseases and hyperemia induced by Bimatoprost, compared with those of healthy volunteers. We propose that this newly developed automated hyperemia analysis software will be an objective clinical tool for the evaluation of ocular hyperemia.
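
    The ROI-based quantification described above can be sketched as follows: binarize the image so vessel pixels are foreground, then report percentage vessel coverage inside the region of interest. The redness criterion and threshold here are illustrative assumptions, not the published software's algorithm.

        import numpy as np

        def vessel_coverage(rgb, roi, redness_threshold=30):
            """rgb: HxWx3 uint8 image; roi: boolean HxW mask of the analysis region."""
            r = rgb[..., 0].astype(int)
            g = rgb[..., 1].astype(int)
            vessels = (r - g) > redness_threshold  # vessels markedly redder than sclera
            return 100.0 * np.logical_and(vessels, roi).sum() / roi.sum()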

  7. VisioTracker, an innovative automated approach to oculomotor analysis.

    Science.gov (United States)

    Mueller, Kaspar P; Schnaedelbach, Oliver D R; Russig, Holger D; Neuhauss, Stephan C F

    2011-10-12

    Investigations into the visual system development and function necessitate quantifiable behavioral models of visual performance that are easy to elicit, robust, and simple to manipulate. A suitable model has been found in the optokinetic response (OKR), a reflexive behavior present in all vertebrates due to its high selection value. The OKR involves slow stimulus-following movements of eyes alternated with rapid resetting saccades. The measurement of this behavior is easily carried out in zebrafish larvae, due to its early and stable onset (fully developed after 96 hours post fertilization (hpf)), and benefitting from the thorough knowledge about zebrafish genetics, for decades one of the favored model organisms in this field. Meanwhile the analysis of similar mechanisms in adult fish has gained importance, particularly for pharmacological and toxicological applications. Here we describe VisioTracker, a fully automated, high-throughput system for quantitative analysis of visual performance. The system is based on research carried out in the group of Prof. Stephan Neuhauss and was re-designed by TSE Systems. It consists of an immobilizing device for small fish monitored by a high-quality video camera equipped with a high-resolution zoom lens. The fish container is surrounded by a drum screen, upon which computer-generated stimulus patterns can be projected. Eye movements are recorded and automatically analyzed by the VisioTracker software package in real time. Data analysis enables immediate recognition of parameters such as slow and fast phase duration, movement cycle frequency, slow-phase gain, visual acuity, and contrast sensitivity. Typical results allow for example the rapid identification of visual system mutants that show no apparent alteration in wild type morphology, or the determination of quantitative effects of pharmacological or toxic and mutagenic agents on visual system performance.
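
    The core OKR decomposition (slow stimulus-following phases interrupted by fast resetting saccades) is commonly done with a velocity threshold. The sketch below shows one such scheme with placeholder values; it is not VisioTracker's internal implementation.

        import numpy as np

        def okr_metrics(eye_deg, fps, stimulus_deg_s, saccade_threshold=40.0):
            """eye_deg: eye position trace in degrees; returns (slow-phase gain, saccade count)."""
            vel = np.gradient(eye_deg) * fps             # eye velocity, deg/s
            fast = np.abs(vel) > saccade_threshold       # fast resetting saccades
            slow_gain = np.abs(vel[~fast]).mean() / stimulus_deg_s
            saccades = int(np.count_nonzero(np.diff(fast.astype(int)) == 1))
            return slow_gain, saccades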

  8. Quantitative assessment of human motion using video motion analysis

    Science.gov (United States)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  9. Automation and robotics for genetic analysis.

    Science.gov (United States)

    Smith, J H; Madan, D; Salhaney, J; Engelstein, M

    2001-05-01

    This guide to laboratory robotics covers a wide variety of methods amenable to automation including mapping, genotyping, barcoding and data handling, template preparation, reaction setup, colony and plaque picking, and more.

  10. Video Analysis of the Flight of a Model Aircraft

    Science.gov (United States)

    Tarantino, Giovanni; Fazio, Claudio

    2011-01-01

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the phugoid model for longitudinal flight.

  11. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major bottlenecks for rapid access to on-line video data are the management of capture and storage and the need for content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots (classifying shots as close shots and far shots), an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.
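
    The abstract does not specify its blur measure; a common proxy for blur extent is the variance of the Laplacian, sketched here with OpenCV. Lower scores suggest stronger blur, which could feed an event detector of the kind described.

        import cv2

        def blur_extent(frame_bgr):
            """Return a sharpness score; small values suggest a blurred frame."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            return cv2.Laplacian(gray, cv2.CV_64F).var()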

  12. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
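
    A small sketch of scaling pairwise preferences with binary logistic regression (a Bradley-Terry-style formulation closely related to Thurstone Case V scaling); the comparison data here are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def preference_scale(pairs, wins, n_items):
            """pairs: (i, j) comparisons; wins[k] = 1 if item i was preferred in pair k."""
            X = np.zeros((len(pairs), n_items))
            for k, (i, j) in enumerate(pairs):
                X[k, i], X[k, j] = 1.0, -1.0          # signed indicator coding
            model = LogisticRegression(fit_intercept=False, C=1e6)  # ~unpenalised
            model.fit(X, wins)
            return model.coef_.ravel()                # relative scale values per item

        # e.g. three enhancement levels, with level 2 usually preferred:
        scores = preference_scale([(0, 1), (1, 2), (0, 2), (2, 1)], [1, 0, 0, 1], 3)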

  13. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  14. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    This paper presents the transmission over a Digital Video Broadcasting system of streaming video at 640x480 resolution under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has poor quality. A key-frame selection algorithm is flexible with respect to changes in the video, but in such methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video, without significant loss of content relative to the original, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also investigated with and without SEDIM (Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal-to-Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
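
    For reference, PSNR figures of the kind quoted above can be computed per frame as follows (8-bit frames assumed):

        import numpy as np

        def psnr(original, received):
            """Peak Signal-to-Noise Ratio between two uint8 frames, in dB."""
            mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)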

  15. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  16. Human Motion Video Analysis in Clinical Practice (Review)

    OpenAIRE

    V.V. Borzikov; N.N. Rukina; O.V. Vorobyova; A.N. Kuznetsov; A. N. Belova

    2015-01-01

    The development of new rehabilitation approaches to neurological and traumatological patients requires understanding of normal and pathological movement patterns. Biomechanical analysis of video images is the most accurate method of investigation and quantitative assessment of human normal and pathological locomotion. The review of currently available methods and systems of optical human motion analysis used in clinical practice is presented here. Short historical background is provi...

  17. User-oriented summary extraction for soccer video based on multimodal analysis

    Science.gov (United States)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced: a novel approach that integrates multimodal analysis, such as extraction and analysis of stadium features, moving object features, audio features and text features. From these features, the semantics of the soccer video and the highlight model are obtained. We can then locate the highlight positions and assemble them according to highlight degree to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.

  18. A video-polygraphic analysis of the cataplectic attack

    DEFF Research Database (Denmark)

    Rubboli, G; d'Orsi, G; Zaniboni, A

    2000-01-01

    OBJECTIVES AND METHODS: To perform a video-polygraphic analysis of 11 cataplectic attacks in a 39-year-old narcoleptic patient, correlating clinical manifestations with polygraphic findings. Polygraphic recordings monitored EEG, EMG activity from several cranial, trunk, upper and lower limb muscles, eye movements, EKG, and thoracic respiration. RESULTS: Eleven attacks were recorded, all of them lasting less than 1 min and ending with the fall of the patient to the ground. Based on the video-polygraphic analysis of the episodes, we identified 3 phases: an initial phase, characterized essentially ... with bradycardia that was maximal during the atonic phase. CONCLUSIONS: Analysis of the muscular phenomena that characterize cataplectic attacks in a standing patient suggests that the cataplectic fall occurs with a pattern that might result from the interaction between neuronal networks mediating muscular atonia ...

  19. Flexible Human Behavior Analysis Framework for Video Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Weilun Lao

    2010-01-01

    We study a flexible framework for semantic analysis of human motion from surveillance video. Successful trajectory estimation and human-body modeling facilitate the semantic analysis of human activities in video sequences. Although human motion is widely investigated, we have extended such research in three aspects. By adding a second camera, not only is more reliable behavior analysis possible, but the ongoing scene events can also be mapped onto a 3D setting to facilitate further semantic analysis. The second contribution is the introduction of a 3D reconstruction scheme for scene understanding. Thirdly, we perform a fast scheme to detect different body parts and generate a fitting skeleton model, without using the explicit assumption of upright body posture. The extension to multiple-view fusion improves the event-based semantic analysis by 15%–30%. Our proposed framework proves its effectiveness as it achieves near real-time performance (13–15 frames/second and 6–8 frames/second for monocular and two-view video sequences, respectively).

  20. Automated Steel Cleanliness Analysis Tool (ASCAT)

    Energy Technology Data Exchange (ETDEWEB)

    Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)

    2005-12-30

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCAT™) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufacturers of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment

  2. Applying Image Matching to Video Analysis

    Science.gov (United States)

    2010-09-01

  3. Video analysis of the flight of a model aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Tarantino, Giovanni; Fazio, Claudio, E-mail: giovanni.tarantino19@unipa.it, E-mail: claudio.fazio@unipa.it [UOP-PERG (University of Palermo Physics Education Research Group), Dipartimento di Fisica, Universita di Palermo, Palermo (Italy)

    2011-11-15

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the phugoid model for longitudinal flight. A comparison with the parameters and performance of the full-size aircraft has also been outlined.
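
    For reference, the phugoid model admits a classical closed-form approximation, due to Lanchester, in which the oscillation period depends only on airspeed and gravity (a sketch assuming small damping and a constant angle of attack):

        % Lanchester approximation to the phugoid mode: kinetic and potential
        % energy are exchanged at roughly constant total energy.
        \omega_{ph} \approx \sqrt{2}\,\frac{g}{V_0}, \qquad
        T_{ph} = \frac{2\pi}{\omega_{ph}} \approx \pi\sqrt{2}\,\frac{V_0}{g}

    Under this approximation, a model aircraft gliding at V_0 = 10 m/s would show a phugoid period of roughly 4.5 s.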

  4. Observations and analysis of FTU plasmas by video cameras

    Energy Technology Data Exchange (ETDEWEB)

    De Angelis, R. [Associazione Euratom/ENEA sulla fusione, CP 65-00044 Frascati, Rome (Italy); Di Matteo, L., E-mail: lucy.dimatteo@enea.i [ENEA Fellow, Via E. Fermi, Frascati (Italy)

    2010-11-11

    The interaction of the FTU plasma with the vessel walls and with the limiters is responsible for the release of hydrogen and impurities through various physical mechanisms (physical and chemical sputtering, desorption, etc.). In the cold plasma periphery, these particles are weakly ionised and emit mainly in the visible spectral range. A good description of the plasma periphery can then be obtained by use of video cameras. In FTU, small-size video cameras placed close to the plasma edge give wide-angle images of the plasma at a standard rate of 25 frames/s. Images are stored digitally, allowing their retrieval and analysis. This paper reports some of the most interesting features of the discharges evidenced by the images. As a first example, the accumulation of cold neutral gas in the plasma periphery above a density threshold (a phenomenon known as Marfe) can be seen on the video images as a toroidally symmetric band oscillating poloidally; on the multi-chord spectroscopy or bolometer channels, this appears only as a sudden rise of the signals, whose overall behaviour could not be clearly interpreted. A second example is the identification of runaway discharges by the signature of the fast electrons emitting synchrotron radiation in their direction of motion; this appears as a bean-shaped bright spot on one toroidal side, which reverts according to the plasma current direction. A relevant side effect of plasma discharges, being potentially dangerous, is the formation of dust as a consequence of strong plasma-wall interaction events; video images allow monitoring and possibly estimating numerically the amount of dust that can be produced in these events. Specialised software can automatically search the experimental database to identify relevant events, partly overcoming the difficulties associated with the very large amount of data produced by video techniques.

  5. Comparative Analysis of Manual and Automated AFEES

    Science.gov (United States)

    1976-05-14

    system and in-house technical studies of AFEES-related issues. The design of the system was performed by Computer Sciences Corporation (CSC... "two day" blood pressure and pulse and/or answer any questions that some liaison would have concerning an applicant who had previously received a physical. This... from the Consumption/Usage Listings as submitted by Computer Sciences Corporation and estimates for automated applicant forms from Central

  6. Content-Based Hierarchical Analysis of News Video Using Audio and Visual Information

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A schema for content-based analysis of broadcast news video is presented. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video Library is generated, from which users can retrieve definite news story according to their demands.

  7. Video Analysis of Eddy Structures from Explosive Volcanic Eruptions

    Science.gov (United States)

    Fisher, M. A.; Kobs-Nawotniak, S. E.

    2013-12-01

    We present a method of analyzing turbulent eddy structures in explosive volcanic eruptions using high-definition video. Film from the eruption of Sakurajima on 25 September 2011 was analyzed using a modified version of FlowJ, a Java-based toolbox released by the National Institutes of Health. Using the Lucas and Kanade algorithm with a Gaussian derivative gradient, it tracks the change in pixel position over a 23-image buffer to determine the optical flow. This technique assumes that the optical flow, which is the apparent motion of the pixels, is equivalent to the actual flow field. We calculated three flow fields per second for the duration of the video. FlowJ outputs flow fields in pixels per frame, which were then converted to meters per second in Matlab using a known distance and the video rate. We constructed a low-pass filter using proper orthogonal decomposition (POD) and critical point analysis to identify the underlying eddy structure, with boundaries determined by tracing the flow lines. We calculated the area of each eddy and noted its position over a series of velocity fields. The changes in shape and position were tracked to determine the eddy growth rate and overall eddy rising velocity. The eddies grow in size 1.5 times faster than they rise vertically. Presently, this method is most successful with high-contrast videos where there is little to no effect of wind on the plumes. Additionally, the pixel movement from the video images represents a 2D flow with no depth, while the actual flow is three-dimensional; we are continuing to develop an algorithm that will allow 3D reprojection of the 2D data. Flow in the y-direction lessens the overall velocity magnitude, as the true flow motion has a larger y-direction component. POD, which only uses the pattern of the flow, and analysis of the critical points (points where flow is zero) are used to determine the shape of the eddies. The method allows video recorded at remote distances to be used to study eruption dynamics
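
    A dense velocity field of this kind can be recovered from consecutive plume frames as sketched below. The study used the Lucas-Kanade scheme in FlowJ; here OpenCV's Farneback dense flow stands in, with the same pixels/frame to m/s conversion via a known ground distance and frame rate (placeholder values).

        import cv2

        def flow_field_m_per_s(prev_gray, next_gray, metres_per_pixel, fps):
            """Per-pixel (vx, vy) velocities in m/s between two grayscale frames."""
            flow = cv2.calcOpticalFlowFarneback(
                prev_gray, next_gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            return flow * metres_per_pixel * fps  # pixels/frame -> metres/second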

  8. Video analysis of motor events in REM sleep behavior disorder.

    Science.gov (United States)

    Frauscher, Birgit; Gschliesser, Viola; Brandauer, Elisabeth; Ulmer, Hanno; Peralta, Cecilia M; Müller, Jörg; Poewe, Werner; Högl, Birgit

    2007-07-30

    In REM sleep behavior disorder (RBD), several studies focused on electromyographic characterization of motor activity, whereas video analysis has remained more general. The aim of this study was to undertake a detailed and systematic video analysis. Nine polysomnographic records from 5 Parkinson patients with RBD were analyzed and compared with sex- and age-matched controls. Each motor event in the video during REM sleep was classified according to duration, type of movement, and topographical distribution. In RBD, a mean of 54 +/- 23.2 events/10 minutes of REM sleep (total 1392) were identified and visually analyzed. Seventy-five percent of all motor events lasted <2 seconds. Of these events, 1,155 (83.0%) were classified as elementary, 188 (13.5%) as complex behaviors, 50 (3.6%) as violent, and 146 (10.5%) as vocalizations. In the control group, 3.6 +/- 2.3 events/10 minutes (total 264) of predominantly elementary simple character (n = 240, 90.9%) were identified. Number and types of motor events differed significantly between patients and controls (P < 0.05). This study shows a very high number and great variety of motor events during REM sleep in symptomatic RBD. However, most motor events are minor, and violent episodes represent only a small fraction.

  9. Analysis of Trinity Power Metrics for Automated Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for the analysis, tools used, the methodology, work performed during the summer, and future work planned.

  10. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screen studies. Often a HT/HC screening produces extensive amounts of data that cannot be manually analyzed. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  11. An Interactive Assessment Framework for Visual Engagement: Statistical Analysis of a TEDx Video

    Science.gov (United States)

    Farhan, Muhammad; Aslam, Muhammad

    2017-01-01

    This study aims to assess the visual engagement of the video lectures. This analysis can be useful for the presenter and student to find out the overall visual attention of the videos. For this purpose, a new algorithm and data collection module are developed. Videos can be transformed into a dataset with the help of data collection module. The…

  12. The Use of Video Disks: Computer Based Analysis of Works of Art.

    Science.gov (United States)

    McWhinnie, Harold J.

    This paper presents research using a computer with a video disk player to do aesthetic analysis of the work of Vincent Van Gogh. A discussion of the video disk system, and of several software systems including: (1) Dr. Halo, (2) Handy, (3) PC-Paint, and (4) Pilot are outlined. Several possible uses of the computer with interactive video disks for…

  13. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    Science.gov (United States)

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean cross-sectional velocity), as well as the number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.
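
    Two of the listed metrics, instantaneous ground speed and a simple path-complexity index (total path length divided by net displacement), can be computed from the digitized positions roughly as follows; the exact metric definitions used in the study may differ.

        import numpy as np

        def path_metrics(xy, fps):
            """xy: Nx2 array of digitized fish positions (m); fps: video frame rate."""
            steps = np.diff(xy, axis=0)
            step_len = np.hypot(steps[:, 0], steps[:, 1])
            ground_speed = step_len * fps            # m/s for each frame interval
            net = np.hypot(*(xy[-1] - xy[0]))        # straight-line displacement
            complexity = step_len.sum() / net if net > 0 else np.inf
            return ground_speed, complexity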

  14. YouTube™ as a Source of Instructional Videos on Bowel Preparation: a Content Analysis.

    Science.gov (United States)

    Ajumobi, Adewale B; Malakouti, Mazyar; Bullen, Alexander; Ahaneku, Hycienth; Lunsford, Tisha N

    2016-12-01

    Instructional videos on bowel preparation have been shown to improve bowel preparation scores during colonoscopy. YouTube™ is one of the most frequently visited website on the internet and contains videos on bowel preparation. In an era where patients are increasingly turning to social media for guidance on their health, the content of these videos merits further investigation. We assessed the content of bowel preparation videos available on YouTube™ to determine the proportion of YouTube™ videos on bowel preparation that are high-content videos and the characteristics of these videos. YouTube™ videos were assessed for the following content: (1) definition of bowel preparation, (2) importance of bowel preparation, (3) instructions on home medications, (4) name of bowel cleansing agent (BCA), (5) instructions on when to start taking BCA, (6) instructions on volume and frequency of BCA intake, (7) diet instructions, (8) instructions on fluid intake, (9) adverse events associated with BCA, and (10) rectal effluent. Each content parameter was given 1 point for a total of 10 points. Videos with ≥5 points were considered by our group to be high-content videos. Videos with ≤4 points were considered low-content videos. Forty-nine (59 %) videos were low-content videos while 34 (41 %) were high-content videos. There was no association between number of views, number of comments, thumbs up, thumbs down or engagement score, and videos deemed high-content. Multiple regression analysis revealed bowel preparation videos on YouTube™ with length >4 minutes and non-patient authorship to be associated with high-content videos.

  15. Inverse Multifractal Analysis of Different Frame Types of Multiview 3D Video

    Directory of Open Access Journals (Sweden)

    A. Zeković

    2014-11-01

    In this paper, the results of multifractal characterization of multiview 3D video are presented. Analyses are performed for different views of multiview video and for different frame types of video. Multifractal analysis is performed by the histogram method. Due to the advantages of the selected method for determining the spectrum, the inverse multifractal analysis of multiview 3D video was also possible. A discussion of the results obtained by the inverse multifractal analysis of multiview 3D video is presented, taking into account the frame type and whether the original frames belong to the left or right view of multiview 3D video. In the analysis, publicly available multiview 3D video traces were used.

  16. Web Video Mining: Metadata Predictive Analysis using Classification Techniques

    Directory of Open Access Journals (Sweden)

    Siddu P. Algur

    2016-02-01

    Nowadays, data engineering is becoming an emerging trend for discovering knowledge from web audiovisual data such as YouTube videos, Yahoo Screen, Facebook videos, etc. Different categories of web video are shared on such social websites and are used by billions of users all over the world. Uploaded web videos carry different kinds of metadata as attribute information of the video data. The metadata attributes define the contents and features/characteristics of the web videos conceptually. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos, such as the length of the web videos, the number of comments, rating information, and view counts, using data mining algorithms such as the J48 decision tree and naive Bayes algorithms as part of web video mining. The results of the J48 decision tree and naive Bayes classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.
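
    A toy version of this experiment in Python (scikit-learn's DecisionTreeClassifier standing in for J48, which is a C4.5 implementation) might look as follows; the feature columns and data are invented for illustration.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier

        # columns: length (s), number of comments, rating
        X = np.array([[212, 14, 4.1], [95, 2, 3.0], [640, 230, 4.8], [310, 40, 4.4]])
        y = np.array([0, 0, 1, 1])  # 0 = low view count, 1 = high view count

        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # C4.5/J48-style tree
        nb = GaussianNB().fit(X, y)
        print(tree.predict([[400, 80, 4.5]]), nb.predict([[400, 80, 4.5]]))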

  17. Flexible surveillance system architecture for prototyping video content analysis algorithms

    Science.gov (United States)

    Wijnhoven, R. G. J.; Jaspers, E. G. T.; de With, P. H. N.

    2006-01-01

    Many proposed video content analysis algorithms for surveillance applications are very computationally intensive, which limits the integration in a total system, running on one processing unit (e.g. PC). To build flexible prototyping systems of low cost, a distributed system with scalable processing power is therefore required. This paper discusses requirements for surveillance systems, considering two example applications. From these requirements, specifications for a prototyping architecture are derived. An implementation of the proposed architecture is presented, enabling mapping of multiple software modules onto a number of processing units (PCs). The architecture enables fast prototyping of new algorithms for complex surveillance applications without considering resource constraints.

  18. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  19. An Automated Data Analysis Tool for Livestock Market Data

    Science.gov (United States)

    Williams, Galen S.; Raper, Kellie Curry

    2011-01-01

    This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…

  20. Automated Analysis of Child Phonetic Production Using Naturalistic Recordings

    Science.gov (United States)

    Xu, Dongxin; Richards, Jeffrey A.; Gilkerson, Jill

    2014-01-01

    Purpose: Conventional resource-intensive methods for child phonetic development studies are often impractical for sampling and analyzing child vocalizations in sufficient quantity. The purpose of this study was to provide new information on early language development by an automated analysis of child phonetic production using naturalistic…

  1. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Kanstrup, Anne-Marie Fiehn; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  2. Development and validation of a video analysis software for marine benthic applications

    Science.gov (United States)

    Romero-Ramirez, A.; Grémare, A.; Bernard, G.; Pascal, L.; Maire, O.; Duchêne, J. C.

    2016-10-01

    Our aim in the EU-funded JERICO project was to develop a flexible and scalable imaging platform that could be used in the widest possible set of ecological situations. Depending on research objectives, both image acquisition and analysis procedures may indeed differ. Until now, attempts at automating image analysis procedures have consisted of developing software specifically designed for a given objective. This led to the conception of new software: AVIExplore. Its general architecture and its three constitutive modules, AVIExplore - Mobile, AVIExplore - Fixed and AVIExplore - ScriptEdit, are presented. AVIExplore provides a unique environment for video analysis. Its main features include: (1) image selection tools allowing for the division of videos into homogeneous sections, (2) automatic extraction of targeted information, (3) solutions for long-term time series as well as large spatial scale image acquisition, (4) real-time acquisition and in some cases real-time analysis, and (5) a large range of customized image-analysis possibilities through a script editor. The flexibility of AVIExplore is illustrated and validated by three case studies: (1) coral identification and mapping, (2) identification and quantification of different types of behaviors in a mud shrimp, and (3) quantification of filtering activity in a passive suspension feeder. The accuracy of the software, measured against visual assessment, is 90.2%, 82.7%, and 98.3% for the three case studies, respectively. Some of the advantages and current limitations of the software, as well as some of its foreseen advancements, are then briefly discussed.

  3. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications on management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools, but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as seen in the LIDC (Lung Imaging Database Consortium) study.

  4. Video Games and Youth Violence: A Prospective Analysis in Adolescents

    Science.gov (United States)

    Ferguson, Christopher J.

    2011-01-01

    The potential influence of violent video games on youth violence remains an issue of concern for psychologists, policymakers and the general public. Although several prospective studies of video game violence effects have been conducted, none have employed well validated measures of youth violence, nor considered video game violence effects in…

  5. Tech Tips: Using Video Management/ Analysis Technology in Qualitative Research

    Directory of Open Access Journals (Sweden)

    J.A. Spiers

    2004-03-01

    This article presents tips on how to use video in qualitative research. The author states that, though there are many complex and powerful computer programs for working with video, the work done in qualitative research does not require them. For this work, simple editing software is sufficient. Also presented is an easy and efficient method of transcribing video clips.

  6. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation....... The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements....

  7. Fully automated apparatus for the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    Fukumoto, K.; Ishibashi, Y.; Ishii, T.; Maeda, K.; Ogawa, A.; Gotoh, K.

    1985-01-01

    The authors report the development of fully-automated equipment for the proximate analysis of coals, a development undertaken with the twin aims of labour-saving and developing robot applications technology. This system comprises a balance, electric furnaces, a sulfur analyzer, etc., arranged concentrically around a multi-jointed robot which automatically performs all the necessary operations, such as sampling and weighing the materials for analysis, and inserting and removing them from the furnaces. 2 references.

  8. A fully automated multicapillary electrophoresis device for DNA analysis.

    Science.gov (United States)

    Behr, S; Mätzig, M; Levin, A; Eickhoff, H; Heller, C

    1999-06-01

    We describe the construction and performance of a fully automated multicapillary electrophoresis system for the analysis of fluorescently labeled biomolecules. A special detection system allows the simultaneous spectral analysis of all 96 capillaries. The main features are true parallel detection without any moving parts, high robustness, and full compatibility to existing protocols. The device can process up to 40 microtiter plates (96 and 384 well) without human interference, which means up to 15,000 samples before it has to be reloaded.

  9. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open...... the possibility to do population studies on large samples of stars and such population studies demand a consistent analysis. By consistent analysis we understand an analysis that can be performed without the need to make any subjective choices on e.g. mode identification and an analysis where the uncertainties...

  10. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  11. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-02-15

    This study designs a method of identifying the camera model used to record videos that are distributed through mobile phones, and of determining whether a mobile phone video is the original version, for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence through differences in the delay times of their sound input signals.
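
    The underlying measurement, the lag between a reference sound and the signal captured by a phone's microphone, can be sketched as a cross-correlation peak search. The following Python sketch is illustrative only; the paper's exact recording protocol and signal conditioning are not reproduced here.

```python
# Minimal sketch of audio delay estimation by cross-correlation.
import numpy as np

def estimate_delay(reference: np.ndarray, recorded: np.ndarray, fs: float) -> float:
    """Return the lag (seconds) at which `recorded` best matches `reference`."""
    corr = np.correlate(recorded, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)  # lag in samples
    return lag / fs

fs = 48_000.0
t = np.arange(0, 0.1, 1 / fs)
ref = np.sin(2 * np.pi * 1000 * t)            # 1 kHz test tone
rec = np.concatenate([np.zeros(240), ref])    # phone adds a 5 ms input delay
print(f"estimated delay: {estimate_delay(ref, rec, fs) * 1e3:.2f} ms")
```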

  12. Multispectral tissue analysis and classification towards enabling automated robotic surgery

    Science.gov (United States)

    Triana, Brian; Cha, Jaepyeong; Shademan, Azad; Krieger, Axel; Kang, Jin U.; Kim, Peter C. W.

    2014-02-01

    Accurate optical characterization of different tissue types is an important tool for potentially guiding surgeons and enabling automated robotic surgery. Multispectral imaging and analysis have been used in the literature to detect spectral variations in tissue reflectance that may be visible to the naked eye. Using this technique, hidden structures can be visualized and analyzed for effective tissue classification. Here, we investigated the feasibility of automated tissue classification using multispectral tissue analysis. Broadband reflectance spectra (200-1050 nm) were collected from nine different ex vivo porcine tissues types using an optical fiber-probe based spectrometer system. We created a mathematical model to train and distinguish different tissue types based upon analysis of the observed spectra using total principal component regression (TPCR). Compared to other reported methods, our technique is computationally inexpensive and suitable for real-time implementation. Each of the 92 spectra was cross-referenced against the nine tissue types. Preliminary results show a mean detection rate of 91.3%, with detection rates of 100% and 70.0% (inner and outer kidney), 100% and 100% (inner and outer liver), 100% (outer stomach), and 90.9%, 100%, 70.0%, 85.7% (four different inner stomach areas, respectively). We conclude that automated tissue differentiation using our multispectral tissue analysis method is feasible in multiple ex vivo tissue specimens. Although measurements were performed using ex vivo tissues, these results suggest that real-time, in vivo tissue identification during surgery may be possible.
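
    As a rough illustration of the classification step, the sketch below uses ordinary principal component regression (PCA scores regressed onto one-hot tissue labels) as a simplified stand-in for the paper's total principal component regression (TPCR); the data shapes mirror the study (92 spectra, 9 tissue types) but the spectra here are synthetic.

```python
# Simplified PCR classifier as a stand-in for TPCR; data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
n_tissues, n_wavelengths = 9, 200
X = rng.normal(size=(92, n_wavelengths))          # stand-in reflectance spectra
y = rng.integers(0, n_tissues, size=92)           # tissue labels

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)                              # low-dimensional spectral scores
Y = label_binarize(y, classes=range(n_tissues))   # one-hot class indicators
reg = LinearRegression().fit(Z, Y)

def classify(spectrum: np.ndarray) -> int:
    scores = reg.predict(pca.transform(spectrum.reshape(1, -1)))
    return int(np.argmax(scores))                 # most strongly predicted tissue
```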

  13. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Microarray studies enable us to obtain hundreds of thousands of expressions of genes or genotypes at once, making microarrays an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently, most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  14. Statistical models of video structure for content analysis and characterization.

    Science.gov (United States)

    Vasconcelos, N; Lippman, A

    2000-01-01

    Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.
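
    The following toy sketch conveys one intuition behind the paper's Bayesian extension of thresholding: as a shot ages, the prior probability that it ends soon grows, so the cut threshold is relaxed. The decay form and all constants here are invented for illustration and are not the paper's actual model.

```python
# Duration-aware shot-boundary detection toy: the threshold adapts to shot age.
import numpy as np

def detect_cuts(frame_dists, floor=0.35, scale=0.5, mean_shot_len=90):
    """frame_dists: per-frame dissimilarity (e.g., histogram distance)."""
    cuts, last_cut = [], 0
    for t, d in enumerate(frame_dists):
        age = t - last_cut
        # a long-running shot is increasingly likely to end, so relax the bar
        threshold = floor + scale * np.exp(-age / mean_shot_len)
        if d > threshold:
            cuts.append(t)
            last_cut = t
    return cuts

rng = np.random.default_rng(1)
dists = rng.random(300) * 0.3   # ordinary inter-frame variation
dists[120] = 0.9                # a genuine cut
print(detect_cuts(dists))       # -> [120]
```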

  15. Automatic Video-based Analysis of Human Motion

    DEFF Research Database (Denmark)

    Fihl, Preben

    received great interest from both industry and research communities. The focus of this thesis is on video-based analysis of human motion and the thesis presents work within three overall topics, namely foreground segmentation, action recognition, and human pose estimation. Foreground segmentation is often...... foreground camouflage, shadows, and moving backgrounds. The method continuously updates the background model to maintain high quality segmentation over long periods of time. Within action recognition the thesis presents work on both recognition of arm gestures and gait types. A key-frame based approach...... range of gait which deals with an inherent ambiguity of gait types. Human pose estimation does not target a specific action but is considered as a good basis for the recognition of any action. The pose estimation work presented in this thesis is mainly concerned with the problems of interacting people...

  16. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  17. Video Analysis and Modeling Tool for Physics Education: A workshop for Redesigning Pedagogy

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    This workshop aims to demonstrate how the Tracker Video Analysis and Modeling Tool engages, enables and empowers teachers to be learners so that we can be leaders in our teaching practice. Through this workshop, the kinematics of a falling ball and of projectile motion are explored using video analysis and, later, video modeling. We hope to lead and inspire other teachers by facilitating their experiences with this ICT-enabled video modeling pedagogy (Brown, 2008) and free tool for facilitating student-centered active learning, thus motivating students to be more self-directed.

  18. Flux-P: Automating Metabolic Flux Analysis

    OpenAIRE

    Ebert, Birgitta E.; Anna-Lena Lamprecht; Bernhard Steffen; Blank, Lars M.

    2012-01-01

    Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly but have to be estimated, for instance via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in ...

  19. Multimodal Semantic Analysis and Annotation for Basketball Video

    Science.gov (United States)

    Liu, Song; Xu, Min; Yi, Haoran; Chia, Liang-Tien; Rajan, Deepu

    2006-12-01

    This paper presents a new multiple-modality method for extracting semantic information from basketball video. The visual, motion, and audio information are extracted from video to first generate low-level video segmentation and classification. Domain knowledge is further exploited for detecting interesting events in the basketball video. For video, both visual and motion prediction information are utilized in the shot and scene boundary detection algorithm, followed by scene classification. For audio, audio keysounds are sets of specific audio sounds related to semantic events, and a classification method based on hidden Markov models (HMMs) is used for audio keysound identification. Subsequently, by analyzing the multimodal information, the positions of potential semantic events, such as "foul" and "shot at the basket," are located with additional domain knowledge. Finally, a video annotation is generated according to MPEG-7 multimedia description schemes (MDSs). Experimental results demonstrate the effectiveness of the proposed method.

  20. Analysis of YouTube™ videos related to bowel preparation for colonoscopy

    Institute of Scientific and Technical Information of China (English)

    Basch, Corey Hannah; Hillyer, Grace Clarke; Reeves, Rachel; Basch, Charles E.

    2014-01-01

    AIM: To examine YouTube™ videos about the bowel preparation procedure to better understand the quality of this information on the Internet. METHODS: YouTube™ videos related to colonoscopy preparation were identified during the winter of 2014; only those with ≥ 5000 views were selected for analysis (n = 280). Creator of the video, length, date posted, whether the video was based upon personal experience, and theme were recorded. Bivariate analysis was conducted to examine differences between consumer- vs healthcare professional-created videos. RESULTS: Most videos were based on personal experience. Half were created by consumers and 34% were ≥ 4.5 min long. Healthcare professional videos were viewed more often (> 19400 views, 59.4% vs 40.8%, P = 0.037, for healthcare professional and consumer, respectively) and more often focused on the purgative type and completing the preparation. Consumer videos received more comments (> 10 comments, 62.2% vs 42.7%, P = 0.001) and more often emphasized the palatability of the purgative, disgust, and hunger during the procedure. Content of colonoscopy bowel preparation YouTube™ videos is influenced by who creates the video and may affect views on colon cancer screening. CONCLUSION: The impact of perspectives on the quality of health-related information found on the Internet requires further examination.

  1. A Mixed Approach Of Automated ECG Analysis

    Science.gov (United States)

    De, A. K.; Das, J.; Majumder, D. Dutta

    1982-11-01

    ECG is a non-invasive and risk-free technique for collecting data about the functional state of the heart. However, the associated data-processing techniques can be classified into two basically different approaches -- the first- and second-generation ECG computer programs. Not the opposition, but the symbiosis of these two approaches will lead to systems with the highest accuracy. In this paper we describe a mixed approach which shows higher accuracy with a smaller amount of computational work. Key Words: Primary features, Patients' parameter matrix, Screening, Logical comparison technique, Multivariate statistical analysis, Mixed approach.

  2. The experiments and analysis of several selective video encryption methods

    Science.gov (United States)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, operating on the slices, the I-frames, the motion vectors, and the DCT coefficients, respectively. We use AES as the underlying cipher in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be selected arbitrarily and is designed using the double-limit counting method, so the accuracy can be increased.
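
    A toy illustration of the selective principle, encrypting only the byte spans of chosen syntax elements while leaving the rest of the stream in the clear, is sketched below assuming the PyCryptodome library; the stream layout and span choice are fabricated for the example.

```python
# Selective encryption toy: AES-CTR over chosen byte spans only.
# Uses PyCryptodome (pip install pycryptodome).
import os
from Crypto.Cipher import AES

key, nonce = os.urandom(16), os.urandom(8)  # in real use, fresh nonce per stream

def encrypt_selected(stream: bytes, ranges):
    """ranges: list of (start, end) byte spans to encrypt, e.g. I-frame slices."""
    out = bytearray(stream)
    cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
    for start, end in ranges:
        out[start:end] = cipher.encrypt(bytes(out[start:end]))
    return bytes(out)

stream = b"HDR" + b"IFRAMEDATA" * 3 + b"PFRAMEDATA" * 5
protected = encrypt_selected(stream, [(3, 33)])   # encrypt only the I-frame span
assert protected[:3] == b"HDR" and protected[33:] == stream[33:]
```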

  3. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories currently have modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 hematological slides were analyzed. Automated analysis was performed on last-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. The microscopy was performed by two experts in microscopy simultaneously. Results: The data showed that only 42.70% were concordant, compared with 57.30% discordant. The main discordant findings were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of an individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: It was observed that qualitative microscopic analysis is of fundamental importance and must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  4. Automated Analysis of Security in Networking Systems

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2004-01-01

    It has for a long time been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools...... will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis...... attacks, and attacks launched by insiders. Finally, the perspectives for the application of the analysis techniques are discussed, thereby coming a small step closer to providing developers with easy-to-use tools for validating the security of networking applications....

  5. Automated analysis for lifecycle assembly processes

    Energy Technology Data Exchange (ETDEWEB)

    Calton, T.L.; Brown, R.G.; Peters, R.R.

    1998-05-01

    Many manufacturing companies today expend more effort on upgrade and disposal projects than on clean-slate design, and this trend is expected to become more prevalent in coming years. However, commercial CAD tools are better suited to initial product design than to the product's full life cycle. Computer-aided analysis, optimization, and visualization of life cycle assembly processes based on the product CAD data can help ensure accuracy and reduce the effort expended in planning these processes for existing products, as well as provide design-for-lifecycle analysis for new designs. To be effective, computer-aided assembly planning systems must allow users to express the plan selection criteria that apply to their companies and products as well as to the life cycles of their products. Designing products for easy assembly and disassembly during their entire life cycle, for purposes including service, field repair, upgrade, and disposal, is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and its intended life cycle. Different goals and constraints (compared to initial assembly) require one to revisit the significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes.

  6. Overhead spine arch analysis of dairy cows from three-dimensional video

    Science.gov (United States)

    Abdul Jabbar, K.; Hansen, M. F.; Smith, M. L.; Smith, L. N.

    2017-02-01

    We present a spine arch analysis method for dairy cows using overhead 3D video data, aimed at early-stage lameness detection. Early detection is important because it allows early treatment, thus reducing animal suffering and minimizing the high forecast financial losses caused by lameness. Our physical data collection setup is non-intrusive, covert and designed to allow full automation; therefore, it could be implemented on a large scale or on a daily basis with high accuracy. We track the animal's spine using the shape index and curvedness measure from the 3D surface as the cow walks freely under the 3D camera. Our spinal analysis focuses on the thoracic vertebrae region, where we found most of the arching caused by lameness. A cubic polynomial is fitted to analyze the arch and estimate locomotion soundness. We found more accurate results by eliminating the effect of regular neck/head movements from the arch. Using a 22-cow data set, we are able to achieve an early-stage lameness detection accuracy of 95.4%.
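
    The arch-quantification step can be sketched as a cubic fit to the spine height profile, with arch depth relative to a straight back serving as the soundness indicator. The numbers below are illustrative, not the paper's calibration.

```python
# Cubic-polynomial arch quantification sketch; thresholds are illustrative.
import numpy as np

def arch_score(x, spine_height):
    """x: positions along the back; spine_height: 3D surface height of spine."""
    coeffs = np.polyfit(x, spine_height, deg=3)       # cubic fit, as in the paper
    fitted = np.polyval(coeffs, x)
    baseline = np.linspace(fitted[0], fitted[-1], len(x))
    return float(np.max(np.abs(fitted - baseline)))   # arch depth vs. flat back

x = np.linspace(0, 1, 50)
sound = 0.01 * np.sin(2 * np.pi * x)                  # nearly flat thoracic region
lame = 0.05 * np.sin(np.pi * x)                       # pronounced arch
print(arch_score(x, sound), arch_score(x, lame))      # lame cow scores higher
```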

  7. An Innovative Requirements Solution: Combining Six Sigma KJ Language Data Analysis with Automated Content Analysis

    Science.gov (United States)

    2009-03-01

    [DTIC report documentation page residue; no abstract is recoverable. Title: An Innovative Requirements Solution: Combining Six Sigma KJ Language Data Analysis with Automated Content Analysis. Carnegie Mellon University, 2008.]

  8. Automated morphological analysis approach for classifying colorectal microscopic images

    Science.gov (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.

    2003-10-01

    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis, allowing a high degree of accuracy and thus reliable decisions. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancerous colon mucosa. They were derived after a series of pre-processing steps to generate a set of different shape measurements. Based on shape and size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used first to remove background noise from segmented images and then to obtain different morphological measures describing the shape, size, and texture of colon glands. The proposed automated system is tested by classifying 102 microscopic samples of colorectal tissue, consisting of 44 normal colon mucosa and 58 cancerous samples. The results were first statistically evaluated using the one-way ANOVA method in order to examine the significance of each extracted feature. Significant features were then selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in tissue morphology at low magnification can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective and, more importantly, of being a valuable diagnostic decision support tool.
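
    Most of the listed features map directly onto region properties available in scikit-image, as the sketch below shows; treating Elongation and Shape Factor AR as major/minor axis ratios is an assumption made for illustration.

```python
# Morphological feature extraction from a segmented gland mask (scikit-image).
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 20:44] = 1                      # stand-in for one segmented gland

for region in regionprops(label(mask)):
    elongation = region.major_axis_length / max(region.minor_axis_length, 1e-9)
    features = {
        "euler_number": region.euler_number,
        "equivalent_diameter": region.equivalent_diameter,
        "solidity": region.solidity,
        "extent": region.extent,
        "elongation": elongation,
        "shape_factor_ar": elongation,      # assumed aspect-ratio definition
    }
    print(features)
```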

  9. Automation of Large-scale Computer Cluster Monitoring Information Analysis

    Science.gov (United States)

    Magradze, Erekle; Nadal, Jordi; Quadt, Arnulf; Kawamura, Gen; Musheghyan, Haykuhi

    2015-12-01

    High-throughput computing platforms consist of a complex infrastructure and provide a number of services prone to failures. To mitigate the impact of failures on the quality of the provided services, constant monitoring and timely reaction are required, which is impossible without automation of the system administration processes. This paper introduces a way to automate the analysis of monitoring information in order to provide long- and short-term predictions of the service response time (SRT) for mass storage and batch systems, and to identify the status of a service at a given time. The approach for the SRT predictions is based on the Adaptive Neuro-Fuzzy Inference System (ANFIS). An evaluation of the approaches is performed on real monitoring data from the WLCG Tier 2 center GoeGrid. Ten-fold cross-validation results demonstrate the high efficiency of both approaches in comparison to known methods.

  10. Toward an Analysis of Video Games for Mathematics Education

    Science.gov (United States)

    Offenholley, Kathleen

    2011-01-01

    Video games have tremendous potential in mathematics education, yet there is a push to simply add mathematics to a video game without regard to whether the game structure suits the mathematics, and without regard to the level of mathematical thought being learned in the game. Are students practicing facts, or are they problem-solving? This paper…

  11. Principal components null space analysis for image and video classification.

    Science.gov (United States)

    Vaswani, Namrata; Chellappa, Rama

    2006-07-01

    We present a new classification algorithm, principal component null space analysis (PCNSA), which is designed for classification problems like object recognition where different classes have unequal and nonwhite noise covariance matrices. PCNSA first obtains a principal components subspace (PCA space) for the entire data. In this PCA space, it finds for each class "i," an Mi-dimensional subspace along which the class' intraclass variance is the smallest. We call this subspace an approximate null space (ANS) since the lowest variance is usually "much smaller" than the highest. A query is classified into class "i" if its distance from the class' mean in the class' ANS is a minimum. We derive upper bounds on classification error probability of PCNSA and use these expressions to compare classification performance of PCNSA with that of subspace linear discriminant analysis (SLDA). We propose a practical modification of PCNSA called progressive-PCNSA that also detects "new" (untrained classes). Finally, we provide an experimental comparison of PCNSA and progressive PCNSA with SLDA and PCA and also with other classification algorithms-linear SVMs, kernel PCA, kernel discriminant analysis, and kernel SLDA, for object recognition and face recognition under large pose/expression variation. We also show applications of PCNSA to two classification problems in video--an action retrieval problem and abnormal activity detection.
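
    A compact numpy sketch of the PCNSA recipe as described: a global PCA basis, a per-class approximate null space spanned by the lowest-variance eigenvectors of the class covariance, and nearest-mean classification measured inside each class's ANS. Function names and the demo data are illustrative.

```python
# PCNSA sketch: global PCA, per-class approximate null space, nearest mean.
import numpy as np

def fit_pcnsa(X, y, n_pca=20, n_ans=5):
    mu = X.mean(axis=0)
    U = np.linalg.svd(X - mu, full_matrices=False)[2][:n_pca].T  # PCA basis
    Z = (X - mu) @ U
    model = {"mu": mu, "U": U, "classes": {}}
    for c in np.unique(y):
        Zc = Z[y == c]
        evals, evecs = np.linalg.eigh(np.cov(Zc, rowvar=False))  # ascending
        W = evecs[:, :n_ans]                 # lowest intraclass-variance directions
        model["classes"][c] = (Zc.mean(axis=0), W)
    return model

def predict_pcnsa(model, x):
    z = (x - model["mu"]) @ model["U"]
    dists = {c: np.linalg.norm((z - m) @ W)  # distance inside the class's ANS
             for c, (m, W) in model["classes"].items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(5 * i, 1.0, size=(40, 50)) for i in range(3)])
y = np.repeat([0, 1, 2], 40)
m = fit_pcnsa(X, y, n_pca=10, n_ans=3)
print(predict_pcnsa(m, X[0]), predict_pcnsa(m, X[50]))  # -> 0 1
```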

  12. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    Science.gov (United States)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are also compared with scores from a database resulting from subjective experiments.

  13. Critical Discourse Analysis of City Cultural Identity Construction—A Case Study of Video Xitang

    Institute of Scientific and Technical Information of China (English)

    李漾

    2012-01-01

    With the rapid development of science and technology, video has become a very important way for a city to present itself. This paper, based on Fairclough's three-dimensional conception of discourse, analyzes the on-screen titles of the video of Xitang to reveal the ideology behind it. Through the analysis, it points out that this video, while protecting Xitang's own unique traditional characteristics, tries to cater to the needs of not only Chinese people but also foreigners. To some extent, it tries to promote Xitang itself.

  14. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  15. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate pertinent test result values has several advantages: (1) allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, (2) eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, (3) lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, and (4) providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: (1) ASTM C1340/C1340M-10, Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; (2) ASTM F2815, Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; (3) ASTM E2807, Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the validity of equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense for the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  16. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
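
    The first algorithm's core idea, recovering global camera motion from the motion vector field, can be sketched by robustly aggregating the vectors; the median shown below is a stand-in for whatever estimator the paper actually uses.

```python
# Global pan estimation from an MPEG-2 motion vector field (sketch).
import numpy as np

def estimate_pan(mv_field: np.ndarray) -> np.ndarray:
    """mv_field: (H, W, 2) motion vectors of one P-frame, in pixels."""
    vectors = mv_field.reshape(-1, 2)
    return np.median(vectors, axis=0)   # median resists moving foreground objects

mv = np.tile(np.array([3.0, -1.0]), (18, 22, 1))   # uniform pan
mv[5:9, 5:9] = [-8.0, 4.0]                          # a moving object disagrees
print(estimate_pan(mv))                             # -> [ 3. -1.]
```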

  17. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades the quality. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound image and video, as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters:

  18. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    Science.gov (United States)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video-based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response is assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
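
    The ZNCC metric itself is standard and easy to sketch: normalize both patches to zero mean and unit energy, then take their inner product; scanning the metric over candidate displacements yields the motion estimate. The frame sizes and search range below are illustrative.

```python
# ZNCC patch matching sketch for displacement tracking between frames.
import numpy as np

def zncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(2)
frame0 = rng.random((120, 160))
frame1 = np.roll(frame0, shift=3, axis=0)           # bridge moved 3 px vertically
template = frame0[40:56, 60:76]
scores = [zncc(template, frame1[40 + dy:56 + dy, 60:76]) for dy in range(-5, 6)]
print(np.argmax(scores) - 5)                        # best match at dy = 3
```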

  19. BioFoV - An open platform for forensic video analysis and biometric data extraction

    DEFF Research Database (Denmark)

    Almeida, Miguel; Correia, Paulo Lobato; Larsen, Peter Kastmand

    2016-01-01

    Forensic experts, police officers and others working with forensic video analysis can benefit from more 'intelligent' video handling tools, such as automatic event/face detection, for instance to help search the relevant video footage prior to the actual case work. The ideal would be to have access...... to tailor-made software, based on state of art knowledge in fields such as soft biometrics, gait recognition, photogrammetry, etc. This paper proposes an open and extensible platform, BioFoV (Biometric Forensic Video tool), for forensic video analysis and biometric data extraction, aiming to host some...... of the developments that researchers come up with for solving specific problems, but that are often not shared with the community. BioFoV includes a simple to use Graphical User Interface (GUI), is implemented with open software that can run in multiple software platforms, and its implementation is publicly available....

  20. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of its functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of huge amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are summarized automatically in a PowerPoint presentation. The calculated features are written to a standardized Excel sheet, ready for statistical analysis.

  1. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two...... (SVR). For validation purposes, the proposed method was tested on two databases. In both cases good performance compared with state of the art full, reduced, and no-reference VQA algorithms was achieved....

  2. Automated monitoring of activated sludge using image analysis

    OpenAIRE

    Motta, Maurício da; M. N. Pons; Roche, N; A.L. Amaral; Ferreira, E. C.; Alves, M.M.; Mota, M.; Vivier, H.

    2000-01-01

    An automated procedure for the characterisation by image analysis of the morphology of activated sludge has been used to monitor the biomass in wastewater treatment plants in a systematic manner. Over a period of one year, variations, mainly in the fractal dimension of flocs and in the amount of filamentous bacteria, could be related to rain events affecting the plant influent flow rate and composition.

  3. A computerized system for video analysis of the aortic valve.

    Science.gov (United States)

    Vesely, I; Menkis, A; Campbell, G

    1990-10-01

    A novel technique was developed to study the dynamic behavior of the porcine aortic valve in an isolated heart preparation. Under the control of a personal computer, a video frame grabber board continuously acquired and digitized images of the aortic valve, and an analog-to-digital (A/D) converter read four channels of physiological data (flow rate, aortic and ventricular pressure, and aortic root diameter). The valve was illuminated with a strobe light synchronized to fire at the field acquisition rate of the CCD video camera. Using the overlay bits in the video board, the measured parameters were super-imposed over the live video as graphical tracing, and the resultant composite images were recorded on-line to video tape. The overlaying of the valve images with the graphical tracings of acquired data enabled the data tracings to be precisely synchronized with the video images of the aortic valve. This technique enabled us to observe the relationship between aortic root expansion and valve function.

  4. Simulation and Analysis of Digital Video Watermarking Using MPEG-2

    Directory of Open Access Journals (Sweden)

    Dr. Anil Kumar Sharma,

    2011-07-01

    Quantization Index Modulation (QIM) is an important method for embedding digital watermark information into a host signal. This technique achieves very efficient tradeoffs among the watermark embedding rate, the amount of embedding-induced distortion to the host signal, and the robustness to intentional or unintentional attacks. Most video watermarking schemes have been proposed for uncompressed video. This paper introduces a compressed-video watermarking procedure that reduces computation. In a video frame, the luminance component is an important factor in which little change can be made without disturbing the original data. The MPEG-2 video compression technique is based on a macroblock structure, motion compensation and conditional replenishment of macroblocks. To achieve high compression, motion compensation is employed with P-frames, and the Discrete Cosine Transform (DCT) always exists in the video stream, providing high robustness. In this work, the QIM technique embeds the watermark in the DC component of the chroma DCT of P-frames. The robustness of the proposed method has been studied through simulation.
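
    A minimal sketch of the QIM embedding itself: each watermark bit selects one of two interleaved quantization lattices for the host coefficient (here, stand-in DC coefficients); the step size DELTA is illustrative.

```python
# QIM embed/extract sketch on stand-in DC coefficients.
import numpy as np

DELTA = 8.0  # quantization step; larger = more robust but more distortion

def qim_embed(coeffs, bits):
    offsets = np.where(np.asarray(bits) == 0, 0.0, DELTA / 2)
    return np.round((coeffs - offsets) / DELTA) * DELTA + offsets

def qim_extract(coeffs):
    dist0 = np.abs(coeffs - np.round(coeffs / DELTA) * DELTA)
    dist1 = np.abs(coeffs - (np.round((coeffs - DELTA/2) / DELTA) * DELTA + DELTA/2))
    return (dist1 < dist0).astype(int)

dc = np.array([100.3, 57.8, 210.1, 33.3])   # stand-in chroma DC coefficients
bits = [1, 0, 1, 0]
marked = qim_embed(dc, bits)
assert list(qim_extract(marked)) == bits    # survives noise below DELTA/4
```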

  5. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    The recent development of three-dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into the capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that the interaction modality affects users' choice of object selection in terms of the chosen location in 3D, while user attitudes do not have a significant impact. Furthermore, the ray-casting-based interaction modality using the Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  6. Analysis of automated highway system risks and uncertainties. Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  7. Automated Digital Analysis Of Holographic Interferograms Of Pure Translations

    Science.gov (United States)

    Choudry, A.; Frankena, H. J.; van Beek, J. W.

    1983-10-01

    Holographic interferometry is a versatile technique for non-tactile measurement of changes in a wide variety of physical variables such as temperature, strain, position etc. It has a great potential for becoming an important metrologic technique in industrial applications. For holographic interferometry to become more attractive for industrial practice the problem of quantitative analysis of the patterns and thereby eliciting reliable values of the relevant parameters has to be addressed. In an attempt to calibrate the technique of holographic interferometry and ascertain the reliability of the subsequent digital analysis, we have chosen precisely known translations as a basis. Holographic interferograms taken from these are analysed manually and by digital techniques specially developed for such patterns. The results are promising enough to indicate the feasibility of automated digital analysis for determining translations within an acceptable accuracy. Some details of the evaluation techniques, along with a brief discussion of the preliminary results are presented.

  8. Power consumption analysis of constant bit rate video transmission over 3G networks

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Wang, Le

    2012-01-01

    This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes the description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis for the 3GPP transition state machine that allows decreasing power consumption on a mobile device, taking signaling traffic, buffer size and latency restrictions into account. Furthermore, we discuss the gain in power consumption vs. PSNR for transmitted video and show the possibility of performing power consumption management based on the requirements for the video quality.

  9. Video Frames Reconstruction Based on Time-Frequency Analysis and Hermite Projection Method

    Directory of Open Access Journals (Sweden)

    Krylov Andrey

    2010-01-01

    A method for temporal analysis and reconstruction of video sequences based on time-frequency analysis and the Hermite projection method is proposed. The S-method-based time-frequency distribution is used to characterize stationarity within the sequence. Namely, a sequence of DCT coefficients along the time axis is used to create a frequency-modulated signal. The reconstruction of nonstationary sequences is done using the Hermite expansion coefficients. Here, a small number of Hermite coefficients can be used, which may provide significant savings for some video-based applications. The results are illustrated with video examples.

  10. Trends in biomedical informatics: automated topic analysis of JAMIA articles.

    Science.gov (United States)

    Han, Dong; Wang, Shuang; Jiang, Chao; Jiang, Xiaoqian; Kim, Hyeon-Eui; Sun, Jimeng; Ohno-Machado, Lucila

    2015-11-01

    Biomedical Informatics is a growing interdisciplinary field in which research topics and citation trends have been evolving rapidly in recent years. To analyze these data in a fast, reproducible manner, automation of certain processes is needed. JAMIA is a "generalist" journal for biomedical informatics. Its articles reflect the wide range of topics in informatics. In this study, we retrieved Medical Subject Headings (MeSH) terms and citations of JAMIA articles published between 2009 and 2014. We use tensors (i.e., multidimensional arrays) to represent the interaction among topics, time and citations, and applied tensor decomposition to automate the analysis. The trends represented by tensors were then carefully interpreted and the results were compared with previous findings based on manual topic analysis. A list of most cited JAMIA articles, their topics, and publication trends over recent years is presented. The analyses confirmed previous studies and showed that, from 2012 to 2014, the number of articles related to MeSH terms Methods, Organization & Administration, and Algorithms increased significantly both in number of publications and citations. Citation trends varied widely by topic, with Natural Language Processing having a large number of citations in particular years, and Medical Record Systems, Computerized remaining a very popular topic in all years.
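
    The topic-by-time-by-citation analysis can be sketched with an off-the-shelf CP (PARAFAC) decomposition, assuming the TensorLy library; the tensor below is synthetic and the rank is chosen arbitrarily for illustration.

```python
# CP decomposition of a topic x year x citation-bin tensor (TensorLy).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(3)
n_topics, n_years, n_cite_bins = 30, 6, 5      # e.g. MeSH terms, 2009-2014
T = tl.tensor(rng.poisson(2.0, size=(n_topics, n_years, n_cite_bins)).astype(float))

weights, factors = parafac(T, rank=4)          # 4 latent publication trends
topic_f, year_f, cite_f = factors
for r in range(4):
    top_topics = np.argsort(topic_f[:, r])[-3:]  # topics loading on trend r
    print(f"trend {r}: topics {top_topics}, yearly profile {year_f[:, r].round(2)}")
```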

  11. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  12. Block Based Video Watermarking Scheme Using Wavelet Transform and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2012-01-01

    Full Text Available In this paper, a comprehensive approach for digital video watermarking is introduced, where a binary watermark image is embedded into the video frames. Each video frame is decomposed into sub-images using a 2-level discrete wavelet transform, then the Principal Component Analysis (PCA) transformation is applied to each block in the two bands LL and HH. The watermark is embedded into the maximum coefficient of the PCA block of the two bands. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility, with no noticeable difference between the watermarked video frames and the original frames. The computed PSNR reaches a high score of 44.097 dB. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment.
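
    The following is a minimal sketch of wavelet-domain embedding in the spirit of the scheme above, assuming pywt and numpy; it embeds a single bit additively in the 2-level LL band and omits the per-block PCA step, so it is a simplified illustration rather than the authors' algorithm.

```python
# Sketch: embed one watermark bit in the 2-level LL wavelet band of a frame.
# Assumes pywt/numpy; the per-block PCA step of the paper is omitted here.
import numpy as np
import pywt

def embed_bit(frame, bit, alpha=10.0):
    """Additively embed a single bit into a grayscale frame (float array)."""
    coeffs = pywt.wavedec2(frame, "haar", level=2)
    ll2 = coeffs[0]  # approximation (LL) band after two decomposition levels
    i, j = np.unravel_index(np.abs(ll2).argmax(), ll2.shape)
    ll2[i, j] += alpha if bit else -alpha  # modulate the strongest coefficient
    return pywt.waverec2(coeffs, "haar")

frame = np.random.default_rng(1).uniform(0, 255, (64, 64))
marked = embed_bit(frame, bit=1)
print("max pixel change:", np.abs(marked - frame).max())  # imperceptibility check
```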

  13. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs.

    Science.gov (United States)

    Lane, D M; Hill, S A; Huntingford, J L; Lafuente, P; Wall, R; Jones, K A

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs, largely because the financial, temporal and spatial requirements of existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs: 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, showing the dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness and which limb(s) they felt represented the source of the lameness. Spearman's rho, Cramer's V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters' SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA. Further research is required to determine if slow motion video will benefit SGA in other ways.

  14. Widely applicable MATLAB routines for automated analysis of saccadic reaction times.

    Science.gov (United States)

    Leppänen, Jukka M; Forssman, Linda; Kaatiala, Jussi; Yrttiaho, Santeri; Wass, Sam

    2015-06-01

    Saccadic reaction time (SRT) is a widely used dependent variable in eye-tracking studies of human cognition and its disorders. SRTs are also frequently measured in studies with special populations, such as infants and young children, who are limited in their ability to follow verbal instructions and remain in a stable position over time. In this article, we describe a library of MATLAB routines (Mathworks, Natick, MA) that are designed to (1) enable completely automated implementation of SRT analysis for multiple data sets and (2) cope with the unique challenges of analyzing SRTs from eye-tracking data collected from poorly cooperating participants. The library includes preprocessing and SRT analysis routines. The preprocessing routines (i.e., moving median filter and interpolation) are designed to remove technical artifacts and missing samples from raw eye-tracking data. The SRTs are detected by a simple algorithm that identifies the last point of gaze in the area of interest, but, critically, the extracted SRTs are further subjected to a number of postanalysis verification checks to exclude values contaminated by artifacts. Example analyses of data from 5- to 11-month-old infants demonstrated that SRTs extracted with the proposed routines were in high agreement with SRTs obtained manually from video records, robust against potential sources of artifact, and exhibited moderate to high test-retest stability. We propose that the present library has wide utility in standardizing and automating SRT-based cognitive testing in various populations. The MATLAB routines are open source and can be downloaded from http://www.uta.fi/med/icl/methods.html.
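
    A minimal Python sketch of the preprocessing-plus-detection idea follows (the published routines are MATLAB); the filter width, AOI bounds, and sampling rate are illustrative assumptions.

```python
# Sketch: interpolate missing gaze samples, moving-median filter, then take the
# last point of gaze in the area of interest (AOI) as the SRT. Assumes
# numpy/scipy; AOI bounds, sampling rate and kernel width are illustrative.
import numpy as np
from scipy.signal import medfilt

def detect_srt(gaze_x, aoi=(0.0, 0.3), fs=300.0, kernel=5):
    """Return the saccadic reaction time in seconds, or None to reject the trial."""
    x = np.asarray(gaze_x, dtype=float)
    idx = np.arange(x.size)
    good = ~np.isnan(x)
    x = np.interp(idx, idx[good], x[good])   # fill dropped samples
    x = medfilt(x, kernel_size=kernel)       # suppress technical artifacts
    inside = (x >= aoi[0]) & (x <= aoi[1])
    if not inside[0]:
        return None                          # gaze did not start in the AOI
    last_in = np.where(inside)[0].max()      # last point of gaze in the AOI
    return last_in / fs

gaze = np.concatenate([np.full(60, 0.1), np.linspace(0.1, 0.9, 30)])
gaze[10] = np.nan                            # a dropped sample
print("SRT (s):", detect_srt(gaze))
```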

  15. Automated drawing of network plots in network meta-analysis.

    Science.gov (United States)

    Rücker, Gerta; Schwarzer, Guido

    2016-03-01

    In systematic reviews based on network meta-analysis, the network structure should be visualized. Network plots have often been drawn by hand using generic graphical software. A typical way of drawing networks, also implemented in statistical software for network meta-analysis, is a circular representation, often with many crossing lines. We use methods from graph theory to generate network plots in an automated way. We give a number of requirements for graph drawing and present an algorithm that fits prespecified ideal distances between the nodes representing the treatments. The method was implemented in the function netgraph of the R package netmeta and applied to a number of networks from the literature. We show that graph representations with a small number of crossing lines are often preferable to circular representations.
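
    An analogous sketch in Python follows, since the paper's implementation is the R function netgraph in package netmeta; networkx's Kamada-Kawai layout likewise places nodes by fitting prespecified ideal pairwise distances, and the treatment network here is hypothetical.

```python
# Sketch: draw a treatment network with a distance-fitting (Kamada-Kawai)
# layout instead of a circle. Assumes networkx/matplotlib; the treatment
# comparisons below are hypothetical.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
# edges = treatment pairs compared directly in at least one trial
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

pos = nx.kamada_kawai_layout(G)  # stress-minimising, fits ideal pairwise distances
nx.draw_networkx(G, pos, node_color="lightblue", node_size=900)
plt.axis("off")
plt.show()
```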

  16. Automated eigensystem realisation algorithm for operational modal analysis

    Science.gov (United States)

    Zhang, Guowen; Ma, Jinghua; Chen, Zhuo; Wang, Ruirong

    2014-07-01

    The eigensystem realisation algorithm (ERA) is one of the most popular methods in civil engineering applications for estimating modal parameters. Three issues are addressed in this paper: spurious mode elimination, estimating the energy relationship between different modes, and automatic analysis of the stabilisation diagram. For spurious mode elimination, a new criterion, the modal similarity index (MSI), is proposed to measure the reliability of the modes obtained by ERA. For estimating the energy relationship between different modes, the mode energy level (MEL) is introduced to measure the energy contribution of each mode, which can be used to indicate the dominant mode. For automatic analysis of the stabilisation diagram, the mode selection process is automated with a hierarchical clustering algorithm. An experimental example of parameter estimation for the Chaotianmen bridge model in Chongqing, China, is presented to demonstrate the efficacy of the proposed method.
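
    A minimal sketch of the stabilisation-diagram clustering step follows, assuming scipy; the pole features, cutoff distance, and cluster-size rule are illustrative and do not reproduce the paper's MSI/MEL criteria.

```python
# Sketch: hierarchical clustering of (frequency, damping) poles from repeated
# ERA runs; clusters that recur across model orders indicate physical modes.
# Assumes scipy; the synthetic poles, cutoff and size rule are illustrative.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
poles = np.vstack([
    rng.normal([1.5, 1.0], [0.01, 0.05], (20, 2)),  # a stable physical mode
    rng.normal([3.2, 1.4], [0.01, 0.05], (20, 2)),  # another stable mode
    rng.uniform([0.5, 0.0], [5.0, 5.0], (15, 2)),   # scattered spurious poles
])

Z = linkage(poles, method="average")
labels = fcluster(Z, t=0.3, criterion="distance")
for lab in np.unique(labels):
    members = poles[labels == lab]
    if len(members) >= 10:  # repeated across many model orders -> likely physical
        print(f"candidate mode: f = {members[:, 0].mean():.2f} Hz, "
              f"damping = {members[:, 1].mean():.2f} %")
```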

  17. Subjective Analysis and Objective Characterization of Adaptive Bitrate Videos

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Tavakoli, Samira; Brunnström, Kjell;

    2016-01-01

    the factors influencing subjective QoE of adaptation events. However, adapting the video quality typically lasts on a time scale much longer than what current standardized subjective testing methods are designed for, thus making the full matrix design of the experiment on an event level hard to achieve... mean opinion score (MOS) and the MOS from shorter sequences. The aforementioned empirical dataset has proven to be very challenging in terms of video quality assessment test design, thus deriving a conclusive outcome about the influence of different parameters has been difficult. The second...

  18. AutoGate: automating analysis of flow cytometry data.

    Science.gov (United States)

    Meehan, Stephen; Walther, Guenther; Moore, Wayne; Orlova, Darya; Meehan, Connor; Parks, David; Ghosn, Eliver; Philips, Megan; Mitsunaga, Erin; Waters, Jeffrey; Kantor, Aaron; Okamura, Ross; Owumi, Solomon; Yang, Yang; Herzenberg, Leonard A; Herzenberg, Leonore A

    2014-05-01

    Nowadays, one can hardly imagine biology and medicine without flow cytometry to measure CD4 T cell counts in HIV, follow bone marrow transplant patients, characterize leukemias, etc. Similarly, without flow cytometry, there would be a bleak future for stem cell deployment, HIV drug development and full characterization of the cells and cell interactions in the immune system. But while flow instruments have improved markedly, the development of automated tools for processing and analyzing flow data has lagged sorely behind. To address this deficit, we have developed automated flow analysis software technology, provisionally named AutoComp and AutoGate. AutoComp acquires sample and reagent labels from users or flow data files, and uses this information to complete the flow data compensation task. AutoGate replaces the manual subsetting capabilities provided by current analysis packages with newly defined statistical algorithms that automatically and accurately detect, display and delineate subsets in well-labeled and well-recognized formats (histograms, contour and dot plots). Users guide analyses by successively specifying axes (flow parameters) for data subset displays and selecting statistically defined subsets to be used for the next analysis round. Ultimately, this process generates analysis "trees" that can be applied to automatically guide analyses for similar samples. The first AutoComp/AutoGate version is currently in the hands of a small group of users at Stanford, Emory and NIH. When this "early adopter" phase is complete, the authors expect to distribute the software free of charge to .edu, .org and .gov users.

  19. Automated High-Dimensional Flow Cytometric Data Analysis

    Science.gov (United States)

    Pyne, Saumyadipta; Hu, Xinli; Wang, Kui; Rossin, Elizabeth; Lin, Tsung-I.; Maier, Lisa; Baecher-Allan, Clare; McLachlan, Geoffrey; Tamayo, Pablo; Hafler, David; de Jager, Philip; Mesirov, Jill

    Flow cytometry is widely used for single cell interrogation of surface and intracellular protein expression by measuring fluorescence intensity of fluorophore-conjugated reagents. We focus on the recently developed procedure of Pyne et al. (2009, Proceedings of the National Academy of Sciences USA 106, 8519-8524) for automated high- dimensional flow cytometric analysis called FLAME (FLow analysis with Automated Multivariate Estimation). It introduced novel finite mixture models of heavy-tailed and asymmetric distributions to identify and model cell populations in a flow cytometric sample. This approach robustly addresses the complexities of flow data without the need for transformation or projection to lower dimensions. It also addresses the critical task of matching cell populations across samples that enables downstream analysis. It thus facilitates application of flow cytometry to new biological and clinical problems. To facilitate pipelining with standard bioinformatic applications such as high-dimensional visualization, subject classification or outcome prediction, FLAME has been incorporated with the GenePattern package of the Broad Institute. Thereby analysis of flow data can be approached similarly as other genomic platforms. We also consider some new work that proposes a rigorous and robust solution to the registration problem by a multi-level approach that allows us to model and register cell populations simultaneously across a cohort of high-dimensional flow samples. This new approach is called JCM (Joint Clustering and Matching). It enables direct and rigorous comparisons across different time points or phenotypes in a complex biological study as well as for classification of new patient samples in a more clinical setting.

  20. The potential of accelerating early detection of autism through content analysis of YouTube videos.

    Directory of Open Access Journals (Sweden)

    Vincent A Fusaro

    Full Text Available Autism is on the rise, with 1 in 88 children receiving a diagnosis in the United States, yet the process for diagnosis remains cumbersome and time consuming. Research has shown that home videos of children can help increase the accuracy of diagnosis. However, the use of videos in the diagnostic process is uncommon. In the present study, we assessed the feasibility of applying a gold-standard diagnostic instrument to brief and unstructured home videos and tested whether video analysis can enable more rapid detection of the core features of autism outside of clinical environments. We collected 100 public videos from YouTube of children ages 1-15 with either a self-reported diagnosis of an ASD (N = 45) or not (N = 55). Four non-clinical raters independently scored all videos using one of the most widely adopted tools for behavioral diagnosis of autism, the Autism Diagnostic Observation Schedule-Generic (ADOS). The classification accuracy was 96.8%, with 94.1% sensitivity and 100% specificity, the inter-rater correlation for the behavioral domains on the ADOS was 0.88, and the diagnoses matched a trained clinician in all but 3 of 22 randomly selected video cases. Despite the diversity of videos and non-clinical raters, our results indicate that it is possible to achieve high classification accuracy, sensitivity, and specificity, as well as clinically acceptable inter-rater reliability, with non-clinical personnel. Our results also demonstrate the potential for video-based detection of autism in short, unstructured home videos, and further suggest that at least a percentage of the effort associated with detection and monitoring of autism may be mobilized and moved outside of traditional clinical environments.

  1. Development of a fully automated online mixing system for SAXS protein structure analysis

    DEFF Research Database (Denmark)

    Nielsen, Søren Skou; Arleth, Lise

    2010-01-01

    This thesis presents the development of an automated high-throughput mixing and exposure system for Small-Angle Scattering analysis on a synchrotron using polymer microfluidics. Software and hardware for automated mixing, exposure control on a beamline, and automated data reduction and preliminary analysis are presented. Three mixing systems that have been the cornerstones of the development process are described, including a fully functioning high-throughput microfluidic system that is able to produce and expose 36 mixed samples per hour using 30 μL of sample volume. The system is tested

  2. Two video analysis applications using foreground/background segmentation

    NARCIS (Netherlands)

    Zivkovic, Z.; Petkovic, M.; Mierlo, van R.; Keulen, van M.; Heijden, van der F.; Jonker, W.; Rijnierse, E.

    2003-01-01

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with

  3. Violent Video Games as Exemplary Teachers: A Conceptual Analysis

    Science.gov (United States)

    Gentile, Douglas A.; Gentile, J. Ronald

    2008-01-01

    This article presents conceptual and empirical analyses of several of the "best practices" of learning and instruction, and demonstrates how violent video games use them effectively to motivate learners to persevere in acquiring and mastering a number of skills, to navigate through complex problems and changing environments, and to experiment with…

  4. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the CatWalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the CatWalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  5. Preliminary studies on an automated 3D fish tracking method based on a single video camera

    Institute of Scientific and Technical Information of China (English)

    徐盼麟; 韩军; 童剑锋

    2012-01-01

    coordinate to world coordinate, the automated tracking algorithm for fish movement, and the automated output of 2D and 3D fish behaviour data. Tests found that when the distance between the camera and the aquarium was 1.5 m, the distortion calibration produced an acceptable pixel error of about 0.1 pixels. Because the camera tilted slightly during the experiment, the shape of the aquarium in the images changed, so the deformation of the images was rectified during the coordinate transform using Free-Form Deformation. We then implemented the Interacting Multiple Model Joint Probabilistic Data Association (IMMJPDA) algorithm to automatically track fish in 3D and output fish behaviour data. The results of a tracking experiment with six Hemigrammus rhodostomus show that the IMMJPDA algorithm can deal with the key issues in a fish tracking system: the method extracts individual fish from video images, constructs their tracks, outputs 3D positions and speeds, and finally generates a complete 3D movement track drawing for fish behaviour analysis. In a dense clutter situation, JPDA requires a fairly large amount of computation to evaluate the joint probabilities, so we combined the Nearest Neighbor and JPDA algorithms to reduce the computational burden.

  6. Hybridization of DCT and SVD in the Implementation and Performance Analysis of Video Watermarking

    Directory of Open Access Journals (Sweden)

    Ved Vyas Dwivedi

    2012-06-01

    Full Text Available In this paper, we document the implementation and performance analysis of a digital video watermarking scheme that combines two of the most powerful transform-domain processing techniques for video with fundamentals of linear algebra. We take into account the fundamentals of the Discrete Cosine Transform and Singular Value Decomposition in developing the proposed algorithm. We first apply the Singular Value Decomposition and then use the singular values to insert the message into the video. Finally, we use two visual quality metrics for the analysis. We also applied various attacks to the video and found the proposed scheme to be robust.
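
    A minimal sketch of the DCT-plus-SVD embedding idea follows, assuming scipy and numpy; the additive perturbation of singular values illustrates the general approach rather than the authors' exact algorithm.

```python
# Sketch: perturb the leading singular values of a frame's DCT with the
# watermark. Assumes scipy/numpy; the additive scaling is illustrative.
import numpy as np
from scipy.fft import dctn, idctn

def embed(frame, watermark, alpha=0.05):
    """Embed a small watermark matrix into a frame via DCT singular values."""
    D = dctn(frame, norm="ortho")                # transform-domain representation
    U, s, Vt = np.linalg.svd(D)                  # singular value decomposition
    k = watermark.size
    s[:k] += alpha * watermark.ravel()           # perturb leading singular values
    return idctn((U * s) @ Vt, norm="ortho")     # recompose and invert the DCT

rng = np.random.default_rng(3)
frame = rng.uniform(0, 255, (64, 64))
wm = rng.integers(0, 2, (4, 4)).astype(float)    # 16-bit binary watermark
marked = embed(frame, wm)
print("max pixel change:", np.abs(marked - frame).max())
```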

  7. VIDEO OBJECT SEGMENTATION BY 2-D MESH-BASED MOTION ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Video object extraction is a key technology in content-based video coding. A novel video object extraction algorithm based on two-dimensional (2-D) mesh-based motion analysis is proposed in this paper. First, a 2-D mesh fitting the original frame image is obtained via a feature detection algorithm. Then, higher order statistics motion analysis is applied to the 2-D mesh representation to get an initial motion detection mask. After post-processing, the final segmentation mask is quickly obtained, and hence the video object is effectively extracted. Experimental results show that the proposed algorithm combines the merits of mesh-based and pixel-based segmentation algorithms, thereby achieving satisfactory subjective and objective performance while dramatically increasing the segmentation speed.

  8. Evaluation of the OSIRIS video reader as an automated measurement system for the agar disk diffusion technique.

    Science.gov (United States)

    Kolbert, M; Chegrani, F; Shah, P M

    2004-05-01

    Measurement of inhibition zones by the automated OSIRIS system was compared with manual measurement. In total, 14,176 measurements were made with 352 staphylococcal and 80 Enterobacteriaceae isolates, involving four panels of antibiotics on round and square Mueller-Hinton agar plates, according to the German DIN 58940 recommendations. Variations of +/- 3 mm in zone size measurements were defined as tolerable. Very major errors (i.e., classification of a resistant isolate as susceptible by the OSIRIS system) occurred in ... The OSIRIS system was a rapid and reliable system for measuring disk susceptibility test results on round and square agar plates.

  9. Barcoding T cell calcium response diversity with methods for automated and accurate analysis of cell signals (MAAACS).

    Directory of Open Access Journals (Sweden)

    Audrey Salles

    Full Text Available We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells.

  10. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    Science.gov (United States)

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124

  11. 14 CFR 1261.413 - Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults.

    Science.gov (United States)

    2010-01-01

    14 CFR 1261.413, Aeronautics and Space (2010-01-01): Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults....

  12. galaxieEST: addressing EST identity through automated phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Larsson Karl-Henrik

    2004-07-01

    Full Text Available Abstract Background Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactory annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process. In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. Results galaxieEST is an open source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. Conclusions By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches
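
    A minimal Python sketch of the same pipeline idea (retrieved homologues, a distance matrix, then a neighbour-joining tree) follows, assuming Biopython; the aligned FASTA file "hits.aln" is a hypothetical input standing in for the BLAST-retrieved sequences.

```python
# Sketch: distance matrix and neighbour-joining tree from an alignment of the
# query EST plus its retrieved homologues. Assumes Biopython; "hits.aln" is a
# hypothetical aligned-FASTA file standing in for the BLAST-retrieved set.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("hits.aln", "fasta")          # query + homologues
dm = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(dm)                # neighbour joining

# the query's placement among annotated hits supports or refutes orthology
Phylo.draw_ascii(tree)
```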

  13. How violent video games communicate violence: A literature review and content analysis of moral disengagement factors

    OpenAIRE

    Hartmann, T.; Krakowiak, M.; Tsay-Vogel, M.

    2014-01-01

    Mechanisms of moral disengagement in violent video game play have recently received considerable attention among communication scholars. To date, however, no study has analyzed the prevalence of moral disengagement factors in violent video games. To fill this research gap, the present approach includes both a systematic literature review and a content analysis of moral disengagement cues embedded in the narratives and actual game play of 17 top-ranked first-person shooters (PC). Findings sugg...

  14. VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.

    Science.gov (United States)

    Ekman, Paul; And Others

    The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…

  15. Socio-phenomenology and conversation analysis: interpreting video lifeworld healthcare interactions.

    Science.gov (United States)

    Bickerton, Jane; Procter, Sue; Johnson, Barbara; Medina, Angel

    2011-10-01

    This article uses a socio-phenomenological methodology to develop knowledge and understanding of the healthcare consultation based on the concept of the lifeworld. It concentrates on social action rather than strategic action and a systems approach. This article argues that patient-centred care is more effective when it is informed by a lifeworld conception of human mutual shared interaction. Videos offer an opportunity for a wide audience to experience the many kinds of conversations and dynamics that take place in consultations. The visual sociology used in this article provides a method to organize video material into emotional, knowledge and action conversations, as well as typical dynamic consultation situations. These interactions are experienced through the video materials themselves, unlike conversation analysis, where video materials are first transcribed and then analysed. Both approaches have the potential to support intersubjective learning, but this article argues that a video lifeworld schema is more accessible to health professionals and the general public. The typical interaction situations are constructed through the analysis of video materials of consultations in a London walk-in centre. Further studies are planned to extend and replicate results in other healthcare services. This method of analysis focuses on the ways in which the everyday lifeworld informs face-to-face person-centred health care and supports social action as a significant factor underpinning strategic action and a systems approach to consultation practice.

  16. Online Nonparametric Bayesian Activity Mining and Analysis From Surveillance Video.

    Science.gov (United States)

    Bastani, Vahid; Marcenaro, Lucio; Regazzoni, Carlo S

    2016-05-01

    A method for online incremental mining of activity patterns from a surveillance video stream is presented in this paper. The framework consists of a learning block in which a Dirichlet process mixture model is employed for the incremental clustering of trajectories. Stochastic trajectory pattern models are formed using Gaussian process regression of the corresponding flow functions. Moreover, a sequential Monte Carlo method based on a Rao-Blackwellized particle filter is proposed for tracking and online classification, as well as for the detection of abnormality during the observation of an object. Experimental results on real surveillance video data are provided to show the performance of the proposed algorithm in the different tasks of trajectory clustering, classification, and abnormality detection.
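
    A minimal sketch of the Dirichlet-process clustering step follows, assuming scikit-learn; BayesianGaussianMixture with a Dirichlet-process prior is a batch stand-in for the paper's incremental model, and the fixed-length trajectory features are illustrative.

```python
# Sketch: Dirichlet-process clustering of fixed-length trajectory features.
# Assumes scikit-learn; this batch model is a stand-in for the paper's
# incremental learner, and the two synthetic "lanes" are illustrative.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(4)
# trajectories resampled to 8 (x, y) points -> 16-dimensional feature vectors
lane1 = rng.normal(0.0, 0.05, (40, 16)) + np.linspace(0, 1, 16)
lane2 = rng.normal(0.0, 0.05, (40, 16)) + np.linspace(1, 0, 16)
X = np.vstack([lane1, lane2])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)
print("activity patterns discovered:", np.unique(dpgmm.predict(X)).size)
```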

  17. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  18. IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.

    Science.gov (United States)

    Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M

    2016-04-01

    Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation, can access all image settings, and provides quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using Java (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with computer handling. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier.

  19. Video-Based Systems Research, Analysis, and Applications Opportunities

    Science.gov (United States)

    1981-07-30

    ... classic films licensed from nearly every major studio ... into separate FM signals for video dual soundtrack or stereo sound and audio. Another ... The equipment for the conversion to the use of micrographic systems is varied and reliable. Cameras available for film/fiche production include the ... rotary camera that can film up to 2500 documents an hour; Bell & Howell's ABR-100 recorder combines the high-quality photography of a planetary camera ...

  20. The Video Genome

    CERN Document Server

    Bronstein, Alexander M; Kimmel, Ron

    2010-01-01

    Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis, such as identifying a video in a large database (e.g. detecting pirated content in YouTube), putting together video fragments, and finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and the analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms makes it possible to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
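
    A minimal sketch of the "video DNA" idea follows, assuming only numpy and the standard library; quantising per-frame features into symbols and aligning the resulting strings with a generic sequence matcher stands in for the bioinformatic algorithms the paper adapts.

```python
# Sketch: quantise per-frame features into a four-letter "video DNA" string and
# align two versions with a generic sequence matcher. Assumes numpy plus the
# standard library; the scalar features are synthetic stand-ins.
import difflib
import numpy as np

def video_dna(features):
    """Map a sequence of scalar frame features in [0, 1] to an ACGT string."""
    q = np.digitize(features, [0.25, 0.5, 0.75])
    return "".join("ACGT"[i] for i in q)

rng = np.random.default_rng(10)
original = rng.random(200)
edited = np.concatenate([original[30:150], rng.random(20)])  # a cut-down version

a, b = video_dna(original), video_dna(edited)
m = difflib.SequenceMatcher(None, a, b, autojunk=False)
match = m.find_longest_match(0, len(a), 0, len(b))
print(f"longest shared segment: {match.size} frames "
      f"(original frames {match.a}..{match.a + match.size - 1})")
```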

  1. Applying shot boundary detection for automated crystal growth analysis during in situ transmission electron microscope experiments

    Energy Technology Data Exchange (ETDEWEB)

    Moeglein, W. A.; Griswold, R.; Mehdi, B. L.; Browning, N. D.; Teuton, J.

    2017-01-03

    In-situ (scanning) transmission electron microscopy (S/TEM) is being developed for numerous applications in the study of nucleation and growth under electrochemical driving forces. For this type of experiment, one of the key parameters is identifying when nucleation initiates. Typically, identifying the moment that crystals begin to form is a manual process requiring the user to make an observation and respond accordingly (adjust focus, magnification, translate the stage, etc.). However, as the speed of the cameras used to perform these observations increases, the ability of a user to “catch” the important initial stage of nucleation decreases (more information is available in the first few milliseconds of the process). Here we show that video shot boundary detection (SBD) can automatically detect frames where a change in the image occurs. We show that this method can be applied to quickly and accurately identify points of change during crystal growth. This technique allows for automated segmentation of a digital stream for further analysis and the assignment of arbitrary time stamps for the initiation of processes that are independent of the user's ability to observe and react.
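
    A minimal sketch of histogram-based shot boundary detection applied to a recorded stream follows, assuming OpenCV; the file name and the chi-square threshold are illustrative and would need tuning to the camera and imaging conditions.

```python
# Sketch: flag frames whose grayscale histogram jumps relative to the previous
# frame, a classic shot-boundary test repurposed as a change detector.
# Assumes OpenCV; "insitu_stem.avi" and the threshold are illustrative.
import cv2

cap = cv2.VideoCapture("insitu_stem.avi")
prev_hist, frame_idx = None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    cv2.normalize(hist, hist)
    if prev_hist is not None:
        # chi-square distance spikes when image content changes abruptly
        d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
        if d > 0.5:
            print(f"possible nucleation/change event at frame {frame_idx}")
    prev_hist, frame_idx = hist, frame_idx + 1
cap.release()
```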

  2. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits, which have a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. In this study, granulometric data were obtained by automated image analysis with a Malvern Morphologi G3-ID, a new and rarely applied technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles from 15 loess and paleosol samples were automatically recorded from the captured high-resolution images. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy for the optical properties of the material. Intensity values depend on the chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments
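
    A minimal sketch of extracting such size and shape descriptors from a segmented particle image follows, assuming scikit-image (some property names vary by version); the synthetic disks stand in for imaged particles, which the Morphologi instrument measures internally.

```python
# Sketch: per-particle size and shape descriptors from a segmented image.
# Assumes scikit-image (some property names differ across versions); the two
# synthetic disks stand in for imaged sediment particles.
import numpy as np
from skimage import measure

image = np.zeros((128, 128))
rr, cc = np.ogrid[:128, :128]
image[(rr - 40) ** 2 + (cc - 40) ** 2 < 15 ** 2] = 1.0   # a large particle
image[(rr - 90) ** 2 + (cc - 80) ** 2 < 9 ** 2] = 1.0    # a smaller particle

labels = measure.label(image > 0.5)
for p in measure.regionprops(labels):
    # circle-equivalent diameter, plus eccentricity and solidity as
    # stand-ins for the elongation and convexity parameters named above
    print(f"CE diameter = {p.equivalent_diameter:.1f} px, "
          f"eccentricity = {p.eccentricity:.2f}, solidity = {p.solidity:.2f}")
```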

  3. Real-time video analysis for retail stores

    Science.gov (United States)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With the advancement in video processing technologies, we can capture subtle human responses in a retail store environment that play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. In this direction, we begin with a novel human tracking system built on an intelligent combination of motion-based and image-level object detection. We demonstrate the initial evaluation of this approach on an available standard dataset, with promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor, named graded colour histogram, is defined for object representation. Using our role-based human classification and tracking system, we have defined a novel, computationally efficient framework for generating two types of analytics: region-specific people counts and dwell-time estimation. This system has been extensively evaluated and tested on four hours of real-life video captured from a retail store.
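
    A minimal sketch of region-specific foreground counting with a Gaussian-mixture background model follows, assuming OpenCV; the file name, region of interest, and blob-area threshold are illustrative store-specific settings, and blob counting is a simplification of the paper's role-based tracker.

```python
# Sketch: count foreground blobs inside one store region using a Gaussian-
# mixture background model. Assumes OpenCV; the file, region and area
# threshold are illustrative, and blob counting simplifies the full tracker.
import cv2

cap = cv2.VideoCapture("store_camera.mp4")
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
roi = (100, 50, 300, 200)  # x, y, w, h of a monitored aisle region

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    x, y, w, h = roi
    region = cv2.medianBlur(mask[y:y + h, x:x + w], 5)
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 800]
    print("foreground blobs in region:", len(blobs))
cap.release()
```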

  4. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.

    Science.gov (United States)

    Dang, Chinh; Radha, Hayder

    2015-11-01

    Key frame extraction algorithms consider the problem of selecting a subset of the most informative frames from a video to summarize its content. Several applications, such as video summarization, search, indexing, and prints from video, can benefit from extracted key frames of the video under consideration. Most approaches in this class of algorithms work directly with the input video data set, without considering the underlying low-rank structure of the data set. Other algorithms exploit the low-rank component only, ignoring the other key information in the video. In this paper, a novel key frame extraction framework based on robust principal component analysis (RPCA) is proposed. Furthermore, we target the challenging application of extracting key frames from unstructured consumer videos. The proposed framework is motivated by the observation that RPCA decomposes an input data set into: 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about each element in the same data set. The two information types are combined into a single l1-norm-based non-convex optimization problem to extract the desired number of key frames. Moreover, we develop a novel iterative algorithm to solve this optimization problem. The proposed RPCA-based framework does not require shot(s) detection, segmentation, or semantic understanding of the underlying video. Finally, experiments are performed on a variety of consumer and other types of videos. A comparison of the results obtained by our method with the ground truth and with related state-of-the-art algorithms clearly illustrates the viability of the proposed RPCA-based framework.
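
    As a simplified stand-in for the paper's formulation, the sketch below uses standard principal component pursuit (inexact ALM) to split a frame matrix into low-rank plus sparse parts and ranks frames by sparse energy; the paper's own l1-based non-convex problem and solver differ.

```python
# Sketch: principal component pursuit (inexact ALM) splits a frames-by-pixels
# matrix into low-rank L and sparse S; frames with the most sparse energy are
# key-frame candidates. Assumes numpy; this is not the paper's exact solver.
import numpy as np

def rpca(M, lam=None, tol=1e-6, max_iter=300):
    """Decompose M into low-rank L plus sparse S via inexact ALM."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M, "fro")
    Y, L, S = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    mu, rho = 1.25 / np.linalg.norm(M, 2), 1.5
    for _ in range(max_iter):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1 / mu, 0)) @ Vt            # SV thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)  # soft threshold
        Z = M - L - S
        Y += mu * Z
        mu *= rho
        if np.linalg.norm(Z, "fro") / norm_M < tol:
            break
    return L, S

rng = np.random.default_rng(6)
frames = rng.normal(size=(100, 30)) @ rng.normal(size=(30, 400))  # low-rank video
frames[17] += 50 * rng.random(400)               # one visually distinct frame
L, S = rpca(frames)
key = np.argsort(np.abs(S).sum(axis=1))[-3:]     # most distinct frames
print("key-frame candidates:", sorted(key.tolist()))
```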

  5. A Semi-Automated Functional Test Data Analysis Tool

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Peng; Haves, Philip; Kim, Moosung

    2005-05-01

    The growing interest in commissioning is creating a demand that will increasingly be met by mechanical contractors and less experienced commissioning agents. They will need tools to help them perform commissioning effectively and efficiently. The widespread availability of standardized procedures, accessible in the field, will allow commissioning to be specified with greater certainty as to what will be delivered, enhancing the acceptance and credibility of commissioning. In response, a functional test data analysis tool is being developed to analyze the data collected during functional tests for air-handling units. The functional test data analysis tool is designed to analyze test data, assess performance of the unit under test and identify the likely causes of the failure. The tool has a convenient user interface to facilitate manual entry of measurements made during a test. A graphical display shows the measured performance versus the expected performance, highlighting significant differences that indicate the unit is not able to pass the test. The tool is described as semiautomated because the measured data need to be entered manually, instead of being passed from the building control system automatically. However, the data analysis and visualization are fully automated. The tool is designed to be used by commissioning providers conducting functional tests as part of either new building commissioning or retro-commissioning, as well as building owners and operators interested in conducting routine tests periodically to check the performance of their HVAC systems.

  6. Intelligent Control in Automation Based on Wireless Traffic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies present more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both to maintaining the integrity of computer systems and to increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still new territory of Bluetooth traffic. Further enhancements of this algorithm are presented, along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making its use more secure.

  8. Automated pollen identification using microscopic imaging and texture analysis.

    Science.gov (United States)

    Marcos, J Víctor; Nava, Rodrigo; Cristóbal, Gabriel; Redondo, Rafael; Escalante-Ramírez, Boris; Bueno, Gloria; Déniz, Óscar; González-Porto, Amelia; Pardo, Cristina; Chung, François; Rodríguez, Tomás

    2015-01-01

    Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task, since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment of the utility of texture analysis for automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen images: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques.
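
    A minimal sketch of the GLCM-plus-kNN branch of such a pipeline follows, assuming scikit-image (graycomatrix/graycoprops; older versions spell these "grey") and scikit-learn; the random images and labels are stand-ins for the 1800-image pollen database.

```python
# Sketch: Haralick-style GLCM features plus a k-nearest-neighbour classifier.
# Assumes scikit-image and scikit-learn; random crops/labels stand in for the
# real pollen database.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img):
    """Texture descriptors from a uint8 grayscale image."""
    g = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(g, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(7)
imgs = rng.integers(0, 256, (40, 32, 32), dtype=np.uint8)  # stand-in grain crops
y = rng.integers(0, 2, 40)                                 # two hypothetical taxa
X = np.array([glcm_features(im) for im in imgs])

clf = KNeighborsClassifier(n_neighbors=3).fit(X[:30], y[:30])
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```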

  9. Development of a software for INAA analysis automation

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Genezini, Frederico A.; Figueiredo, Ana Maria G.; Ticianelli, Regina B., E-mail: gzahn@ipen [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this work, software to automate the post-counting tasks in comparative INAA has been developed that aims to be more flexible than the available options, integrating itself with some of the routines currently in use in the IPEN Activation Analysis Laboratory and allowing the user to choose between a fully automatic analysis or an Excel-oriented one. The software makes use of the Genie 2000 data importing and analysis routines and stores each 'energy-counts-uncertainty' table as a separate ASCII file that can be used later if required by the analyst. Moreover, it generates an Excel-compatible CSV (comma separated values) file with only the relevant results from the analyses for each sample or comparator, as well as the results of the concentration calculations and the results obtained with four different statistical tools (unweighted average, weighted average, normalized residuals and Rajeval technique), allowing the analyst to double-check the results. Finally, a 'summary' CSV file is also produced, with the final concentration results obtained for each element in each sample. (author)
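
    A minimal sketch of two of the four statistical tools mentioned (unweighted and weighted averages, plus normalized residuals) follows, assuming numpy; the replicate concentrations and uncertainties are illustrative values.

```python
# Sketch: unweighted and weighted averages with normalized residuals as an
# outlier check. Assumes numpy; the replicate values are illustrative.
import numpy as np

c = np.array([4.1, 4.3, 4.0, 4.6])   # concentrations from replicate analyses
u = np.array([0.2, 0.2, 0.3, 0.2])   # one-sigma uncertainties

unweighted = c.mean()
w = 1.0 / u**2
weighted = np.sum(w * c) / np.sum(w)
weighted_unc = 1.0 / np.sqrt(np.sum(w))

# normalized residuals flag results inconsistent with the weighted mean
r = (c - weighted) / np.sqrt(u**2 - weighted_unc**2)
print(f"unweighted = {unweighted:.2f}, weighted = {weighted:.2f} +/- {weighted_unc:.2f}")
print("suspect results (|r| > 2):", np.where(np.abs(r) > 2)[0].tolist())
```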

  10. Ground-target detection system for digital video database

    Science.gov (United States)

    Liang, Yiqing; Huang, Jeffrey R.; Wolf, Wayne H.; Liu, Bede

    1998-07-01

    As more and more visual information becomes available on video, information indexing and retrieval of digital video data is becoming important. A digital video database embedded with visual information processing using image analysis and image understanding techniques, such as automated target detection, classification, and identification, can provide query results of higher quality. In this paper we address a robust digital video database system within which a target detection module is implemented and applied to the keyframe images extracted by our digital library system. The tasks and application scenarios under consideration involve indexing video with information about the detection and verification of artificial objects in video scenes. Based on the scenario that the video sequences are acquired by an onboard camera mounted on a Predator unmanned aircraft, we demonstrate how an incoming video stream is structured into different levels -- video program level, scene level, shot level, and object level -- based on the analysis of video contents using global imagery information. We then argue that the keyframe representation is most appropriate for video processing and can serve as the input to our detection module. As a result, video processing becomes feasible in terms of decreased computational resources spent and increased confidence in the (detection) decisions reached. The architecture we propose can respond to queries about whether artificial structures or suspected combat vehicles are detected. The architecture for ground-target detection takes advantage of the image understanding paradigm and involves different methods to locate and identify artificial objects rather than natural background such as trees, grass, and clouds. Edge detection, morphological transformation, and line and parallel line detection using the Hough transform, applied to keyframe images at the video shot level, are introduced in our detection module. This function can
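
    A minimal sketch of the edge-plus-Hough step follows, assuming OpenCV; the thresholds are illustrative and "keyframe.png" is a hypothetical keyframe exported by the indexing system.

```python
# Sketch: edge detection followed by probabilistic Hough line detection; many
# segments sharing one orientation hint at man-made structures. Assumes
# OpenCV; "keyframe.png" and all thresholds are illustrative.
import cv2
import numpy as np

key = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(key, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    angles = np.arctan2(lines[:, 0, 3] - lines[:, 0, 1],
                        lines[:, 0, 2] - lines[:, 0, 0])
    hist, _ = np.histogram(angles, bins=18, range=(-np.pi, np.pi))
    # a strong orientation peak (parallel lines) -> candidate artificial object
    print(f"{len(lines)} segments; dominant-orientation count = {hist.max()}")
```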

  11. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In recent decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  12. Automated Analysis of Vital Signs Identified Patients with Substantial Bleeding Prior to Hospital Arrival

    Science.gov (United States)

    2015-10-01

    culminating with the first and only deployment of an automated emergency care decision system on board active air ambulances: the APPRAISE system, a ... hardware/software platform for automated, real-time analysis of vital-sign data. After developing the APPRAISE system using data from trauma patients

  13. 40 CFR 13.19 - Analysis of costs; automation; prevention of overpayments, delinquencies or defaults.

    Science.gov (United States)

    2010-07-01

    40 CFR 13.19, Protection of Environment (2010-07-01): Analysis of costs; automation; prevention of overpayments, delinquencies or defaults. (a) The Administrator may...

  14. RFI detection by automated feature extraction and statistical analysis

    Science.gov (United States)

    Winkel, B.; Kerp, J.; Stanko, S.

    2007-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4σ_rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) interferences are polarised, (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals.
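
    A minimal sketch of the two-dimensional baseline-fit-and-flag idea follows, assuming numpy; a bilinear baseline stands in for the paper's baseline model, with 4 sigma_rms as the detection threshold quoted above.

```python
# Sketch: least-squares bilinear baseline over the time-frequency plane, then
# flag residuals above 4 sigma_rms. Assumes numpy; synthetic data stand in
# for DFFT spectra.
import numpy as np

rng = np.random.default_rng(8)
nt, nf = 200, 512
t, f = np.meshgrid(np.linspace(0, 1, nt), np.linspace(0, 1, nf), indexing="ij")
data = 2.0 + 0.5 * t + 0.3 * f + rng.normal(0, 0.1, (nt, nf))  # smooth baseline
data[80:85, 300:310] += 1.5                                    # injected RFI burst

# fit b(t, f) = a0 + a1*t + a2*f + a3*t*f by linear least squares
A = np.column_stack([np.ones(data.size), t.ravel(), f.ravel(), (t * f).ravel()])
coef, *_ = np.linalg.lstsq(A, data.ravel(), rcond=None)
resid = data - (A @ coef).reshape(data.shape)

sigma = resid.std()
flags = resid > 4 * sigma        # detection threshold: 4 sigma_rms
print("flagged samples:", int(flags.sum()), "of", flags.size)
```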

  15. RFI detection by automated feature extraction and statistical analysis

    CERN Document Server

    Winkel, B; Stanko, S; Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan

    2006-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high-speed storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4-sigma level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) mean signal strength is comparable to the a...

  16. Automated analysis for detecting beams in laser wakefield simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela M.; Rubel, Oliver; Prabhat, Mr.; Weber, Gunther H.; Bethel, E. Wes; Aragon, Cecilia R.; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Hamann, Bernd; Messmer, Peter; Hagen, Hans

    2008-07-03

    Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a potential new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates large datasets that require time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the data analysis and classification of simulation data. First, we propose a new method to identify locations with a high density of particles in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high-density electron regions using a lifetime diagram, by organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high-quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.
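
    The first stage above -- finding high particle densities via maximum extremum detection -- can be sketched as local-maximum detection on a 2-D space-time histogram. The parameters below are assumptions for illustration, not the paper's settings.

```python
# Locate candidate high-density regions as local maxima of a particle
# histogram in the space-time domain.
import numpy as np
from scipy import ndimage

def density_maxima(x, t, bins=(256, 256), min_count=50, size=5):
    hist, _, _ = np.histogram2d(x, t, bins=bins)
    # A bin is a maximum if it equals the local max of its neighborhood
    # and clears a minimum-count threshold.
    local_max = ndimage.maximum_filter(hist, size=size) == hist
    return np.argwhere(local_max & (hist >= min_count))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.01, 2000), rng.uniform(0, 1, 5000)])
t = np.concatenate([rng.normal(0.7, 0.01, 2000), rng.uniform(0, 1, 5000)])
print("candidate beam locations:", len(density_maxima(x, t)))
```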

  17. Collective Behaviour in Video Viewing: A Thermodynamic Analysis of Gaze Position

    Science.gov (United States)

    2017-01-01

    Videos and commercials produced for large audiences can elicit mixed opinions. We wondered whether this diversity is also reflected in the way individuals watch the videos. To answer this question, we presented 65 commercials with high production value to 25 individuals while recording their eye movements, and asked them to provide preference ratings for each video. We find that gaze positions for the most popular videos are highly correlated. To explain the correlations of eye movements, we model them as “interactions” between individuals. A thermodynamic analysis of these interactions shows that they approach a “critical” point such that any stronger interaction would put all viewers into lock-step and any weaker interaction would fully randomise patterns. At this critical point, groups with similar collective behaviour in viewing patterns emerge while maintaining diversity between groups. Our results suggest that popularity of videos is already evident in the way we look at them, and that we maintain diversity in viewing behaviour even as distinct patterns of groups emerge. Our results can be used to predict popularity of videos and commercials at the population level from the collective behaviour of the eye movements of a few viewers. PMID:28045963

  18. Automated Mineral Analysis to Characterize Metalliferous Mine Waste

    Science.gov (United States)

    Hensler, Ana-Sophie; Lottermoser, Bernd G.; Vossen, Peter; Langenberg, Lukas C.

    2016-10-01

    The objective of this study was to investigate the applicability of automated QEMSCAN® mineral analysis, combined with bulk geochemical analysis, to evaluate the environmental risk of non-acid-producing mine waste present at the historic Albertsgrube Pb-Zn mine site, Hastenrath, North Rhine-Westphalia, Germany. Geochemical analyses revealed elevated average abundances of As, Cd, Cu, Mn, Pb, Sb and Zn and near-neutral to slightly alkaline paste pH values. Mineralogical analyses using the QEMSCAN® revealed diverse mono- and polymineralic particles across all samples, with grain sizes ranging from a few μm up to 2000 μm. Calcite and dolomite (up to 78 %), smithsonite (up to 24 %) and Ca sulphate (up to 11.5 %) are present mainly as coarse-grained particles. By contrast, significant amounts of quartz, muscovite/illite, sphalerite (up to 10.8 %), galena (up to 1 %), pyrite (up to 3.4 %) and cerussite/anglesite (up to 4.3 %) are present as fine-grained (<500 μm) particles. QEMSCAN® analysis also identified disseminated sauconite, coronadite/chalcophanite, chalcopyrite, jarosite, apatite, rutile, K-feldspar, biotite, Fe (hydr)oxides/CO3 and unknown Zn-Pb-(Fe) and Zn-Pb-Ca-(Fe-Ti) phases. Many of the metal-bearing sulphide grains occur as separate particles with exposed surface areas and thus may be a matter of environmental concern, because such mineralogical hosts will continue to release metals and metalloids (As, Cd, Sb, Zn) at near-neutral pH into ground and surface waters. QEMSCAN® mineral analysis allows the acquisition of fully quantitative data on the mineralogical composition, textural characteristics and grain size of mine waste material, and permits the recognition of mine waste as “high-risk” material that would otherwise have been classified by traditional geochemical tests as benign.

  19. Reliability and Validity of Quantitative Video Analysis of Baseball Pitching Motion.

    Science.gov (United States)

    Oyama, Sakiko; Sosa, Araceli; Campbell, Rebekah; Correa, Alexandra

    2017-02-01

    Video recordings are used to quantitatively analyze pitchers' techniques. However, the reliability and validity of such analysis are unknown. The purpose of the study was to investigate the reliability and validity of joint and segment angles identified during a pitching motion using video analysis. Thirty high school baseball pitchers participated. The pitching motion was captured using 2 high-speed video cameras and a motion capture system. Two raters reviewed the videos and digitized the body segments to calculate 2-dimensional angles. The corresponding 3-dimensional angles were calculated from the motion capture data. Intrarater reliability, interrater reliability, and validity of the 2-dimensional angles were determined. The intrarater and interrater reliability of the 2-dimensional angles were high for most variables. The trunk contralateral flexion at maximum external rotation was the only variable with high validity. Trunk contralateral flexion at ball release, trunk forward flexion at foot contact and ball release, shoulder elevation angle at foot contact, and maximum shoulder external rotation had moderate validity. Two-dimensional angles at the shoulder, elbow, and trunk could be measured with high reliability. However, the angles are not necessarily anatomically correct, and thus the use of quantitative video analysis should be limited to angles that can be measured with good validity.
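
    The basic operation behind such quantitative video analysis is computing a 2-D angle from digitized landmark coordinates; a minimal sketch (with invented pixel coordinates) follows.

```python
# Compute the angle at a joint from three digitized 2-D landmarks.
import numpy as np

def joint_angle_2d(proximal, joint, distal):
    """Angle at `joint` (degrees) between the two body segments."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# e.g. an elbow angle from shoulder, elbow, wrist pixel coordinates
print(round(joint_angle_2d((100, 50), (150, 120), (230, 140)), 1))
```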

  20. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on the categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming and subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate statistical studies of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from the temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study, by applying it to videos of eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between the patients and the controls, highlighting their potential for automatic and objective quantification of symptom severity.

  1. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi) i = 1,2, etc. in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral...

  2. Fully Automated Operational Modal Analysis using multi-stage clustering

    Science.gov (United States)

    Neu, Eugen; Janser, Frank; Khatibi, Akbar A.; Orifici, Adrian C.

    2017-02-01

    Interest in robust automatic modal parameter extraction techniques has increased significantly in recent years, together with the rising demand for continuous health monitoring of critical infrastructure such as bridges, buildings and wind turbine blades. In this study a novel, multi-stage clustering approach for Automated Operational Modal Analysis (AOMA) is introduced. In contrast to existing approaches, the procedure works without any user-provided thresholds, is applicable within large system order ranges, can be used with very small numbers of sensors, and places no limitations on the damping ratio or the complexity of the system under investigation. The approach works with any parametric system identification algorithm that uses the system order n as its sole parameter. Here a data-driven Stochastic Subspace Identification (SSI) method is used. Measurements from a wind tunnel investigation with a composite cantilever equipped with Fiber Bragg Grating Sensors (FBGSs) and piezoelectric sensors are used to assess the performance of the algorithm with a highly damped structure under low signal-to-noise ratio conditions. The proposed method was able to identify all physical system modes in the investigated frequency range from over 1000 individual datasets, using FBGSs under challenging signal-to-noise ratio conditions and, under better signal conditions, from only two sensors.

  3. Background Defect Density Reduction Using Automated Defect Inspection And Analysis

    Science.gov (United States)

    Weirauch, Steven C.

    1988-01-01

    Yield maintenance and improvement is a major area of concern in any integrated circuit manufacturing operation. A major aspect of this concern is controlling and reducing defect density. Obviously, large defect excursions must be immediately addressed in order to maintain yield levels. However, to enhance yields, the subtle defect mechanisms must be reduced or eliminated as well. In-line process control inspections are effective for detecting large variations in the defect density on a real time basis. Examples of in-line inspection strategies include after develop or after etch inspections. They are usually effective for detecting when a particular process segment has gone out of control. However, when a process is running normally, there exists a background defect density that is generally not resolved by in-line process control inspections. The inspection strategies that are frequently used to monitor the background defect density are offline inspections. Offline inspections are used to identify the magnitude and characteristics of the background defect density. These inspections sample larger areas of product wafers than the in-line inspections to allow identification of the defect generating mechanisms that normally occur in the process. They are used to construct a database over a period of time so that trends may be studied. This information enables engineering efforts to be focused on the mechanisms that have the greatest impact on device yield. Once trouble spots in the process are identified, the data base supplies the information needed to isolate and solve them. The key aspect to the entire program is to utilize a reliable data gathering mechanism coupled with a flexible information processing system. This paper describes one method of reducing the background defect density using automated wafer inspection and analysis. The tools used in this evaluation were the KLA 2020 Wafer Inspector, KLA Utility Terminal (KLAUT), and a new software package developed...

  4. Automating dChip: toward reproducible sharing of microarray data analysis

    Directory of Open Access Journals (Sweden)

    Li Cheng

    2008-05-01

    Full Text Available Abstract Background: During the past decade, many software packages have been developed for the analysis and visualization of various types of microarrays. We have developed and maintained the widely used dChip as a microarray analysis software package accessible to both biologists and data analysts. However, challenges arise when dChip users want to analyze large numbers of arrays automatically and share data analysis procedures and parameters. Improvement is also needed when the dChip user support team tries to identify the causes of analysis errors or bugs reported by users. Results: We report here the implementation and application of the dChip automation module. Through this module, dChip automation files can be created to include menu steps, parameters, and data viewpoints to run automatically. A data-packaging function allows convenient transfer from one user to another of the dChip software, microarray data, and analysis procedures, so that the second user can reproduce the entire analysis session of the first user. An analysis report file can also be generated during an automated run, including analysis logs, user comments, and viewpoint screenshots. Conclusion: The dChip automation module is a step toward reproducible research, and it can promote a more convenient and reproducible mechanism for sharing microarray software, data, and analysis procedures and results. Automation data packages can also be used as publication supplements. Similar automation mechanisms could be valuable to the research community if implemented in other genomics and bioinformatics software packages.

  5. Development of students' conceptual thinking by means of video analysis and interactive simulations at technical universities

    Science.gov (United States)

    Hockicko, Peter; Krišťák, Ľuboš; Němec, Miroslav

    2015-03-01

    Video analysis using the program Tracker (Open Source Physics) introduces a new creative method of teaching physics into the educational process and makes the natural sciences more interesting for students. This way of exploring the laws of nature can amaze students, because this illustrative and interactive educational software inspires them to think creatively, improves their performance and helps them in studying physics. This paper deals with increasing key competencies in engineering by analysing videos of real-life situations -- physical problems -- by means of video analysis and modelling tools in the program Tracker, and simulations of physical phenomena from the Physics Education Technology (PhET™) project (the VAS method of problem tasks). Statistical testing using the t-test confirmed the significance of the differences in knowledge between the experimental and control groups, which were the result of applying the interactive method.

  6. Empirical Analysis and Automated Classification of Security Bug Reports

    Science.gov (United States)

    Tyo, Jacob P.

    2016-01-01

    With the ever-expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong to only a small number of types. Addressing these types of vulnerabilities will consequently lead to cost-efficient improvement of software security. Since this analysis requires labeling each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security or non-security related (two-class classification), as well as of each security-related bug report as a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. An accuracy of 92%, recall of 96%, precision of 92%, probability of false alarm of 4%, F-score of 81% and G-score of 90% were the best results achieved during two-class classification. Furthermore, an accuracy of 80%, recall of 80%, precision of 94%, and F-score of 85% were the best results achieved during multiclass classification.
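
    A minimal sketch of the two-class text classification setup described above, using scikit-learn; the toy reports and the choice of TF-IDF features with naive Bayes are assumptions, not the thesis' exact pipeline.

```python
# Classify bug reports as security (1) or non-security (0) related.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reports = ["buffer overflow in parser allows code execution",
           "button label misaligned on settings page",
           "sql injection via unsanitized search field",
           "typo in user manual chapter 3"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(reports, labels)
print(model.predict(["heap corruption when opening crafted file"]))  # -> [1]
```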

  7. Interobserver and Intraobserver Variability in pH-Impedance Analysis between 10 Experts and Automated Analysis

    DEFF Research Database (Denmark)

    Loots, Clara M; van Wijk, Michiel P; Blondeau, Kathleen;

    2011-01-01

    OBJECTIVE: To determine interobserver and intraobserver variability in pH-impedance interpretation between experts and the accuracy of automated analysis (AA). STUDY DESIGN: Ten pediatric 24-hour pH-impedance tracings were analyzed by 10 observers from 7 world groups and with AA. Detection of gastroesophageal reflux (GER) episodes... CONCLUSION: Interobserver agreement in combined pH-multichannel intraluminal impedance analysis in experts is moderate; only 42% of GER episodes were detected by the majority of observers. Detection of total GER numbers is more consistent. Considering these poor outcomes, AA seems favorable compared...

  8. Automated Design and Analysis Tool for CEV Structural and TPS Components Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CEV structures and TPS. This developed process will...

  9. Automated Design and Analysis Tool for CLV/CEV Composite and Metallic Structural Components Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CLV/CEV composite and metallic structures. This developed...

  10. Automated Production Flow Line Failure Rate Mathematical Analysis with Probability Theory

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-12-01

    Full Text Available Automated lines have been widely used in industry, especially for mass production and product customization. The productivity of an automated line is a crucial indicator of the output and performance of production. Failures or breakdowns of stations or mechanisms commonly occur in automated lines under real conditions, due to technological and technical problems, and they strongly affect productivity. The failure rates of automated lines are usually not expressed or analysed in mathematical form. This paper presents a mathematical analysis, using probability theory, of failure conditions in an automated line. The resulting mathematical expressions for failure rates can be used to forecast the productivity of the line accurately.
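
    One standard way to express the idea in this record -- productivity degraded by per-station failure probabilities -- is the expected-cycle-time model sketched below. This is a textbook-style formulation offered for illustration; the paper's exact expressions may differ, and the numbers are invented.

```python
# Expected cycle time = ideal cycle time + expected downtime per cycle,
# where each station contributes (failure probability per cycle) x
# (mean repair time).
def line_production_rate(cycle_time_min, failures_per_cycle, repair_time_min):
    """Parts per hour given per-station failure probabilities per cycle."""
    expected_downtime = sum(failures_per_cycle) * repair_time_min
    return 60.0 / (cycle_time_min + expected_downtime)

stations = [0.01, 0.008, 0.012, 0.005]  # failure probability per cycle
print(round(line_production_rate(0.5, stations, 8.0), 1), "parts/hour")
```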

  11. Estimation of low back moments from video analysis: A validation study

    NARCIS (Netherlands)

    Coenen, P.; Kingma, I.; Boot, C.R.L.; Faber, G.S.; Xu, X.; Bongers, P.M.; Dieën, J.H. van

    2011-01-01

    This study aimed to develop, compare and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since epidemiological studies and ergonomic practice need relatively cheap and easily applicable methods to assess low back loads.

  12. The Case for Constructing Video Cases: Promoting Complex, Specific, Learner-Centered Analysis of Discussion

    Science.gov (United States)

    Rosaen, Cheryl; Lundeberg, Mary; Terpstra, Marjorie

    2010-01-01

    The use of reflection and analysis in preparation of elementary and secondary preservice teachers has become a standard practice aimed at helping them develop the capacity to engage in intentional and systematic investigation of their practice. Editing video may be a more powerful tool than writing reflections based on memory to help preservice…

  13. The Use of Video Analysis and the Knowledge Quartet in Mathematics Teacher Education Programmes

    Science.gov (United States)

    Liston, Miriam

    2015-01-01

    This study investigates the potential of video analysis and a mathematical knowledge for teaching framework, the Knowledge Quartet (KQ), in mathematics teacher education programmes. It reports on the effectiveness of these tools in analysing and supporting secondary level pre-service mathematics teachers' subject matter knowledge and pedagogical…

  14. Investigating the Magnetic Interaction with Geomag and Tracker Video Analysis: Static Equilibrium and Anharmonic Dynamics

    Science.gov (United States)

    Onorato, P.; Mascheretti, P.; DeAmbrosis, A.

    2012-01-01

    In this paper, we describe how simple experiments realizable by using easily found and low-cost materials allow students to explore quantitatively the magnetic interaction thanks to the help of an Open Source Physics tool, the Tracker Video Analysis software. The static equilibrium of a "column" of permanents magnets is carefully investigated by…

  15. XbD Video 3, The SEEing process of qualitative data analysis

    DEFF Research Database (Denmark)

    2013-01-01

    This is the third video in the Experience-based Designing series. It presents a live classroom demonstration of a nine-step qualitative data analysis process called SEEing. The process is useful for uncovering or discovering deeper layers of 'meaning' and meaning structures in an experience...

  16. MPEG-7 applications for video browsing and analysis

    Science.gov (United States)

    Divakaran, Ajay; Bober, Miroslaw; Asai, Kohtaro

    2001-11-01

    The soon-to-be-released MPEG-7 standard provides a Multimedia Content Description Interface: a rich set of tools to describe content with a view to facilitating applications such as content-based querying, browsing and searching of multimedia content. In this paper, we describe practical applications of MPEG-7 tools. We use descriptors of features such as color, shape and motion to both index and analyze content. The aforementioned descriptors stem from our previous work and are currently in the draft international MPEG-7 standard. In our previous work, we have shown the efficacy of each of the descriptors individually. In this paper, we show how we combine color and motion to effectively browse video in our first application. In our second application, we show how we can combine shape and color to recognize objects in real time. We will present a demonstration of our system at the conference. We have already successfully demonstrated it to the Japanese press.

  17. Automation Tools for Finite Element Analysis of Adhesively Bonded Joints

    Science.gov (United States)

    Tahmasebi, Farhad; Brodeur, Stephen J. (Technical Monitor)

    2002-01-01

    This article presents two new automation tools that obtain stresses and strains (shear and peel) in adhesively bonded joints. For a given finite element model of an adhesively bonded joint, in which the adhesive is characterised using springs, these automation tools read the corresponding input and output files, use the spring forces and deformations to obtain the adhesive stresses and strains, sort the stresses and strains in descending order, and generate plot files for 3D visualisation of the stress and strain fields. Grids (nodes) and elements can be numbered in any order that is convenient to the user. Using the automation tools, trade-off studies, which are needed for the design of adhesively bonded joints, can be performed very quickly.
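
    The core post-processing step the record describes -- turning spring forces into adhesive stresses and sorting them -- can be sketched as follows; the input record format and the tributary-area conversion are assumptions for illustration, not the article's tool.

```python
# Convert spring forces from an FE run into shear/peel stresses over each
# spring's tributary area, then sort by peak peel stress (descending).
def adhesive_stresses(spring_records):
    """spring_records: iterable of (element_id, shear_force, peel_force,
    tributary_area); returns (id, shear_stress, peel_stress) tuples."""
    out = []
    for eid, f_shear, f_peel, area in spring_records:
        out.append((eid, f_shear / area, f_peel / area))
    return sorted(out, key=lambda r: abs(r[2]), reverse=True)

springs = [(101, 120.0, 45.0, 4.0), (102, 80.0, 160.0, 4.0)]
for eid, tau, sigma in adhesive_stresses(springs):
    print(f"elem {eid}: shear {tau:.1f}, peel {sigma:.1f}")
```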

  18. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. The study highlights 3D video rendering performance under each type of hypervisor, together with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  19. Correcting Students' Misconceptions about Automobile Braking Distances and Video Analysis Using Interactive Program Tracker

    Science.gov (United States)

    Hockicko, Peter; Trpišová, Beáta; Ondruš, Ján

    2014-12-01

    The present paper reports an analysis of students' conceptions about car braking distances and also presents one of the novel methods of learning: the interactive computer program Tracker, which we used to analyse the process of a car braking. The analysis of the students' conceptions about car braking distances consisted of obtaining their estimates of these quantities before and after watching a video recording of a car braking from various initial speeds to a complete stop, and of the subsequent application of mathematical statistics to the obtained sets of students' answers. The results revealed that the difference between the braking distance estimated before watching the video and the real value of this distance was not caused by a random error but by a systematic error, which was due to the students' incorrect conceptions about the car braking process. Watching the video significantly improved the students' estimates of the car braking distance, and we show that in this case the difference between the estimated value and the real value was due only to a random error, i.e. the students' conceptions about the car braking process were corrected. Some of the students subsequently performed video analysis of the braking of cars of various brands and under various conditions by means of Tracker, which gave them exact knowledge of the physical quantities that characterize motor vehicle braking. Interviews with some of these students brought very positive reactions to this novel method of learning.
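
    A sketch of the statistical check described above: one-sample t-tests on estimates before and after watching the video, against the measured braking distance (all numbers invented for illustration).

```python
# Test whether estimates deviate systematically from the measured value.
import numpy as np
from scipy import stats

measured = 25.0  # metres, actual braking distance in the recording
before = np.array([14.0, 18.0, 16.5, 20.0, 15.0, 17.5])
after = np.array([24.0, 26.5, 23.5, 25.5, 24.5, 26.0])

# Systematic error: mean of 'before' differs significantly from measured.
print(stats.ttest_1samp(before, measured))
# After watching, deviations should look like random error around zero.
print(stats.ttest_1samp(after, measured))
```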

  20. Automated multivariate analysis of comprehensive two-dimensional gas chromatograms of petroleum

    DEFF Research Database (Denmark)

    Skov, Søren Furbo

    of separated compounds makes the analysis of GC×GC chromatograms tricky, as there is too much data for manual analysis, and automated analysis is not always trouble-free: manual checking of the results is often necessary. In this work, I will investigate the possibility of another approach to the analysis of GC×GC...

  1. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that was developed to analyse the structural model of automated systems in order to identify redundant information, which is then utilized for fault detection and isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  2. Background Extraction Method Based on Block Histogram Analysis for Video Image

    Institute of Scientific and Technical Information of China (English)

    Li Hua; Peng Qiang

    2005-01-01

    A novel method of histogram analysis for background extraction in video images is proposed, derived from pixel-based histogram analysis. The method exploits not only the statistical properties of pixels across temporal frames, but also the correlation of local pixels within a single frame. When carrying out histogram analysis for background extraction, the proposed method operates not on a single pixel but on a 2×2 block, which requires far less computation and can simultaneously extract a sound background image from the video sequence. A comparative experiment between the proposed method and pixel-based histogram analysis shows that the proposed method is faster in background extraction and the obtained background image is of better quality.
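
    A minimal sketch of block-based histogram background extraction in the spirit of this record: for each 2×2 block, the most frequent quantized appearance across frames is taken as background. The quantization scheme and parameters are assumptions, not the paper's method.

```python
# Estimate a static background from a grayscale frame stack using
# per-block temporal histograms.
import numpy as np

def block_background(frames, block=2, levels=32):
    """frames: (T, H, W) uint8 grayscale stack; returns (H, W) background."""
    t, h, w = frames.shape
    q = 256 // levels
    bg = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = frames[:, y:y + block, x:x + block]
            means = patch.reshape(t, -1).mean(axis=1)
            codes = (means // q).astype(int)
            mode = np.bincount(codes).argmax()  # most frequent intensity bin
            # Average the frames whose block mean falls in the modal bin.
            sel = patch[codes == mode]
            bg[y:y + block, x:x + block] = sel.mean(axis=0).astype(np.uint8)
    return bg

frames = np.random.default_rng(2).integers(100, 110, (50, 8, 8)).astype(np.uint8)
print(block_background(frames).shape)
```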

  3. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube.

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Traver, Vicente; Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches.

  4. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube

    Science.gov (United States)

    Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches. PMID:28243314

  5. Applications of ENF criterion in forensic audio, video, computer and telecommunication analysis.

    Science.gov (United States)

    Grigoras, Catalin

    2007-04-11

    This article reports on the electric network frequency (ENF) criterion as a means of assessing the integrity of digital audio/video evidence and of forensic IT and telecommunication analysis. A brief description is given of different ENF types and of the phenomena that determine ENF variations. In most situations, visual inspection of spectrograms and comparison with an ENF database are enough to reach a non-authenticity opinion. A more detailed investigation, in the time domain, requires measurements and analyses over short time windows. The stability of the ENF over geographical distances has been established by comparison of synchronized recordings made at different locations on the same network. Real cases are presented in which the ENF criterion was used to investigate audio and video files created with secret surveillance systems, a digitized audio/video recording, and a TV broadcast report. By applying the ENF criterion in forensic audio/video analysis, one can determine whether and where a digital recording has been edited, establish whether it was made at the time claimed, and identify the time and date of the recording operation.
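
    Extracting an ENF track is, at its core, peak-frequency tracking near the nominal mains frequency; the sketch below illustrates the idea with an assumed 50 Hz nominal and a synthetic hum, not the article's casework procedure.

```python
# Track the mains-hum frequency over time via an STFT peak search.
import numpy as np
from scipy import signal

def enf_track(audio, fs, nominal=50.0, band=1.0, nperseg=16384):
    f, t, sxx = signal.spectrogram(audio, fs, nperseg=nperseg,
                                   noverlap=nperseg // 2)
    sel = (f >= nominal - band) & (f <= nominal + band)
    return t, f[sel][np.argmax(sxx[sel], axis=0)]  # peak frequency per window

fs = 1000
t = np.arange(fs * 60) / fs
inst_f = 50.0 + 0.2 * np.sin(0.1 * t)  # slowly drifting mains frequency
phase = 2 * np.pi * np.cumsum(inst_f) / fs
audio = 0.01 * np.sin(phase) + 0.001 * np.random.default_rng(4).standard_normal(t.size)
times, track = enf_track(audio, fs)
print(round(float(track.min()), 2), round(float(track.max()), 2))  # ~49.8..50.2
```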

  6. The Use of Video Analysis in a Personnel Preparation Program for Teachers of Students Who Are Visually Impaired

    Science.gov (United States)

    Gale, Elaine; Trief, Ellen; Lengel, James

    2010-01-01

    Video analysis affords the observer the opportunity to capture and analyze videos of teaching practices, so that the observer can review, analyze, and synthesize specific examples of teaching in authentic classroom settings. The student teaching experience is the prime opportunity during the personnel preparation program in which student teachers…

  7. Facilitating Reflexivity in Preservice Science Teacher Education Using Video Analysis and Cogenerative Dialogue in Field-Based Methods Courses

    Science.gov (United States)

    Siry, Christina; Martin, Sonya N.

    2014-01-01

    This paper presents an approach to preservice science teacher education coupling video analysis with dialogue as tools for fostering teachers' ability to notice and reflexively interpret events captured during teaching practicum with the intent of transforming classroom practice. In this approach, video becomes a tool with which teachers…

  9. Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management

    Science.gov (United States)

    Bahr, Thomas

    2016-04-01

    ... a common video format, and plotting the time series of water surface area in square kilometers. The automated spatio-temporal analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: (1) integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online -- execution uses a customized ArcGIS script tool, in which a Python script file retrieves the parameters from the user interface and runs the precompiled IDL code, and that IDL code interfaces between the Python script and the relevant ENVITasks; (2) publishing the spatio-temporal analysis tasks as services via the ENVI Services Engine (ESE), a cloud-based image analysis solution for publishing and deploying advanced ENVI image and data analytics to existing enterprise infrastructures -- for this purpose the entire IDL code can be encapsulated in a single ENVITask; (3) integration into an existing geospatial workflow using the Python-to-IDL Bridge, a mechanism that allows calling IDL code within Python on a user-defined platform. The results of this case study verify the drastic decrease in the amount of surface water in the AOI, indicative of the major drought that is pervasive throughout California. Accordingly, the time series analysis was correlated successfully with the daily reservoir elevations of the Don Pedro reservoir (station DNP, operated by CDEC).
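
    A plain-Python stand-in for the core of such a workflow (the case study itself uses ENVITasks/IDL): threshold a water index per image and convert the pixel count to area. The NDWI formulation and threshold below are assumptions for illustration.

```python
# Water surface area per scene from green/NIR reflectance bands.
import numpy as np

def water_area_km2(green, nir, pixel_size_m=30.0, ndwi_thresh=0.2):
    """McFeeters NDWI = (G - NIR) / (G + NIR); water where NDWI > threshold."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    n_water = int((ndwi > ndwi_thresh).sum())
    return n_water * (pixel_size_m ** 2) / 1e6

rng = np.random.default_rng(3)
green = rng.uniform(0.05, 0.3, (512, 512))
nir = rng.uniform(0.0, 0.4, (512, 512))
print(f"{water_area_km2(green, nir):.2f} km^2")  # one point of the time series
```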

  10. Dialog detection in narrative video by shot and face analysis

    NARCIS (Netherlands)

    Kroon, B.; Nesvadba, J.; Hanjalic, A.

    2007-01-01

    The proliferation of captured personal and broadcast content in personal consumer archives necessitates comfortable access to stored audiovisual content. Intuitive retrieval and navigation solutions, however, require a semantic level that cannot be reached by generic multimedia content analysis alone.

  11. Extending and automating a Systems-Theoretic hazard analysis for requirements generation and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, John (Massachusetts Institute of Technology)

    2012-05-01

    Systems-Theoretic Process Analysis (STPA) is a powerful new hazard analysis method designed to go beyond traditional safety techniques -- such as Fault Tree Analysis (FTA) -- that overlook important causes of accidents like flawed requirements, dysfunctional component interactions, and software errors. While STPA has proved very effective on real systems, no formal structure has been defined for it, and its application has been ad hoc, with no rigorous procedures or model-based design tools. This report defines a formal mathematical structure underlying STPA and describes a procedure for systematically performing an STPA analysis based on that structure. A method for using the results of the hazard analysis to generate formal safety-critical, model-based system and software requirements is also presented. Techniques to automate both the analysis and the requirements generation are introduced, as well as a method to detect conflicts between the safety requirements and other functional model-based requirements during early development of the system.

  12. Alert management for home healthcare based on home automation analysis.

    Science.gov (United States)

    Truong, T T; de Lamotte, F; Diguet, J-Ph; Said-Hocine, F

    2010-01-01

    Rising healthcare costs for elderly and disabled people can be contained by offering people autonomy at home by means of information technology. In this paper, we present an original, sensorless alert-management solution that performs multimedia and home automation service discrimination and extracts highly regular home activities to serve as sensors for alert management. Results on simulated data, based on a real context, allow us to evaluate our approach before applying it to real data.

  13. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  14. Fuzzy emotional semantic analysis and automated annotation of scene images.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  15. High-speed video imaging and digital analysis of microscopic features in contracting striated muscle cells

    Science.gov (United States)

    Roos, Kenneth P.; Taylor, Stuart R.

    1993-02-01

    The rapid motion of microscopic features such as the cross striations of single contracting muscle cells is difficult to capture with conventional optical microscopes, video systems, and image processing approaches. An integrated digital video imaging microscope system, specifically designed to capture images from single contracting muscle cells at speeds of up to 240 Hz and to analyze those images to extract features critical to the understanding of muscle contraction, is described. This system consists of a brightfield microscope with immersion optics coupled to a high-speed charge-coupled device (CCD) video camera, super-VHS (S-VHS) and optical media disk video recording (OMDR) systems, and a semiautomated digital image analysis system. Components are modified to optimize spatial and temporal resolution to permit the evaluation of submicrometer features in real physiological time. This approach permits the critical evaluation of the magnitude, time course, and uniformity of contractile function throughout the volume of a single living cell with higher temporal and spatial resolution than previously possible.

  16. A typology of affordances: untangling sociomaterial interactions through video analysis

    NARCIS (Netherlands)

    van Osch, W.; Mendelson, O.

    2011-01-01

    In this study we untangle the sociomaterial interactions between developers, users, and artifacts by analyzing what types of affordances occur in the interactions between actors and artifacts in the context of group generativity. To this end, we conducted an in-depth ethnographic and interaction analysis.

  17. Automated analysis of short responses in an interactive synthetic tutoring system for introductory physics

    Science.gov (United States)

    Nakamura, Christopher M.; Murphy, Sytil K.; Christel, Michael G.; Stevens, Scott M.; Zollman, Dean A.

    2016-06-01

    Computer-automated assessment of students' text responses to short-answer questions represents an important enabling technology for online learning environments. We have investigated the use of machine learning to train computer models capable of automatically classifying short-answer responses and assessed the results. Our investigations are part of a project to develop and test an interactive learning environment designed to help students learn introductory physics concepts. The system is designed around an interactive video tutoring interface. We have analyzed 9 questions, each with about 150 responses or fewer. For 4 of the 9 questions, we observe automated assessment with interrater agreement of 70% or better with the human rater. This level of agreement may represent a baseline for practical utility in instruction and indicates that the method warrants further investigation for use in this type of application. Our results also suggest strategies that may be useful for writing activities and questions that are more amenable to automated assessment. These strategies include building activities that have relatively few conceptually distinct ways of perceiving the physical behavior of relatively few physical objects. Further success in this direction may allow us to promote interactivity and better provide feedback in online learning systems. These capabilities could enable our system to function more like a real tutor.
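
    The interrater-agreement figure quoted above can be computed as simple percent agreement, often reported alongside Cohen's kappa; a sketch with toy labels (not the study's data) follows.

```python
# Agreement between an automated rater and a human rater.
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    po = percent_agreement(a, b)                 # observed agreement
    ca, cb, n = Counter(a), Counter(b), len(a)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

human = ["correct", "partial", "wrong", "correct", "correct", "wrong"]
machine = ["correct", "wrong", "wrong", "correct", "partial", "wrong"]
print(percent_agreement(human, machine), round(cohens_kappa(human, machine), 3))
```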

  18. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Directory of Open Access Journals (Sweden)

    Ryan Decker

    2016-04-01

    Examination of the pitch and yaw histories clearly indicates that, in addition to the nutation and precession oscillations of epicyclic motion, an even faster wobble is present during each spin revolution, even though some of the oscillation amplitudes are smaller than 0.02 degree. The results are compared to a sequence of shots in which little appreciable mass asymmetry was present and in which only the nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product-of-inertia measurements of the asymmetric projectiles.

  19. Video-based Analysis of Motivation and Interaction in Science Classrooms

    DEFF Research Database (Denmark)

    Andersen, Hanne Moeller; Nielsen, Birgitte Lund

    2013-01-01

    An analytical framework for examining students’ motivation was developed and used for analyses of video excerpts from science classrooms. The framework was developed in an iterative process involving theories on motivation and video excerpts from a ‘motivational event’ where students worked in groups. Subsequently, the framework was used for an analysis of students’ motivation in the whole-class situation. A cross-case analysis was carried out illustrating characteristics of students’ motivation dependent on the context. This research showed that students’ motivation to learn science is stimulated by a range of different factors, with autonomy, relatedness and belonging apparently being the main sources of motivation. The teacher’s combined use of questions, uptake and high-level evaluation was very important for students’ learning processes and motivation, especially students’ self...

  20. Automated detection and measurement of isolated retinal arterioles by a combination of edge enhancement and cost analysis.

    Directory of Open Access Journals (Sweden)

    José A Fernández

    Full Text Available Pressure myography studies have played a crucial role in our understanding of vascular physiology and pathophysiology. Such studies depend upon the reliable measurement of changes in the diameter of isolated vessel segments over time. Although several software packages are available to carry out such measurements on small arteries and veins, no such software exists to study smaller vessels (<50 µm in diameter). We provide here a new, freely available open-source algorithm, MyoTracker, to measure and track changes in the diameter of small isolated retinal arterioles. The program has been developed as an ImageJ plug-in and uses a combination of cost analysis and edge enhancement to detect the vessel walls. In tests performed on a dataset of 102 images, automatic measurements were found to be comparable to manual ones. The program was also able to track both fast and slow constrictions and dilations during intraluminal pressure changes and following the application of several drugs. Variability in automated measurements during the analysis of videos, as well as processing times, were also investigated and are reported. MyoTracker is new software to assist during pressure myography experiments on small isolated retinal arterioles. It provides fast and accurate measurements with low levels of noise and works with both individual images and videos. Although the program was developed to work with small arterioles, it is also capable of tracking the walls of other types of microvessels, including venules and capillaries. It also works well with larger arteries, and therefore may provide an alternative to other packages developed for larger vessels when its features are considered advantageous.
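
    One simple way to measure a vessel diameter from a cross-sectional line profile -- not MyoTracker's cost-analysis algorithm -- is to take the separation of the strongest opposite-signed intensity gradients, as sketched below with a synthetic profile.

```python
# Estimate vessel diameter from an intensity profile across the vessel:
# the two walls appear as the strongest falling and rising edges.
import numpy as np

def diameter_from_profile(profile, um_per_px=0.5):
    grad = np.gradient(np.asarray(profile, float))
    falling = int(np.argmin(grad))   # one wall
    rising = int(np.argmax(grad))    # the other wall
    return abs(rising - falling) * um_per_px

# Synthetic profile: bright background, dark vessel lumen in the middle.
x = np.arange(200)
profile = 200.0 - 120.0 * ((x > 80) & (x < 140))
print(diameter_from_profile(profile), "um")
```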

  1. A new morphometric implemented video-image analysis protocol for the study of social modulation in activity rhythms of marine organisms.

    Science.gov (United States)

    Menesatti, Paolo; Aguzzi, Jacopo; Costa, Corrado; García, José Antonio; Sardà, Francesc

    2009-10-30

    Video-image analysis can be an efficient tool for microcosm experiments portraying the modulation of individual behaviour based on sociality. The Norway lobster, Nephrops norvegicus, is a burrowing decapod whose commercial capture by trawling occurs only when animals are engaged in seabed excursions. Emergence behaviour is modulated by the day-night cycle, but a further modulation occurs upon social interaction, in a still unknown fashion. Here, we present a novel automated protocol for tracking the movement of several animals at once, based on a multivariate morphometric approach. Four black and white tags were customized according to a precise geometric design. Shape Matching and Complex Fourier Descriptor analyses were used to track tag displacement through consecutive frames in a 7-day experiment under monochromatic blue light (480 nm)-darkness conditions. Shape Matching errors were evaluated in relation to tag geometry. Time series of centroid coordinates in pixels were transformed into centimetres. The FD analysis was slightly less efficient than Shape Matching, although more rapid (i.e. up to 20 times faster). Nocturnal rhythms were reported for all animals. Waveform analysis indicated marked differences in the amplitude of activity phases as evidence of inter-individual interaction. Total diel activity showed a decrease in the rate of out-of-burrow locomotion as the testing progressed. N. norvegicus is a nocturnal species, and the present observations support the efficiency and fidelity of our automated tracking system.
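
    Complex Fourier descriptors, one of the two tag-matching tools named above, can be sketched in a few lines: the contour becomes a complex signal and low-order FFT magnitudes give a translation- and rotation-tolerant signature. The normalization shown is one common convention, an assumption here rather than the authors' exact scheme.

```python
# Compare two closed contours via complex Fourier descriptor magnitudes.
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=10):
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    spectrum = np.fft.fft(z - z.mean())  # drop DC -> translation invariance
    mags = np.abs(spectrum)
    mags /= mags[1]                      # normalize -> scale invariance
    return mags[2:2 + n_coeffs]          # magnitudes -> rotation tolerance

theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
blob = (1 + 0.3 * np.cos(4 * theta)) * np.exp(1j * theta)  # square-ish tag
square_ish = np.c_[blob.real, blob.imag]
circle = np.c_[np.cos(theta), np.sin(theta)]
d = np.linalg.norm(fourier_descriptors(square_ish) - fourier_descriptors(circle))
print(round(float(d), 3))  # nonzero distance: the shapes differ
```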

  2. Using Video Analysis and Biomechanics to Engage Life Science Majors in Introductory Physics

    Science.gov (United States)

    Stephens, Jeff

    There is an interest in Introductory Physics for the Life Sciences (IPLS) as a way to better engage students in what may be their only physical science course. In this talk I will present some low-cost and readily available technologies for video analysis and how they have been implemented in classes and in student research projects. The technologies include software like Tracker and LoggerPro for video analysis and low-cost high-speed cameras for capturing real-world events. The focus of the talk will be on content created by students, including two biomechanics research projects performed over the summer by pre-physical therapy majors. One project involved assessing medial knee displacement (MKD), a situation where the subject's knee becomes misaligned during a squatting motion, which is a contributing factor in ACL and other knee injuries. The other project looks at the difference in landing forces experienced by gymnasts and cheerleaders while performing on foam mats versus spring floors. The goal of this talk is to demonstrate how easy it can be to engage life science majors through the use of video analysis and topics like biomechanics, and to encourage others to try it for themselves.
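
    The landing-force comparison lends itself to a simple impulse-momentum estimate from video-derived quantities (touchdown velocity from drop height, contact time from frame counts); the sketch below uses invented numbers, not the students' measurements.

```python
# Average ground reaction force during landing from impulse-momentum:
# F_net * t = m * dv, plus body weight.
def mean_landing_force(mass_kg, drop_height_m, contact_time_s, g=9.81):
    v_touchdown = (2 * g * drop_height_m) ** 0.5  # free-fall speed at contact
    return mass_kg * v_touchdown / contact_time_s + mass_kg * g

# A spring floor lengthens contact time, lowering the average force:
for name, t in [("foam mat", 0.060), ("spring floor", 0.110)]:
    force = mean_landing_force(55.0, 0.8, t)
    print(name, round(force / (9.81 * 55.0), 1), "x body weight")
```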

  3. MULTI LEVEL SEMANTIC EXTRACTION FOR CRICKET VIDEO BY TEXT PROCESSING

    Directory of Open Access Journals (Sweden)

    Dr. SUNITHA ABBURU

    2010-10-01

    Full Text Available Semantic video analysis, indexing and retrieval are necessary for effective utilization of video repositories. Semantics can be extracted from semantic carriers such as voice and video text. Superimposed text is a suitable source from which to extract the semantics of a video, and using it increases the efficiency of the retrieval system. This paper proposes a semiautomatic method to generate annotation for cricket videos and an automated tool, DLER, to extract the semantics of cricket video. The DLER tool provides a fast and robust approach for text Detection, Localization, Extraction, and Recognition in video frames, which is flexible and user friendly. DLER integrates all the pre-processing steps and the OCR steps into a single unit. The annotator can pick the ROI, increase or decrease the threshold, contrast or brightness, or invert the image, based on the type of the broadcast video. The tool has been implemented and tested with cricket video, and the results of the experiments are promising. Finally, conclusions and future work are discussed.
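
    The preprocessing-plus-OCR pipeline described above can be sketched in a few lines. The following is a hypothetical illustration using OpenCV and pytesseract, not the DLER tool itself; the ROI, threshold, contrast/brightness and inversion parameters mirror the manual controls mentioned in the abstract.

```python
import cv2
import pytesseract  # assumes the Tesseract OCR engine is installed

def extract_overlay_text(frame, roi, threshold=150, alpha=1.0, beta=0,
                         invert=False):
    """Read superimposed text from a video frame.

    frame: BGR image from the video; roi: (x, y, w, h) region picked by
    the annotator. alpha/beta adjust contrast/brightness, and invert
    flips polarity for light-on-dark overlays. Illustrative sketch only.
    """
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gray = cv2.convertScaleAbs(gray, alpha=alpha, beta=beta)
    if invert:
        gray = 255 - gray
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return pytesseract.image_to_string(binary).strip()
```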

  4. The Narrative Analysis of the Discourse on Homosexual BDSM Pornographic Video Clips of The Manhunt Variety

    Directory of Open Access Journals (Sweden)

    Milica Vasić

    2016-02-01

    Full Text Available In this paper we have analyzed the ideal-type model of the story which represents the basic framework of action in Manhunt-category pornographic internet video clips, using the narrative analysis methods of Claude Bremond. The results have shown that it is possible to apply the theoretical model to elements of visual and mass culture, with certain modifications and taking into account the wider context of the narrative itself. The narrative analysis indicated the significance of researching categories of pornography on the internet, because it leads to a deeper analysis of the distribution of power in the relations between the categories of heterosexual and homosexual within a virtual environment.

  5. Organ donation on Web 2.0: content and audience analysis of organ donation videos on YouTube.

    Science.gov (United States)

    Tian, Yan

    2010-04-01

    This study examines the content of and audience response to organ donation videos on YouTube, a Web 2.0 platform, using framing theory. Positive frames were identified in both video content and audience comments. Analysis revealed a reciprocal relationship between media frames and audience frames. Videos covered content categories such as kidney, liver, the organ donation registration process, and youth. Videos were favorably rated. No significant differences were found between videos produced by organizations and individuals in the United States and those produced in other countries. The findings provide insight into how new communication technologies are shaping health communication in ways that differ from traditional media. The implications of Web 2.0, characterized by user-generated content and interactivity, for health communication and health campaign practice are discussed.

  6. Automation or De-automation

    Science.gov (United States)

    Gorlach, Igor; Wessel, Oliver

    2008-09-01

    In the global automotive industry, for decades, vehicle manufacturers have continually increased the level of automation of production systems in order to be competitive. However, there is a new trend to decrease the level of automation, especially in final car assembly, for reasons of economy and flexibility. In this research, the final car assembly lines at three production sites of Volkswagen are analysed in order to determine the best level of automation for each, in terms of manufacturing costs, productivity, quality and flexibility. The case study is based on the methodology proposed by the Fraunhofer Institute. The results of the analysis indicate that fully automated assembly systems are not necessarily the best option in terms of cost, productivity and quality combined, which is attributed to the high complexity of final car assembly systems; some de-automation is therefore recommended. On the other hand, the analysis shows that low automation can result in poor product quality for reasons related to plant location, such as inadequate worker skills, motivation, etc. Hence, the automation strategy should be formulated on the basis of an analysis of all relevant aspects of the manufacturing process, such as costs, quality, productivity and flexibility, in relation to the local context. A more balanced combination of automated and manual assembly operations provides better utilisation of equipment, reduces production costs and improves throughput.

  7. Writing/Thinking in Real Time: Digital Video and Corpus Query Analysis

    Directory of Open Access Journals (Sweden)

    Park, Kwanghyun

    2010-10-01

    Full Text Available The advance of digital video technology over the past two decades has facilitated the empirical investigation of learning in real time. The focus of this paper is the combined use of real-time digital video and a networked linguistic corpus for exploring the ways in which these technologies enhance our capability to investigate the cognitive process of learning. A perennial challenge to research using digital video (e.g., screen recordings) has been the method for interfacing the captured behavior with the learners' cognition. An exploratory proposal in this paper is that, with an additional layer of data (i.e., corpus search queries), analyses of real-time data can be extended to provide an explicit representation of a learner's cognitive processes. This paper describes the method and applies it to an area of second language acquisition (SLA), specifically writing, and presents an in-depth, moment-by-moment analysis of an L2 writer's composing process. The findings show that the writer's composing process is fundamentally developmental, and that it is facilitated by her dialogue-like interaction with an artifact (i.e., the corpus). The analysis illustrates the effectiveness of the method for capturing learners' cognition, suggesting that L2 learning can be more fully explicated by interpreting real-time data in concert with investigation of corpus search queries.

  8. Video Analysis and Modeling Performance Task to promote becoming like scientists in classrooms

    CERN Document Server

    Wee, Loo Kang

    2015-01-01

    This paper aims to share the use of Tracker, a free open-source video analysis and modeling tool that is increasingly used as a pedagogical tool for the effective learning and teaching of physics, for Grade 9 (Secondary 3) students in Singapore schools, to make physics relevant to the real world. We discuss the pedagogical use of Tracker, guided by the Framework for K-12 Science Education by the National Research Council, USA, to help students become more like scientists. For a period of 6 to 10 weeks, students use video analysis coupled with the 8 practices of science: 1. ask questions, 2. use models, 3. plan and carry out investigations, 4. analyse and interpret data, 5. use mathematical and computational thinking, 6. construct explanations, 7. argue from evidence and 8. communicate information. This paper focuses on discussing some of the performance task design ideas, such as 3.1 flip video, 3.2 starting with simple classroom activities, 3.3 primer science activity, 3.4 integrative dynamics and kinematics l...
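
    Practice 5 ("use mathematical and computational thinking") is typically where tracked video data meets a model. A minimal sketch, with made-up numbers, of fitting a free-fall model to Tracker-style position-time data to recover g:

```python
import numpy as np

# Tracked vertical positions (m) of a dropped ball at 30 fps;
# illustrative synthetic data with a little measurement noise.
t = np.arange(0, 0.5, 1 / 30)
y = 1.2 - 0.5 * 9.81 * t**2 + np.random.normal(0, 0.002, t.size)

# Fit y(t) = y0 + v0*t - (g/2)*t^2 and read g off the quadratic term.
a2, a1, a0 = np.polyfit(t, y, 2)
print(f"estimated g = {-2 * a2:.2f} m/s^2")  # ~9.81
```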

  9. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm has been proposed based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) with a hierarchical mean-pyramid search. All motion estimation algorithms were implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal-to-noise ratio on different video sequences is sometimes better and sometimes worse than that of known algorithms, so it requires further investigation. Practical Relevance. Applying this algorithm in MPEG-4 and H.264 codecs instead of the standard algorithms can significantly reduce compression time. This makes it suitable for telecommunication systems for multimedia data storage, transmission and processing.
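
    For readers unfamiliar with ARPS, the sketch below implements the classic (non-hierarchical) adaptive rood pattern search for a single block; the mean-pyramid layer that HARPS adds on top is omitted, and the code is an illustration rather than the authors' MATLAB implementation.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def arps_block(ref, cur, x, y, bs=16, pred=(0, 0), search=7):
    """Adaptive rood pattern search for one block (simplified sketch).

    ref/cur: previous and current grayscale frames; (x, y): top-left of
    the block in `cur`; pred: motion vector predicted from a neighbour
    block. Returns the estimated motion vector (dx, dy).
    """
    block = cur[y:y + bs, x:x + bs]

    def cost(dx, dy):
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
            return np.inf
        return sad(block, ref[yy:yy + bs, xx:xx + bs])

    # Initial rood: arm length set adaptively from the predicted vector.
    arm = max(abs(pred[0]), abs(pred[1]), 1)
    candidates = {(0, 0), (arm, 0), (-arm, 0), (0, arm), (0, -arm), tuple(pred)}
    best = min(candidates, key=lambda v: cost(*v))

    # Refinement: unit-size rood around the best point until no improvement.
    while True:
        around = [(best[0] + dx, best[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if abs(best[0] + dx) <= search and abs(best[1] + dy) <= search]
        nxt = min(around, key=lambda v: cost(*v))
        if cost(*nxt) >= cost(*best):
            return best
        best = nxt
```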

  10. The use of video analysis and the Knowledge Quartet in mathematics teacher education programmes

    Science.gov (United States)

    Liston, Miriam

    2015-01-01

    This study investigates the potential of video analysis and a mathematical knowledge for teaching framework, the Knowledge Quartet (KQ), in mathematics teacher education programmes. It reports on the effectiveness of these tools in analysing and supporting secondary level pre-service mathematics teachers' subject matter knowledge and pedagogical content knowledge. This paper describes how a videotaped lesson of one pre-service teacher, teaching a class of mature students, was analysed and makes comparisons between the teacher educators' and the pre-service teacher's observations. Inter-rater reliability was investigated and a Kappa coefficient of .72 indicated substantial agreement between both coders. Findings are presented and implications of the use of video and the KQ for mathematics teacher education are drawn.

  11. Rate-prediction structure complexity analysis for multi-view video coding using hybrid genetic algorithms

    Science.gov (United States)

    Liu, Yebin; Dai, Qionghai; You, Zhixiang; Xu, Wenli

    2007-01-01

    Efficient exploitation of temporal and inter-view correlation is critical to multi-view video coding (MVC), and the key lies in designing the prediction chain structure according to the various patterns of correlation. In this paper, we propose a novel prediction structure model to design optimal MVC coding schemes, along with an in-depth trade-off analysis between compression efficiency and prediction structure complexity for certain standard functionalities. By focusing on the representation of the entire set of possible chain structures rather than certain typical ones, the proposed model can give efficient MVC schemes that adapt to the requirements of structure complexity and video source characteristics (the number of views, and the degrees of temporal and inter-view correlation). To handle the large-scale optimization problem, we deploy a hybrid genetic algorithm, which yields satisfactory results in the simulations.

  12. Automatic classification of images with appendiceal orifice in colonoscopy videos.

    Science.gov (United States)

    Cao, Yu; Liu, Danyu; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2006-01-01

    Colonoscopy is an endoscopic technique that allows a physician to inspect the inside of the human colon. In current practice, videos captured during colonoscopic procedures are not routinely stored for either manual or automated post-procedure analysis. In this paper, we introduce new algorithms for automated detection of the appendiceal orifice, the shape of the opening of the appendix, in a colonoscopy video frame. The appearance of the appendix in colonoscopy videos indicates traversal of the colon, which is an important measurement for evaluating the quality of colonoscopic procedures. The proposed techniques are valuable for (1) establishing an effective content-based retrieval system to facilitate endoscopic research and education; and (2) assessing and improving the procedural skills of endoscopists, both in training and practice.

  13. AUTOMATION OF MORPHOMETRIC MEASUREMENTS FOR PLANETARY SURFACE ANALYSIS AND CARTOGRAPHY

    Directory of Open Access Journals (Sweden)

    A. A. Kokhanov

    2016-06-01

    Full Text Available To automate the measurement of morphometric parameters of surface relief, various tools were developed and integrated into a GIS. We have created a tool that calculates statistical characteristics of the surface: interquartile ranges of heights and slopes, as well as second derivatives of height fields, as measures of topographic roughness. Other tools were created for morphological studies of craters. One of them allows the automatic placement of topographic profiles through the geometric center of a crater. Another tool was developed for the calculation of small crater depths and shape estimation, using the C++ programming language. Additionally, we have prepared a tool for calculating the volumes of relief features from DTM rasters. The created software modules and models will be made available in a newly developed web-GIS system operating in a distributed cloud environment.
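
    As an illustration of the roughness statistics described above, the following sketch computes sliding-window interquartile ranges of heights and slopes from a DTM raster with NumPy/SciPy; the GIS tools themselves presumably operate on map-projected rasters with proper nodata handling.

```python
import numpy as np
from scipy.ndimage import generic_filter

def roughness_maps(dem, cell_size, window=11):
    """Interquartile range of heights and slopes in a sliding window.

    dem: 2-D array of elevations; cell_size: raster resolution in metres.
    Returns two roughness rasters of the same shape as the input.
    """
    def iqr(values):
        q75, q25 = np.percentile(values, [75, 25])
        return q75 - q25

    gy, gx = np.gradient(dem, cell_size)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope in degrees
    return (generic_filter(dem, iqr, size=window),
            generic_filter(slope, iqr, size=window))
```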

  14. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the accumulation reaction to be identified and quantified without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  15. Automation of Morphometric Measurements for Planetary Surface Analysis and Cartography

    Science.gov (United States)

    Kokhanov, A. A.; Bystrov, A. Y.; Kreslavsky, M. A.; Matveev, E. V.; Karachevtseva, I. P.

    2016-06-01

    To automate the measurement of morphometric parameters of surface relief, various tools were developed and integrated into a GIS. We have created a tool that calculates statistical characteristics of the surface: interquartile ranges of heights and slopes, as well as second derivatives of height fields, as measures of topographic roughness. Other tools were created for morphological studies of craters. One of them allows the automatic placement of topographic profiles through the geometric center of a crater. Another tool was developed for the calculation of small crater depths and shape estimation, using the C++ programming language. Additionally, we have prepared a tool for calculating the volumes of relief features from DTM rasters. The created software modules and models will be made available in a newly developed web-GIS system operating in a distributed cloud environment.

  16. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  17. 3D Assembly Group Analysis for Cognitive Automation

    Directory of Open Access Journals (Sweden)

    Christian Brecher

    2012-01-01

    Full Text Available A concept that allows the cognitive automation of robotic assembly processes is introduced. An assembly cell comprising two robots was designed to verify the concept. For the purpose of validation, a customer-defined part group consisting of Hubelino bricks is assembled. One of the key aspects of this process is the verification of the assembly group. Hence, a software component was designed that utilizes the Microsoft Kinect to perceive both depth and color data in the assembly area. This information is used to determine the current state of the assembly group and is compared to a CAD model for validation purposes. In order to efficiently resolve erroneous situations, the results are interactively accessible to a human expert. The implications for an industrial application are demonstrated by transferring the developed concepts to an assembly scenario for switch-cabinet systems.

  18. Added value of a mandible movement automated analysis in the screening of obstructive sleep apnea.

    Science.gov (United States)

    Maury, Gisele; Cambron, Laurent; Jamart, Jacques; Marchand, Eric; Senny, Frédéric; Poirrier, Robert

    2013-02-01

    In-laboratory polysomnography is the 'gold standard' for diagnosing obstructive sleep apnea syndrome, but it is time consuming and costly, with long waiting lists in many sleep laboratories. Therefore, the search for alternative methods to detect respiratory events is growing. In this prospective study, we compared attended polysomnography with two other methods of detecting respiratory events: airflow and oxygen saturation analysis, with or without mandible movement automated analysis provided by a distance-meter. The mandible movement automated analysis allows for the detection of salient mandible movements, which are a surrogate for arousal. All parameters were recorded simultaneously in 570 consecutive patients (M/F: 381/189; age: 50±14 years; body mass index: 29±7 kg m⁻²) visiting a sleep laboratory. The most frequent main diagnoses were: obstructive sleep apnea (344; 60%); insomnia/anxiety/depression (75; 13%); and upper airway resistance syndrome (25; 4%). The correlation between polysomnography and the method with mandible movement automated analysis was excellent (r: 0.95; P<0.001). Accuracy characteristics of the methods showed a statistical improvement in sensitivity and negative predictive value with the addition of mandible movement automated analysis. This was true for different diagnostic thresholds of obstructive sleep apnea severity, with excellent efficiency for a moderate-to-severe index (apnea-hypopnea index ≥15 h⁻¹). A Bland & Altman plot corroborated the analysis. The addition of mandible movement automated analysis significantly improves the accuracy of the respiratory index calculation compared with airflow and oxygen saturation analysis alone. This is an attractive method for the screening of obstructive sleep apnea syndrome, increasing the ability to detect hypopnea thanks to the salient mandible movement as a marker of arousals.

  19. Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis.

    Science.gov (United States)

    Garrison, Kathleen A; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J; Aziz-Zadeh, Lisa S

    2015-01-01

    Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant's structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant's non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.

  20. Digital video analysis of health professionals' interactions with an electronic whiteboard

    DEFF Research Database (Denmark)

    Rasmussen, Rasmus; Kushniruk, Andre

    2013-01-01

    and analysis of continuous digital video recordings of naturalistic "live" user interactions. The method developed and employed in the study included recording the users' interactions with the system during actual use using screen-capturing software and analyzing these recordings for usability issues... However, challenges and drawbacks to using the method (including the time taken for analysis and logistical issues in doing live recordings) should be considered before utilizing a similar approach. In conclusion, we summarize our findings and call for an increased focus on longitudinal and naturalistic...

  1. DEFINITION AND ANALYSIS OF MOTION ACTIVITY AFTER-STROKE PATIENT FROM THE VIDEO STREAM

    Directory of Open Access Journals (Sweden)

    M. Yu. Katayev

    2014-01-01

    Full Text Available This article describes an approach to the assessment of the motion activity of patients in the post-stroke period, allowing the doctor to obtain new information and give more informed recommendations on rehabilitation treatment than with traditional approaches. We describe a hardware-software complex for determining and analyzing the motion activity of post-stroke patients from a video stream. The article provides a description of the complex, its algorithms, and the results of its operation on an example of processing actual data. The algorithms and technology significantly accelerate gait analysis and improve the quality of diagnosis for post-stroke patients.
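
    The simplest building block of such a complex is a per-frame motion activity signal. A hypothetical sketch using OpenCV frame differencing follows; the abstract does not specify the authors' actual algorithm, so this only illustrates the general idea.

```python
import cv2
import numpy as np

def motion_activity(video_path, diff_threshold=25):
    """Fraction of changed pixels per frame as a crude activity signal."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    activity = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        activity.append(np.mean(diff > diff_threshold))
        prev = gray
    cap.release()
    return np.array(activity)  # one value per frame, 0 (still) to 1
```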

  2. Eulerian frequency analysis of structural vibrations from high-speed video

    Science.gov (United States)

    Venanzoni, Andrea; De Ryck, Laurent; Cuenca, Jacques

    2016-06-01

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists of decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale, or level, can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools, or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by selectively amplifying the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray-scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content
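
    The second tool reduces to a per-pixel temporal FFT. A minimal NumPy sketch of a predominant-frequency map, assuming a (frames, height, width) grayscale array as input:

```python
import numpy as np

def dominant_frequency_map(frames, fps):
    """Map of the predominant temporal frequency at each pixel.

    frames: array of shape (T, H, W), grayscale video as floats.
    Returns an (H, W) array of frequencies in Hz: an FFT of each
    pixel's time history, i.e. the Eulerian view in its simplest form.
    """
    data = frames - frames.mean(axis=0)           # remove the static background
    spectrum = np.abs(np.fft.rfft(data, axis=0))  # per-pixel amplitude spectra
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    peak = spectrum[1:].argmax(axis=0) + 1        # skip the DC bin
    return freqs[peak]
```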

  3. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction

    Science.gov (United States)

    2014-07-01

    A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Human-Robot Interaction, by Kristin E. Schaefer.

  4. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Ebrahimi Touradj

    2004-01-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties, and their definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows one to cope with multiple

  5. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    Science.gov (United States)

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

    To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This is a common misconception, however, as has been shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and the application of quantitative network metrics, this agent-based modelling paper shows how the role of the driver remains an integral part of the driving system, implying that designers need to ensure drivers are provided with the tools necessary to remain actively in the loop despite increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation, using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.

  6. Automated analysis of small animal PET studies through deformable registration to an atlas

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Daniel F. [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands)

    2012-11-15

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
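
    Both agreement metrics reported above are straightforward to compute once masks and surface points are available. A sketch assuming boolean masks and point sets in voxel coordinates (multiply by the voxel size to obtain millimetres); `directed_hausdorff` is SciPy's one-sided variant:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two boolean segmentation masks (1 = identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, dims) point sets."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```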

  7. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for user-friendly applications such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type in order to arrive at a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between object-level and scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes that are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization through various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it on a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that feature detection performance is above 92% and event detection about 90%.

  8. Method 365.5 Determination of Orthophosphate in Estuarine and Coastal Waters by Automated Colorimetric Analysis

    Science.gov (United States)

    This method provides a procedure for the determination of low-level orthophosphate concentrations normally found in estuarine and/or coastal waters. It is based upon the method of Murphy and Riley [1], adapted for automated segmented flow analysis [2], in which the two reagent solutions ...

  9. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.;

    1999-01-01

    implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...
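
    The matrix formulation of FMEA mentioned above can be illustrated with boolean reachability: if entry (i, j) of a stage matrix is 1 when failure effect i propagates to effect j of the next subsystem, chaining the matrices reveals which end effects each root failure can reach. A toy sketch, not the authors' tool:

```python
import numpy as np

def propagate(*stages):
    """Chain boolean FMEA propagation matrices through subsystems.

    Each stage is an (effects_in x effects_out) boolean matrix; the
    chained product tells which top-level effects each root failure
    can reach.
    """
    result = stages[0].astype(bool)
    for m in stages[1:]:
        result = (result.astype(int) @ m.astype(int)) > 0
    return result

# Toy example: 3 failure modes -> 2 intermediate effects -> 2 system effects
A = np.array([[1, 0], [1, 1], [0, 1]])
B = np.array([[1, 0], [0, 1]])
print(propagate(A, B).astype(int))
```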

  10. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Bagnoli, F.;

    implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  11. Chapter 2: Predicting Newcomer Integration in Online Knowledge Communities by Automated Dialog Analysis

    NARCIS (Netherlands)

    Nistor, Nicolae; Dascalu, Mihai; Stavarache, Lucia; Tarnai, Christian; Trausan-Matu, Stefan

    2016-01-01

    Nistor, N., Dascalu, M., Stavarache, L.L., Tarnai, C., & Trausan-Matu, S. (2015). Predicting Newcomer Integration in Online Knowledge Communities by Automated Dialog Analysis. In Y. Li, M. Chang, M. Kravcik, E. Popescu, R. Huang, Kinshuk & N.-S. Chen (Eds.), State-of-the-Art and Future Directions of

  12. Design and Prototype of an Automated Column-Switching HPLC System for Radiometabolite Analysis.

    Science.gov (United States)

    Vasdev, Neil; Collier, Thomas Lee

    2016-08-17

    Column-switching high performance liquid chromatography (HPLC) is extensively used for the critical analysis of radiolabeled ligands and their metabolites in plasma. However, the lack of streamlined apparatus, and the consequently varying protocols, remains a challenge among positron emission tomography laboratories. We report here the prototype apparatus and implementation of a fully automated and simplified column-switching procedure to allow for the easy and automated determination of radioligands and their metabolites in up to 5 mL of plasma. The system has been used with conventional UV and coincidence radiation detectors, as well as with a single quadrupole mass spectrometer.

  13. Design and Prototype of an Automated Column-Switching HPLC System for Radiometabolite Analysis

    Directory of Open Access Journals (Sweden)

    Neil Vasdev

    2016-08-01

    Full Text Available Column-switching high performance liquid chromatography (HPLC) is extensively used for the critical analysis of radiolabeled ligands and their metabolites in plasma. However, the lack of streamlined apparatus, and the consequently varying protocols, remains a challenge among positron emission tomography laboratories. We report here the prototype apparatus and implementation of a fully automated and simplified column-switching procedure to allow for the easy and automated determination of radioligands and their metabolites in up to 5 mL of plasma. The system has been used with conventional UV and coincidence radiation detectors, as well as with a single quadrupole mass spectrometer.

  14. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well-defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and the nucleus, which provide useful markers for morphometric analysis; however, they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor groove-binding (4′,6-diamidino-2-phenylindole, DAPI) and a base-pair intercalating (propidium iodide, PI, or SYBR Green) fluorescent stain, combined with color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of measuring kinetoplast and nucleus DNA content, size and position and cell body shape, length and width automatically. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerated analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  15. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    Science.gov (United States)

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A.G.; Sellergren, Börje; Reubsaet, Léon

    2017-01-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination using ProGastrin Releasing Peptide (ProGRP), a highly sensitive biomarker for Small Cell Lung Cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. Particularly low sample volumes were permitted using the automated extraction within a method which was time-efficient, thereby demonstrating the potential of such a strategy in a clinical setting. PMID:28303910

  16. Automated Protein Biomarker Analysis: on-line extraction of clinical samples by Molecularly Imprinted Polymers

    Science.gov (United States)

    Rossetti, Cecilia; Świtnicka-Plak, Magdalena A.; Grønhaug Halvorsen, Trine; Cormack, Peter A. G.; Sellergren, Börje; Reubsaet, Léon

    2017-03-01

    Robust biomarker quantification is essential for the accurate diagnosis of diseases and is of great value in cancer management. In this paper, an innovative diagnostic platform is presented which provides automated molecularly imprinted solid-phase extraction (MISPE) followed by liquid chromatography-mass spectrometry (LC-MS) for biomarker determination using ProGastrin Releasing Peptide (ProGRP), a highly sensitive biomarker for Small Cell Lung Cancer, as a model. Molecularly imprinted polymer microspheres were synthesized by precipitation polymerization and analytical optimization of the most promising material led to the development of an automated quantification method for ProGRP. The method enabled analysis of patient serum samples with elevated ProGRP levels. Particularly low sample volumes were permitted using the automated extraction within a method which was time-efficient, thereby demonstrating the potential of such a strategy in a clinical setting.

  17. Automated identification of mitochondrial regions in complex intracellular space by texture analysis

    Science.gov (United States)

    Pham, Tuan D.

    2014-01-01

    Automated processing and quantification of biological images have been rapidly increasing the attention of researchers in image processing and pattern recognition, because computerized image and pattern analyses play a critical role in new biological findings and drug discovery based on modern high-throughput and high-content image screening. This paper presents a study of the automated detection of regions of mitochondria, a subcellular structure of eukaryotic cells, in microscopy images. The automated identification of mitochondria in intracellular space captured by the state-of-the-art combination of focused ion beam and scanning electron microscope imaging, as reported here, is the first of its type. Existing methods and a proposed algorithm for texture analysis were tested with real intracellular images. The high rate of correctly detecting the locations of the mitochondria in a complex environment suggests the effectiveness of the proposed study.
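
    Texture analysis of this kind is often built on grey-level co-occurrence matrices. A generic sketch with scikit-image (where the functions are spelled graycomatrix/graycoprops in recent releases); the features actually used in the study may differ:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def texture_features(patch):
    """Grey-level co-occurrence features for one image patch.

    patch: 2-D uint8 image region. Returns a small dictionary of
    classic Haralick-style statistics usable as a texture descriptor.
    """
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```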

  18. Automated red blood cell analysis compared with routine red blood cell morphology by smear review

    Directory of Open Access Journals (Sweden)

    Dr.Poonam Radadiya

    2015-01-01

    Full Text Available The RBC histogram is an integral part of automated haematology analysis and is now routinely available on all automated cell counters. This histogram and other associated complete blood count (CBC) parameters have been found to be abnormal in various haematological conditions and may provide major clues in the diagnosis and management of significant red cell disorders. Performing manual blood smears is important to ensure the quality of blood count results and to make presumptive diagnoses. In this article, we analyzed 100 samples in a comparative study of RBC histograms obtained by an automated haematology analyzer against peripheral blood smears. This article also discusses some morphological features of dimorphism and the ensuing characteristic changes in RBC histograms.

  19. Terminal Performance of Lead Free Pistol Bullets in Ballistic Gelatin Using Retarding Force Analysis from High Speed Video

    Science.gov (United States)

    2016-04-04

    ...quantified using high speed video. The temporary stretch cavities and permanent wound cavities are also characterized. Two factors tend to reduce the ... which reduces muzzle velocity and energy, and thus reduces the ability of the bullet to exert damaging forces in tissue simulant. Second, the lower...

  20. A qualitative analysis of methotrexate self-injection education videos on YouTube.

    Science.gov (United States)

    Rittberg, Rebekah; Dissanayake, Tharindri; Katz, Steven J

    2016-05-01

    The aim of this study is to identify and evaluate the quality of videos available on YouTube for patients learning to self-administer subcutaneous methotrexate. Using the search term "Methotrexate injection," two clinical reviewers analyzed the first 60 videos on YouTube. The source and search rank of each video, audience interaction, video duration, and time since the video was uploaded to YouTube were recorded. Videos were classified as useful, misleading, or a personal patient view. Videos were rated for reliability, comprehensiveness, and global quality scale (GQS). Reasons for misleading videos were documented, and patient videos were documented as being either positive or negative towards methotrexate (MTX) injection. Fifty-one English videos overlapped between the two geographic locations; 10 videos were classified as useful (19.6 %), 14 misleading (27.5 %), and 27 personal patient view (52.9 %). Total views of videos were 161,028: 19.2 % useful, 72.8 % patient, and 8.0 % misleading. Mean GQS: 4.2 (±1.0) for useful, 1.6 (±1.1) for misleading, and 2.0 (±0.9) for patient videos (p = 0.0027). This study demonstrates that a minority of videos are useful for teaching MTX injection. Further, video quality does not correlate with video views. While web video may be an additional educational tool available, clinicians need to be familiar with specific resources to help guide and educate their patients to ensure best outcomes.

  1. Automated Finite Element Analysis of Elastically-Tailored Plates

    Science.gov (United States)

    Jegley, Dawn C. (Technical Monitor); Tatting, Brian F.; Guerdal, Zafer

    2003-01-01

    A procedure for analyzing and designing elastically tailored composite laminates using the STAGS finite element solver is presented. The methodology used to produce the elastic tailoring, namely computer-controlled steering of unidirectionally reinforced composite material tows, has been reduced to a handful of design parameters along with a selection of construction methods. The generality of the tow-steered ply definition provides the user a wide variety of options for laminate design, which can be automatically incorporated into any finite element model that is composed of STAGS shell elements. Furthermore, the variable stiffness parameterization is formulated so that manufacturability can be assessed during the design process, and new ideas using tow-steering concepts can be easily integrated within the general framework of the elastic tailoring definitions. Details of the necessary implementation of the tow-steering definitions within the STAGS hierarchy are provided, and the format of the ply definitions is discussed in detail to provide easy access to the elastic tailoring choices. Integration of the automated STAGS solver with laminate design software has been demonstrated, so that the large design space generated by the tow-steering options can be traversed effectively. Several design problems are presented which confirm the usefulness of the design tool as well as further establish the potential of tow-steered plies for laminate design.

  2. Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding.

    Science.gov (United States)

    Cohn, J F; Zlochower, A J; Lien, J; Kanade, T

    1999-01-01

    The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
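
    Hierarchical optical-flow tracking of this kind survives today in standard libraries. A sketch using OpenCV's pyramidal Lucas-Kanade tracker, which belongs to the same algorithm family but is not the authors' 1999 implementation:

```python
import cv2
import numpy as np

def track_feature_points(prev_gray, next_gray, points):
    """Track facial feature points between two frames with pyramidal
    Lucas-Kanade optical flow.

    points: (N, 1, 2) float32 array of (x, y) positions in prev_gray.
    Returns the tracked positions and a per-point success flag.
    """
    new_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None,
        winSize=(21, 21), maxLevel=3,  # 3 pyramid levels = hierarchical search
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return new_points, status.ravel().astype(bool)
```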

  3. Performance analysis of medical video streaming over mobile WiMAX.

    Science.gov (United States)

    Alinejad, Ali; Philip, N; Istepanian, R H

    2010-01-01

    Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. Such applications are bandwidth-demanding services that require high data rates while maintaining acceptable diagnostic quality of the transmitted medical images. In this paper, we present a performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described, together with the performance of the relevant medical quality of service (m-QoS) metrics.

  4. Determining Variation in Flight Speed and Pattern of Cliff Swallow Using Video Frame Analysis

    Directory of Open Access Journals (Sweden)

    E Santosh,

    2014-03-01

    Full Text Available The ability to fly fast varies from one bird species to another. Take-off from the nest, settling on the nest, and migratory flight speed also differ from one bird to another, and this holds for cliff swallows as well. Many workers have tried different ways to analyze the flight speed of birds using principles of mechanics and physics. Here we have analyzed the take-off and settling-in speeds of Indian cliff swallows by applying a video frame analysis technique with a fixed focal length.
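
    Once per-frame positions are digitized, speed estimation is a short calculation. A minimal sketch, assuming the image scale has already been derived from the fixed focal length and a known camera-to-bird distance:

```python
import numpy as np

def flight_speed(positions_px, fps, metres_per_px):
    """Ground speed from per-frame image positions of a tracked bird.

    positions_px: (N, 2) pixel coordinates, one row per video frame;
    metres_per_px: image scale from the fixed focal length and the
    known camera-to-subject distance. Illustrative sketch only.
    """
    steps = np.diff(positions_px, axis=0)                 # pixels per frame
    speed = np.hypot(steps[:, 0], steps[:, 1]) * metres_per_px * fps
    return speed.mean(), speed                            # m/s
```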

  5. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    Full Text Available We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable the analysis of vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousand lipid vesicles (1-50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (a measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in size and shape and are distributed non-homogeneously throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.
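
    The isoperimetric quotient used above has a closed form, Q = 4πA/P², which equals 1 for a circle and decreases as a contour departs from roundness. A minimal sketch for one extracted vesicle contour:

```python
import numpy as np

def isoperimetric_quotient(contour):
    """Roundness of a closed contour: Q = 4*pi*A / P^2 (1 for a circle).

    contour: (N, 2) array of ordered boundary points of one vesicle.
    """
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter including the wrap-around segment from last to first point
    perimeter = np.sum(np.hypot(np.diff(x, append=x[0]),
                                np.diff(y, append=y[0])))
    return 4 * np.pi * area / perimeter**2
```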

  6. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris poses a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the dynamical state of the debris from the collected images. For orbit determination, the most relevant information embedded in an optical observation is the precise angular position, which can be evaluated by astrometry procedures that compare the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes the task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. An automated procedure is investigated in this paper that is capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center, and using this information to compute the debris' angular position in the sky. This procedure has been implemented in software that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed within the research team. For the star field computation, the astrometry.net software, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in an observation campaign performed as a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.

  7. Multimodal microscopy for automated histologic analysis of prostate cancer

    Directory of Open Access Journals (Sweden)

    Sinha Saurabh

    2011-02-01

    Full Text Available Abstract Background Prostate cancer is the single most prevalent cancer in US men, and its gold standard of diagnosis is histologic assessment of biopsies. Manual assessment of stained tissue of all biopsies limits speed and accuracy in clinical practice and research on prostate cancer diagnosis. We sought to develop a fully automated multimodal microscopy method to distinguish cancerous from non-cancerous tissue samples. Methods We recorded chemical data from an unstained tissue microarray (TMA) using Fourier transform infrared (FT-IR) spectroscopic imaging. Using pattern recognition, we identified epithelial cells without user input. We fused the cell type information with the corresponding stained images commonly used in clinical practice. Extracted morphological features, optimized by a two-stage feature selection method using a minimum-redundancy-maximal-relevance (mRMR) criterion and sequential floating forward selection (SFFS), were applied to classify tissue samples as cancer or non-cancer. Results We achieved high accuracy (area under the ROC curve (AUC) >0.97) in cross-validations on each of two data sets that were stained under different conditions. When the classifier was trained on one data set and tested on the other, an AUC value of ~0.95 was observed. In the absence of IR data, the performance of the same classification system dropped for both data sets and between data sets. Conclusions We were able to achieve very effective fusion of the information from two different images that provide very different types of data with different characteristics. The method is entirely transparent to the user and does not involve any adjustment or decision-making based on spectral data. By combining the IR and optical data, we achieved highly accurate classification.

  8. Two-Dimensional Video Analysis of Youth and Adolescent Pitching Biomechanics: A Tool For the Common Athlete.

    Science.gov (United States)

    DeFroda, Steven F; Thigpen, Charles A; Kriz, Peter K

    2016-01-01

    Three-dimensional (3D) motion analysis is the gold standard for analyzing the biomechanics of the baseball pitching motion. Historically, 3D analysis has been available primarily to elite athletes, requiring advanced cameras and sophisticated facilities with expensive software. The advent of newer technology, the increased affordability of video recording devices, and smartphone/tablet-based applications have led to increased access to this technology for youth/amateur athletes and sports medicine professionals. Two-dimensional (2D) video analysis is an emerging tool for the kinematic assessment and observational measurement of pitching biomechanics. It is important for providers, coaches, and players to be aware of this technology, its application in identifying causes of arm pain and preventing injury, as well as its limitations. This review provides an in-depth assessment of 2D video analysis studies of pitching, a direct comparison of 2D video versus 3D motion analysis, and a practical introduction to assessing pitching biomechanics using 2D video analysis.

  9. Grcarma: A fully automated task-oriented interface for the analysis of molecular dynamics trajectories.

    Science.gov (United States)

    Koukos, Panagiotis I; Glykos, Nicholas M

    2013-10-05

    We report the availability of grcarma, a program encoding a fully automated set of tasks aiming to simplify the analysis of molecular dynamics trajectories of biological macromolecules. It is a cross-platform, Perl/Tk-based front-end to the program carma and is designed to facilitate the needs of the novice as well as those of the expert user, while at the same time maintaining a user-friendly and intuitive design. Particular emphasis was given to the automation of several tedious tasks, such as extraction of clusters of structures based on dihedral and Cartesian principal component analysis, secondary structure analysis, calculation and display of root-mean-square deviation (RMSD) matrices, calculation of entropy, calculation and analysis of variance–covariance matrices, calculation of the fraction of native contacts, etc. The program is free, open-source software available immediately for download.

  10. Implicit media frames: automated analysis of public debate on artificial sweeteners.

    Science.gov (United States)

    Hellsten, Iina; Dawson, James; Leydesdorff, Loet

    2010-09-01

    The framing of issues in the mass media plays a crucial role in the public understanding of science and technology. This article contributes to research concerned with the analysis of media frames over time by making an analytical distinction between implicit and explicit media frames, and by introducing an automated method for the analysis of implicit frames. In particular, we apply a semantic maps method to a case study on the newspaper debate about artificial sweeteners, published in the New York Times between 1980 and 2006. Our results show that the analysis of semantic changes enables us to filter out the dynamics of implicit frames, and to detect emerging metaphors in public debates. Theoretically, we discuss the relation between implicit frames in public debates and the codification of meaning and information in scientific discourses, and suggest further avenues for research interested in the automated analysis of frame changes and trends in public debates.

  11. The motion analysis of fire video images based on moment features and flicker frequency

    Institute of Scientific and Technical Information of China (English)

    LI Jin; FONG N. K.; CHOW W. K.; WONG L.T.; LU Puyi; XU Dian-guo

    2004-01-01

    In this paper, motion analysis methods based on moment features and flicker frequency features for detecting early fire flames with an ordinary CCD video camera are proposed. In order to further describe the changing of flame and the disturbance of non-flame phenomena, the average changing pixel number of the first-order moments of consecutive flames is also defined in the moment analysis. The first-order moments of all kinds of flames used in our experiments flicker irregularly, and their average changing pixel numbers of first-order moments are greater than those of fire-like disturbances. The flicker frequency of the flame is extracted and calculated in the spatial domain, and the method is therefore computationally simple and fast. The extraction of flicker frequency from video images is not affected by the category of combustible material or by distance. In the experiments, we adopted two kinds of flames, i.e., fixed flame and movable flame. Many comparison and disturbance experiments were done and verified that the methods can be used as criteria for early fire detection.
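
    The two features can be sketched compactly; a minimal example assuming a binarized flame mask per frame and a known frame rate (both preprocessing assumptions, not the paper's full pipeline):

      import numpy as np

      def centroid(mask):
          """First-order moment (centroid) of a binary flame mask."""
          ys, xs = np.nonzero(mask)
          return xs.mean(), ys.mean()

      def flicker_frequency(areas, fps):
          """Dominant oscillation frequency (Hz) of the flame-area signal."""
          areas = np.asarray(areas, dtype=float) - np.mean(areas)
          spectrum = np.abs(np.fft.rfft(areas))
          freqs = np.fft.rfftfreq(len(areas), d=1.0 / fps)
          return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

      print(centroid(np.eye(5, dtype=bool)))          # (2.0, 2.0)

      # Demo: a 10 Hz flicker sampled at 60 fps is recovered from the areas.
      fps, n = 60, 256
      areas = 500 + 50 * np.sin(2 * np.pi * 10 * np.arange(n) / fps)
      print(flicker_frequency(areas, fps))            # ~10 Hz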

  12. Dynamics at the Holuhraun eruption based on high speed video data analysis

    Science.gov (United States)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following days. Based on video analysis of the fountains we obtained a detailed view of the velocities of the eruption, the propagation path of magma, communication between vents and complexities in the magma paths. We collected videos from the Holuhraun eruption with 2 high-speed cameras and one DSLR camera from 31st August, 2014 to 4th September, 2014 for several hours. The fountains at adjacent vents visually seemed to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime, with apparent and sporadic alternations from meters to several tens of meters in height. With a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we are able to compare the pulses in height at adjacent vents. We find that in most cases there is a time lag between the pulses. From the calculated time lags between the pulses and the distance between the correlated vents, we calculate the apparent speed of magma pulses. The frequencies of the fountains and the rest times between the fountains are quite similar, suggesting a connection and a controlling process of the fountains in the feeder below. At the Holuhraun eruption 2014/2015 (Iceland) we find a significant time shift between the single pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as the magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data analysis to the magma flow velocity in the dike inferred from seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
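
    A simplified sketch, not the FUTUREVOLC implementation, of the lag-to-speed estimate: given two fountain-height time series sampled at interval dt and the vent spacing, the cross-correlation lag yields the apparent pulse speed.

      import numpy as np

      def pulse_speed(h_a, h_b, dt, distance_m):
          """Apparent propagation speed from the cross-correlation lag
          between two fountain-height time series (numpy arrays)."""
          a = (h_a - h_a.mean()) / h_a.std()
          b = (h_b - h_b.mean()) / h_b.std()
          corr = np.correlate(a, b, mode="full")
          lag = (np.argmax(corr) - (len(b) - 1)) * dt   # seconds, signed
          return distance_m / abs(lag) if lag else float("inf")

      # Demo: two noisy pulse trains, the second delayed by 5 s.
      t = np.arange(0.0, 300.0, 1.0)
      rng = np.random.default_rng(1)
      h_a = np.sin(2 * np.pi * t / 30.0) + 0.1 * rng.normal(size=t.size)
      h_b = np.roll(h_a, 5)
      print(pulse_speed(h_a, h_b, dt=1.0, distance_m=100.0))  # ~20 m/s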

  13. Sample preparation and in situ hybridization techniques for automated molecular cytogenetic analysis of white blood cells

    Energy Technology Data Exchange (ETDEWEB)

    Rijke, F.M. van de; Vrolijk, H.; Sloos, W. [Leiden Univ. (Netherlands)] [and others]

    1996-06-01

    With the advent of in situ hybridization techniques for the analysis of chromosome copy number or structure in interphase cells, the diagnostic and prognostic potential of cytogenetics has been augmented considerably. In theory, the strategies for detection of cytogenetically aberrant cells by in situ hybridization are simple and straightforward. In practice, however, they are fallible, because false classification of hybridization spot number or patterns occurs. When a decision has to be made on molecular cytogenetic normalcy or abnormalcy of a cell sample, the problem of false classification becomes particularly prominent if the fraction of aberrant cells is relatively small. In such mosaic situations, often >200 cells have to be evaluated to reach a statistically sound figure. The manual enumeration of in situ hybridization spots in many cells in many patient samples is tedious. Assistance in the evaluation process by automation of microscope functions and image analysis techniques is, therefore, strongly indicated. Next to research and development of microscope hardware, camera technology, and image analysis, the optimization of the specimen for (semi)automated microscopic analysis is essential, since factors such as cell density, thickness, and overlap have dramatic influences on the speed and complexity of the analysis process. Here we describe experiments that have led to a protocol for blood cell specimens that results in microscope preparations well suited for automated molecular cytogenetic analysis. 13 refs., 4 figs., 1 tab.

  14. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology; Reconocimiento automatizado de menas metalicas mediante analisis digital de imagen: un apoyo al proceso mineralurgico. I: ensayo metodologico

    Energy Technology Data Exchange (ETDEWEB)

    Berrezueta, E.; Castroviejo, R.

    2007-07-01

    Ore microscopy has traditionally been an important support for controlling ore processing, but the volume of present-day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis (DIA) is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research 3CCD video camera on a reflected-light microscope. These results were obtained by systematic measurement of selected ores accounting for most industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and illustrated through practical examples. (Author)

  15. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    Full Text Available This paper provides an analysis of multiplexed video signal transmission through a Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where degradation of the transmission quality begins. On the basis of this algorithm, transmission simulations are executed for specific values of the fiber parameters. The analysis of the results shows how the BER and Q-factor depend on the length of the fiber, i.e. on the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the software package OptiSystem 7.0, which is designed for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.
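
    For reference, quality checks of this kind typically rest on the standard mapping between Q-factor and BER, BER = (1/2) erfc(Q / sqrt(2)). A small illustrative computation, independent of the OptiSystem model itself:

      import numpy as np
      from scipy.special import erfc

      def ber_from_q(q):
          """Standard Gaussian-noise approximation of BER from Q-factor."""
          return 0.5 * erfc(q / np.sqrt(2.0))

      for q in (3, 6, 7):
          print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
      # Q = 6 corresponds to BER ~ 1e-9, a common acceptability threshold.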

  16. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition.

  17. Implicit media frames: Automated analysis of public debate on artificial sweeteners

    CERN Document Server

    Hellsten, Iina; Leydesdorff, Loet

    2010-01-01

    The framing of issues in the mass media plays a crucial role in the public understanding of science and technology. This article contributes to research concerned with diachronic analysis of media frames by making an analytical distinction between implicit and explicit media frames, and by introducing an automated method for analysing diachronic changes of implicit frames. In particular, we apply a semantic maps method to a case study on the newspaper debate about artificial sweeteners, published in The New York Times (NYT) between 1980 and 2006. Our results show that the analysis of semantic changes enables us to filter out the dynamics of implicit frames, and to detect emerging metaphors in public debates. Theoretically, we discuss the relation between implicit frames in public debates and codification of information in scientific discourses, and suggest further avenues for research interested in the automated analysis of frame changes and trends in public debates.

  18. A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

    Directory of Open Access Journals (Sweden)

    Yingju Chen

    2012-01-01

    Full Text Available Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of the medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRI and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on research that identifies specific gastrointestinal (GI) pathology and on methods of shot boundary detection.
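
    As an illustration of the simplest shot-boundary cue such reviews cover, a minimal color-histogram sketch (OpenCV; the video path and threshold are placeholders, and real WCE systems use more robust features):

      import cv2

      def shot_boundaries(path, threshold=0.5):
          """Frame indices where the histogram distance spikes (hard cuts)."""
          cap = cv2.VideoCapture(path)
          prev_hist, cuts, idx = None, [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hist = cv2.calcHist([frame], [0, 1, 2], None,
                                  [8, 8, 8], [0, 256] * 3)
              hist = cv2.normalize(hist, hist).flatten()
              if prev_hist is not None:
                  # Chi-square distance jumps at abrupt shot changes.
                  d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CHISQR)
                  if d > threshold:
                      cuts.append(idx)
              prev_hist, idx = hist, idx + 1
          cap.release()
          return cuts

      # e.g. cuts = shot_boundaries("capsule_video.avi")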

  19. Performance Task using Video Analysis and Modelling to promote K12 eight practices of science

    CERN Document Server

    Wee, Loo Kang

    2015-01-01

    We share our use of Tracker as a pedagogical tool in the effective learning and teaching of physics performance tasks, now taking root in some Singapore Grade 9 (Secondary 3) schools. We discuss the pedagogical use of Tracker to help students work like scientists during these 6 to 10 weeks, in which all Grade 9 students conduct a personal video analysis guided, where appropriate, by the 8 practices of science (1. ask questions, 2. use models, 3. plan and carry out investigations, 4. analyse and interpret data, 5. use mathematical and computational thinking, 6. construct explanations, 7. argue from evidence and 8. communicate information). We situate our sharing in actual student work and discuss how Tracker could be an effective pedagogical tool. Initial research findings suggest that allowing learners to conduct performance tasks using Tracker, a free open-source video analysis and modelling tool, guided by the 8 practices of science and engineering, could be an innovative and effective way to mentor authent...

  20. Multi-scale AM-FM motion analysis of ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Murray, Victor; Loizou, C. P.; Pattichis, C. S.; Pattichis, Marios; Barriga, E. Simon

    2012-03-01

    An estimated 82 million American adults have one or more types of cardiovascular disease (CVD). CVD is the leading cause of death (1 of every 3 deaths) in the United States. When considered separately from other CVDs, stroke ranks third among all causes of death, behind diseases of the heart and cancer. Stroke accounts for 1 out of every 18 deaths and is the leading cause of serious long-term disability in the United States. Motion estimation in ultrasound (US) videos of carotid artery (CA) plaques provides important information regarding plaque deformation that should be considered for distinguishing between symptomatic and asymptomatic plaques. In this paper, we present the development of verifiable methods for the estimation of plaque motion. Our methodology is tested on a set of 34 (5 symptomatic and 29 asymptomatic) ultrasound videos of carotid artery plaques. Plaque and wall motion analysis provides information about plaque instability and is used in an attempt to differentiate between symptomatic and asymptomatic cases. The final goal of motion estimation and analysis is to identify pathological conditions that can be detected from motion changes due to changes in tissue stiffness.

  1. GenePublisher: automated analysis of DNA microarray data

    DEFF Research Database (Denmark)

    Knudsen, Steen; Workman, Christopher; Sicheritz-Ponten, T.

    2003-01-01

    GenePublisher, a system for automatic analysis of data from DNA microarray experiments, has been implemented with a web interface at http://www.cbs.dtu.dk/services/GenePublisher. Raw data are uploaded to the server together with a specification of the data. The server performs normalization, statistical analysis and visualization of the data. The results are run against databases of signal transduction pathways, metabolic pathways and promoter sequences in order to extract more information. The results of the entire analysis are summarized in report form and returned to the user.

  2. Examining Feedback in an Instructional Video Game Using Process Data and Error Analysis. CRESST Report 817

    Science.gov (United States)

    Buschang, Rebecca E.; Kerr, Deirdre S.; Chung, Gregory K. W. K.

    2012-01-01

    Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student…

  3. Automation of Safety Analysis with SysML Models Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project was a small proof-of-concept case study, generating SysML model information as a side effect of safety analysis. A prototype FMEA Assistant was...

  4. A performance analysis system for MEMS using automated imaging methods

    Energy Technology Data Exchange (ETDEWEB)

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  5. Infrascope: Full-Spectrum Phonocardiography with Automated Signal Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Using digital signal analysis tools, we will generate a repeatable output from the infrascope and compare it to the output of a traditional electrocardiogram, and...

  6. Automated Techniques for Rapid Analysis of Momentum Exchange Devices

    Science.gov (United States)

    2013-12-01

    Contiguousness: At this point, it is necessary to introduce the concept of contiguousness. In this thesis, a state space analysis representation is... The concept of contiguousness was established to ensure that the results of the analysis would allow for the CMGs to reach every state in the defined... forces at the attachment points of the RWs and CMGs throughout a spacecraft maneuver. Current pedagogy on this topic focuses on the transfer of

  7. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  8. Automation of semen analysis using flow cytometer in comparison with manual methods.

    Science.gov (United States)

    Saleh, Mohamed; Fathy, Amal; El-Akras, Atef I; Eyada, Mostafa M; Younes, Soha; El-Gohary, Ahmed M

    2005-01-01

    In order to standardize techniques and limit the effect of human factors on the results of analyses of biological fluids, automation seems to be mandatory. In an attempt to automate semen analysis, the computer-assisted sperm analysis (CASA) system has been developed; however, its use is still limited and its practical application has drawn many criticisms. As a trial of automating semen analysis, this study aimed to evaluate the usefulness of the flow cytometer in detecting some seminal parameters in comparison with the traditional manual methods. Isolated spermatogenic cells and isolated sperm from semen and EDTA blood of volunteers were analyzed by flow cytometer in order to define their respective regions. Ejaculates of 28 male patients were subjected to routine semen analyses, with leucocyte detection by the peroxidase test and by monoclonal antibody CD53 using the flow cytometer, after preparation of the patients' semen samples for flow cytometric analysis. A highly significant correlation (r=0.96, p=0.001) was found between absolute neutrophils (pus cells) detected by peroxidase and by flow cytometer using the CD53 monoclonal antibody. A poor correlation (r=0.39, p=0.035) was found between sperm counts assessed by the manual technique and by flow cytometer, and spurious sperm counts of 1.08 million/ml were detected by flow cytometry in azoospermic patients. The flow cytometer could be used for the assessment of pus cells in semen, but seems unreliable for the assessment of sperm count if gating depends on sperm size and granularity alone.

  9. Molecular Detection of Bladder Cancer by Fluorescence Microsatellite Analysis and an Automated Genetic Analyzing System

    Directory of Open Access Journals (Sweden)

    Sarel Halachmi

    2007-01-01

    Full Text Available To investigate the ability of an automated fluorescent analyzing system to detect microsatellite alterations in patients with bladder cancer, we investigated 11 patients with pathology-proven bladder transitional cell carcinoma (TCC) for microsatellite alterations in blood, urine, and tumor biopsies. DNA was prepared by standard methods from blood, urine and resected tumor specimens, and was used for microsatellite analysis. After the primers were fluorescently labeled, amplification of the DNA was performed with PCR. The PCR products were placed into the automated genetic analyzer (ABI Prism 310, Perkin Elmer, USA) and were subjected to fluorescent scanning with argon-ion laser beams. The fluorescent signal intensity measured by the genetic analyzer determined the product size in terms of base pairs. We found loss of heterozygosity (LOH) or microsatellite alterations (a loss or gain of nucleotides that alters the original normal locus size) in all the patients by using fluorescent microsatellite analysis and the automated analyzing system. In each case the genetic changes found in urine samples were identical to those found in the resected tumor sample. The studies demonstrated the ability to detect bladder tumors non-invasively by fluorescent microsatellite analysis of urine samples. Our study supports the worldwide trend of searching for non-invasive methods to detect bladder cancer. We have overcome major obstacles that prevented the clinical use of an experimental system. With our newly tested system, microsatellite analysis can be done more cheaply, faster, more easily and with higher scientific accuracy.

  10. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential use of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has remained limited owing to difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms, and it can be used in other applications. 3. The method consists of comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method have been assessed using experimental populations of the collembolan Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
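
    The background-comparison idea translates directly into a few lines of array code. A numpy sketch under simplifying assumptions (grayscale frames, a fixed intensity threshold), separate from the actual ImageJ plugin:

      import numpy as np

      def moving_organisms(frames, diff_threshold=25.0):
          """frames: sequence of grayscale images of the same microcosm."""
          stack = np.stack(frames).astype(float)
          background = np.median(stack, axis=0)     # fixed substrate estimate
          # Pixels that differ from the background are moving organisms;
          # motionless or dead individuals merge into the background.
          masks = np.abs(stack - background) > diff_threshold
          return background, masks

      # Synthetic demo: 20 noisy frames plus one moving bright spot.
      rng = np.random.default_rng(0)
      frames = rng.integers(0, 20, (20, 64, 64)).astype(float)
      for i in range(20):
          frames[i, 30, 2 + 3 * i % 60] += 200.0    # the "organism"
      bg, masks = moving_organisms(frames)
      # Counting/measuring can then run per mask, e.g. scipy.ndimage.label.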

  11. AMAB: Automated measurement and analysis of body motion

    NARCIS (Netherlands)

    Poppe, Ronald; Zee, van der Sophie; Heylen, Dirk K.J.; Taylor, Paul J.

    2014-01-01

    Technologies that measure human nonverbal behavior have existed for some time, and their use in the analysis of social behavior has become more popular following the development of sensor technologies that record full-body movement. However, a standardized methodology to efficiently represent and an

  12. Analysis of the automated systems of planning of spatial constructions

    Directory of Open Access Journals (Sweden)

    М.С. Барабаш

    2004-04-01

    Full Text Available The article addresses the analysis of existing computer-aided design (SAPR) systems and the development of new information technologies for design, based on the integration of software packages through a unified information-logical model of the object.

  13. ADDIS : an automated way to do network meta-analysis

    NARCIS (Netherlands)

    Zhao, Jing; van Valkenhoef, Gert; de Brock, E.O.; Hillege, Hans

    2012-01-01

    In evidence-based medicine, meta-analysis is an important statistical technique for combining the findings from independent clinical trials which have attempted to answer similar questions about a treatment's clinical effectiveness [1]. Normally, such meta-analyses are pair-wise treatment comparisons, w

  14. Automated analysis of three-dimensional stress echocardiography

    NARCIS (Netherlands)

    K.Y.E. Leung (Esther); M. van Stralen (Marijn); M.G. Danilouchkine (Mikhail); G. van Burken (Gerard); M.L. Geleijnse (Marcel); J.H.C. Reiber (Johan); N. de Jong (Nico); A.F.W. van der Steen (Ton); J.G. Bosch (Johan)

    2011-01-01

    textabstractReal-time three-dimensional (3D) ultrasound imaging has been proposed as an alternative for two-dimensional stress echocardiography for assessing myocardial dysfunction and underlying coronary artery disease. Analysis of 3D stress echocardiography is no simple task and requires considera

  15. Automated Frequency Domain Decomposition for Operational Modal Analysis

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen

    2007-01-01

    The Frequency Domain Decomposition (FDD) technique is known as one of the most user friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm for au...

  16. An Empirical Study on the Impact of Automation on the Requirements Analysis Process

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lami; Robert W. Ferguson

    2007-01-01

    Requirements analysis is an important phase in a software project. The analysis is often performed in an informal way by specialists who review documents looking for ambiguities, technical inconsistencies and incomplete parts. Automation is still far from being applied in requirements analyses, above all since natural languages are informal and thus difficult to treat automatically. There are only a few tools that can analyse texts. One of them, called QuARS, was developed by the Istituto di Scienza e Tecnologie dell'Informazione and can analyse texts in terms of ambiguity. This paper describes how QuARS was used in a formal empirical experiment to assess the impact, in terms of effectiveness and efficacy, of automation in the requirements review process of a software company.

  17. An analysis of lecture video utilization in undergraduate medical education: associations with performance in the courses

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Arcot

    2009-01-01

    Full Text Available Abstract Background Increasing numbers of medical schools are providing videos of lectures to their students. This study sought to analyze the utilization of lecture videos by medical students in their basic science courses and to determine whether student utilization was associated with performance on exams. Methods Streaming videos of lectures (n = 149) were made available to first-year and second-year medical students (n = 284) through a password-protected server. Server logs were analyzed over a 10-week period for both classes. For each lecture, the logs recorded the time and location from which students accessed the file. A survey was administered at the end of the courses to obtain additional information about student use of the videos. Results There was a wide disparity in the level of use of lecture videos by medical students, with the majority of students accessing the lecture videos sparingly (60% of the students viewed less than 10% of the available videos). The anonymous student survey revealed that students tended to view the videos by themselves from home during weekends and prior to exams. Students who accessed lecture videos more frequently had significantly (p < ... Conclusion We conclude that videos of lectures are used by relatively few medical students and that individual use of videos is associated with the degree to which students are having difficulty with the subject matter.

  18. Exposure to violent video games and aggression in German adolescents: a longitudinal analysis.

    Science.gov (United States)

    Möller, Ingrid; Krahé, Barbara

    2009-01-01

    The relationship between exposure to violent electronic games and aggressive cognitions and behavior was examined in a longitudinal study. A total of 295 German adolescents completed the measures of violent video game usage, endorsement of aggressive norms, hostile attribution bias, and physical as well as indirect/relational aggression cross-sectionally, and a subsample of N=143 was measured again 30 months later. Cross-sectional results at T1 showed a direct relationship between violent game usage and aggressive norms, and an indirect link to hostile attribution bias through aggressive norms. In combination, exposure to game violence, normative beliefs, and hostile attribution bias predicted physical and indirect/relational aggression. Longitudinal analyses using path analysis showed that violence exposure at T1 predicted physical (but not indirect/relational) aggression 30 months later, whereas aggression at T1 was unrelated to later video game use. Exposure to violent games at T1 influenced physical (but not indirect/relational) aggression at T2 via an increase of aggressive norms and hostile attribution bias. The findings are discussed in relation to social-cognitive explanations of long-term effects of media violence on aggression.

  19. Automated analysis of craniofacial morphology using magnetic resonance images.

    Directory of Open Access Journals (Sweden)

    M Mallar Chakravarty

    Full Text Available Quantitative analysis of craniofacial morphology is of interest to scholars working in a wide variety of disciplines, such as anthropology, developmental biology, and medicine. T1-weighted (anatomical) magnetic resonance images (MRI) provide excellent contrast between soft tissues. Given its three-dimensional nature, MRI represents an ideal imaging modality for the analysis of craniofacial structure in living individuals. Here we describe how T1-weighted MR images, acquired to examine brain anatomy, can also be used to analyze facial features. Using a sample of typically developing adolescents from the Saguenay Youth Study (N = 597; 292 male, 305 female; ages 12 to 18 years), we quantified inter-individual variations in craniofacial structure in two ways. First, we adapted existing nonlinear registration-based morphological techniques to iteratively generate a group-wise population average of craniofacial features. The nonlinear transformations were used to map the craniofacial structure of each individual to the population average. Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. Second, we employed a landmark-based approach to quantify variations in face surfaces. This approach involves: (a) placing 56 landmarks (forehead, nose, lips, jaw-line, cheekbones, and eyes) on a surface representation of the MRI-based group average; (b) warping the landmarks to the individual faces using the inverse nonlinear transformation estimated for each person; and (c) using a principal components analysis (PCA) of the warped landmarks to identify facial features (i.e. clusters of landmarks) that vary in our sample in a correlated fashion. As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in
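
    Step (c) can be illustrated with a short sketch; the landmark array below is a synthetic stand-in for the 56 warped landmarks per subject, and the number of components is an assumption:

      import numpy as np
      from sklearn.decomposition import PCA

      n_subjects, n_landmarks = 597, 56
      rng = np.random.default_rng(1)
      landmarks = rng.normal(size=(n_subjects, n_landmarks, 3))  # x, y, z

      X = landmarks.reshape(n_subjects, -1)        # one row per subject
      pca = PCA(n_components=10)
      scores = pca.fit_transform(X)
      print(pca.explained_variance_ratio_.round(3))
      # Each component is a pattern of landmarks that co-vary across
      # subjects; the scores can then be regressed on sex and age.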

  20. Automated Analysis of the SCR-Style Requirements Specifications

    Institute of Scientific and Technical Information of China (English)

    WU Guoqing; LIU Xiang; YING Shi; Tetsuo Tamai

    1999-01-01

    The SCR (Software Cost Reduction) requirements method is an effective method for specifying software system requirements. This paper presents a formal model for analyzing SCR-style requirements. The analysis model mainly applies state translation rules, semantic computing rules and attributes to define the formal semantics of a tabular notation in the SCR requirements method, and may be used to analyze requirements specifications specified by the SCR requirements method. Using a simple example, this paper introduces how to analyze the consistency and completeness of requirements specifications.

  1. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases that attack the inside of the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible so that the next medical treatment can be determined. The aim of this work is to increase the objectivity of clinical diagnostics by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine the boundary of the organ and calculate the area of the segmented organ. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
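
    A minimal sketch of intensity-based K-Means segmentation in the spirit of the step described above (the image here is random placeholder data, and the number of clusters is an assumption):

      import numpy as np
      from sklearn.cluster import KMeans

      def segment(img, k=3):
          """Cluster pixel intensities; return a label image."""
          pixels = img.reshape(-1, 1).astype(float)
          labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
          return labels.reshape(img.shape)

      img = np.random.default_rng(2).integers(0, 255, (128, 128))
      regions = segment(img)
      # The area of a suspect region is its pixel count times the pixel area.
      print(np.bincount(regions.ravel()))   # pixels per cluster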

  2. Using causal reasoning for automated failure modes and effects analysis (FMEA)

    Science.gov (United States)

    Bell, Daniel; Cox, Lisa; Jackson, Steve; Schaefer, Phil

    The authors have developed a tool that automates the reasoning portion of a failure modes and effects analysis (FMEA). It is built around a flexible causal reasoning module that has been adapted to the FMEA procedure. The approach and software architecture have been proven. A prototype tool has been created and successfully passed a test and evaluation program. The authors are expanding the operational capability and adapting the tool to various CAD/CAE (computer-aided design and engineering) platforms.

  3. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    OpenAIRE

    Kurek, Marcin Andrzej; Piwińska, Monika; Wyrwisz, Jarosław; Wierzbicka, Agnieszka

    2015-01-01

    Abstract The growing interest in the usage of dietary fiber in food has caused the need to provide precise tools for describing its physical properties. This research examined two dietary fibers from oats and beets, respectively, in variable particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were...

  4. Semantic Concept Mining Based on Hierarchical Event Detection for Soccer Video Indexing

    Directory of Open Access Journals (Sweden)

    Maheshkumar H. Kolekar

    2009-10-01

    Full Text Available In this paper, we present a novel automated indexing and semantic labeling method for broadcast soccer video sequences. The proposed method automatically extracts silent events from the video and classifies each event sequence into a concept by sequential association mining. The paper makes three new contributions to multimodal sports video indexing and summarization. First, we propose a novel hierarchical framework for soccer (football) video event sequence detection and classification. Unlike most existing video classification approaches, which focus on shot detection followed by shot clustering for classification, the proposed scheme performs a top-down video scene classification which avoids shot clustering. This improves the classification accuracy and also maintains the temporal order of shots. Second, we compute the association for the events of each excitement clip using the a priori mining algorithm. We propose a novel sequential association distance to classify the association of the excitement clip into semantic concepts. For soccer video, we have considered goal scored by team A, goal scored by team B, goal saved by team A, and goal saved by team B as semantic concepts. Third, the extracted excitement clips with semantic concept labels help us to summarize many hours of video into a collection of soccer highlights such as goals, saves, corner kicks, etc. We show promising results, with correctly indexed soccer scenes, enabling structural and temporal analysis such as video retrieval, highlight extraction, and video skimming.

  5. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. causing filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth... Strains displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm quantifying the length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam/beta-lactamase inhibitor combinations were analyzed...

  6. Video Image Analysis of Turbulent Buoyant Jets Using a Novel Laboratory Apparatus

    Science.gov (United States)

    Crone, T. J.; Colgan, R. E.; Ferencevych, P. G.

    2012-12-01

    Turbulent buoyant jets play an important role in the transport of heat and mass in a variety of environmental settings on Earth. Naturally occurring examples include the discharges from high-temperature seafloor hydrothermal vents and from some types of subaerial volcanic eruptions. Anthropogenic examples include flows from industrial smokestacks and the flow from the damaged well after the Deepwater Horizon oil leak of 2010. Motivated by a desire to find non-invasive methods for measuring the volumetric flow rates of turbulent buoyant jets, we have constructed a laboratory apparatus that can generate these types of flows with easily adjustable nozzle velocities and fluid densities. The jet fluid comprises a variable mixture of nitrogen and carbon dioxide gas, which can be injected at any angle with respect to the vertical into the quiescent surrounding air. To make the flow visible we seed the jet fluid with a water fog generated by an array of piezoelectric diaphragms oscillating at ultrasonic frequencies. The system can generate jets that have initial densities ranging from approximately 2-48% greater than the ambient air. We obtain independent estimates of the volumetric flow rates using well-calibrated rotameters, and collect video image sequences for analysis at frame rates up to 120 frames per second using a machine vision camera. We are using this apparatus to investigate several outstanding problems related to the physics of these flows and their analysis using video imagery. First, we are working to better constrain several theoretical parameters that describe the trajectory of these flows when their initial velocities are not parallel to the buoyancy force. The ultimate goal of this effort is to develop well-calibrated methods for establishing volumetric flow rates using trajectory analysis. Second, we are working to refine optical plume velocimetry (OPV), a non-invasive technique for estimating flow rates using temporal cross-correlation of image

  7. Automated Production of Movies on a Cluster of Computers

    Science.gov (United States)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  8. Automated Experimental Data Analysis at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Azevedo, S G; Bettenhausen, R C; Beeler, R G; Bond, E J; Edwards, P W; Glenn, S M; Liebman, J A; Tappero, J D; Warrick, A L; Williams, W H

    2009-09-24

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam 1.8 MJ ultraviolet laser system designed to support high-energy-density science, including demonstration of inertial confinement fusion ignition. After each target shot lasting ~20 ns, scientists require data acquisition, analysis and display within 30 minutes from more than 20 specialized high-speed diagnostic instruments. These diagnostics measure critical x-ray, optical and nuclear phenomena during target burn to quantify ignition results and compare to computational models. All diagnostic data (hundreds of Gbytes) are automatically transferred to an Oracle database that triggers the NIF Shot Data Analysis (SDA) Engine, which distributes the signal and image processing tasks to a Linux cluster. The SDA Engine integrates commercial workflow tools and messaging technologies into a scientific software architecture that is highly parallel, scalable, and flexible. Results are archived in the database for scientist approval and displayed using a web-based tool. The unique architecture and functionality of the SDA Engine will be presented along with an example.

  9. Automated quantification of the synchrogram by recurrence plot analysis.

    Science.gov (United States)

    Nguyen, Chinh Duc; Wilson, Stephen James; Crozier, Stuart

    2012-04-01

    Recently, the concept of phase synchronization of two weakly coupled oscillators has raised great research interest and has been applied to characterize synchronization phenomena in physiological data. Phase synchronization of cardiorespiratory coupling is often studied by synchrogram analysis, a graphical tool for investigating the relationship between the instantaneous phases of two signals. Although several techniques have been proposed to automatically quantify the synchrogram, most of them require preselection of a phase-locking ratio by trial and error. One technique does not require this information; however, it is based on the power spectrum of the phase distribution in the synchrogram, which is vulnerable to noise. This study aims to introduce a new technique to automatically quantify the synchrogram by studying its dynamic structure. Our technique exploits recurrence plot analysis, which is a well-established tool for characterizing recurring patterns and nonstationarities in experimental data. We applied our technique to detect synchronization in simulated and measured infants' cardiorespiratory data. Our results suggest that the proposed technique is able to systematically detect synchronization in noisy and chaotic data without preselecting the phase-locking ratio. By embedding the phase information of the synchrogram into phase space, the phase-locking ratio is automatically unveiled as the number of attractors.
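
    A compact sketch of the recurrence-matrix construction such an approach builds on, for a toy phase series (the threshold eps and the signal are illustrative, not the authors' parameters):

      import numpy as np

      def recurrence_matrix(phases, eps=0.1):
          """R[i, j] = 1 where the (wrapped) phases recur within eps."""
          d = np.abs(phases[:, None] - phases[None, :])
          d = np.minimum(d, 2 * np.pi - d)          # circular distance
          return (d < eps).astype(int)

      t = np.linspace(0, 60, 600)
      phases = (2 * np.pi * 0.3 * t) % (2 * np.pi)  # toy respiratory phase
      R = recurrence_matrix(phases)
      # Diagonal line structures in R indicate recurring phase relations;
      # their statistics can flag synchronization epochs automatically.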

  10. Conventional Versus Automated Implantation of Loose Seeds in Prostate Brachytherapy: Analysis of Dosimetric and Clinical Results

    Energy Technology Data Exchange (ETDEWEB)

    Genebes, Caroline, E-mail: genebes.caroline@claudiusregaud.fr [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Filleron, Thomas; Graff, Pierre [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Jonca, Frédéric [Department of Urology, Clinique Ambroise Paré, Toulouse (France); Huyghe, Eric; Thoulouzan, Matthieu; Soulie, Michel; Malavaud, Bernard [Department of Urology and Andrology, CHU Rangueil, Toulouse (France); Aziza, Richard; Brun, Thomas; Delannes, Martine; Bachaud, Jean-Marc [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France)

    2013-11-15

    Purpose: To review the clinical outcome of I-125 permanent prostate brachytherapy (PPB) for low-risk and intermediate-risk prostate cancer and to compare 2 techniques of loose-seed implantation. Methods and Materials: 574 consecutive patients underwent I-125 PPB for low-risk and intermediate-risk prostate cancer between 2000 and 2008. Two successive techniques were used: conventional implantation from 2000 to 2004 and automated implantation (Nucletron, FIRST system) from 2004 to 2008. Dosimetric and biochemical recurrence-free (bNED) survival results were reported and compared for the 2 techniques. Univariate and multivariate analyses investigated independent predictors of bNED survival. Results: 419 (73%) and 155 (27%) patients with low-risk and intermediate-risk disease, respectively, were treated (median follow-up time, 69.3 months). The 60-month bNED survival rates were 95.2% and 85.7%, respectively, for patients with low-risk and intermediate-risk disease (P=.04). In univariate analysis, patients treated with automated implantation had worse bNED survival rates than those treated with conventional implantation (P<.0001). By day 30, patients treated with automated implantation showed lower values of the dose delivered to 90% of the prostate volume (D90) and of the volume of prostate receiving 100% of the prescribed dose (V100). In multivariate analysis, implantation technique, Gleason score, and V100 on day 30 were independent predictors of recurrence-free status. Grade 3 urethritis and urinary incontinence were observed in 2.6% and 1.6% of the cohort, respectively, with no significant differences between the 2 techniques. No grade 3 proctitis was observed. Conclusion: Satisfactory 60-month bNED survival rates (93.1%) and acceptable toxicity (grade 3 urethritis <3%) were achieved with loose-seed implantation. Automated implantation was associated with worse dosimetric and bNED survival outcomes.

  11. Automated Extraction of Archaeological Traces by a Modified Variance Analysis

    Directory of Open Access Journals (Sweden)

    Tiziana D'Orazio

    2015-03-01

    Full Text Available This paper considers the problem of detecting archaeological traces in digital aerial images by analyzing the pixel variance over regions around selected points. In order to decide whether a point belongs to an archaeological trace, its surrounding regions are considered. One-way ANalysis Of VAriance (ANOVA) is applied several times to detect differences among these regions; in particular, the expected shape of the mark to be detected is used in each region. Furthermore, an effect-size parameter is defined by comparing the statistics of these regions with the statistics of the entire population, in order to measure how strongly the trace stands out. Experiments on synthetic and real images demonstrate the effectiveness of the proposed approach with respect to some state-of-the-art methodologies.
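
    A simplified sketch of the core test, one-way ANOVA across pixel populations drawn from regions around a candidate point (synthetic data; the paper's shape-aware region selection and effect-size definition are omitted):

      import numpy as np
      from scipy.stats import f_oneway

      rng = np.random.default_rng(3)
      region_a = rng.normal(100, 5, 200)   # pixels along the expected mark
      region_b = rng.normal(110, 5, 200)   # pixels beside it
      region_c = rng.normal(110, 5, 200)

      stat, p = f_oneway(region_a, region_b, region_c)
      # A small p, combined with a large effect size relative to the
      # whole-image statistics, marks the point as part of a trace.
      print(f"F = {stat:.1f}, p = {p:.3g}")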

  12. Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0

    Directory of Open Access Journals (Sweden)

    Kevin A. Huck

    2008-01-01

    Full Text Available The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments present a challenge for managing and processing the information. Simply characterizing the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimension, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.

  13. TScratch: a novel and simple software tool for automated analysis of monolayer wound healing assays.

    Science.gov (United States)

    Gebäck, Tobias; Schulz, Martin Michael Peter; Koumoutsakos, Petros; Detmar, Michael

    2009-04-01

    Cell migration plays a major role in development, physiology, and disease, and is frequently evaluated in vitro by the monolayer wound healing assay. The assay analysis, however, is a time-consuming task that is often performed manually. In order to accelerate this analysis, we have developed TScratch, a new, freely available image analysis technique and associated software tool that uses the fast discrete curvelet transform to automate the measurement of the area occupied by cells in the images. This tool helps to significantly reduce the time needed for analysis and enables objective and reproducible quantification of assays. The software also offers a graphical user interface which allows easy inspection of analysis results and, if desired, manual modification of analysis parameters. The automated analysis was validated by comparing its results with manual-analysis results for a range of different cell lines. The comparisons demonstrate a close agreement for the vast majority of images that were examined and indicate that the present computational tool can reproduce statistically significant results in experiments with well-known cell migration inhibitors and enhancers.

  14. Unsupervised EEG analysis for automated epileptic seizure detection

    Science.gov (United States)

    Birjandtalab, Javad; Pouyan, Maziyar Baran; Nourani, Mehrdad

    2016-07-01

    Epilepsy is a neurological disorder which can, if not controlled, potentially cause unexpected death. It is crucial to have accurate automatic pattern recognition and data mining techniques to detect the onset of seizures and inform care-givers in order to help the patients. EEG signals are the preferred biosignals for diagnosis of epileptic patients. Most of the existing pattern recognition techniques used in EEG analysis leverage supervised machine learning algorithms. Since seizure data are heavily under-represented, such techniques are not always practical, particularly when labeled data are not sufficiently available or when disease progression is rapid and the corresponding EEG footprint pattern is not robust. Furthermore, EEG pattern changes are highly individual-dependent and require experienced specialists to annotate seizure and non-seizure events. In this work, we present an unsupervised technique to discriminate seizure and non-seizure events. We employ the power spectral density of EEG signals in different frequency bands as informative features to accurately cluster seizure and non-seizure events. The experimental results obtained so far indicate more than 90% accuracy in clustering seizure and non-seizure events without any prior knowledge of the patient's history.
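
    A schematic sketch of such a pipeline, Welch band powers followed by clustering (synthetic signals; the band edges, epoch length and clustering choice are assumptions, not the authors' exact configuration):

      import numpy as np
      from scipy.signal import welch
      from sklearn.cluster import KMeans

      fs = 256
      bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]  # delta..gamma

      def band_powers(segment):
          """Summed Welch PSD in each EEG frequency band."""
          freqs, psd = welch(segment, fs=fs, nperseg=fs * 2)
          return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

      rng = np.random.default_rng(4)
      segments = rng.normal(size=(100, fs * 10))    # 100 ten-second epochs
      features = np.log(np.array([band_powers(s) for s in segments]))

      labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
      # The two clusters are candidate seizure / non-seizure groups.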

  15. Automated Analysis of Crackles in Patients with Interstitial Pulmonary Fibrosis

    Directory of Open Access Journals (Sweden)

    B. Flietstra

    2011-01-01

    Full Text Available Background. The crackles in patients with interstitial pulmonary fibrosis (IPF) can be difficult to distinguish from those heard in patients with congestive heart failure (CHF) and pneumonia (PN). Misinterpretation of these crackles can lead to inappropriate therapy. The purpose of this study was to determine whether the crackles in patients with IPF differ from those in patients with CHF and PN. Methods. We studied 39 patients with IPF, 95 with CHF and 123 with PN using a 16-channel lung sound analyzer. Crackle features were analyzed using machine learning methods, including neural networks and support vector machines. Results. The IPF crackles had distinctive features that allowed them to be separated from those of patients with PN with a sensitivity of 0.82, a specificity of 0.88 and an accuracy of 0.86. They were separated from those of CHF patients with a sensitivity of 0.77, a specificity of 0.85 and an accuracy of 0.82. Conclusion. Distinctive features are present in the crackles of IPF that help separate them from the crackles of CHF and PN. Computer analysis of crackles at the bedside has the potential to aid clinicians in diagnosing IPF more easily and thus help avoid medication errors.
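
    The classification step can be indicated with a small scikit-learn pipeline. The snippet below is illustrative only: the feature names in the comments and the synthetic data are placeholders, not the study's actual crackle descriptors or recordings.

```python
# Illustrative SVM classification of per-crackle waveform features.
# Features and data are synthetic placeholders, not the study's set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
# e.g. columns: initial deflection width, two-cycle duration, decay time
X = rng.normal(size=(n, 3))
# Synthetic binary label: 1 = IPF-like crackle, 0 = CHF/PN-like.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```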

  16. Adaptive pattern recognition in real-time video-based soccer analysis

    DEFF Research Database (Denmark)

    Schlipsing, Marc; Salmen, Jan; Tschentscher, Marc

    2014-01-01

    collection, annotation, and learning as an offline task. A semi-automatic labeling of training data and robust learning given few examples from unbalanced classes are required. We present a real-time system acquiring and analyzing video sequences from soccer matches. It estimates each player's position ... to the identification of players in uncertain situations. Our experiments showed high performance in the classification task, achieving an average error rate of 3% on three real-world datasets. The system was proved to collect accurate tracking statistics throughout different soccer matches in real-time by incorporating two human operators only. We finally show how the resulting data can be used instantly for consumer applications and discuss further development in the context of behavior analysis. © 2014 Springer-Verlag Berlin Heidelberg.

  17. Gender (In)equality in Internet Pornography: A Content Analysis of Popular Pornographic Internet Videos.

    Science.gov (United States)

    Klaassen, Marleen J E; Peter, Jochen

    2015-01-01

    Although Internet pornography is widely consumed and researchers have started to investigate its effects, we still know little about its content. This has resulted in contrasting claims about whether Internet pornography depicts gender (in)equality and whether this depiction differs between amateur and professional pornography. We conducted a content analysis of three main dimensions of gender (in)equality (i.e., objectification, power, and violence) in 400 popular pornographic Internet videos from the most visited pornographic Web sites. Objectification was depicted more often for women through instrumentality, but men were more frequently objectified through dehumanization. Regarding power, men and women did not differ in social or professional status, but men were more often shown as dominant and women as submissive during sexual activities. Except for spanking and gagging, violence occurred rather infrequently. Nonconsensual sex was also relatively rare. Overall, amateur pornography contained more gender inequality at the expense of women than professional pornography did.

  18. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  19. Precursors and trajectories of sensory features: qualitative analysis of infant home videos.

    Science.gov (United States)

    Freuler, Ashley; Baranek, Grace T; Watson, Linda R; Boyd, Brian A; Bulluck, John C

    2012-01-01

    OBJECTIVE. This study explored precursors and trajectories of extreme sensory patterns in children with autism spectrum disorders (ASD) compared with children with developmental delay (DD). METHOD. We conducted a retrospective analysis of home videos of 12 infants who later displayed extreme presence or absence of three sensory patterns at preschool and school age. RESULTS. In ASD, hyporesponsiveness was most evident in infancy, followed by sensory repetitions. Hyporesponsiveness appeared stable over time and also was a precursor of sensory seeking. Infants with DD had few sensory precursors. CONCLUSION. Precursors of extreme sensory features emerge early in children with ASD and appear relatively stable over time for a pattern of hyporesponsiveness but less stable for patterns of hyperresponsiveness and sensory seeking. These findings highlight the emergent nature of sensory features that may inform early identification and intervention.

  20. Video analysis of dust events in full-tungsten ASDEX Upgrade

    Science.gov (United States)

    Brochard, F.; Shalpegin, A.; Bardin, S.; Lunt, T.; Rohde, V.; Briançon, J. L.; Pautasso, G.; Vorpahl, C.; Neu, R.; The ASDEX Upgrade Team

    2017-03-01

    Fast video data recorded during seven consecutive operation campaigns (2008-2012) in full-tungsten ASDEX Upgrade have been analyzed with an algorithm developed to automatically detect and track dust particles. A total of 2425 discharges have been analyzed, corresponding to 12 204 s of plasma operation. The analysis aimed at precisely identifying and sorting the discharge conditions responsible for dust generation or remobilization. Dust rates are found to be significantly lower than in tokamaks with carbon PFCs. Significant dust events occur mostly during off-normal plasma phases such as disruptions, particularly those preceded by vertical displacement events (VDEs). Dust rates also increase, though to a lesser extent, during type-I ELMy H-modes. The influences of disruption energy, heating scenario, vessel venting and vessel vibrations are also presented.
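
    A minimal version of such automatic detection can be sketched with OpenCV: frame differencing isolates moving bright particles, and connected-component statistics filter them by size. The thresholds and size limits below are assumptions; the actual ASDEX Upgrade tracking algorithm is considerably more elaborate.

```python
# Rough sketch of dust detection in fast-camera video: frame differencing
# plus connected-component labelling. Parameters are illustrative only.
import cv2
import numpy as np

def count_dust(prev_frame, frame, thresh=30, min_area=2, max_area=200):
    """Count small bright moving blobs between two 8-bit grayscale frames."""
    diff = cv2.absdiff(frame, prev_frame)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]   # label 0 is the background
    return int(np.sum((areas >= min_area) & (areas <= max_area)))
```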

  1. Single-cell bacteria growth monitoring by automated DEP-facilitated image analysis.

    Science.gov (United States)

    Peitz, Ingmar; van Leeuwen, Rien

    2010-11-07

    Growth monitoring is the method of choice in many assays measuring the presence or properties of pathogens, e.g. in diagnostics and food quality control. Established methods, relying on culturing large numbers of bacteria, are rather time-consuming, while in healthcare time is often crucial. Several new approaches have been published, mostly aiming at assaying growth or other properties of a small number of bacteria. However, no method so far readily achieves single-cell resolution with a convenient, easy-to-handle setup that offers the possibility of automation and high throughput. We demonstrate these benefits in this study by employing dielectrophoretic capturing of bacteria in microfluidic electrode structures, optical detection, and automated bacteria identification and counting with image analysis algorithms. For a proof-of-principle experiment we chose an antibiotic susceptibility test with Escherichia coli and polymyxin B. Growth monitoring is demonstrated on single cells and the impact of the antibiotic on the growth rate is shown. The minimum inhibitory concentration, as a standard diagnostic parameter, is derived from a dose-response plot. This report is the basis for further integration of image analysis code into device control. Ultimately, an automated and parallelized setup may be created, using an optical microscanner and many of the electrode structures simultaneously. Sufficient data for a sound statistical evaluation and a confirmation of the initial findings can then be generated in a single experiment.
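
    The counting step of such a system reduces, in its simplest form, to segmenting bright bacteria against a darker background and labelling connected components. The threshold and size filter below are assumptions for the sketch, not the paper's calibrated values.

```python
# Simplified single-cell counting by thresholding + labelling.
import numpy as np
from scipy import ndimage

def count_bacteria(gray, min_pixels=5):
    """Count bright blobs in a grayscale field of view."""
    mask = gray > gray.mean() + 2 * gray.std()   # crude global threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_pixels))      # discard tiny specks
```

    Comparing such counts across time points yields the growth rate whose suppression by the antibiotic produces the dose-response plot mentioned above.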

  2. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman's Sleep at Home.

    Science.gov (United States)

    Nishihara, Kyoko; Ohki, Noboru; Kamata, Hideo; Ryo, Eiji; Horiuchi, Shigeko

    2015-01-01

    Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK) including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy.
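
    PABAK, the agreement statistic used above, depends only on the observed proportion of agreement: PABAK = 2*Po - 1. A minimal implementation:

```python
# PABAK reduces to a function of observed agreement: PABAK = 2*Po - 1.
def pabak(ratings_a, ratings_b):
    """Prevalence-adjusted bias-adjusted kappa for two raters."""
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    po = agree / len(ratings_a)
    return 2 * po - 1

# A mean PABAK of 0.83, as reported above, corresponds to an observed
# agreement Po of about 0.915 over the 10-sec epochs.
```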

  3. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman's Sleep at Home.

    Directory of Open Access Journals (Sweden)

    Kyoko Nishihara

    Full Text Available Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK), including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy.

  4. Towards the Procedure Automation of Full Stochastic Spectral Based Fatigue Analysis

    Directory of Open Access Journals (Sweden)

    Khurram Shehzad

    2013-05-01

    Full Text Available Fatigue is one of the most significant failure modes for marine structures such as ships and offshore platforms. Among the numerous methods for fatigue life estimation, the spectral method is considered the most reliable, owing to its ability to account for different sea states as well as their probabilities of occurrence. However, the spectral-based simulation procedure itself is quite complex and numerically intensive owing to various critical technical details. The present study focuses on the application and automation of the spectral-based fatigue analysis procedure for ship structures using ANSYS software with the 3D linear seakeeping code AQWA. ANSYS Parametric Design Language (APDL) macros are created and subsequently implemented to automate the workflow of the simulation process, reducing the time spent on non-value-added repetitive activities. A MATLAB program based on the direct calculation procedure of spectral fatigue is developed to calculate total fatigue damage. The automated procedure is employed to predict the fatigue life of a ship structural detail using wave scatter data for the North Atlantic and worldwide trade. The current work provides a system for efficient implementation of the stochastic spectral fatigue analysis procedure for ship structures.
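
    The damage summation such a program performs can be indicated with the standard closed-form narrow-band result (Miner's rule with Rayleigh-distributed stress ranges). This is a sketch under that narrow-band assumption; the paper's MATLAB implementation follows a direct calculation procedure, and full spectral fatigue codes add wideband corrections and sum over the whole scatter diagram.

```python
# Closed-form narrow-band spectral fatigue damage for one sea state.
import numpy as np
from scipy.special import gamma

def narrowband_damage(m0, m2, T, K, m):
    """Fatigue damage over duration T [s].

    m0, m2 : spectral moments of the stress response
    K, m   : S-N curve parameters, N = K * S**(-m), S = stress range
    """
    nu0 = np.sqrt(m2 / m0) / (2 * np.pi)        # zero up-crossing rate [Hz]
    return (nu0 * T / K) * (2 * np.sqrt(2 * m0))**m * gamma(1 + m / 2)

# Total damage = sum over all sea states weighted by their probability of
# occurrence; the fatigue life is reached when the summed damage equals 1.
```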

  5. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    Science.gov (United States)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transduction events, including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI), play critical roles in cell proliferation and differentiation and are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because the system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we present an automated mMAPS, including an integrated microfluidic device, an automated stage, and electrical relays for high-throughput clinical screening. Based on these results, we estimate that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical means of analyzing tissue samples in a clinical setting.

  6. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  7. Automated DNA extraction of single dog hairs without roots for mitochondrial DNA analysis.

    Science.gov (United States)

    Bekaert, Bram; Larmuseau, Maarten H D; Vanhove, Maarten P M; Opdekamp, Anouschka; Decorte, Ronny

    2012-03-01

    Dogs are intensely integrated in human social life and their shed hairs can play a major role in forensic investigations. The overall aim of this study was to validate a semi-automated extraction method for mitochondrial DNA analysis of telogenic dog hairs. Extracted DNA was amplified with a 95% success rate from 43 samples using two new experimental designs in which the mitochondrial control region was amplified as a single large (± 1260 bp) amplicon or as two individual amplicons (HV1 and HV2; ± 650 and 350 bp) with tailed-primers. The results prove that the extraction of dog hair mitochondrial DNA can easily be automated to provide sufficient DNA yield for the amplification of a forensically useful long mitochondrial DNA fragment or alternatively two short fragments with minimal loss of sequence in case of degraded samples.

  8. Application Study of MMA Video 2.0-based Multimodal Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    陈松菁

    2014-01-01

    Multimodal video discourse, comprising speeches recorded in real contexts, TV dramas, and films, is often used in the foreign language teaching classroom as supplementary material to the textbook. But owing to the lack of analysis tools, or the difficulty of operating them, research on video discourse has lagged behind other areas of multimodal discourse analysis. MMA Video 2.0, a multimodal video discourse analysis software tool designed on the basis of systemic-functional multimodal discourse analysis (SF-MDA), opens new possibilities for the analysis of video discourse used in foreign language teaching. This paper applies the software's layered annotation and statistics features to the analysis and evaluation of a multimodal video discourse sample recorded at an international medical academic conference, in order to explore the software's future applications in foreign language teaching contexts.

  9. Performance characterization of image and video analysis systems at Siemens Corporate Research

    Science.gov (United States)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using imaging analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security, etc. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms to faster deployment of image analysis systems is the lack of characterization of limits of algorithms and how they affect total system performance. The research community has realized the need for performance analysis and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  10. Video and the Analysis of Social Interaction. An interview with Christian Heath

    Directory of Open Access Journals (Sweden)

    Barbara Pentimalli

    2016-12-01

    Full Text Available Christian Heath is Professor at King’s College London and co-director of the Work, Interaction and Technology Research Centre. Drawing on Ethnomethodology and Conversation Analysis, he specialises in fine-grained, video-based field studies of social interaction. He is currently undertaking research in settings that include auctions, control centres, operating theatres, and museums and galleries. His previous research involves a range of projects funded by UK Research Councils and the European Commission in areas that include command and control, health care, the cultural industries, and advanced telecommunications. He has held positions at the Universities of Manchester, Surrey, and Nottingham and visiting positions at universities and industrial research laboratories in the UK and abroad. He is a Fellow of the Academy of the Social Sciences (AcSS), a Freeman of the Worshipful Company of Art Scholars, and in 2015 was given the EUSSET-IISI Lifetime Achievement Award, presented to scholars for an outstanding contribution to the reorientation of the fields of computing and informatics. His publications include: “The Dynamics of Auction: Social Interaction and the Sale of Fine Art and Antiques” (Cambridge 2013; awarded the Best Book Award in 2014 by the International Society for Conversation Analysis), “Video in Qualitative Research: Analysing Social Interaction in Everyday Life” (Sage, with Hindmarsh, J. and P. Luff, 2010), “Technology in Action” (with P. Luff, Cambridge 2000), “Workplace Studies: Recovering Work Practice and Informing System Design” (Cambridge, with Luff, P. and J. Hindmarsh, 2000), “Body Movement and Speech in Medical Interaction” (Cambridge 1986) and numerous articles in journals and books. With Roy Pea and Lucy Suchman he is editor of the book series published by Cambridge University Press, Learning and Doing: Social, Cognitive and Computational Perspectives.

  11. A cross-sectional analysis of video games and attention deficit hyperactivity disorder symptoms in adolescents

    OpenAIRE

    Rabinowitz Terry; Chan Philip A

    2006-01-01

    Abstract. Background. Excessive use of the Internet has been associated with attention deficit hyperactivity disorder (ADHD), but the relationship between video games and ADHD symptoms in adolescents is unknown. Method. A survey of adolescents and parents (n = 72 adolescents, 72 parents) was performed, assessing daily time spent on the Internet, television, console video games, and Internet video games, and their association with academic and social functioning. Subjects were high school students...

  12. Effects of video interaction analysis training on nurse-patient communication in the care of the elderly.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.; Grypdonck, M.H.F.

    2000-01-01

    This paper describes an empirical evaluation of communication skills training for nurses in elderly care. The training programme was based on Video Interaction Analysis and aimed to improve nurses' communication skills such that they pay attention to patients' physical, social and emotional needs and support self care in elderly people.

  13. Effects of Video Interaction Analysis Training on Nurse-Patient Communication in the Care of the Elderly.

    Science.gov (United States)

    Caris-Verhallen, Wilma M. C. M.; Kerkstra, Ada; Bensing, Jozien M.; Grypdonck, Mieke H. F.

    2000-01-01

    Describes an empirical evaluation of training based on Video Interaction Analysis. The training aimed to improve nurses' (N=40) communication skills such that they pay attention to patients' physical, social, and emotional needs and support self care in elderly people. Limitations of this study and topics for further research are discussed.…

  14. Effects of video interaction analysis training on nurse–patient communication in the care of the elderly

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.; Grypdonck, M.H.F.

    2000-01-01

    This paper describes an empirical evaluation of communication skills training for nurses in elderly care. The training programme was based on Video Interaction Analysis and aimed to improve nurses’ communication skills such that they pay attention to patients’ physical, social and emotional needs and support self care in elderly people.

  15. Exploring the Nonformal Adult Educator in Twenty-First Century Contexts Using Qualitative Video Data Analysis Techniques

    Science.gov (United States)

    Alston, Geleana Drew; Ellis-Hervey, Nina

    2015-01-01

    This study examined how "YouTube" creates a unique, nonformal cyberspace for Black females to vlog about natural hair. Specifically, we utilized qualitative video data analysis techniques to understand how using "YouTube" as a facilitation tool has the ability to collectively capture and maintain an audience of more than a…

  16. Correlation between two-dimensional video analysis and subjective assessment in evaluating knee control among elite female team handball players

    DEFF Research Database (Denmark)

    Stensrud, Silje; Myklebust, Grethe; Kristianslund, Eirik

    2011-01-01

    players completed three tests: single-leg squat (SLS), single-leg vertical drop jump (SLVDJ) and two-leg vertical drop jump (VDJ). Receiver operating characteristic (ROC) analyses showed good to excellent agreement between 2D video analysis and subjective assessment for SLS and VDJ (area under the ROC...

  17. Video Analysis of Sensory-Motor Features in Infants with Fragile X Syndrome at 9-12 Months of Age

    Science.gov (United States)

    Baranek, Grace T.; Danko, Cassandra D.; Skinner, Martie L.; Donald B., Jr.; Hatton, Deborah D.; Roberts, Jane E.; Mirrett, Penny L.

    2005-01-01

    This study utilized retrospective video analysis to distinguish sensory-motor patterns in infants with fragile X syndrome (FXS) (n=11) from other infants [i.e., autism (n=11), other developmental delay (n=10), typical (n=11)] at 9-12 months of age. Measures of development, autistic features, and FMRP were assessed at the time of entry into the…

  18. Reproducibility of In Vivo Corneal Confocal Microscopy Using an Automated Analysis Program for Detection of Diabetic Sensorimotor Polyneuropathy.

    Directory of Open Access Journals (Sweden)

    Ilia Ostrovski

    Full Text Available In vivo corneal confocal microscopy (IVCCM) is a validated, non-invasive test for diabetic sensorimotor polyneuropathy (DSP) detection, but its utility is limited by the image analysis time and expertise required. We aimed to determine the inter- and intra-observer reproducibility of a novel automated analysis program compared to manual analysis. In a cross-sectional diagnostic study, 20 non-diabetes controls (mean age 41.4±17.3 y, HbA1c 5.5±0.4%) and 26 participants with type 1 diabetes (42.8±16.9 y, 8.0±1.9%) underwent two separate IVCCM examinations by one observer and a third by an independent observer. Along with nerve density and branch density, corneal nerve fibre length (CNFL) was obtained by manual analysis (CNFL-Manual), a protocol in which images were manually selected for automated analysis (CNFL-Semi-Automated), and one in which selection and analysis were performed electronically (CNFL-Fully-Automated). Reproducibility of each protocol was determined using intraclass correlation coefficients (ICC) and, as a secondary objective, the method of Bland and Altman was used to explore agreement between protocols. Mean CNFL-Manual was 16.7±4.0 and 13.9±4.2 mm/mm2 for non-diabetes controls and diabetes participants, while CNFL-Semi-Automated was 10.2±3.3 and 8.6±3.0 mm/mm2 and CNFL-Fully-Automated was 12.5±2.8 and 10.9±2.9 mm/mm2. Inter-observer ICC and 95% confidence intervals (95% CI) were 0.73 (0.56, 0.84), 0.75 (0.59, 0.85), and 0.78 (0.63, 0.87), respectively (p = NS for all comparisons). Intra-observer ICC and 95% CI were 0.72 (0.55, 0.83), 0.74 (0.57, 0.85), and 0.84 (0.73, 0.91), respectively (p<0.05 for CNFL-Fully-Automated compared to others). The other IVCCM parameters had substantially lower ICC compared to those for CNFL. CNFL-Semi-Automated and CNFL-Fully-Automated underestimated CNFL-Manual by a mean and 95% CI of 35.1 (-4.5, 67.5)% and 21.0 (-21.6, 46.1)%, respectively. Despite an apparent measurement (underestimation) bias in comparison to the manual strategy of image

  19. The Use of Computer Simulation Methods to Reach Data for Economic Analysis of Automated Logistic Systems

    Science.gov (United States)

    Neradilová, Hana; Fedorko, Gabriel

    2016-12-01

    Automated logistic systems are becoming more widely used within enterprise logistics processes. Their main advantage is that they increase the efficiency and reliability of logistics processes. In evaluating their effectiveness, it is necessary to take into account the economic aspect of the entire process. However, many users ignore or underestimate this aspect, which is a mistake. One reason the economic aspect is overlooked is that obtaining information for such an analysis is not easy. The aim of this paper is to present the possibilities of computer simulation methods for obtaining the data needed for a full-scale economic analysis.

  20. Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders.

    Science.gov (United States)

    Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S

    2017-03-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to deliver skills training to parents of children with autism spectrum disorder. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors, in children with autism spectrum disorder, (2) to describe a low-cost parental training intervention, and (3) to assess participants' compliance. This is a descriptive study of a clinical trial for children with autism spectrum disorder. The parental training intervention was delivered over 22 weeks and was based on video modeling. Parents with at least 8 years of schooling who had a child with autism spectrum disorder aged 3 to 6 years with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 to the intervention and 33 to the control group. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good, 32.4%; reasonable, 38.2%; low, 5.9%; and no compliance, 23.5%. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.

  1. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.
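
    The intensity-profile idea behind the head/tail segmentation can be shown in miniature. The toy function below takes a one-dimensional comet intensity profile, treats the brightest column as the head centre, grows the head region out to a fractional intensity level, and reports the percentage of integrated intensity (DNA) in the tail. OpenComet's actual segmentation is substantially more robust; the threshold fraction here is an assumption.

```python
# Toy percent-tail-DNA from a 1-D comet intensity profile.
import numpy as np

def percent_tail_dna(profile, head_frac=0.5):
    p = np.asarray(profile, dtype=float)
    p -= p.min()                      # remove background offset
    peak = int(np.argmax(p))          # head centre = brightest column
    level = head_frac * p[peak]
    left = peak                       # grow head region around the peak
    while left > 0 and p[left - 1] >= level:
        left -= 1
    right = peak
    while right < len(p) - 1 and p[right + 1] >= level:
        right += 1
    head = p[left:right + 1].sum()
    return 100.0 * (1.0 - head / p.sum())
```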

  2. OpenComet: an automated tool for comet assay image analysis.

    Science.gov (United States)

    Gyori, Benjamin M; Venkatachalam, Gireedhar; Thiagarajan, P S; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  3. Analysis of Complexity Evolution Management and Human Performance Issues in Commercial Aircraft Automation Systems

    Science.gov (United States)

    Vakil, Sanjay S.; Hansman, R. John

    2000-01-01

    Autoflight systems in the current generation of aircraft have been implicated in several recent incidents and accidents. A contributory aspect to these incidents may be the manner in which aircraft transition between differing behaviours or 'modes.' The current state of aircraft automation was investigated and the incremental development of the autoflight system was tracked through a set of aircraft to gain insight into how these systems developed. This process appears to have resulted in a system without a consistent global representation. In order to evaluate and examine autoflight systems, a 'Hybrid Automation Representation' (HAR) was developed. This representation was used to examine several specific problems known to exist in aircraft systems. Cyclomatic complexity is an analysis tool from computer science which counts the number of linearly independent paths through a program graph. This approach was extended to examine autoflight mode transitions modelled with the HAR. A survey was conducted of pilots to identify those autoflight mode transitions which airline pilots find difficult. The transitions identified in this survey were analyzed using cyclomatic complexity to gain insight into the apparent complexity of the autoflight system from the perspective of the pilot. Mode transitions which had been identified as complex by pilots were found to have a high cyclomatic complexity. Further examination was made into a set of specific problems identified in aircraft: the lack of a consistent representation of automation, concern regarding appropriate feedback from the automation, and the implications of physical limitations on the autoflight systems. Mode transitions involved in changing to and leveling at a new altitude were identified across multiple aircraft by numerous pilots. Where possible, evaluation and verification of the behaviour of these autoflight mode transitions was investigated via aircraft-specific high fidelity simulators. Three solution
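
    Cyclomatic complexity for a mode-transition graph is M = E - N + 2P, where E is the number of transitions, N the number of modes, and P the number of connected components. The sketch below computes it for a small, purely hypothetical set of altitude-related autoflight modes.

```python
# Cyclomatic complexity M = E - N + 2P of a mode-transition graph.
def cyclomatic_complexity(edges):
    nodes = {n for e in edges for n in e}
    parent = {n: n for n in nodes}        # union-find for components
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    p = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + 2 * p

# Hypothetical altitude-change transitions (mode names illustrative):
transitions = [("VS", "ALT CAP"), ("FLCH", "ALT CAP"),
               ("ALT CAP", "ALT HOLD"), ("VS", "FLCH"), ("ALT HOLD", "VS")]
print(cyclomatic_complexity(transitions))   # 5 edges - 4 nodes + 2 = 3
```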

  4. Motmot, an open-source toolkit for realtime video acquisition and analysis

    Directory of Open Access Journals (Sweden)

    Dickinson Michael H

    2009-07-01

    Full Text Available Abstract. Background. Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Results. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime, and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Conclusion. Motmot enables realtime image processing and display using the Python computer language.

  5. AGAPE (Automated Genome Analysis PipelinE) for pan-genome analysis of Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Giltae Song

    Full Text Available The characterization and public release of genome sequences from thousands of organisms is expanding the scope for genetic variation studies. However, understanding the phenotypic consequences of genetic variation remains a challenge in eukaryotes due to the complexity of the genotype-phenotype map. One approach to this is the intensive study of model systems for which diverse sources of information can be accumulated and integrated. Saccharomyces cerevisiae is an extensively studied model organism, with well-known protein functions and thoroughly curated phenotype data. To develop and expand the available resources linking genomic variation with function in yeast, we aim to model the pan-genome of S. cerevisiae. To initiate the yeast pan-genome, we newly sequenced or re-sequenced the genomes of 25 strains that are commonly used in the yeast research community using advanced sequencing technology at high quality. We also developed a pipeline for automated pan-genome analysis, which integrates the steps of assembly, annotation, and variation calling. To assign strain-specific functional annotations, we identified genes that were not present in the reference genome. We classified these according to their presence or absence across strains and characterized each group of genes with known functional and phenotypic features. The functional roles of novel genes not found in the reference genome and associated with strains or groups of strains appear to be consistent with anticipated adaptations in specific lineages. As more S. cerevisiae strain genomes are released, our analysis can be used to collate genome data and relate it to lineage-specific patterns of genome evolution. Our new tool set will enhance our understanding of genomic and functional evolution in S. cerevisiae, and will be available to the yeast genetics and molecular biology community.

  6. AGAPE (Automated Genome Analysis PipelinE) for pan-genome analysis of Saccharomyces cerevisiae.

    Science.gov (United States)

    Song, Giltae; Dickins, Benjamin J A; Demeter, Janos; Engel, Stacia; Gallagher, Jennifer; Choe, Kisurb; Dunn, Barbara; Snyder, Michael; Cherry, J Michael

    2015-01-01

    The characterization and public release of genome sequences from thousands of organisms is expanding the scope for genetic variation studies. However, understanding the phenotypic consequences of genetic variation remains a challenge in eukaryotes due to the complexity of the genotype-phenotype map. One approach to this is the intensive study of model systems for which diverse sources of information can be accumulated and integrated. Saccharomyces cerevisiae is an extensively studied model organism, with well-known protein functions and thoroughly curated phenotype data. To develop and expand the available resources linking genomic variation with function in yeast, we aim to model the pan-genome of S. cerevisiae. To initiate the yeast pan-genome, we newly sequenced or re-sequenced the genomes of 25 strains that are commonly used in the yeast research community using advanced sequencing technology at high quality. We also developed a pipeline for automated pan-genome analysis, which integrates the steps of assembly, annotation, and variation calling. To assign strain-specific functional annotations, we identified genes that were not present in the reference genome. We classified these according to their presence or absence across strains and characterized each group of genes with known functional and phenotypic features. The functional roles of novel genes not found in the reference genome and associated with strains or groups of strains appear to be consistent with anticipated adaptations in specific lineages. As more S. cerevisiae strain genomes are released, our analysis can be used to collate genome data and relate it to lineage-specific patterns of genome evolution. Our new tool set will enhance our understanding of genomic and functional evolution in S. cerevisiae, and will be available to the yeast genetics and molecular biology community.

  7. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B. V. Patel

    2012-10-01

    Full Text Available Content-based video retrieval is an approach for facilitating the searching and browsing of large video collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content-based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i database and a user study measured the correctness of responses.
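
    The color-histogram component of such a system is easy to make concrete. In the sketch below, each video is indexed by the normalised joint RGB histograms of its key frames and a query frame is matched by histogram intersection; the bin count and similarity measure are common choices assumed for the example, not necessarily those of the paper.

```python
# Minimal color-histogram indexing and retrieval.
import numpy as np

def color_histogram(frame_rgb, bins=8):
    """Normalised joint RGB histogram of one frame (H x W x 3, 0..255)."""
    hist, _ = np.histogramdd(frame_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()       # 1.0 = identical histograms

def rank_videos(query_hist, index):
    """index: {video_id: [histogram per key frame]} -> ids, best first."""
    scores = {vid: max(intersection(query_hist, h) for h in hists)
              for vid, hists in index.items()}
    return sorted(scores, key=scores.get, reverse=True)
```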

  8. Analysis of Waves in the Near-Field of Wave Energy Converter Arrays through Stereo Video

    Science.gov (United States)

    Black, C.; Haller, M. C.

    2013-12-01

    Oregon State University conducted a series of laboratory experiments to measure and quantify the near-field wave effects caused within arrays of 3 and 5 wave energy converters (WECs). As the waves and WECs interact, significant scattering and radiation occur, increasing or decreasing the wave heights as well as changing the direction of wave travel. These effects may vary based on the number of WECs within an array and their respective locations. The findings of this analysis will assist in selecting WEC farm locations and in improving WEC design. Analyzing the near-field waves will help determine the relative importance of absorption, scattering, and radiation as a function of the incident wave conditions and device performance. The WEC mooring system design specifications may also be impacted if the wave heights in the near-field are greater than expected. It is imperative to fully understand the near-field waves before full-scale WEC farms can be installed. Columbia Power Technologies' Manta served as the test WEC prototype at a 1:33 scale. Twenty-three wave gages measured the wave heights in both regular and real sea conditions at locations surrounding and within the WEC arrays. While these gages give a good overall picture of the water elevation behavior, it is difficult to resolve the complicated wave field within the WEC array using point gages. Here, stereo video techniques are applied to extract the 3D water surface elevations at high resolution in order to reconstruct the multi-directional wave field in the near-field of the WEC array. The video-derived wave information will also be compared against the wave gage data.

  9. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    Science.gov (United States)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution will help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges like the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework using Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases; performance against ground truth is measured by computing the Dice overlap and the percentage error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.
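
    The texture stage described above pairs Haar-like responses, computed cheaply from an integral image, with a gradient boosting classifier. The sketch below shows that combination; the feature geometry and the synthetic training patches are placeholders, not the paper's configuration.

```python
# Haar-like features from an integral image + gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image ii."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0: s -= ii[r0 - 1, c1 - 1]
    if c0 > 0: s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def haar_features(patch):
    """Two edge-type Haar responses (left-right, top-bottom)."""
    ii = integral_image(patch.astype(float))
    h, w = patch.shape
    return [box_sum(ii, 0, 0, h, w // 2) - box_sum(ii, 0, w // 2, h, w),
            box_sum(ii, 0, 0, h // 2, w) - box_sum(ii, h // 2, 0, h, w)]

# Synthetic labelled patches stand in for real kidney/background data.
rng = np.random.default_rng(0)
patches = rng.random((40, 16, 16))
X = np.array([haar_features(p) for p in patches])
y = rng.integers(0, 2, size=40)
clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
```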

  10. An approach for model-based energy cost analysis of industrial automation systems

    Energy Technology Data Exchange (ETDEWEB)

    Beck, A.; Goehner, P. [Institute of Industrial Automation and Software Engineering, University of Stuttgart, Pfaffenwaldring 47, 70550 Stuttgart (Germany)

    2012-08-15

    Current energy reports confirm the steadily widening gap between available conventional energy resources and future energy demand. This gap results in increasing energy costs and has become a determining factor in economies. Hence, politics, industry, and research focus either on regenerative energy resources or on energy-efficient concepts, methods, and technologies for energy-consuming devices. A remaining challenge is the energy optimization of complex systems during their operation time. In addition to optimization measures that can be applied in development and engineering, the generation of optimization measures that are customized to the specific dynamic operational situation promises high cost-saving potential. During operation time, the systems are located in unique situations and environments and are operated according to the individual requirements of their users. Hence, in addition to the complexity of the systems, the individuality and dynamic variability of their surroundings during operation time complicate the identification of goal-oriented optimization measures. This contribution introduces a model-based approach for user-centric energy cost analysis of industrial automation systems. The approach allows automated generation and application of individual optimization proposals. The focus of this paper is on a basic variant for a single industrial automation system and its operational parameters.

  11. Adiposoft: automated software for the analysis of white adipose tissue cellularity in histological sections.

    Science.gov (United States)

    Galarraga, Miguel; Campión, Javier; Muñoz-Barrutia, Arrate; Boqué, Noemí; Moreno, Haritz; Martínez, José Alfredo; Milagro, Fermín; Ortiz-de-Solórzano, Carlos

    2012-12-01

    The accurate estimation of the number and size of cells provides relevant information on the kinetics of growth and the physiological status of a given tissue or organ. Here, we present Adiposoft, a fully automated open-source software for the analysis of white adipose tissue cellularity in histological sections. First, we describe the sequence of image analysis routines implemented by the program. Then, we evaluate our software by comparing it with other adipose tissue quantification methods, namely, with the manual analysis of cells in histological sections (used as gold standard) and with the automated analysis of cells in suspension, the most commonly used method. Our results show significant concordance between Adiposoft and the other two methods. We also demonstrate the ability of the proposed method to distinguish the cellular composition of three different rat fat depots. Moreover, we found high correlation and low disagreement between Adiposoft and the manual delineation of cells. We conclude that Adiposoft provides accurate results while considerably reducing the amount of time and effort required for the analysis.

  12. Agreement Between Face-to-Face and Free Software Video Analysis for Assessing Hamstring Flexibility in Adolescents.

    Science.gov (United States)

    Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique

    2015-09-01

    The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up.

  13. Scanner-based image quality measurement system for automated analysis of EP output

    Science.gov (United States)

    Kipman, Yair; Mehta, Prashant; Johnson, Kate

    2003-12-01

    Inspection of electrophotographic print cartridge quality and compatibility requires analysis of hundreds of pages on a wide population of printers and copiers. Although print quality inspection is often achieved through the use of anchor prints and densitometry, more comprehensive analysis and quantitative data is desired for performance tracking, benchmarking and failure mode analysis. Image quality measurement systems range in price and performance, image capture paths and levels of automation. In order to address the requirements of a specific application, careful consideration was made to print volume, budgetary limits, and the scope of the desired image quality measurements. A flatbed scanner-based image quality measurement system was selected to support high throughput, maximal automation, and sufficient flexibility for both measurement methods and image sampling rates. Using an automatic document feeder (ADF) for sample management, a half ream of prints can be measured automatically without operator intervention. The system includes optical character recognition (OCR) for automatic determination of target type for measurement suite selection. This capability also enables measurement of mixed stacks of targets since each sample is identified prior to measurement. In addition, OCR is used to read toner ID, machine ID, print count, and other pertinent information regarding the printing conditions and environment. This data is saved to a data file along with the measurement results for complete test documentation. Measurement methods were developed to replace current methods of visual inspection and densitometry. The features that were being analyzed visually could be addressed via standard measurement algorithms. Measurement of density proved to be less simple since the scanner is not a densitometer and anything short of an excellent estimation would be meaningless. In order to address the measurement of density, a transfer curve was built to translate the

  14. Conducting Video Research in the Learning Sciences: Guidance on Selection, Analysis, Technology, and Ethics

    Science.gov (United States)

    Derry, Sharon J.; Pea, Roy D.; Barron, Brigid; Engle, Randi A.; Erickson, Frederick; Goldman, Ricki; Hall, Rogers; Koschmann, Timothy; Lemke, Jay L.; Sherin, Miriam Gamoran; Sherin, Bruce L.

    2010-01-01

    Focusing on expanding technical capabilities and new collaborative possibilities, we address 4 challenges for scientists who collect and use video records to conduct research in and on complex learning environments: (a) Selection: How can researchers be systematic in deciding which elements of a complex environment or extensive video corpus to…

  15. The Effects of Violent Video Games on Aggression: A Meta-Analysis.

    Science.gov (United States)

    Sherry, John L.

    2001-01-01

    Cumulates findings across existing empirical research on the effects of violent video games to estimate overall effect size and discern important trends and moderating variables. Suggests there is a smaller effect of violent video games on aggression than has been found with television violence on aggression. (SG)

  16. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    Science.gov (United States)

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographical-based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place, allow for many non-vocal and inter-personal communication…

  17. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    Science.gov (United States)

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  18. How violent video games communicate violence: A literature review and content analysis of moral disengagement factors

    NARCIS (Netherlands)

    Hartmann, T.; Krakowiak, M.; Tsay-Vogel, M.

    2014-01-01

    Mechanisms of moral disengagement in violent video game play have recently received considerable attention among communication scholars. To date, however, no study has analyzed the prevalence of moral disengagement factors in violent video games. To fill this research gap, the present approach inclu

  19. Possibilities for retracing of copyright violations on current video game consoles by optical disk analysis

    Science.gov (United States)

    Irmler, Frank; Creutzburg, Reiner

    2014-02-01

    This paper deals with the possibilities of retracing copyright violations on current video game consoles (e.g. Microsoft Xbox, Sony PlayStation, ...) by studying the corresponding optical storage media, DVD and Blu-ray. The possibilities of forensic investigation of DVDs and Blu-ray Discs are presented. It is shown which information can be read by using freeware and commercial software for forensic examination. A detailed analysis is given of the visualization of hidden content and the possibility of finding out information about the burning hardware used for writing to the optical discs. In connection with a forensic analysis of the Windows registry of a suspect's PC, a detailed overview of the crime scene for forged DVDs and Blu-ray Discs can be obtained. Optical discs are examined under forensic aspects and the obtained results are implemented into automatic analysis scripts for the commercial forensics program EnCase Forensic. It is shown that the drive used to write an optical storage medium can be identified; in particular, Blu-ray Discs contain the serial number of the burner. These and other findings were incorporated into the creation of various EnCase scripts for professional forensic investigation with EnCase Forensic. Furthermore, a detailed flowchart for the forensic investigation of copyright infringement was developed.

  20. Impact Analysis of Baseband Quantizer on Coding Efficiency for HDR Video

    Science.gov (United States)

    Wong, Chau-Wai; Su, Guan-Ming; Wu, Min

    2016-10-01

    Digitally acquired high dynamic range (HDR) video baseband signals can take 10 to 12 bits per color channel. It is economically important to be able to reuse the legacy 8- or 10-bit video codecs to efficiently compress HDR video. A linear or nonlinear mapping on the intensity can be applied to the baseband signal to reduce the dynamic range before the signal is sent to the codec; we refer to this range reduction step as baseband quantization. We show analytically, and verify using test sequences, that the use of the baseband quantizer lowers the coding efficiency. Experiments show that as the baseband quantizer is strengthened by 1.6 bits, the drop in PSNR at a high bitrate is up to 1.60 dB. Our result suggests that in order to achieve high coding efficiency, information reduction of videos in terms of quantization error should be introduced in the video codec instead of on the baseband signal.
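
    A minimal sketch of the range-reduction step described above, assuming a simple linear baseband quantizer and a synthetic 12-bit frame (the function names and test signal are illustrative, not the paper's implementation):

        import numpy as np

        def baseband_quantize(signal, in_bits=12, out_bits=8):
            """Linearly map an in_bits baseband signal to out_bits and back,
            modeling the range reduction applied before a legacy codec."""
            scale = (2**out_bits - 1) / (2**in_bits - 1)
            quantized = np.round(signal * scale)   # range reduction before the codec
            return quantized / scale               # inverse mapping after decoding

        def psnr(reference, test, peak):
            mse = np.mean((reference - test) ** 2)
            return 10 * np.log10(peak**2 / mse)

        # Synthetic 12-bit frame: baseband quantization alone already bounds PSNR.
        rng = np.random.default_rng(0)
        frame = rng.integers(0, 2**12, size=(256, 256)).astype(float)
        print(f"PSNR after 12->8-bit baseband quantization: "
              f"{psnr(frame, baseband_quantize(frame), 2**12 - 1):.2f} dB")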

  1. Utilization of DICOM multi-frame objects for integrating kinetic and kinematic data with raw videos in movement analysis of wheel-chair users to minimize shoulder pain

    Science.gov (United States)

    Deshpande, Ruchi R.; Li, Han; Requejo, Philip; McNitt-Gray, Sarah; Ruparel, Puja; Liu, Brent J.

    2012-02-01

    Wheelchair users are at an increased risk of developing shoulder pain. The key to formulating correct wheelchair operating practices is to analyze the movement patterns of a sample set of subjects. Data collected for movement analysis include videos and force/motion readings. Our goal is to combine the kinetic/kinematic data with the trial video by overlaying force vector graphics on the raw video. Furthermore, conversion of the video to a DICOM multiframe object annotated with the force vector could provide a standardized way of encoding and analyzing data across multiple studies and provide a useful tool for data mining.
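
    A minimal sketch of the overlay step, assuming OpenCV and a hypothetical force sample expressed in newtons (the scaling factor and file names are illustrative):

        import numpy as np
        import cv2

        def overlay_force_vector(frame, origin, force_n, pixels_per_newton=2.0):
            """Draw a force vector onto a video frame. `origin` is the
            application point in pixel coordinates; `force_n` is the (Fx, Fy)
            force in newtons, scaled to a pixel-length arrow."""
            tip = (int(origin[0] + force_n[0] * pixels_per_newton),
                   int(origin[1] - force_n[1] * pixels_per_newton))  # image y points down
            cv2.arrowedLine(frame, origin, tip, color=(0, 0, 255), thickness=2)
            return frame

        # Hypothetical example: a gray frame with a force applied at (320, 240).
        frame = np.full((480, 640, 3), 128, dtype=np.uint8)
        overlay_force_vector(frame, (320, 240), (30.0, 25.0))
        cv2.imwrite("frame_with_force.png", frame)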

  2. Integrating automated structured analysis and design with Ada programming support environments

    Science.gov (United States)

    Hecht, Alan; Simmons, Andy

    1986-01-01

    Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It can also promote the creation of reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. TEAMWORK, an automated system for structured analysis and design that can be integrated with an APSE to support software system development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.

  3. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
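
    The record does not specify the exact image hash used; as an illustration of the screenshot-hash idea, a simple average hash with Hamming-distance comparison (file names hypothetical):

        import numpy as np
        import cv2

        def average_hash(image_path, hash_size=8):
            """Simple perceptual (average) hash of a page screenshot:
            downscale, grayscale, threshold at the mean; returns bool bits."""
            img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            small = cv2.resize(img, (hash_size, hash_size),
                               interpolation=cv2.INTER_AREA)
            return (small > small.mean()).flatten()

        def hamming_distance(h1, h2):
            """Number of differing hash bits; small distances suggest clones."""
            return int(np.count_nonzero(h1 != h2))

        # Hypothetical usage: compare a suspect render against a known brand page.
        d = hamming_distance(average_hash("suspect.png"),
                             average_hash("brand_reference.png"))
        print("likely visual spoof" if d < 10 else "visually dissimilar", d)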

  4. Automated multidimensional image analysis reveals a role for Abl in embryonic wound repair.

    Science.gov (United States)

    Zulueta-Coarasa, Teresa; Tamada, Masako; Lee, Eun J; Fernandez-Gonzalez, Rodrigo

    2014-07-01

    The embryonic epidermis displays a remarkable ability to repair wounds rapidly. Embryonic wound repair is driven by the evolutionarily conserved redistribution of cytoskeletal and junctional proteins around the wound. Drosophila has emerged as a model to screen for factors implicated in wound closure. However, genetic screens have been limited by the use of manual analysis methods. We introduce MEDUSA, a novel image-analysis tool for the automated quantification of multicellular and molecular dynamics from time-lapse confocal microscopy data. We validate MEDUSA by quantifying wound closure in Drosophila embryos, and we show that the results of our automated analysis are comparable to analysis by manual delineation and tracking of the wounds, while significantly reducing the processing time. We demonstrate that MEDUSA can also be applied to the investigation of cellular behaviors in three and four dimensions. Using MEDUSA, we find that the conserved nonreceptor tyrosine kinase Abelson (Abl) contributes to rapid embryonic wound closure. We demonstrate that Abl plays a role in the organization of filamentous actin and the redistribution of the junctional protein β-catenin at the wound margin during embryonic wound repair. Finally, we discuss different models for the role of Abl in the regulation of actin architecture and adhesion dynamics at the wound margin.
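
    MEDUSA itself is not reproduced here; as a minimal stand-in sketch of automated wound-area tracking over a time-lapse sequence, assuming OpenCV and hypothetical frame files:

        import cv2

        def wound_area(frame_gray):
            """Wound area (px^2) in one time-lapse frame: threshold the dark
            wound region and take the largest contour's area; tracking this
            per frame gives a closure curve comparable to manual delineation."""
            _, mask = cv2.threshold(frame_gray, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            return max((cv2.contourArea(c) for c in contours), default=0.0)

        # Hypothetical usage over a time-lapse sequence of 120 frames:
        areas = [wound_area(cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE))
                 for i in range(120)]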

  5. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    Science.gov (United States)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color space conversion that allow efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type, and the possibility of implementing them in software, are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it makes it possible to analyze objects in an image when no image dictionary or knowledge base is available, and it addresses the problem of choosing optimal frame quantization parameters for video analysis.
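
    A minimal sketch of single-color detection via color space conversion, assuming OpenCV, a hypothetical input frame, and illustrative HSV bounds for a reddish target:

        import cv2

        # Detect a single target color against a complex background by converting
        # from BGR to HSV, where hue separates color from illumination.
        frame = cv2.imread("frame.png")                        # hypothetical input
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # hue/sat/value bounds

        # Keep only sufficiently large connected regions as detected objects.
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
        objects = [tuple(map(int, centroids[i])) for i in range(1, n)
                   if stats[i, cv2.CC_STAT_AREA] > 100]
        print(objects)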

  6. Enhanced features for supervised lecture video segmentation and indexing

    Science.gov (United States)

    Ma, Di; Agam, Gady

    2015-03-01

    Lecture videos are common and their number is increasing rapidly. Consequently, automatically and efficiently indexing such videos is an important task. Video segmentation is a crucial step of video indexing that directly affects the indexing quality. We are developing a system for automated video indexing, and in this paper we discuss our approach for video segmentation and classification of video segments. The novel contributions in this paper are twofold. First, we develop a dynamic Gabor filter and use it to extract features for video frame classification. Second, we propose a recursive video segmentation algorithm that is capable of clustering video frames into video segments. We then use these to classify and index the video segments. The proposed approach results in a higher True Positive Rate (TPR) of 89.5% and a lower False Discovery Rate (FDR) of 11.2% compared with a commercial system (TPR = 81.8%, FDR = 39.4%), demonstrating that performance is significantly improved by using the enhanced features.
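
    The paper's dynamic Gabor filter is not public in detail; as an illustration of Gabor-based frame features, a small fixed filter bank (all parameters are illustrative):

        import cv2
        import numpy as np

        def gabor_features(gray_frame, orientations=4):
            """Mean filter response per orientation: a simple frame descriptor
            in the spirit of Gabor-based features for lecture-video frames."""
            feats = []
            for k in range(orientations):
                kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0,
                                            theta=k * np.pi / orientations,
                                            lambd=10.0, gamma=0.5)
                response = cv2.filter2D(gray_frame, cv2.CV_32F, kernel)
                feats.append(float(np.abs(response).mean()))
            return feats

        frame = cv2.imread("slide_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        print(gabor_features(frame))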

  7. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on the sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and the observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cyto-geneticists in detecting cervical cancers.
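
    A minimal sketch of top-hat-based spot detection, assuming OpenCV and a hypothetical projected confocal image (the kernel size is illustrative):

        import cv2

        # Top-hat transform: subtract a morphological opening from the image so
        # that small bright FISH spots stand out against slowly varying background.
        img = cv2.imread("confocal_projection.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

        # Threshold the residue and count spots via connected components.
        _, spots = cv2.threshold(tophat, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        n_labels, _ = cv2.connectedComponents(spots)
        print("FISH signal spots:", n_labels - 1)   # label 0 is the background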

  8. Automated analysis of NF-κB nuclear translocation kinetics in high-throughput screening.

    Directory of Open Access Journals (Sweden)

    Zi Di

    Nuclear entry and exit of the NF-κB family of dimeric transcription factors plays an essential role in regulating cellular responses to inflammatory stress. The dynamics of this nuclear translocation can vary significantly within a cell population and may change dramatically, e.g. upon drug exposure. Furthermore, there is significant heterogeneity in individual cell responses upon stress signaling. In order to systematically determine factors that define NF-κB translocation dynamics, high-throughput screens that enable the analysis of dynamic NF-κB responses in individual cells in real time are essential. Thus far, only NF-κB downstream signaling responses of whole cell populations at the transcriptional level have been analyzed in high-throughput mode. In this study, we developed a fully automated image analysis method to determine the time-course of NF-κB translocation in individual cells, suitable for high-throughput screening in the context of compound screening and functional genomics. Two novel segmentation methods were used for defining the individual nuclear and cytoplasmic regions: watershed masked clustering (WMC) and best-fit ellipse of Voronoi cell (BEVC). The dynamic NF-κB oscillatory response at the single cell and population level was coupled to automated extraction of 26 analogue translocation parameters, including the number of peaks, the time to reach each peak, and the amplitude of each peak. Our automated image analysis method was validated through a series of statistical tests demonstrating the computational efficiency and accuracy of our algorithm in quantifying NF-κB translocation dynamics. Both pharmacological inhibition of NF-κB and short interfering RNAs targeting the inhibitor of NF-κB, IκBα, demonstrated the ability of our method to identify compounds and genetic players that interfere with the nuclear translocation of NF-κB.
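
    A minimal sketch of extracting a few such translocation parameters from a single-cell time series, assuming SciPy and a synthetic nuclear/cytoplasmic intensity ratio (the sampling interval and prominence threshold are illustrative):

        import numpy as np
        from scipy.signal import find_peaks

        def translocation_parameters(ratio, dt_min=6.0):
            """From one cell's nuclear/cytoplasmic NF-κB intensity ratio over
            time, extract oscillation descriptors: peak count, time to each
            peak, and each peak's amplitude (a small subset of 26 parameters)."""
            peaks, _ = find_peaks(ratio, prominence=0.1)
            return {"n_peaks": len(peaks),
                    "time_to_peak_min": (peaks * dt_min).tolist(),
                    "amplitude": ratio[peaks].tolist()}

        t = np.arange(0, 360, 6.0)                   # hypothetical 6-min sampling
        ratio = 1 + 0.8 * np.exp(-t / 200) * np.maximum(
            np.sin(2 * np.pi * t / 100), 0)
        print(translocation_parameters(ratio))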

  9. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.

    1995-01-01

    species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume......, and the frequency of dividing cells in a cell population. These parameters were used to compare physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found......

  10. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    Science.gov (United States)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high-oxygen-content media into the surrounding media, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  11. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    The World Wide Web has become an enormously large repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern WEB 2.0 web pages, where the HTML tree is often dynamically modified by various JavaScript codes, new data are added by asynchronous requests to the web server, and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. (Article in Lithuanian)
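
    A minimal sketch of extracting JavaScript-rendered content with one such tool (Selenium), assuming a headless Chrome driver and a hypothetical target page and selector:

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        # Render the page in a real browser engine so JavaScript-inserted content
        # and AJAX-loaded data end up in the DOM before extraction.
        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get("https://example.com/listings")   # hypothetical target page
            driver.implicitly_wait(10)   # poll up to 10 s for async-inserted elements
            rows = driver.find_elements(By.CSS_SELECTOR, ".result-item .price")
            print([row.text for row in rows])
        finally:
            driver.quit()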

  12. Automated preparation of Kepler time series of planet hosts for asteroseismic analysis

    DEFF Research Database (Denmark)

    Handberg, R.; Lund, M. N.

    2014-01-01

    One of the tasks of the Kepler Asteroseismic Science Operations Center (KASOC) is to provide asteroseismic analyses on Kepler Objects of Interest (KOIs). However, asteroseismic analysis of planetary host stars presents some unique complications with respect to data preprocessing, compared to pure...... photometric time series than the original data. The methods are automated and can therefore easily be applied to a large number of stars. The application of the filter is not restricted to planetary hosts, but can be applied to any solar-like or red giant stars observed by Kepler/K2....

  13. Software Tool for Automated Failure Modes and Effects Analysis (FMEA) of Hydraulic Systems

    DEFF Research Database (Denmark)

    Stecki, J. S.; Conrad, Finn; Oh, B.

    2002-01-01

    Offshore, marine, aircraft and other complex engineering systems operate in harsh environmental and operational conditions and must meet stringent requirements of reliability, safety and maintainability. To reduce the high costs of development of new systems in these fields, improved design...... management techniques and a vast array of computer-aided techniques are applied during design and testing stages. The paper presents and discusses the research and development of a software tool for automated failure mode and effects analysis - FMEA - of hydraulic systems. The paper explains the underlying......

  14. Standing Waves in an Elastic Spring: A Systematic Study by Video Analysis

    Science.gov (United States)

    Ventura, Daniel Rodrigues; de Carvalho, Paulo Simeão; Dias, Marco Adriano

    2017-04-01

    The word "wave" is part of the daily language of every student. However, the physical understanding of the concept demands a high level of abstract thought. In physics, waves are oscillating variations of a physical quantity that involve the transfer of energy from one point to another, without displacement of matter. A wave can be formed by an elastic deformation, a variation of pressure, changes in the intensity of electric or magnetic fields, a propagation of a temperature variation, or other disturbances. Moreover, a wave can be categorized as pulsed or periodic. Most importantly, conditions can be set such that waves interfere with one another, resulting in standing waves. These have many applications in technology, although they are not always readily identified and/or understood by all students. In this work, we use a simple setup including a low-cost constant spring, such as a Slinky, and the free software Tracker for video analysis. We show they can be very useful for the teaching of mechanical wave propagation and the analysis of harmonics in standing waves.

  15. A meta-analysis of active video games on health outcomes among children and adolescents.

    Science.gov (United States)

    Gao, Z; Chen, S; Pasco, D; Pope, Z

    2015-09-01

    This meta-analysis synthesizes the current literature concerning the effects of active video games (AVGs) on children's and adolescents' health-related outcomes. A total of 512 published studies on AVGs were located, and 35 articles were included based on the following criteria: (i) data-based research articles published in English between 1985 and 2015; (ii) studied some type of AVG and related outcomes among children/adolescents; and (iii) had at least one comparison within each study. Data were extracted to conduct comparisons for outcome measures in three separate categories: AVGs and sedentary behaviours, AVGs and laboratory-based exercise, and AVGs and field-based physical activity. The effect size for each entry was calculated with the Comprehensive Meta-Analysis software in 2015. The mean effect size (Hedges' g) and standard deviation were calculated for each comparison. Compared with sedentary behaviours, AVGs had a large effect on health outcomes. The effect sizes for physiological outcomes were marginal when comparing AVGs with laboratory-based exercise. The comparison between AVGs and field-based physical activity had null to moderate effect sizes. AVGs could yield health benefits to children/adolescents equivalent to laboratory-based exercise or field-based physical activity. Therefore, AVGs can be a good alternative to sedentary behaviour and an addition to traditional physical activity and sports in children/adolescents.
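
    A minimal sketch of the effect size used here (Hedges' g, the bias-corrected standardized mean difference); the comparison values below are hypothetical:

        import math

        def hedges_g(m1, sd1, n1, m2, sd2, n2):
            """Standardized mean difference with the small-sample correction
            (Hedges' g), as used when pooling study outcomes in a meta-analysis."""
            pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                                  / (n1 + n2 - 2))
            d = (m1 - m2) / pooled_sd                  # Cohen's d
            correction = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias factor J
            return d * correction

        # Hypothetical AVG-vs-sedentary comparison: energy expenditure (kcal/min).
        print(round(hedges_g(4.6, 1.2, 30, 2.1, 0.9, 30), 2))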

  16. Bringing Javanese Traditional Dance into Basic Physics Class: Exemplifying Projectile Motion through Video Analysis

    Science.gov (United States)

    Handayani, Langlang; Prasetya Aji, Mahardika; Susilo; Marwoto, Putut

    2016-08-01

    An alternative arts-based instructional approach for a Basic Physics class has been developed through the implementation of video analysis of a Javanese traditional dance, Bambangan Cakil. A particular movement of the dance, weapon throwing, was analyzed by employing the LoggerPro software package to exemplify projectile motion. The results of the analysis indicated that the movement of the thrown weapon in the Bambangan Cakil dance helps explain several physics concepts of projectile motion (the object's path, velocity, and acceleration) in the form of pictures, graphs, and tables. The weapon's path and velocity can be shown via a picture or graph, while the decrease of velocity in the y direction due to the acceleration g (as the weapon moves upward and then downward) can be represented in a table. It was concluded that a Javanese traditional dance contains many physics concepts which can be explored. The study recommends bringing the traditional dance into the science class, which will enable students to gain more understanding of both physics concepts and Indonesian cultural heritage.
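
    A minimal sketch of the analysis idea: fitting video-digitized vertical positions to the projectile-motion model y(t) = y0 + v0*t - (g/2)*t^2 (the tracked coordinates below are synthetic):

        import numpy as np

        # Hypothetical Tracker/LoggerPro-style output: time (s) and vertical
        # position (m) of the thrown weapon, digitized frame by frame.
        t = np.array([0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
        y = np.array([1.00, 1.13, 1.24, 1.32, 1.37, 1.40, 1.41])

        # A quadratic fit recovers the launch speed and the local value of g.
        a, b, c = np.polyfit(t, y, 2)
        print(f"v0_y ≈ {b:.2f} m/s, g ≈ {-2 * a:.2f} m/s^2")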

  17. 3D video analysis of the novel object recognition test in rats.

    Science.gov (United States)

    Matsumoto, Jumpei; Uehara, Takashi; Urakawa, Susumu; Takamura, Yusaku; Sumiyoshi, Tomiki; Suzuki, Michio; Ono, Taketoshi; Nishijo, Hisao

    2014-10-01

    The novel object recognition (NOR) test has been widely used to test memory function. We developed a 3D computerized video analysis system that estimates nose contact with an object in Long Evans rats to analyze object exploration during NOR tests. The results indicate that the 3D system reproducibly and accurately scores the NOR test. Furthermore, the 3D system captures a 3D trajectory of the nose during object exploration, enabling detailed analyses of spatiotemporal patterns of object exploration. The 3D trajectory analysis revealed a specific pattern of object exploration in the sample phase of the NOR test: normal rats first explored the lower parts of objects and then gradually explored the upper parts. A systematic injection of MK-801 suppressed changes in these exploration patterns. The results, along with those of previous studies, suggest that the changes in the exploration patterns reflect neophobia to a novel object and/or changes from spatial learning to object learning. These results demonstrate that the 3D tracking system is useful not only for detailed scoring of animal behaviors but also for investigation of characteristic spatiotemporal patterns of object exploration. The system has the potential to facilitate future investigation of neural mechanisms underlying object exploration that result from dynamic and complex brain activity.

  18. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials.

    Directory of Open Access Journals (Sweden)

    Wenxiong Zhang

    Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus over which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficacy and side effects of VTS of different segments in the treatment of PH. A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with Revman 5.3 software and SPSS 18.0. A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH showed a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials.

  19. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials

    Science.gov (United States)

    Zhang, Wenxiong; Yu, Dongliang; Jiang, Han; Xu, Jianjun; Wei, Yiping

    2016-01-01

    Objectives Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus over which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficacy and side effects of VTS of different segments in the treatment of PH. Methods A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with Revman 5.3 software and SPSS 18.0. Results A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH showed a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. Conclusions T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials. PMID:27187774

  20. Further Exploration of the Classroom Video Analysis (CVA) Instrument as a Measure of Usable Knowledge for Teaching Mathematics: Taking a Knowledge System Perspective

    Science.gov (United States)

    Kersting, Nicole B.; Sutton, Taliesin; Kalinec-Craig, Crystal; Stoehr, Kathleen Jablon; Heshmati, Saeideh; Lozano, Guadalupe; Stigler, James W.

    2016-01-01

    In this article we report further explorations of the classroom video analysis instrument (CVA), a measure of usable teacher knowledge based on scoring teachers' written analyses of classroom video clips. Like other researchers, our work thus far has attempted to identify and measure separable components of teacher knowledge. In this study we take…

  1. Automated Assignment of MS/MS Cleavable Cross-Links in Protein 3D-Structure Analysis

    Science.gov (United States)

    Götze, Michael; Pettelkau, Jens; Fritzsche, Romy; Ihling, Christian H.; Schäfer, Mathias; Sinz, Andrea

    2015-01-01

    CID-MS/MS cleavable cross-linkers hold enormous potential for an automated analysis of cross-linked products, which is essential for conducting structural proteomics studies. The characteristic fragment ion patterns they create can easily be used for an automated assignment and discrimination of cross-linked products. To date, there are only a few software solutions available that make use of these properties, but none allows for an automated analysis of cleavable cross-linked products. The MeroX software fills this gap and presents a powerful tool for protein 3D-structure analysis in combination with MS/MS cleavable cross-linkers. We show that MeroX allows an automatic screening of characteristic fragment ions, considering static and variable peptide modifications, and effectively scores different types of cross-links. No manual input is required for a correct assignment of cross-links, and false discovery rates are calculated. The self-explanatory graphical user interface of MeroX provides easy access to an automated cross-link search platform that is compatible with commonly used data file formats, enabling analysis of data originating from different instruments. The combination of an MS/MS cleavable cross-linker with a dedicated software tool for data analysis provides an automated workflow for 3D-structure analysis of proteins. MeroX is available at www.StavroX.com.

  2. VAPI: low-cost, rapid automated visual inspection system for Petri plate analysis

    Science.gov (United States)

    Chatburn, L. T.; Kirkup, B. C.; Polz, M. F.

    2007-09-01

    Most culture-based microbiology tasks utilize a petri plate during processing, but rarely do scientists capture the full information available from the plate. In particular, visual analysis of plates is an under-developed, rich source of data that can be rapid and non-invasive. However, collecting this data has been limited by the difficulties of standardizing and quantifying human observations, by the limits of scientist fatigue, and by the cost of automating the process. The availability of specialized counting equipment and intelligent camera systems has not changed this - they are prohibitively expensive for many laboratories, only process a limited number of plate types, are often destructive to the sample, and have limited accuracy. This paper describes an automated visual inspection solution, VAPI, that employs inexpensive consumer computing hardware and digital cameras along with custom cross-platform open-source software written in C++, combining Trolltech's Qt GUI toolkit with Intel's OpenCV computer vision library. The system is more accurate than common commercial systems costing many times as much, while being flexible in use and offering comparable responsiveness. VAPI not only counts colonies but also sorts and enumerates colonies by morphology, tracks colony growth by time series analysis, and provides other analytical resources. Output to XML files or directly to a database provides data that can be easily maintained and manipulated by the end user, offering ready access for system enhancement, interaction with other software systems, and rapid development of advanced analysis applications.
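
    A minimal sketch of colony counting in the spirit of such a system, assuming OpenCV and a hypothetical plate photograph:

        import cv2
        import numpy as np

        # Count colonies on a plate photo: invert-threshold dark colonies, clean
        # up noise with a morphological opening, then count connected components.
        plate = cv2.imread("petri_plate.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
        _, binary = cv2.threshold(plate, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        kernel = np.ones((3, 3), np.uint8)
        binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        areas = stats[1:, cv2.CC_STAT_AREA]          # component 0 is the background
        print(f"{len(areas)} colonies detected")     # areas allow sorting by size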

  3. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speed. Full decoding of selected frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
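
    A minimal sketch of a decision tree over such compressed-domain features, assuming scikit-learn and synthetic per-clip feature vectors (the feature distributions are illustrative):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical per-clip features in the spirit of the paper: replay count,
        # scene-text fraction, and mean camera-motion magnitude.
        rng = np.random.default_rng(1)
        X_sports = rng.normal([3.0, 0.15, 6.0], [1.0, 0.05, 2.0], size=(200, 3))
        X_other = rng.normal([0.2, 0.30, 2.0], [0.3, 0.10, 1.0], size=(200, 3))
        X = np.vstack([X_sports, X_other])
        y = np.array([1] * 200 + [0] * 200)          # 1 = sports, 0 = non-sports

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
        print(f"accuracy: {clf.score(X_te, y_te):.2f}")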

  4. Analysis of growth patterns during gravitropic curvature in roots of Zea mays by use of a computer-based video digitizer

    Science.gov (United States)

    Nelson, A. J.; Evans, M. L.

    1986-01-01

    A computer-based video digitizer system is described which allows automated tracking of markers placed on a plant surface. The system uses customized software to calculate relative growth rates at selected positions along the plant surface and to determine rates of gravitropic curvature based on the changing pattern of distribution of the surface markers. The system was used to study the time course of gravitropic curvature and changes in relative growth rate along the upper and lower surface of horizontally-oriented roots of maize (Zea mays L.). The growing region of the root was found to extend from about 1 mm behind the tip to approximately 6 mm behind the tip. In vertically-oriented roots the relative growth rate was maximal at about 2.5 mm behind the tip and declined smoothly on either side of the maximum. Curvature was initiated approximately 30 min after horizontal orientation with maximal (50 degrees) curvature being attained in 3 h. Analysis of surface extension patterns during the response indicated that curvature results from a reduction in growth rate along both the upper and lower surfaces with stronger reduction along the lower surface.
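
    A minimal sketch of the relative growth rate computed from marker-derived segment lengths, using RGR = d(ln L)/dt (the digitized lengths below are hypothetical):

        import numpy as np

        def relative_growth_rate(lengths, dt_hours):
            """Relative (specific) growth rate between successive frames for one
            marker-bounded surface segment: RGR = d(ln L)/dt, in h^-1."""
            return np.diff(np.log(lengths)) / dt_hours

        # Hypothetical segment lengths (mm) digitized every 0.5 h from markers.
        lengths = np.array([1.00, 1.06, 1.13, 1.19, 1.27])
        print(relative_growth_rate(lengths, 0.5))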

  5. Integrated Microfluidic Devices for Automated Microarray-Based Gene Expression and Genotyping Analysis

    Science.gov (United States)

    Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew

    Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The micro-fluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiment also showed that chip-to-chip variability was low indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. The genotyping results showed

  6. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    The amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, the fibroglandular tissues, and the enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used a fuzzy c-means clustering method with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were automatically computed. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
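
    A minimal sketch of the final quantification step, assuming the segmentation masks are already available and using an assumed noise-based enhancement threshold (the 3-sigma rule and toy volumes are illustrative, not the paper's exact criterion):

        import numpy as np

        def fgt_and_bpe(breast_mask, fgt_mask, pre, post, noise_sigma):
            """Given whole-breast and fibroglandular masks plus pre/post contrast
            volumes, compute %FGT and %BPE with a noise-based threshold."""
            fgt_percent = 100.0 * fgt_mask.sum() / breast_mask.sum()
            enhancement = (post - pre)[fgt_mask]
            enhanced = enhancement > 3 * noise_sigma   # assumed 3-sigma threshold
            bpe_percent = 100.0 * enhanced.sum() / fgt_mask.sum()
            return fgt_percent, bpe_percent

        # Hypothetical toy volumes.
        rng = np.random.default_rng(2)
        breast = np.ones((20, 64, 64), bool)
        fgt = rng.random(breast.shape) < 0.3
        pre = rng.normal(100, 5, breast.shape)
        post = pre + np.where(fgt, rng.normal(10, 8, breast.shape), 0)
        print(fgt_and_bpe(breast, fgt, pre, post, noise_sigma=5.0))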

  7. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), which was held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers benefit scientifically from the proceedings and also find them stimulating in the process.

  8. Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip

    Science.gov (United States)

    Mora, Maria F.; Stockton, Amanda M.; Willis, Peter A.

    2013-01-01

    There are no existing ultra-sterile lab-on-a-chip systems that can accept solid samples and perform complete chemical analyses without human intervention. The proposed solution is to demonstrate completely automated lab-on-a-chip manipulation of powdered solid samples, followed by on-chip liquid extraction and chemical analysis. This technology utilizes a newly invented glass micro-device for solid manipulation, which mates with existing lab-on-a-chip instrumentation. Devices are fabricated in a Class 10 cleanroom at the JPL MicroDevices Lab, and are plasma-cleaned before and after assembly. Solid samples enter the device through a drilled hole in the top. Existing micro-pumping technology is used to transfer milligrams of powdered sample into an extraction chamber where it is mixed with liquids to extract organic material. Subsequent chemical analysis is performed using portable microchip capillary electrophoresis systems (CE). These instruments have been used for ultra-highly sensitive (parts-per-trillion, pptr) analysis of organic compounds including amines, amino acids, aldehydes, ketones, carboxylic acids, and thiols. Fully autonomous amino acid analyses in liquids were demonstrated; however, to date there have been no reports of completely automated analysis of solid samples on chip. This approach utilizes an existing portable instrument that houses optics, high-voltage power supplies, and solenoids for fully autonomous microfluidic sample processing and CE analysis with laser-induced fluorescence (LIF) detection. Furthermore, the entire system can be sterilized and placed in a cleanroom environment for analyzing samples returned from extraterrestrial targets, if desired. This is an entirely new capability never demonstrated before. The ability to manipulate solid samples, coupled with lab-on-a-chip analysis technology, will enable ultraclean and ultrasensitive end-to-end analysis of samples that is orders of magnitude more sensitive than the ppb goal given

  9. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver [Technical Univ. of Darmstadt (Germany)

    2009-11-20

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework and the visualization have been integrated with MATLAB, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle

  10. Automated quantification and integrative analysis of 2D and 3D mitochondrial shape and network properties.

    Directory of Open Access Journals (Sweden)

    Julie Nikolaisen

    Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and in (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. "mitochondrial dynamics") are linked to cellular (patho)physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology, they are not applicable to thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescent protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate
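
    A minimal sketch of 3D object labeling and per-organelle volume measurement, assuming SciPy and a synthetic mitoGFP stack (the threshold and connectivity choices are illustrative):

        import numpy as np
        from scipy import ndimage

        def mito_3d_stats(stack, threshold):
            """Label mitochondria in a 3D confocal stack (26-connectivity) and
            return the object count and per-object voxel volumes: the size
            distribution that distinguishes filamentous networks from
            fragmented morphologies."""
            binary = stack > threshold
            structure = np.ones((3, 3, 3), bool)       # 26-connected in 3D
            labels, n = ndimage.label(binary, structure=structure)
            volumes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
            return n, volumes

        rng = np.random.default_rng(3)
        stack = rng.random((30, 128, 128))             # hypothetical mitoGFP stack
        print(mito_3d_stats(stack, threshold=0.995)[0], "objects")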

  11. Foreign object detection and removal to improve automated analysis of chest radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime; Maduskar, Pragnya; Ginneken, Bram van [Diagnostic Image Analysis Group, Radboud University Nijmegen Medical Centre, Nijmegen 6525 GA (Netherlands); Story, Alistair; Hayward, Andrew [University College London, Centre for Infectious Disease Epidemiology, London NW3 2PF (United Kingdom)

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassiere clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel of belonging to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields, and an A_z value of 0.949 is achieved. Free-response operating characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
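
    The paper uses texture inpainting; as a simple stand-in, OpenCV's Telea inpainting applied to a precomputed object mask (file names hypothetical):

        import cv2
        import numpy as np

        # Replace detected foreign-object pixels by inpainting so downstream
        # texture analysis is not confounded. The mask would come from the pixel
        # classifier; here it is a hypothetical precomputed binary image.
        radiograph = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
        object_mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)
        object_mask = (object_mask > 0).astype(np.uint8)

        # cv2.inpaint fills the masked region from surrounding image content.
        cleaned = cv2.inpaint(radiograph, object_mask, inpaintRadius=5,
                              flags=cv2.INPAINT_TELEA)
        cv2.imwrite("chest_xray_cleaned.png", cleaned)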

  12. Wine analysis to check quality and authenticity by fully-automated 1H-NMR

    Directory of Open Access Journals (Sweden)

    Spraul Manfred

    2015-01-01

    Fully-automated high resolution 1H-NMR spectroscopy offers unique screening capabilities for food quality and safety by combining non-targeted and targeted screening in one analysis (15–20 min from acquisition to report). The advantage of high resolution 1H-NMR is its absolute reproducibility and transferability from laboratory to laboratory, which is not equaled by any other method currently used in food analysis. NMR reproducibility allows statistical investigations, e.g. for detection of variety, geographical origin and adulterations, where the smallest changes of many ingredients at the same time must be recorded. Reproducibility and transferability of the solutions shown are user-, instrument- and laboratory-independent. Sample preparation, measurement and processing are based on strict standard operating procedures, which are essential for this fully automated solution. The non-targeted approach to the data allows detecting even unknown deviations, if they are visible in the 1H-NMR spectra of e.g. fruit juice, wine or honey. The same data acquired in high-throughput mode are also subjected to quantification of multiple compounds. This 1H-NMR methodology will shortly be introduced, then results on wine will be presented and the advantages of the solutions shown. The method has been proven on juice, honey and wine, where so far unknown frauds could be detected, while at the same time targeted parameters are obtained.

  13. Automated Aflatoxin Analysis Using Inline Reusable Immunoaffinity Column Cleanup and LC-Fluorescence Detection.

    Science.gov (United States)

    Rhemrev, Ria; Pazdanska, Monika; Marley, Elaine; Biselli, Scarlett; Staiger, Simone

    2015-01-01

    A novel reusable immunoaffinity cartridge containing monoclonal antibodies to aflatoxins coupled to a pressure resistant polymer has been developed. The cartridge is used in conjunction with a handling system inline to LC with fluorescence detection to provide fully automated aflatoxin analysis for routine monitoring of a variety of food matrixes. The handling system selects an immunoaffinity cartridge from a tray and automatically applies the sample extract. The cartridge is washed, then aflatoxins B1, B2, G1, and G2 are eluted and transferred inline to the LC system for quantitative analysis using fluorescence detection with postcolumn derivatization using a KOBRA® cell. Each immunoaffinity cartridge can be used up to 15 times without loss in performance, offering increased sample throughput and reduced costs compared to conventional manual sample preparation and cleanup. The system was validated in two independent laboratories using samples of peanuts and maize spiked at 2, 8, and 40 μg/kg total aflatoxins, and paprika, nutmeg, and dried figs spiked at 5, 20, and 100 μg/kg total aflatoxins. Recoveries exceeded 80% for both aflatoxin B1 and total aflatoxins. The between-day repeatability ranged from 2.1 to 9.6% for aflatoxin B1 for the six levels and five matrixes. Satisfactory Z-scores were obtained with this automated system when used for participation in proficiency testing (FAPAS®) for samples of chilli powder and hazelnut paste containing aflatoxins.

  14. Analysis of inflammatory response in human plasma samples by an automated multicapillary electrophoresis system.

    Science.gov (United States)

    Larsson, Anders; Hansson, Lars-Olof

    2004-01-01

    A new automated multicapillary zone electrophoresis instrument with a new high-resolution (HR) buffer (Capillarys with HR buffer) for analysis of human plasma proteins was evaluated. Albumin, alpha(1)-antitrypsin, alpha(1)-acid glycoprotein, haptoglobin, fibrinogen, immunoglobulin (Ig)A, IgG and IgM were determined nephelometrically in 200 patient plasma samples. The same samples were then analyzed on the Capillarys system (Sebia, Paris, France). The albumin concentration from the nephelometric determination was used for quantification of the individual peaks in the capillary electrophoresis (CE) electropherogram. There was strong linear correlation between the nephelometric and electrophoretic determination of alpha(1)-antitrypsin (R(2) = 0.906), alpha(1)-acid glycoprotein (R(2) = 0.894) and haptoglobin (R(2) = 0.913). There was also good correlation between the two determinations of gamma-globulins (R(2) = 0.883), while the correlation was weaker for fibrinogen (R(2) = 0.377). The Capillarys instrument is a reliable system for plasma protein analysis, combining the advantages of full automation, good analytical performance and high throughput. The HR buffer in combination with albumin quantification allows the simultaneous quantification of inflammatory markers in plasma samples without the need for nephelometric determination of these proteins.

  15. Serum 5'nucleotidase activity in rats: a method for automated analysis and criteria for interpretation.

    Science.gov (United States)

    Carakostas, Michael C.; Power, Richard J.; Banerjee, Asit K.

    1990-01-01

    A manual kit for determining serum 5'nucleotidase (5'NT, EC 3.1.3.5) activity was adapted for use with rat samples on a large discrete clinical chemistry analyzer. The precision of the method was good (within-run C.V. = 2.14%; between-run C.V. = 5.5%). A comparison of the new automated method with a manual and a semi-automated method gave regression statistics of y = 1.18x - 3.66 (Sy.x = 4.54) and y = 0.733x + 1.97 (Sy.x = 1.69), respectively. Temperature conversion factors provided by the kit manufacturer for human samples were determined to be inaccurate for converting results from rat samples. Analysis of components contributing to normal variation in rat serum 5'NT activity showed age and sex to be major factors. Increased serum 5'NT activity was observed in female rats when compared to male rats beginning at about 5 to 6 weeks of age. An analysis of variance of serum 5'NT, alkaline phosphatase, and GGT activities observed over a 9-week period in normal rats suggests several advantages for 5'NT as a predictor of biliary lesions in rats.

  16. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    Science.gov (United States)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
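
    A minimal sketch of backwall-amplitude-dropout screening, assuming gated A-scan data and an illustrative 6 dB call criterion (the synthetic data below simulate a small dropout region):

        import numpy as np

        def backwall_dropout(ascans, gate, ref_amplitude, drop_db=6.0):
            """Flag A-scan positions whose peak backwall echo inside the time
            gate falls more than `drop_db` below a reference level: a simple
            version of backwall-amplitude-dropout screening."""
            start, stop = gate
            peaks = np.abs(ascans[:, start:stop]).max(axis=1)
            drop = 20 * np.log10(peaks / ref_amplitude)
            return drop < -drop_db                  # True where an indication is called

        # Hypothetical data: 100 A-scans x 512 samples, a few attenuated backwalls.
        rng = np.random.default_rng(4)
        ascans = rng.normal(0, 0.02, (100, 512))
        ascans[:, 400] = 1.0
        ascans[10:13, 400] = 0.3                    # simulated dropout region
        print(np.flatnonzero(backwall_dropout(ascans, (380, 420), ref_amplitude=1.0)))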

  17. Statistical analysis to assess automated level of suspicion scoring methods in breast ultrasound

    Science.gov (United States)

    Galperin, Michael

    2003-05-01

    A well-defined, rule-based system has been developed for scoring the Level of Suspicion (LOS) from 0 to 5, based on a qualitative lexicon describing the ultrasound appearance of breast lesions. The purpose of the research is to assess the automated quantitative LOS scoring methods developed during preliminary studies and to select the one best suited to reducing benign biopsies. The study used a Computer Aided Imaging System (CAIS) to improve the uniformity and accuracy of applying the LOS scheme by automatically detecting, analyzing and comparing breast masses. The overall goal is to reduce biopsies on masses with lower levels of suspicion, rather than to increase the accuracy of diagnosing cancers (which require biopsy in any case). On complex cysts and fibroadenoma cases, experienced radiologists were up to 50% less certain in true negatives than CAIS. Full correlation analysis was applied to determine which of the proposed LOS quantification methods best serves CAIS accuracy. This paper presents current results of applying statistical analysis to automated LOS scoring quantification for breast masses with known biopsy results. The First Order Ranking method was found to yield the most accurate results. The CAIS system (Image Companion, Data Companion software) was developed by Almen Laboratories and was used to achieve the results.

  18. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have developed in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers on scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task, as well as being sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved by using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation using image processing and analysis algorithms. Performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
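
    A common isolation-free strategy, consistent with the idea described above though not necessarily the authors' exact algorithm, is to combine a medial-axis skeleton with a Euclidean distance transform: along the skeleton of the binarized fiber mask, twice the distance to the nearest background pixel gives the local fiber diameter. A minimal sketch on a synthetic mask:

        # Sketch: estimate fiber diameters without isolating individual fibers,
        # via skeleton + Euclidean distance transform on a synthetic mask.
        import numpy as np
        from scipy import ndimage
        from skimage.draw import line
        from skimage.morphology import skeletonize

        mask = np.zeros((200, 200), dtype=bool)
        for r0, c0, r1, c1 in [(10, 10, 190, 160), (30, 180, 180, 20)]:
            rr, cc = line(r0, c0, r1, c1)
            mask[rr, cc] = True                       # fiber centrelines
        mask = ndimage.binary_dilation(mask, iterations=4)  # give them width

        dist = ndimage.distance_transform_edt(mask)   # distance to background
        skel = skeletonize(mask)
        diameters_px = 2.0 * dist[skel]               # local width on the axis
        print(f"mean diameter: {diameters_px.mean():.1f} px "
              f"(from {diameters_px.size} skeleton pixels)")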

  19. Restoring normal eating behaviour in adolescents with anorexia nervosa: A video analysis of nursing interventions.

    Science.gov (United States)

    Beukers, Laura; Berends, Tamara; de Man-van Ginkel, Janneke M; van Elburg, Annemarie A; van Meijel, Berno

    2015-12-01

    An important part of inpatient treatment for adolescents with anorexia nervosa is to restore normal eating behaviour. Health-care professionals play a significant role in this process, but little is known about their interventions during patients' meals. The purpose of the present study was to describe nursing interventions aimed at restoring normal eating behaviour in patients with anorexia nervosa. The main research question was: 'Which interventions aimed at restoring normal eating behaviour do health-care professionals in a specialist eating disorder centre use during meal times for adolescents diagnosed with anorexia nervosa?' The present study was a qualitative, descriptive study that used video recordings made during mealtimes. Thematic data analysis was applied. Four categories of interventions emerged from the data: (i) monitoring and instructing; (ii) encouraging and motivating; (iii) supporting and understanding; and (iv) educating. The data revealed a directive attitude aimed at promoting behavioural change, but always in combination with empathy and understanding. In the first stage of clinical treatment, health-care professionals focus primarily on changing patients' eating behaviour. However, they also address the psychosocial needs that become visible in patients as they struggle to restore normal eating behaviour. The findings of the present study can be used to assist health-care professionals, and improve multidisciplinary guidelines and health-care professionals' training programmes.

  20. Thermal image analysis of plastic deformation and fracture behavior by a thermo-video measurement system

    Science.gov (United States)

    Ohbuchi, Yoshifumi; Sakamoto, Hidetoshi; Nagatomo, Nobuaki

    2016-12-01

    The visualization of the plastic region and the measurement of its size are necessary and indispensable for evaluating the deformation and fracture behavior of a material. In order to evaluate the plastic deformation and fracture behavior in a structural member with flaws, the authors paid attention to the surface temperature that is generated by plastic strain energy. The visualization of the plastic deformation was developed by analyzing the relationship between the extension of the plastic deformation range and the surface temperature distribution, which was obtained by an infrared thermo-video system. Furthermore, FEM elasto-plastic analysis was carried out alongside the experiments, and the effectiveness of this non-contact measurement system for the plastic deformation and fracture process by a thermography system was discussed. The evaluation method using an infrared imaging device proposed in this research has a feature that does not exist in current evaluation methods: the heat distribution on the surface of the material can be measured over a wide area, without contact, as a 2D image, and at high speed. The new measuring technique proposed here can measure the macroscopic plastic deformation distribution on the material surface widely and precisely as a 2D image, at high speed, by calculation from the heat generation and the heat propagation distribution.

  1. Time motion and video analysis of classical ballet and contemporary dance performance.

    Science.gov (United States)

    Wyon, M A; Twitchett, E; Angioi, M; Clarke, F; Metsios, G; Koutedakis, Y

    2011-11-01

    Video analysis has become a useful tool in the preparation for sport performance and its use has highlighted the different physiological demands of seemingly similar sports and playing positions. The aim of the current study was to examine the performance differences between classical ballet and contemporary dance. In total 93 dance performances (48 ballet and 45 contemporary) were analysed for exercise intensity, changes in direction and specific discrete skills (e.g., jumps, lifts). Results revealed significant differences between the 2 dance forms for exercise intensity (p < 0.05); contemporary dance featured more continuous moderate exercise intensities (27 s x min(-1)). These differences have implications for the energy systems utilised during performance, with ballet potentially stressing the anaerobic system more than contemporary dance. The observed high rates of the discrete skills in ballet (5 jumps x min(-1); 2 lifts x min(-1)) can cause local muscular damage, particularly in relatively weaker individuals. In conclusion, classical ballet and contemporary dance performances are as significantly different in the underlying physical demands placed on their performers as in the artistic aspects of the choreography.

  2. Evaluation of video-assisted thoracoscopic surgery for pulmonary metastases: a meta-analysis.

    Directory of Open Access Journals (Sweden)

    Siyuan Dong

    Full Text Available BACKGROUND: To evaluate the evidence comparing video-assisted thoracic surgery (VATS) and open thoracotomy in the treatment of metastatic lung cancer using meta-analytical techniques. METHODS: A literature search was undertaken until July 2013 to identify comparative studies evaluating disease-free survival rates and survival rates. The pooled odds ratios (OR) and the 95% confidence intervals (95% CI) were calculated with fixed or random effect models. RESULTS: Six retrospective studies were included in our meta-analysis. These studies included a total of 546 patients: 235 patients were treated with VATS, and 311 patients were treated with open thoracotomy. VATS and thoracotomy did not demonstrate a significant difference in the 1-, 3-, and 5-year survival rates or the 1-year disease-free survival rate. There was a statistically significant difference in the 3-year disease-free survival rate (p = 0.04), which favored open thoracotomy. CONCLUSIONS: The VATS approach is a safe and feasible treatment in terms of the survival rate for metastatic lung cancer compared with thoracotomy. The 3-year disease-free survival rate in the VATS group is inferior to that of open thoracotomy. The VATS approach could not completely replace open thoracotomy.
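
    For readers unfamiliar with the pooling step of such a meta-analysis, the sketch below computes an inverse-variance fixed-effect pooled odds ratio with its 95% CI from per-study 2x2 counts. The counts are invented for illustration and are not the data of the six studies summarized above.

        # Sketch: inverse-variance fixed-effect meta-analysis of odds ratios.
        import numpy as np

        # Each row: events/total in one arm, events/total in the other arm
        # (illustrative numbers only).
        studies = [(30, 40, 45, 60), (22, 35, 30, 50), (18, 30, 25, 45)]

        log_or, weights = [], []
        for a, n1, c, n2 in studies:
            b, d = n1 - a, n2 - c
            log_or.append(np.log((a * d) / (b * c)))
            weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))

        pooled = np.average(log_or, weights=weights)
        se = 1 / np.sqrt(np.sum(weights))
        ci_lo, ci_hi = pooled - 1.96 * se, pooled + 1.96 * se
        print(f"pooled OR = {np.exp(pooled):.2f} "
              f"(95% CI {np.exp(ci_lo):.2f}-{np.exp(ci_hi):.2f})")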

  3. Performance evaluation of real-time video content analysis systems in the CANDELA project

    Science.gov (United States)

    Desurmont, Xavier; Wijnhoven, Rob; Jaspers, Egbert; Caignart, Olivier; Barais, Mike; Favoreel, Wouter; Delaigle, Jean-Francois

    2005-02-01

    The CANDELA project aims at realizing a system for real-time image processing in traffic and surveillance applications. The system performs segmentation, labels the extracted blobs and tracks their movements in the scene. Performance evaluation of such a system is a major challenge since no standard methods exist and the criteria for evaluation are highly subjective. This paper proposes a performance evaluation approach for video content analysis (VCA) systems and identifies the involved research areas. For these areas we give an overview of the state of the art in performance evaluation and introduce a classification into different semantic levels. The proposed evaluation approach compares the results of the VCA algorithm with a ground-truth (GT) counterpart, which contains the desired results. Both the VCA results and the ground truth comprise description files that are formatted in MPEG-7. The evaluation is required to provide an objective performance measure and a means of choosing between competing methods. In addition, it enables algorithm developers to measure the progress of their work at the different levels in the design process. From these requirements and the state-of-the-art overview we conclude that standardization is highly desirable, for which many research topics still need to be addressed.
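
    At the lowest semantic level, comparing VCA output with its ground-truth counterpart reduces to counting matches between detected and annotated objects frame by frame. The sketch below illustrates this with a simple bounding-box intersection-over-union matcher; the boxes and the 0.5 threshold are assumptions for the example, and MPEG-7 parsing is omitted.

        # Sketch: frame-level precision/recall of detections against ground
        # truth, matching boxes greedily by intersection-over-union (IoU).
        def iou(a, b):
            ax0, ay0, ax1, ay1 = a
            bx0, by0, bx1, by1 = b
            ix = max(0, min(ax1, bx1) - max(ax0, bx0))
            iy = max(0, min(ay1, by1) - max(ay0, by0))
            inter = ix * iy
            union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
            return inter / union if union else 0.0

        def evaluate(detections, ground_truth, thresh=0.5):
            matched, tp = set(), 0
            for det in detections:
                candidates = [i for i in range(len(ground_truth))
                              if i not in matched]
                if not candidates:
                    continue
                best = max(candidates, key=lambda i: iou(det, ground_truth[i]))
                if iou(det, ground_truth[best]) >= thresh:
                    matched.add(best)
                    tp += 1
            precision = tp / len(detections) if detections else 1.0
            recall = tp / len(ground_truth) if ground_truth else 1.0
            return precision, recall

        gt = [(10, 10, 50, 60), (100, 40, 150, 120)]   # illustrative boxes
        det = [(12, 8, 52, 58), (200, 200, 230, 240)]
        print(evaluate(det, gt))                       # -> (0.5, 0.5)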

  4. Integrated Digital Video and Experimental Data Analysis for Microgravity Combustion Experiment

    Science.gov (United States)

    1997-01-01

    The purpose of the Diffusive and Radiative Transport in Fires (DARTFire) Project is to study various mechanisms of energy transport in the ignition and growth of flames in microgravity. This sounding rocket experiment incorporates two multispectral video cameras, two 8-mm video recorders, and several temperature and pressure probes that record information on two separate flames, burning under different oxygen concentrations and flow rates. Mirrors allow each camera to view side-by-side images of both flames.

  5. High-resolution quantitative metabolome analysis of urine by automated flow injection NMR.

    Science.gov (United States)

    Da Silva, Laeticia; Godejohann, Markus; Martin, François-Pierre J; Collino, Sebastiano; Bürkle, Alexander; Moreno-Villanueva, María; Bernhardt, Jürgen; Toussaint, Olivier; Grubeck-Loebenstein, Beatrix; Gonos, Efstathios S; Sikora, Ewa; Grune, Tilman; Breusing, Nicolle; Franceschi, Claudio; Hervonen, Antti; Spraul, Manfred; Moco, Sofia

    2013-06-18

    Metabolism is essential to understanding human health. To characterize human metabolism, a high-resolution read-out of the metabolic status under various physiological conditions, either in health or disease, is needed. Metabolomics offers an unprecedented approach for generating system-specific biochemical definitions of a human phenotype through the capture of a variety of metabolites in a single measurement. The emergence of large cohorts in clinical studies increases the demand for technologies able to analyze a large number of measurements, in an automated fashion, in the most robust way. NMR is an established metabolomics tool for obtaining metabolic phenotypes. Here, we describe the analysis of NMR-based urinary profiles for metabolic studies, applied to a large human study (3007 samples). This method includes the acquisition of nuclear Overhauser effect spectroscopy one-dimensional and J-resolved two-dimensional (J-Res-2D) (1)H NMR spectra obtained on a 600 MHz spectrometer, equipped with a 120 μL flow probe, coupled to a flow-injection analysis system, in full automation under the control of a sample manager. Samples were acquired at a throughput of ~20 (or 40 when J-Res-2D is included) min/sample. The associated technical analysis error over the full series of analyses is 12%, which demonstrates the robustness of the method. With the aim of describing an overall metabolomics workflow, the quantification of 36 metabolites, mainly related to central carbon metabolism and gut microbial host cometabolism, was obtained, as well as multivariate data analysis of the full spectral profiles. The metabolic read-outs generated using our analytical workflow can therefore be considered for further pathway modeling and/or biological interpretation.

  6. Automated image analysis of the host-pathogen interaction between phagocytes and Aspergillus fumigatus.

    Directory of Open Access Journals (Sweden)

    Franziska Mech

    Full Text Available Aspergillus fumigatus is a ubiquitous airborne fungus and opportunistic human pathogen. In immunocompromised hosts, the fungus can cause life-threatening diseases like invasive pulmonary aspergillosis. Since the incidence of fungal systemic infections has increased drastically in recent years, it is a major goal to investigate the pathobiology of A. fumigatus and in particular the interactions of A. fumigatus conidia with immune cells. Many of these studies concern the activity of immune effector cells, in particular macrophages, when they are confronted with conidia of A. fumigatus wild-type and mutant strains. Here, we report the development of an automated analysis of confocal laser scanning microscopy images from macrophages coincubated with different A. fumigatus strains. At present, microscopy images are often analysed manually, including cell counting and determination of interrelations between cells, which is time-consuming and error-prone. Automation of this process overcomes these disadvantages and standardises the analysis, which is a prerequisite for further systems-biological studies including mathematical modeling of the infection process. For this purpose, the cells in our experimental setup were differentially stained and monitored by confocal laser scanning microscopy. To perform the image analysis in an automatic fashion, we developed a ruleset that is generally applicable to phagocytosis assays and in the present case was processed by the software Definiens Developer XD. As a result of a complete image analysis we obtained features such as size, shape, number of cells and cell-cell contacts. The analysis reported here reveals that different mutants of A. fumigatus have a major influence on the ability of macrophages to adhere and to phagocytose the respective conidia. In particular, we observe that the phagocytosis ratio and the aggregation behaviour of pksP mutant compared to wild-type conidia are both significantly
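
    The kind of read-out such a ruleset produces can be approximated with open-source tools. The sketch below assumes two pre-segmented binary masks (conidia and macrophages) rather than the Definiens ruleset used in the study, and counts conidia whose centroids fall inside a macrophage to form a phagocytosis ratio.

        # Sketch: phagocytosis ratio = phagocytosed conidia / total conidia,
        # approximated by testing conidium centroids against a macrophage mask.
        import numpy as np
        from skimage.measure import label, regionprops

        macrophage_mask = np.zeros((100, 100), dtype=bool)
        macrophage_mask[20:60, 20:60] = True       # illustrative cell
        conidia_mask = np.zeros_like(macrophage_mask)
        conidia_mask[30:34, 30:34] = True          # inside the macrophage
        conidia_mask[80:84, 80:84] = True          # outside

        conidia = regionprops(label(conidia_mask))
        inside = sum(
            macrophage_mask[tuple(np.round(c.centroid).astype(int))]
            for c in conidia
        )
        print(f"phagocytosis ratio: {inside}/{len(conidia)} "
              f"= {inside / len(conidia):.2f}")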

  7. Difference Tracker: ImageJ plugins for fully automated analysis of multiple axonal transport parameters.

    Science.gov (United States)

    Andrews, Simon; Gilley, Jonathan; Coleman, Michael P

    2010-11-30

    Studies of axonal transport are critical, not only to understand its normal regulation, but also to determine the roles of transport impairment in disease. Exciting new resources have recently become available allowing live imaging of axonal transport in physiologically relevant settings, such as mammalian nerves. Thus the effects of disease, ageing and therapies can now be assessed directly in nervous system tissue. However, these imaging studies present new challenges. Manual or semi-automated analysis of the range of transport parameters required for a suitably complete evaluation is very time-consuming and can be subjective due to the complexity of the particle movements in axons in ex vivo explants or in vivo. We have developed Difference Tracker, a program combining two new plugins for the ImageJ image-analysis freeware, to provide fast, fully automated and objective analysis of a number of relevant measures of trafficking of fluorescently labeled particles so that axonal transport in different situations can be easily compared. We confirm that Difference Tracker can accurately track moving particles in highly simplified, artificial simulations. It can also identify and track multiple motile fluorescently labeled mitochondria simultaneously in time-lapse image stacks from live imaging of tibial nerve axons, reporting values for a number of parameters that are comparable to those obtained through manual analysis of the same axons. Difference Tracker therefore represents a useful free resource for the comparative analysis of axonal transport under different conditions, and could potentially be used and developed further in many other studies requiring quantification of particle movements.
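
    Although the plugin itself runs in ImageJ, its core idea, subtracting consecutive frames so that stationary structures cancel and only moving particles survive, is easy to express outside it. The sketch below is a loose analogue, not a port of Difference Tracker: it thresholds inter-frame differences of a synthetic time-lapse stack and counts the resulting blobs.

        # Sketch: isolate moving particles by differencing consecutive frames.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)
        stack = 0.02 * rng.random((5, 64, 64))      # 5-frame synthetic movie
        for t in range(5):
            stack[t, 30, 10 + 3 * t] = 1.0          # particle moving right
            stack[t, 50, 50] = 1.0                  # stationary particle

        for t in range(4):
            diff = np.abs(stack[t + 1] - stack[t])  # stationary signal cancels
            # Blobs appear at the old and the new position of each mover.
            _, count = ndimage.label(diff > 0.5)
            print(f"frames {t}->{t + 1}: {count} difference blob(s)")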

  8. The National Shipbuilding Research Program. Automated Process Application in Steel Fabrication and Subassembly Facilities; Phase I (Process Analysis)

    Science.gov (United States)

    1999-05-01

  9. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  10. Automated preparation of Kepler time series of planet hosts for asteroseismic analysis

    CERN Document Server

    Handberg, R

    2014-01-01

    One of the tasks of the Kepler Asteroseismic Science Operations Center (KASOC) is to provide asteroseismic analyses on Kepler Objects of Interest (KOIs). However, asteroseismic analysis of planetary host stars presents some unique complications with respect to data preprocessing, compared to pure asteroseismic targets. If not accounted for, the presence of planetary transits in the photometric time series often greatly complicates or even hinders these asteroseismic analyses. This drives the need for specialised methods of preprocessing data to make them suitable for asteroseismic analysis. In this paper we present the KASOC Filter, which is used to automatically prepare data from the Kepler/K2 mission for asteroseismic analyses of solar-like planet host stars. The methods are very effective at removing unwanted signals of both instrumental and planetary origins and produce significantly cleaner photometric time series than the original data. The methods are automated and can therefore easily be applied to a ...
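
    A minimal version of the transit-handling step can be written directly from a planet's ephemeris. The sketch below illustrates the general idea rather than the actual KASOC Filter: given an assumed period, epoch and duration, it masks all in-transit cadences so that a trend estimated from the remaining points is not biased by the transits.

        # Sketch: mask in-transit cadences of a light curve via the ephemeris.
        import numpy as np

        rng = np.random.default_rng(2)
        time = np.arange(0.0, 30.0, 0.02)          # days, illustrative cadence
        flux = 1.0 + 1e-4 * rng.standard_normal(time.size)
        period, epoch, duration = 3.5, 1.2, 0.15   # assumed ephemeris (days)

        phase = (time - epoch + 0.5 * period) % period - 0.5 * period
        in_transit = np.abs(phase) < 0.5 * duration
        flux[in_transit] -= 5e-4                   # inject toy transits

        # Estimate the trend from out-of-transit points only, then
        # interpolate across the masked gaps.
        trend = np.interp(time, time[~in_transit], flux[~in_transit])
        depth = np.mean(trend[in_transit] - flux[in_transit])
        print(f"masked {in_transit.sum()} cadences; "
              f"recovered depth = {depth:.1e}")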

  11. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Full Text Available Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) was obtained for de novo feature prediction. These results suggest that high-dimensional profiling may advance the

  12. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  13. A scanning electron microscope method for automated, quantitative analysis of mineral matter in coal

    Energy Technology Data Exchange (ETDEWEB)

    Creelman, R.A.; Ward, C.R. [R.A. Creelman and Associates, Epping, NSW (Australia)

    1996-07-01

    Quantitative mineralogical analysis has been carried out on a series of nine coal samples from Australia, South Africa and China using a newly-developed automated image analysis system coupled to a scanning electron microscope. The image analysis system (QEM*SEM) gathers X-ray spectra and backscattered electron data from a number of points on a conventional grain-mount polished section under the SEM, and interprets the data from each point in mineralogical terms. The cumulative data in each case were integrated to provide a volumetric modal analysis of the species present in the coal samples, expressed as percentages of the respective coals' mineral matter. The QEM*SEM results were compared to data obtained from the same samples using other methods of quantitative mineralogical analysis, namely X-ray diffraction of the low-temperature oxygen-plasma ash and normative calculation from the (high-temperature) ash analysis and carbonate CO2 data. Good agreement was obtained from all three methods for quartz in the coals, and also for most of the iron-bearing minerals. The correlation between results from the different methods was less strong, however, for individual clay minerals, or for minerals such as calcite, dolomite and phosphate species that made up only relatively small proportions of the mineral matter. The image analysis approach, using the electron microscope for mineralogical studies, has significant potential as a supplement to optical microscopy in quantitative coal characterisation. 36 refs., 3 figs., 4 tabs.

  14. The Israel DNA database--the establishment of a rapid, semi-automated analysis system.

    Science.gov (United States)

    Zamir, Ashira; Dell'Ariccia-Carmon, Aviva; Zaken, Neomi; Oz, Carla

    2012-03-01

    The Israel Police DNA database, also known as IPDIS (Israel Police DNA Index System), has been operating since February 2007. During that time more than 135,000 reference samples have been uploaded and more than 2000 hits reported. We have developed an effective semi-automated system that includes two automated punchers, three liquid handler robots and four genetic analyzers. An in-house LIMS program enables full tracking of every sample through the entire process of registration, pre-PCR handling, analysis of profiles, uploading to the database, hit reports and ultimately storage. The LIMS is also responsible for the future tracking of samples and their profiles to be expunged from the database according to the Israeli DNA legislation. The database is administered by an in-house developed software program, where reference and evidentiary profiles are uploaded, stored, searched and matched. The DNA database has proven to be an effective investigative tool which has gained the confidence of the Israeli public and on which the Israel National Police force has grown to rely.

  15. Automated cleaning of foraminifera shells before Mg/Ca analysis using a pipette robot

    Science.gov (United States)

    Johnstone, Heather J. H.; Steinke, Stephan; Kuhnert, Henning; Bickert, Torsten; Pälike, Heiko; Mohtadi, Mahyar

    2016-08-01

    The molar ratio of magnesium to calcium (Mg/Ca) in foraminiferal calcite is a widely used proxy for reconstructing past seawater temperatures. Thorough cleaning of tests is required before analysis to remove contaminant phases such as clay and organic matter. We have adapted a commercial pipette robot to automate an established cleaning procedure, the "Mg-cleaning" protocol of Barker et al. (2003). Efficiency of the automated nine-step method was assessed through monitoring Al/Ca of trial samples (GeoB4420-2 core catcher). Planktonic foraminifera Globigerinoides ruber, Globigerinoides sacculifer, and Neogloboquadrina dutertrei from this sample gave Mg/Ca consistent with the habitat range of the three species, and 40-60% sample recovery after cleaning. Comparison between manually cleaned and robot-cleaned samples of G. ruber (white) from a sediment core (GeoB16602) showed good correspondence between the two methods for Mg/Ca (r = 0.93, p < 0.05); the mean Mg/Ca difference between manually and robot-cleaned samples was 0.05 mmol/mol, showing that the samples are cleaned effectively by the robot. The robot offers increased sample throughput as batch sizes of up to 88 samples/blanks can be processed in ~7 h with little intervention.

  16. A computer based, automated analysis of process and outcomes of diabetic care in 23 GP practices.

    LENUS (Irish Health Repository)

    Hill, F

    2012-02-01

    The predicted prevalence of diabetes in Ireland by 2015 is 190,000. Structured diabetes care in general practice has outcomes equivalent to secondary care, and good diabetes care has been shown to be associated with the use of electronic healthcare records (EHRs). This automated analysis of EHRs in 23 practices took 10 minutes per practice, compared with 15 hours per practice for manual searches. Data were extracted for 1901 type II diabetics. There were valid data for >80% of patients for 6 of the 9 key indicators in the previous year. 543 (34%) had an HbA1c > 7.5%, 142 (9%) had a total cholesterol > 6 mmol/l, 83 (6%) had an LDL cholesterol > 4 mmol/l, 367 (22%) had triglycerides > 2.2 mmol/l and 162 (10%) had blood pressure > 160/100 mmHg. Data quality and key indicators of care compare well with manual audits in Ireland and the U.K. Electronic healthcare records and automated audits should be a feature of all chronic disease management programs.

  17. The Reliability of a Novel Automated System for ANA Immunofluorescence Analysis in Daily Clinical Practice.

    Science.gov (United States)

    Alsuwaidi, Mohammed; Dollinger, Margit; Fleck, Martin; Ehrenstein, Boris

    2016-01-01

    Automated interpretation (AI) systems for antinuclear antibody (ANA) analysis have been introduced based on the assessment of indirect immunofluorescence (IIF) patterns. The diagnostic performance of a novel automated IIF reading system was compared with visual interpretation (VI) of IIF in daily clinical practice to evaluate the reduction of workload. ANA-IIF tests of consecutive serum samples from patients with suspected connective tissue disease were carried out using HEp-2 cells according to routine clinical care. AI was performed using a visual analyser (Zenit G-Sight, Menarini, Germany). Agreement rates between ANA results by AI and VI were calculated. Of the 336 samples investigated, VI yielded 205 (61%) negative, 42 (13%) ambiguous, and 89 (26%) positive results, whereas 82 (24%) were determined to be negative, 176 (52%) ambiguous, and 78 (24%) positive by AI. AI displayed a diagnostic accuracy of 175/336 samples (52%), with a kappa coefficient of 0.34, compared with VI as the gold standard. Relying solely on AI, with VI performed only for samples rated ambiguous by AI, would have missed 1 of 89 (1%) positive results by VI and misclassified 2 of 205 (1%) negative results by VI as positive. The use of AI in daily clinical practice resulted in only a moderate reduction of the VI workload (82 of 336 samples: 24%).
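
    The kappa coefficient quoted above weighs observed agreement against chance agreement. For reference, the sketch below computes Cohen's kappa for the three reading categories; the confusion counts are hypothetical, constructed only to be consistent with the marginal totals and the 52% accuracy reported above.

        # Sketch: Cohen's kappa between automated (AI) and visual (VI) reading
        # over negative / ambiguous / positive.  Counts are hypothetical but
        # match the marginals and 52% accuracy given in the abstract.
        import numpy as np

        confusion = np.array([[70, 129,  6],    # rows: VI, columns: AI
                              [ 5,  35,  2],
                              [ 7,  12, 70]], dtype=float)

        n = confusion.sum()
        p_observed = np.trace(confusion) / n
        p_expected = confusion.sum(axis=1) @ confusion.sum(axis=0) / n**2
        kappa = (p_observed - p_expected) / (1 - p_expected)
        print(f"accuracy = {p_observed:.2f}, kappa = {kappa:.2f}")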

  18. Automated segmentation refinement of small lung nodules in CT scans by local shape analysis.

    Science.gov (United States)

    Diciotti, Stefano; Lombardo, Simone; Falchini, Massimo; Picozzi, Giulia; Mascalchi, Mario

    2011-12-01

    One of the most important problems in the segmentation of lung nodules in CT imaging arises from possible attachments occurring between nodules and other lung structures, such as vessels or pleura. In this report, we address the problem of vessels attachments by proposing an automated correction method applied to an initial rough segmentation of the lung nodule. The method is based on a local shape analysis of the initial segmentation making use of 3-D geodesic distance map representations. The correction method has the advantage that it locally refines the nodule segmentation along recognized vessel attachments only, without modifying the nodule boundary elsewhere. The method was tested using a simple initial rough segmentation, obtained by a fixed image thresholding. The validation of the complete segmentation algorithm was carried out on small lung nodules, identified in the ITALUNG screening trial and on small nodules of the lung image database consortium (LIDC) dataset. In fully automated mode, 217/256 (84.8%) lung nodules of ITALUNG and 139/157 (88.5%) individual marks of lung nodules of LIDC were correctly outlined and an excellent reproducibility was also observed. By using an additional interactive mode, based on a controlled manual interaction, 233/256 (91.0%) lung nodules of ITALUNG and 144/157 (91.7%) individual marks of lung nodules of LIDC were overall correctly segmented. The proposed correction method could also be usefully applied to any existent nodule segmentation algorithm for improving the segmentation quality of juxta-vascular nodules.
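
    The geodesic distance map at the heart of this kind of local shape analysis can be computed with standard tools. The sketch below is an illustration under simplifying assumptions (2-D instead of 3-D, a toy mask instead of a real segmentation) and uses scikit-image's minimum-cost-path machinery, which treats infinite costs as impassable, so distances are constrained to stay inside the nodule mask.

        # Sketch: geodesic distance map restricted to a binary nodule mask.
        import numpy as np
        from skimage.graph import MCP_Geometric

        mask = np.zeros((40, 40), dtype=bool)
        yy, xx = np.mgrid[:40, :40]
        mask[(yy - 20) ** 2 + (xx - 15) ** 2 <= 81] = True  # round "nodule"
        mask[18:23, 24:39] = True                # thin "vessel" attachment

        costs = np.where(mask, 1.0, np.inf)      # np.inf cells are impassable
        geodesic, _ = MCP_Geometric(costs).find_costs([(20, 15)])

        # Along the attachment, the geodesic distance from the nodule centre
        # keeps growing past the nodule radius, a cue that a shape-based
        # refinement can threshold to recognize and cut vessel attachments.
        print("max geodesic distance inside mask:",
              round(geodesic[mask].max(), 1))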

  19. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Science.gov (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality that is widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. Steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
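
    The relative edge response (RER) mentioned above can be estimated once an edge spread function (ESF) has been extracted along the edge normal. The sketch below computes the RER as the rise of a normalized ESF between -0.5 and +0.5 pixels of the edge location; the Gaussian-blurred synthetic edge stands in for a profile extracted from real imagery.

        # Sketch: relative edge response (RER) from an oversampled edge
        # spread function (ESF); heavier blur lowers the RER.
        import numpy as np
        from scipy.special import erf

        x = np.linspace(-10, 10, 2001)           # distance from edge (pixels)
        sigma = 1.2                              # assumed blur (pixels)
        esf = 0.5 * (1 + erf(x / (np.sqrt(2) * sigma)))

        esf = (esf - esf.min()) / (esf.max() - esf.min())  # normalize to 0..1
        rer = np.interp(0.5, x, esf) - np.interp(-0.5, x, esf)
        print(f"RER = {rer:.3f}")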

  20. A cross-sectional analysis of video games and attention deficit hyperactivity disorder symptoms in adolescents

    Directory of Open Access Journals (Sweden)

    Rabinowitz Terry

    2006-10-01

    Full Text Available Abstract Background Excessive use of the Internet has been associated with attention deficit hyperactivity disorder (ADHD), but the relationship between video games and ADHD symptoms in adolescents is unknown. Method A survey of adolescents and parents (n = 72 adolescents, 72 parents) was performed assessing daily time spent on the Internet, television, console video games, and Internet video games, and their association with academic and social functioning. Subjects were high school students in the ninth and tenth grade. Students were administered a modified Young's Internet Addiction Scale (YIAS) and asked questions about exercise, grades, work, and school detentions. Parents were asked to complete the Conners' Parent Rating Scale (CPRS) and answer questions regarding medical/psychiatric conditions in their child. Results There was a significant association between time spent playing games for more than one hour a day and YIAS score (p < 0.05). Conclusion Adolescents who play more than one hour of console or Internet video games a day may have more, or more intense, symptoms of ADHD or inattention than those who do not. Given the possible negative effects these conditions may have on scholastic performance, the added consequences of more time spent on video games may also place these individuals at increased risk for problems in school.

  1. Automated Astrometric Analysis of Satellite Observations using Wide-field Imaging

    Science.gov (United States)

    Skuljan, J.; Kay, J.

    2016-09-01

    An observational trial was conducted in the South Island of New Zealand from 24 to 28 February 2015, as a collaborative effort between the United Kingdom and New Zealand in the area of space situational awareness. The aim of the trial was to observe a number of satellites in low Earth orbit using wide-field imaging from two separate locations, in order to determine the space trajectory and compare the measurements with the predictions based on the standard two-line elements. This activity was an initial step in building a space situational awareness capability at the Defence Technology Agency of the New Zealand Defence Force. New Zealand has an important strategic position as the last land mass that many satellites selected for deorbiting pass before entering the Earth's atmosphere over the dedicated disposal area in the South Pacific. A preliminary analysis of the trial data has demonstrated that relatively inexpensive equipment can be used to successfully detect satellites at moderate altitudes. A total of 60 satellite passes were observed over the five nights of observation and about 2600 images were collected. A combination of cooled CCD and standard DSLR cameras was used, with a selection of lenses between 17 mm and 50 mm in focal length, covering a relatively wide field of view of 25 to 60 degrees. The CCD cameras were equipped with custom-made GPS modules to record the time of exposure with a high accuracy of one millisecond, or better. Specialised software has been developed for automated astrometric analysis of the trial data. The astrometric solution is obtained as a two-dimensional least-squares polynomial fit to the measured pixel positions of a large number of stars (typically 1000) detected across the image. The star identification is fully automated and works well for all camera-lens combinations used in the trial. A moderate polynomial degree of 3 to 5 is selected to take into account any image distortions introduced by the lens. A typical RMS
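
    The plate solution described above is, at its core, a polynomial least-squares problem. The sketch below fits a degree-3 two-dimensional polynomial mapping measured pixel positions to one reference sky coordinate with a design matrix and numpy's lstsq; the star positions and the distortion model are randomly generated stand-ins for a real catalogue match.

        # Sketch: least-squares 2-D polynomial astrometric fit (degree 3),
        # mapping pixel (x, y) to a sky coordinate xi.
        import numpy as np

        def design_matrix(x, y, degree=3):
            # Columns are x**i * y**j for all i + j <= degree.
            return np.column_stack([x**i * y**j
                                    for i in range(degree + 1)
                                    for j in range(degree + 1 - i)])

        rng = np.random.default_rng(3)
        x, y = rng.uniform(0, 4000, 1000), rng.uniform(0, 3000, 1000)
        xi_true = 0.002 * x + 1e-7 * x * y - 3e-11 * x**3   # fake distortion
        xi_obs = xi_true + 1e-4 * rng.standard_normal(x.size)  # meas. noise

        A = design_matrix(x, y)
        coeffs, *_ = np.linalg.lstsq(A, xi_obs, rcond=None)
        rms = np.sqrt(np.mean((A @ coeffs - xi_obs) ** 2))
        print(f"{A.shape[1]} coefficients, fit RMS = {rms:.2e}")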

  2. Assessment of paclitaxel induced sensory polyneuropathy with "Catwalk" automated gait analysis in mice.

    Directory of Open Access Journals (Sweden)

    Petra Huehnchen

    Full Text Available Neuropathic pain as a symptom of sensory nerve damage is a frequent side effect of chemotherapy. The most common behavioral observation in animal models of chemotherapy induced polyneuropathy is the development of mechanical allodynia, which is quantified with von Frey filaments. The data from one study, however, cannot be easily compared with other studies owing to influences of environmental factors, inter-rater variability and differences in test paradigms. To overcome these limitations, automated quantitative gait analysis was proposed as an alternative, but its usefulness for assessing animals suffering from polyneuropathy has remained unclear. In the present study, we used a novel mouse model of paclitaxel induced polyneuropathy to compare results from electrophysiology and the von Frey method to gait alterations measured with the Catwalk test. To mimic recently improved clinical treatment strategies of gynecological malignancies, we established a mouse model of dose-dense paclitaxel therapy on the common C57Bl/6 background. In this model paclitaxel treated animals developed mechanical allodynia as well as reduced caudal sensory nerve action potential amplitudes indicative of a sensory polyneuropathy. Gait analysis with the Catwalk method detected distinct alterations of gait parameters in animals suffering from sensory neuropathy, revealing a minimized contact of the hind paws with the floor. Treatment of mechanical allodynia with gabapentin improved altered dynamic gait parameters. This study establishes a novel mouse model for investigating the side effects of dose-dense paclitaxel therapy and underlines the usefulness of automated gait analysis as an additional easy-to-use objective test for evaluating painful sensory polyneuropathy.

  3. RoboSCell: An automated single cell arraying and analysis instrument

    KAUST Repository

    Sakaki, Kelly

    2009-09-09

    Single cell research has the potential to revolutionize experimental methods in the biomedical sciences and contribute to clinical practice. Recent studies suggest analysis of single cells reveals novel features of intracellular processes, cell-to-cell interactions and cell structure. The methods of single cell analysis require mechanical resolution and accuracy that are not possible using conventional techniques. Robotic instruments and novel microdevices can achieve higher throughput and repeatability; however, the development of such instrumentation is a formidable task. A void exists in the state of the art for automated analysis of single cells. With the increase in interest in single cell analyses in stem cell and cancer research, the ability to facilitate higher throughput and repeatable procedures is necessary. In this paper, a high-throughput, single cell microarray-based robotic instrument, called the RoboSCell, is described. The proposed instrument employs a partially transparent single cell microarray (SCM) integrated with a robotic biomanipulator for in vitro analyses of live single cells trapped at the array sites. Cells, labeled with immunomagnetic particles, are captured at the array sites by channeling magnetic fields through encapsulated permalloy channels in the SCM. The RoboSCell is capable of systematically scanning the captured cells temporarily immobilized at the array sites and using optical methods to repeatedly measure extracellular and intracellular characteristics over time. The instrument's capabilities are demonstrated by arraying human T lymphocytes and measuring the uptake dynamics of calcein acetoxymethylester, all in a fully automated fashion. © 2009 Springer Science+Business Media, LLC.

  4. Rapid Automated Dissolution and Analysis Techniques for Radionuclides in Recycle Process Streams

    Energy Technology Data Exchange (ETDEWEB)

    Sudowe, Ralf [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program and Health Physics Dept.; Roman, Audrey [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program; Dailey, Ashlee [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program; Go, Elaine [Univ. of Nevada, Las Vegas, NV (United States). Radiochemistry Program

    2013-07-18

    The analysis of process samples for radionuclide content is an important part of current procedures for material balance and accountancy in the different process streams of a recycling plant. The destructive sample analysis techniques currently available necessitate a significant amount of time. It is therefore desirable to develop new sample analysis procedures that allow for a quick turnaround time and increased sample throughput with a minimum of deviation between samples. In particular, new capabilities for rapid sample dissolution and radiochemical separation are required. Most of the radioanalytical techniques currently employed for sample analysis are based on manual laboratory procedures. Such procedures are time- and labor-intensive, and not well suited for situations in which a rapid sample analysis is required and/or large numbers of samples need to be analyzed. To address this issue we are currently investigating radiochemical separation methods based on extraction chromatography that have been specifically optimized for the analysis of process stream samples. The influence of potential interferences present in the process samples, as well as mass loading, flow rate and resin performance, is being studied. In addition, the potential to automate these procedures utilizing a robotic platform is being evaluated. Initial studies have been carried out using the commercially available DGA resin. This resin shows an affinity for Am, Pu, U, and Th and is also exhibiting signs of a possible synergistic effect in the presence of iron.

  5. Long-term live cell imaging and automated 4D analysis of drosophila neuroblast lineages.

    Directory of Open Access Journals (Sweden)

    Catarina C F Homem

    Full Text Available The developing Drosophila brain is a well-studied model system for neurogenesis and stem cell biology. In the Drosophila central brain, around 200 neural stem cells called neuroblasts undergo repeated rounds of asymmetric cell division. These divisions typically generate a larger self-renewing neuroblast and a smaller ganglion mother cell that undergoes one terminal division to create two differentiating neurons. Although single mitotic divisions of neuroblasts can easily be imaged in real time, the lack of long term imaging procedures has limited the use of neuroblast live imaging for lineage analysis. Here we describe a method that allows live imaging of cultured Drosophila neuroblasts over multiple cell cycles for up to 24 hours. We describe a 4D image analysis protocol that can be used to extract cell cycle times and growth rates from the resulting movies in an automated manner. We use it to perform lineage analysis in type II neuroblasts where clonal analysis has indicated the presence of a transit-amplifying population that potentiates the number of neurons. Indeed, our experiments verify type II lineages and provide quantitative parameters for all cell types in those lineages. As defects in type II neuroblast lineages can result in brain tumor formation, our lineage analysis method will allow more detailed and quantitative analysis of tumorigenesis and asymmetric cell division in the Drosophila brain.

  6. SigMate: a Matlab-based automated tool for extracellular neuronal signal processing and analysis.

    Science.gov (United States)

    Mahmud, Mufti; Bertoldo, Alessandra; Girardi, Stefano; Maschietto, Marta; Vassanelli, Stefano

    2012-05-30

    Rapid advances in neuronal probe technology for multisite recording of brain activity have posed a significant challenge to neuroscientists for processing and analyzing the recorded signals. To be able to infer meaningful conclusions quickly and accurately from large datasets, automated and sophisticated signal processing and analysis tools are required. This paper presents "SigMate", a novel Matlab-based tool incorporating standard methods to analyze spikes and EEG signals, and in-house solutions for local field potential (LFP) analysis. Available modules at present are: 1. in-house developed algorithms for data display (2D and 3D), file operations (file splitting, file concatenation, and file column rearranging), baseline correction, slow stimulus artifact removal, noise characterization and signal quality assessment, current source density (CSD) analysis, latency estimation from LFPs and CSDs, determination of cortical layer activation order using LFPs and CSDs, and single LFP clustering; 2. existing modules for spike detection, sorting and spike train analysis, and EEG signal analysis. SigMate has the flexibility of analyzing multichannel signals as well as signals from multiple recording sources. The in-house developed tools for LFP analysis have been extensively tested with signals recorded using standard extracellular recording electrodes, and planar and implantable multi-transistor array (MTA) based neural probes. SigMate will be disseminated shortly to the neuroscience community under the open-source GNU General Public License.

  7. Open Educational Resources from Performance Task using Video Analysis and Modeling - Tracker and K12 science education framework

    CERN Document Server

    Wee, Loo Kang

    2014-01-01

    This invited paper discusses why the physics performance task undertaken by grade 9 students in Singapore is worth participating in, for two reasons: 1) the video analysis and modeling resources are open access, licensed Creative Commons Attribution, advancing open educational resources in the world; and 2) it allows students to work like physicists, adopting the K12 science education framework. Personal reflections are offered on how physics education can be made more meaningful, in particular Practice 1: Ask Questions, Practice 2: Use Models, and Practice 5: Mathematical and Computational Thinking, using video modeling supported by evidence-based data from video analysis. This paper hopes to spur fellow colleagues to look into open education initiatives such as our Singapore Tracker community open educational resources curated on http://weelookang.blogspot.sg/p/physics-applets-virtual-lab.html as well as digital libraries http://iwant2study.org/lookangejss/ directly accessible through Tracker 4.86, the EJSS reader app on Android and iOS, and EJS 5....

  8. A high-performance, safer and semi-automated approach for the delta18O analysis of diatom silica and new methods for removing exchangeable oxygen.

    Science.gov (United States)

    Chapligin, B; Meyer, H; Friedrichsen, H; Marent, A; Sohns, E; Hubberten, H-W

    2010-09-15

    The determination of the oxygen isotope composition of diatom silica in sediment cores is important for paleoclimate reconstruction, especially in non-carbonate sediments, where no other bioindicators such as ostracods and foraminifera are available. Since most currently available analytical techniques are time-consuming and labour-intensive, we have developed a new, safer, faster and semi-automated online approach for measuring oxygen isotopes in biogenic silica. Improvements include software that controls the measurement procedures and a video camera that remotely records the reaction of the samples under BrF(5) with a CO(2) laser. Maximum safety is guaranteed as the laser-fluorination unit is arranged under a fume hood in a separate room from the operator. A new routine has been developed for removing the exchangeable hydrous components within biogenic silica using ramp degassing. The sample plate is heated up to 1100 degrees C and cooled down to 400 degrees C in approximately 7 h under a flow of He gas (the inert Gas Flow Dehydration method--iGFD) before isotope analysis. Two quartz and two biogenic silica samples (approximately 1.5 mg) of known isotope composition were tested. The isotopic compositions were reproducible within an acceptable error; quartz samples gave a mean standard deviation of <0.15 per thousand (1sigma) and for biogenic silica <0.25 per thousand (1sigma) for samples down to approximately 0.3 mg. The semi-automated fluorination line is the fastest method available at present and enables a throughput of 74 samples/week.

  9. EXTRACTION OF BENTHIC COVER INFORMATION FROM VIDEO TOWS AND PHOTOGRAPHS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. T. L. Estomata

    2012-07-01

    Full Text Available Mapping benthic cover in deep waters comprises a very small proportion of studies in the field of research. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained using a multi-beam echo sounder (MBES) as complementary data with the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types other than coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps that had a higher overall accuracy, 93.78±0.85%, as compared to pixel-based methods that had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).

  10. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    Science.gov (United States)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters comprises a very small proportion of studies in the field of research. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained using a multi-beam echo sounder (MBES) as complementary data with the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types other than coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps that had a higher overall accuracy, 93.78±0.85%, as compared to pixel-based methods that had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
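
    The area rules quoted in both records above translate almost directly into code. The sketch below applies them to a toy segmentation using scikit-image region properties; the pixel thresholds come from the abstracts, while the mask itself, and the idea of deferring larger objects to spectral or texture rules, are illustrative assumptions.

        # Sketch: OBIA-style rule set on segmented objects, classifying by
        # area (fish <= 700 px, rubble 700-10,000 px) as described above.
        import numpy as np
        from skimage.measure import label, regionprops

        mask = np.zeros((300, 300), dtype=bool)
        mask[10:30, 10:30] = True       # 400 px   -> "fish" by the area rule
        mask[100:180, 100:140] = True   # 3200 px  -> "rubble" by the area rule
        mask[200:299, 50:250] = True    # 19800 px -> larger cover class

        for region in regionprops(label(mask)):
            if region.area <= 700:
                cls = "fish"
            elif region.area <= 10_000:
                cls = "rubble"
            else:
                cls = "other benthic cover"  # would need spectral/texture rules
            print(f"object of {region.area} px classified as {cls}")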

  11. Analysis and implementation of the Large Scale Video-on-Demand System

    CERN Document Server

    Kanrar, Soumen

    2012-01-01

    Next Generation Network (NGN) provides multimedia services over broadband-based networks, supporting high-definition TV (HDTV) and DVD-quality video-on-demand content. Video services are thus seen as merging three main areas: computing, communication, and broadcasting. The approach has numerous advantages, but further exploration of the large-scale deployment of video-on-demand systems is still needed because of economic and design constraints: full service provision requires significant initial investment. This paper presents estimates for different topologies, since a VOD system network requires efficient planning. The methodology investigates the network bandwidth requirements of a VOD system based on centralized servers and distributed local proxies. Network traffic models are developed to evaluate the VOD system's operational bandwidth requirements for these two network architectures. This paper presents an efficient estimation of the bandwidth requirement for ...

  12. An Automated High-Throughput Metabolic Stability Assay Using an Integrated High-Resolution Accurate Mass Method and Automated Data Analysis Software

    Science.gov (United States)

    Shah, Pranav; Kerns, Edward; Nguyen, Dac-Trung; Obach, R. Scott; Wang, Amy Q.; Zakharov, Alexey; McKew, John; Simeonov, Anton; Hop, Cornelis E. C. A.

    2016-01-01

    Advancement of in silico tools would be enabled by the availability of data for metabolic reaction rates and intrinsic clearance (CLint) of a diverse compound structure data set by specific metabolic enzymes. Our goal is to measure CLint for a large set of compounds with each major human cytochrome P450 (P450) isozyme. To achieve our goal, it is of utmost importance to develop an automated, robust, sensitive, high-throughput metabolic stability assay that can efficiently handle a large volume of compound sets. The substrate depletion method [in vitro half-life (t1/2) method] was chosen to determine CLint. The assay (384-well format) consisted of three parts: 1) a robotic system for incubation and sample cleanup; 2) two different integrated, ultraperformance liquid chromatography/mass spectrometry (UPLC/MS) platforms to determine the percent remaining of parent compound; and 3) an automated data analysis system. The CYP3A4 assay was evaluated using two long t1/2 compounds, carbamazepine and antipyrine (t1/2 > 30 minutes); one moderate t1/2 compound, ketoconazole (10 < t1/2 < 30 minutes); and two short t1/2 compounds, loperamide and buspirone (t1/2 < 10 minutes). Interday and intraday precision and accuracy of the assay were within the acceptable range (∼12%) for the linear range observed. Using this assay, CYP3A4 CLint and t1/2 values for more than 3000 compounds were measured. This high-throughput, automated, and robust assay allows for rapid metabolic stability screening of large compound sets and enables advanced computational modeling for individual human P450 isozymes. PMID:27417180
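
    In the substrate depletion method, the percent of parent compound remaining decays first order in time; t1/2 comes from the fitted depletion rate and scales to CLint. A minimal sketch of that calculation follows (Python; the microsomal protein concentration and the standard scaling formula are assumptions, not values stated in the abstract).

        import numpy as np

        def clint_from_depletion(time_min, pct_remaining,
                                 protein_mg_per_ml=0.5):
            """Fit first-order depletion; return (t1/2 [min],
            CLint [uL/min/mg protein]).

            Assumed scaling: CLint = (0.693 / t1/2) * 1000 / protein conc.
            """
            # Slope of ln(% remaining) vs time gives the depletion rate k.
            k = -np.polyfit(time_min, np.log(pct_remaining), 1)[0]
            t_half = np.log(2) / k
            clint = (np.log(2) / t_half) * 1000.0 / protein_mg_per_ml
            return t_half, clint

        # Example: sampling 0-30 min for a moderately cleared compound.
        t = np.array([0, 5, 10, 15, 20, 30], dtype=float)
        pct = 100.0 * np.exp(-0.05 * t)  # synthetic data, t1/2 ~ 13.9 min
        print(clint_from_depletion(t, pct))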

  13. An Analysis of Cultural Values of Chinese and Foreign Cities’ Publicity Videos

    Institute of Scientific and Technical Information of China (English)

    SHAN Meng-chen

    2015-01-01

    The trend of globalization has a profound impact on development across the whole world, and many international events are held by different countries. Publicity videos have become a significant tool for conveying the character of a city and attracting people from all over the world. At the same time, these videos reveal the different cultural values of China and foreign countries. Analysing these differing cultural values deepens the understanding of intercultural communication and is helpful for promoting China's cultural soft power.

  14. Guest Editorial: Analysis and Retrieval of Events/Actions and Workflows in Video Streams

    DEFF Research Database (Denmark)

    Doulamis, Anastasios; Doulamis, Nikolaos; Bertini, Marco

    2016-01-01

    activities, actions, and procedures have been the focus of the research community over recent years. This research area has a strong impact on many real-life applications such as service quality assurance, compliance with designed procedures in industrial plants, surveillance of people-dense areas (e.g., theme parks, critical public infrastructures), crisis management in public service areas (e.g., train stations, airports), security (detection of abnormal behaviors in surveillance videos), and semantic characterization and annotation of video streams in various domains (e.g., broadcast or user...

  15. Mathematical support for automated geometry analysis of lathe machining of oblique peakless round-nose tools

    Science.gov (United States)

    Filippov, A. V.; Tarasov, S. Yu; Podgornyh, O. A.; Shamarin, N. N.; Filippova, E. O.

    2017-01-01

    Automation of engineering processes requires developing the relevant mathematical support and computer software. Analysis of metal cutting kinematics and tool geometry is a key task at the preproduction stage. This paper focuses on a procedure for determining the geometry of lathe machining with oblique peakless round-nose tools using vector/matrix transformations. Unlike the traditional analytic description, this approach integrates readily into modern mathematical software packages, which is promising for automated control of the preproduction process. A kinematic criterion for applicable tool geometry has been developed from the results of this study, and the effect of tool blade inclination and curvature on the geometry-dependent process parameters was evaluated.
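
    As an illustration of the vector/matrix style of analysis (a generic sketch, not the authors' formulation; the angles and the choice of axes are assumed), the working inclination of a cutting edge can be obtained by composing elementary rotation matrices and measuring the transformed edge against the base plane.

        import numpy as np

        def rot_x(deg):
            """Rotation about x (e.g., tool-blade inclination)."""
            a = np.radians(deg)
            return np.array([[1, 0, 0],
                             [0, np.cos(a), -np.sin(a)],
                             [0, np.sin(a),  np.cos(a)]], dtype=float)

        def rot_z(deg):
            """Rotation about z (e.g., tool setting angle in the base plane)."""
            a = np.radians(deg)
            return np.array([[np.cos(a), -np.sin(a), 0],
                             [np.sin(a),  np.cos(a), 0],
                             [0, 0, 1]], dtype=float)

        # Nominal cutting-edge direction lying in the base (xy) plane.
        edge = np.array([1.0, 0.0, 0.0])

        # Apply a 10-degree setting rotation, then a 15-degree inclination.
        edge_work = rot_x(15.0) @ rot_z(10.0) @ edge

        # Working inclination = angle between the edge and the base plane.
        incl = np.degrees(np.arcsin(abs(edge_work[2]) / np.linalg.norm(edge_work)))
        print(f"working inclination angle: {incl:.2f} deg")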

  16. Selection of Filtration Methods in the Analysis of Motion of Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Dobrzańska Magdalena

    2016-08-01

    Full Text Available This article discusses issues related to route mapping and error correction in automated guided vehicle (AGV) movement. The nature and magnitude of the disturbances were determined from runs recorded in experimental studies. On the basis of this analysis, a number of numerical runs were generated that reproduce the runs obtainable from real vehicle movement, and the resulting data set was used for further research. The aim of the paper was to test selected digital filtering methods on this common data set and determine their effectiveness. The results of the simulation studies are presented, the effectiveness of the various methods is assessed, and conclusions are drawn on that basis.
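
    The abstract does not name the filters that were compared, so the sketch below only illustrates the kind of test it describes (Python; the moving-average and exponential filters, the noise level, and the RMSE score are all assumptions): a known trajectory is corrupted with noise, each filter runs on the same data set, and effectiveness is scored against ground truth.

        import numpy as np

        rng = np.random.default_rng(0)

        # Ground-truth 1-D AGV position on a straight segment, plus sensor noise.
        t = np.linspace(0.0, 10.0, 500)
        truth = 0.5 * t
        measured = truth + rng.normal(scale=0.05, size=t.size)

        def moving_average(x, window=15):
            return np.convolve(x, np.ones(window) / window, mode="same")

        def low_pass(x, alpha=0.15):
            """First-order exponential smoothing."""
            y = np.empty_like(x)
            y[0] = x[0]
            for i in range(1, len(x)):
                y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
            return y

        def rmse(est):
            return np.sqrt(np.mean((est - truth) ** 2))

        print(f"raw:            {rmse(measured):.4f}")
        print(f"moving average: {rmse(moving_average(measured)):.4f}")
        print(f"low-pass:       {rmse(low_pass(measured)):.4f}")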

  17. Detection and removal of ocular artifacts from EEG signals for an automated REM sleep analysis.

    Science.gov (United States)

    Betta, Monica; Gemignani, Angelo; Landi, Alberto; Laurino, Marco; Piaggi, Paolo; Menicucci, Danilo

    2013-01-01

    Rapid eye movements (REMs) are a prominent feature of REM sleep, and their distribution and time density over the night represent important physiological and clinical parameters. At the same time, REMs produce substantial distortions in the electroencephalographic (EEG) signals, which strongly affect the reliability of quantitative REM sleep studies. In this work a new procedure for complete, automated analysis of REM sleep is proposed, comprising both a REM detection algorithm and an ocular artifact removal system. The two steps, based on the wavelet transform and on adaptive filtering respectively, are fully integrated, and their performance is evaluated using simulated REM signals. Thanks to the integration with the detection algorithm, the proposed artifact removal system shows enhanced accuracy in recovering the true EEG signal compared to a system based on adaptive filtering alone. Finally, the artifact removal system is applied to physiological data, and an estimate of the actual distortion induced by REMs on EEG signals is provided.
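
    Adaptive filtering for ocular artifact removal conventionally treats the EOG channel as a noise reference and subtracts its filtered contribution from the EEG. A minimal LMS-based sketch follows (Python; the filter order, step size, and synthetic signals are assumptions, and the authors' integrated wavelet-based REM detector is not reproduced here).

        import numpy as np

        def lms_artifact_removal(eeg, eog, order=5, mu=0.01):
            """Subtract the EOG-correlated component from EEG via LMS.

            eog is the noise reference; the returned signal is the filter
            error, i.e., the EEG with the ocular artifact attenuated.
            """
            w = np.zeros(order)
            clean = np.zeros_like(eeg)
            for n in range(order, len(eeg)):
                x = eog[n - order:n][::-1]   # reference tap vector
                y = w @ x                     # estimated artifact sample
                e = eeg[n] - y                # cleaned EEG sample
                w += 2 * mu * e * x           # LMS weight update
                clean[n] = e
            return clean

        # Synthetic test: brain activity plus a scaled ocular artifact.
        rng = np.random.default_rng(1)
        n = 2000
        eog = rng.normal(size=n).cumsum() * 0.01          # slow eye drift
        brain = np.sin(2 * np.pi * 10 * np.arange(n) / 250)  # 10 Hz at 250 Hz
        eeg = brain + 0.8 * eog
        print(np.std(eeg - brain), np.std(lms_artifact_removal(eeg, eog) - brain))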

  18. Automation of C-terminal sequence analysis of 2D-PAGE separated proteins

    Directory of Open Access Journals (Sweden)

    P.P. Moerman

    2014-06-01

    Full Text Available Experimental assignment of protein termini remains essential to defining functional protein structure. Here, we report an improvement of a proteomic C-terminal sequence analysis method. The approach discriminates the C-terminal peptide in a CNBr digest, in which cleavage of Met-Xxx peptide bonds yields internal peptides ending in a homoserine lactone (hsl) derivative. pH-dependent partial opening of the lactone ring produces doublets for all internal peptides, so C-terminal peptides stand out as singlet peaks in MALDI-TOF MS; MS/MS is then used for their identification. We present a fully automated protocol established on a robotic liquid-handling station.
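
    Because opening the lactone ring adds one water (+18.011 Da), every internal peptide shows up as a peak pair separated by that mass, while a C-terminal peptide has no partner. The sketch below (Python; the peak list and mass tolerance are illustrative assumptions) flags singlet peaks as C-terminal candidates.

        HSL_RING_OPENING_DA = 18.011  # homoserine lactone -> homoserine (+H2O)

        def c_terminal_candidates(peak_masses, tol_da=0.02):
            """Return peaks with no +/-18.011 Da partner (singlets).

            Internal CNBr peptides appear as hsl/homoserine doublets
            separated by one water; unpaired peaks are C-terminal candidates.
            """
            singlets = []
            for m in peak_masses:
                has_partner = any(
                    abs(abs(m - other) - HSL_RING_OPENING_DA) <= tol_da
                    for other in peak_masses if other != m
                )
                if not has_partner:
                    singlets.append(m)
            return singlets

        # Example spectrum: two internal doublets and one singlet.
        peaks = [880.42, 898.43, 1210.60, 1228.61, 1502.71]
        print(c_terminal_candidates(peaks))  # [1502.71]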

  19. New strategies for medical data mining, part 3: automated workflow analysis and optimization.

    Science.gov (United States)

    Reiner, Bruce

    2011-02-01

    The practice of evidence-based medicine calls for the creation of "best practice" guidelines, leading to improved clinical outcomes. One of the primary factors limiting evidence-based medicine in radiology today is the relative paucity of standardized databases. The creation of standardized medical imaging databases offers the potential to enhance radiologist workflow and diagnostic accuracy through objective data-driven analytics, which can be categorized in accordance with specific variables relating to the individual examination, patient, provider, and technology being used. In addition to this "global" database analysis, "individual" radiologist workflow can be analyzed through the integration of electronic auditing tools into the PACS. The combination of these individual and global analyses can ultimately identify best practice patterns, which can be adapted to the individual attributes of end users and ultimately used in the creation of automated evidence-based medicine workflow templates.

  20. A Fully Automated and Robust Method to Incorporate Stamping Data in Crash, NVH and Durability Analysis

    Science.gov (United States)

    Palaniswamy, Hariharasudhan; Kanthadai, Narayan; Roy, Subir; Beauchesne, Erwan

    2011-08-01

    Crash, NVH (noise, vibration, harshness), and durability analysis are commonly deployed in structural CAE analysis for the mechanical design of components, especially in the automotive industry. Components manufactured by stamping constitute a major portion of the automotive structure. In CAE analysis they are modeled in a nominal state with uniform thickness and no residual stresses or strains. In reality, however, stamped components have non-uniformly distributed thickness and residual stresses and strains resulting from stamping. It is essential to consider this stamping information in CAE analysis to accurately model the behavior of sheet metal structures under different loading conditions. With the current emphasis on weight reduction by replacing conventional steels with aluminum and advanced high-strength steels, it is imperative to avoid over-design. To address this growing industry need, a highly automated and robust method has been integrated within Altair Hyperworks® to initialize sheet metal components in CAE models with stamping data. This paper demonstrates the new feature and the influence of stamping data on a full-car frontal crash analysis.
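
    Initializing a crash model with stamping results amounts to transferring a field such as thickness or plastic strain from the forming mesh onto the CAE mesh. A nearest-neighbor version of that mapping is sketched below (Python; the actual tool uses more sophisticated interpolation, and every name and array here is illustrative).

        import numpy as np

        def map_stamping_field(source_xyz, source_values, target_xyz):
            """Transfer a nodal field (e.g., thickness) from the stamping
            mesh to the crash-model mesh by nearest-neighbor lookup."""
            mapped = np.empty(len(target_xyz))
            for i, p in enumerate(target_xyz):
                j = np.argmin(np.sum((source_xyz - p) ** 2, axis=1))
                mapped[i] = source_values[j]
            return mapped

        # Toy example: thinning toward one end of a stamped strip.
        src = np.array([[0.0, 0, 0], [50.0, 0, 0], [100.0, 0, 0]])
        thickness = np.array([1.20, 1.05, 0.92])  # mm, non-uniform after forming
        tgt = np.array([[10.0, 0, 0], [95.0, 0, 0]])
        print(map_stamping_field(src, thickness, tgt))  # [1.20, 0.92]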