WorldWideScience

Sample records for automated video analysis

  1. An automated method for analysis of microcirculation videos for accurate assessment of tissue perfusion

    Directory of Open Access Journals (Sweden)

    Demir Sumeyra U

    2012-12-01

    Full Text Available Abstract Background Imaging of the human microcirculation in real-time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near real-time analysis of the videos produced by these imaging systems. An automated tool that can extract microvasculature information and quantitatively monitor changes in tissue perfusion might be invaluable as a diagnostic and therapeutic endpoint for resuscitation. Methods The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. There are two main parts in the algorithm: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video processing step to remove motion artifacts. In the vessel segmentation process, the microvascular network is extracted using multiple-level thresholding and pixel verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated throughout the frames to identify active blood vessels and capillaries with flow. Results Sublingual microcirculatory videos are recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Side-stream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings are analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm are compared for healthy baseline and hemorrhagic conditions. These results were compared to independently made FCD measurements using a well-known semi-automated method. Results of the fully automated algorithm demonstrated a significant decrease of FCD values. Similar, but more variable FCD
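
    As a rough illustration of the frame-differencing idea described in this abstract, the sketch below flags pixels whose intensity changes between consecutive frames and reports the fraction of such "active" pixels as a crude perfusion proxy. It assumes a stabilized video readable by OpenCV; the threshold values and file name are illustrative and are not taken from the paper.

    ```python
    # Minimal sketch: flag "active" vessel pixels via frame-to-frame intensity
    # differences and a fixed threshold, then report the active-pixel fraction
    # as a crude perfusion proxy. Thresholds and file name are illustrative.
    import cv2
    import numpy as np

    def active_pixel_fraction(path, diff_threshold=10):
        cap = cv2.VideoCapture(path)
        ok, prev = cap.read()
        if not ok:
            raise IOError("cannot read video: %s" % path)
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        activity = np.zeros(prev.shape, dtype=np.float32)
        n_frames = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # accumulate pixels whose intensity changed more than the threshold
            activity += (cv2.absdiff(gray, prev) > diff_threshold).astype(np.float32)
            prev = gray
            n_frames += 1
        cap.release()
        # pixels active in at least 10% of frame pairs are treated as flowing vessels
        active_mask = activity > 0.1 * max(n_frames, 1)
        return active_mask.mean()

    if __name__ == "__main__":
        print("active-pixel fraction:", active_pixel_fraction("sdf_clip.avi"))
    ```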

  2. Video and accelerometer-based motion analysis for automated surgical skills assessment.

    Science.gov (United States)

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan

    2018-03-01

    Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study for basic surgical skill assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features: approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform methods for surgical skills assessment. We report average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
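
    The approximate-entropy feature named above is a standard time-series measure; a minimal sketch of its computation is given below. The parameter choices (m = 2, tolerance r = 0.2 times the standard deviation) are common defaults, not values reported in the paper.

    ```python
    # Minimal sketch of approximate entropy (ApEn) for a 1-D motion time series.
    # ApEn = phi(m) - phi(m+1); lower values indicate more regular signals.
    import numpy as np

    def approximate_entropy(x, m=2, r_factor=0.2):
        x = np.asarray(x, dtype=float)
        n = len(x)
        r = r_factor * x.std()

        def phi(m):
            # all length-m templates of the series
            templates = np.array([x[i:i + m] for i in range(n - m + 1)])
            counts = []
            for t in templates:
                # Chebyshev distance between this template and all templates
                d = np.max(np.abs(templates - t), axis=1)
                counts.append(np.mean(d <= r))
            return np.mean(np.log(counts))

        return phi(m) - phi(m + 1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        regular = np.sin(np.linspace(0, 20 * np.pi, 500))
        noisy = rng.standard_normal(500)
        # a regular signal should yield a lower ApEn than white noise
        print(approximate_entropy(regular), approximate_entropy(noisy))
    ```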

  3. Automated video analysis of non-verbal communication in a medical setting

    Directory of Open Access Journals (Sweden)

    Yuval Hart

    2016-08-01

    Full Text Available Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and influences patients' health outcomes. Therefore, it is important to measure and analyze non-verbal communication in medical settings. Current approaches to measure non-verbal interactions in medicine employ coding by human raters. Such tools are labor-intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects interacting with an actor portraying a doctor, who performed one of two scripted interview scenarios: in one scenario the actor was focused on his computer and only briefly engaged with the subject. The second scenario included active listening by the doctor and heavy focus on the subject. We analyze the cross correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, that has been recently suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a wide range of medical settings.
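
    A minimal sketch of the kind of synchrony measure described here: the normalized cross-correlation between the two actors' kinetic-energy time series, evaluated at positive and negative lags. The series below are synthetic stand-ins; in the study they would come from per-person motion energy extracted from the video.

    ```python
    # Minimal sketch: lagged correlation between two kinetic-energy time series.
    # A peak away from lag 0 indicates a leader-follower delay in the dyad.
    import numpy as np

    def lagged_correlation(ke_doctor, ke_patient, max_lag=30):
        a = (ke_doctor - ke_doctor.mean()) / ke_doctor.std()
        b = (ke_patient - ke_patient.mean()) / ke_patient.std()
        lags = np.arange(-max_lag, max_lag + 1)
        corr = []
        for lag in lags:
            if lag < 0:
                c = np.mean(a[:lag] * b[-lag:])
            elif lag > 0:
                c = np.mean(a[lag:] * b[:-lag])
            else:
                c = np.mean(a * b)
            corr.append(c)
        return lags, np.array(corr)

    if __name__ == "__main__":
        t = np.linspace(0, 60, 1800)                 # one minute at 30 frames/s
        doctor = np.abs(np.sin(t)) + 0.1 * np.random.rand(t.size)
        patient = np.roll(doctor, 15)                # one series trails the other by 15 frames
        lags, corr = lagged_correlation(doctor, patient)
        print("peak correlation at lag", lags[np.argmax(corr)], "frames")
    ```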

  4. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measure non-verbal interactions in medicine employ coding by human raters. Such tools are labor-intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects interacting with an actor portraying a doctor. The actor interviewed the subjects following one of two scripted scenarios: in one scenario the actor showed minimal engagement with the subject. The second scenario included active listening by the doctor and attentiveness to the subject. We analyze the cross correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion termed jitter that has been recently suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings.

  5. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: Single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: Turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): The rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
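
    As a loose illustration of such quality screening (not ONC's actual measures), the sketch below computes two simple frame-level cues: luminance non-uniformity, estimated as the spread of mean brightness across image blocks, and global contrast, as the grey-level standard deviation. The file name and thresholds are illustrative assumptions.

    ```python
    # Minimal sketch: per-frame lighting-uniformity and contrast cues that could
    # flag low-quality deep-sea footage before running heavier analysis.
    import cv2
    import numpy as np

    def frame_quality(gray, blocks=8):
        h, w = gray.shape
        bh, bw = h // blocks, w // blocks
        block_means = [
            gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
            for i in range(blocks) for j in range(blocks)
        ]
        non_uniformity = np.std(block_means) / (np.mean(block_means) + 1e-6)
        contrast = gray.std()
        return non_uniformity, contrast

    cap = cv2.VideoCapture("deep_sea_clip.mp4")
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nu, c = frame_quality(gray)
        if nu > 0.5 or c < 20:       # illustrative thresholds
            pass                      # flag frame as low quality before analysis
        ok, frame = cap.read()
    cap.release()
    ```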

  6. Quantitative analysis of spider locomotion employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    The locomotor activity of adult specimens of the wolf spider Pardosa amentata was measured in an open-field setup, using computer-automated colour object video tracking. The x,y coordinates of the animal in the digitized image of the test arena were recorded three times per second during four consecutive 12-h periods, alternating between white and red (lambda > 600 nm) illumination. Male spiders were significantly more locomotor active than female spiders under both lighting conditions. They walked, on average, twice the distance of females, employed higher velocities, and spent less time in quiescence. Both male and female P. amentata were significantly less active in red light (simulated dark environment) than in white light. The results also revealed that P. amentata administers its walking velocity and periods of quiescence according to consistent distributions, which can be approximated...
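
    The activity statistics reported above (distance walked, velocity, time in quiescence) can be derived directly from the tracked x,y coordinates; a minimal sketch follows. The 3 Hz sampling rate matches the abstract, while the quiescence speed threshold is an illustrative assumption.

    ```python
    # Minimal sketch: locomotion statistics from x,y centroid coordinates sampled at 3 Hz.
    import numpy as np

    def locomotion_stats(xy, fs=3.0, quiescence_speed=0.5):
        """xy: array of shape (n, 2) in cm; fs: samples per second."""
        xy = np.asarray(xy, dtype=float)
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # cm per sample
        speed = steps * fs                                     # cm/s
        total_distance = steps.sum()
        moving = speed > quiescence_speed
        mean_velocity = speed[moving].mean() if np.any(moving) else 0.0
        time_quiescent = np.sum(~moving) / fs                  # seconds spent below threshold
        return total_distance, mean_velocity, time_quiescent

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        track = np.cumsum(rng.normal(0, 0.3, size=(12 * 3600 * 3, 2)), axis=0)  # 12 h at 3 Hz
        print(locomotion_stats(track))
    ```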

  7. USING STEREO VISION TO SUPPORT THE AUTOMATED ANALYSIS OF SURVEILLANCE VIDEOS

    Directory of Open Access Journals (Sweden)

    M. Menze

    2012-07-01

    Full Text Available Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the correspondingly favourable intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimates of people's positions and heights are obtained. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.

  8. Electroencephalography Amplitude Modulation Analysis for Automated Affective Tagging of Music Video Clips

    Science.gov (United States)

    Clerico, Andrea; Tiwari, Abhishek; Gupta, Rishabh; Jayaraman, Srinivasan; Falk, Tiago H.

    2018-01-01

    The quantity of music content is rapidly increasing and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from measured brain waves using non-invasive tools, such as electroencephalography (EEG). Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR), has also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6%, 20%, 8%, and 7% for arousal, valence, dominance and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features was shown to be particularly useful for arousal (feature-level fusion) and liking (decision-level fusion) prediction. Together, these findings show the importance of the proposed features to characterize human affective states during music clip watching. PMID:29367844

  9. Automated high-speed video analysis of the bubble dynamics in subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Maurus, Reinhold; Ilchenko, Volodymyr; Sattelmayer, Thomas [Technische Univ. Muenchen, Lehrstuhl fuer Thermodynamik, Garching (Germany)

    2004-04-01

    Subcooled flow boiling is a commonly applied technique for achieving efficient heat transfer. In the study, an experimental investigation in the nucleate boiling regime was performed for water circulating in a closed loop at atmospheric pressure. The test-section consists of a rectangular channel with a one side heated copper strip and a very good optical access. For the optical observation of the bubble behaviour the high-speed cinematography is used. Automated image processing and analysis algorithms developed by the authors were applied for a wide range of mass flow rates and heat fluxes in order to extract characteristic length and time scales of the bubbly layer during the boiling process. Using this methodology, a huge number of bubble cycles could be analysed. The structure of the developed algorithms for the detection of the bubble diameter, the bubble lifetime, the lifetime after the detachment process and the waiting time between two bubble cycles is described. Subsequently, the results from using these automated procedures are presented. A remarkable novelty is the presentation of all results as distribution functions. This is of physical importance because the commonly applied spatial and temporal averaging leads to a loss of information and, moreover, to an unjustified deterministic view of the boiling process, which exhibits in reality a very wide spread of bubble sizes and characteristic times. The results show that the mass flux dominates the temporal bubble behaviour. An increase of the liquid mass flux reveals a strong decrease of the bubble lifetime and waiting time. In contrast, the variation of the heat flux has a much smaller impact. It is shown in addition that the investigation of the bubble history using automated algorithms delivers novel information with respect to the bubble lift-off probability. (Author)

  10. Automated high-speed video analysis of the bubble dynamics in subcooled flow boiling

    International Nuclear Information System (INIS)

    Maurus, Reinhold; Ilchenko, Volodymyr; Sattelmayer, Thomas

    2004-01-01

    Subcooled flow boiling is a commonly applied technique for achieving efficient heat transfer. In the study, an experimental investigation in the nucleate boiling regime was performed for water circulating in a closed loop at atmospheric pressure. The test-section consists of a rectangular channel with a one side heated copper strip and a very good optical access. For the optical observation of the bubble behaviour the high-speed cinematography is used. Automated image processing and analysis algorithms developed by the authors were applied for a wide range of mass flow rates and heat fluxes in order to extract characteristic length and time scales of the bubbly layer during the boiling process. Using this methodology, a huge number of bubble cycles could be analysed. The structure of the developed algorithms for the detection of the bubble diameter, the bubble lifetime, the lifetime after the detachment process and the waiting time between two bubble cycles is described. Subsequently, the results from using these automated procedures are presented. A remarkable novelty is the presentation of all results as distribution functions. This is of physical importance because the commonly applied spatial and temporal averaging leads to a loss of information and, moreover, to an unjustified deterministic view of the boiling process, which exhibits in reality a very wide spread of bubble sizes and characteristic times. The results show that the mass flux dominates the temporal bubble behaviour. An increase of the liquid mass flux reveals a strong decrease of the bubble life- and waiting time. In contrast, the variation of the heat flux has a much smaller impact. It is shown in addition that the investigation of the bubble history using automated algorithms delivers novel information with respect to the bubble lift-off probability

  11. Automated Analysis of Facial Cues from Videos as a Potential Method for Differentiating Stress and Boredom of Players in Games

    Directory of Open Access Journals (Sweden)

    Fernando Bevilacqua

    2018-01-01

    Full Text Available Facial analysis is a promising approach to detect emotions of players unobtrusively; however, existing approaches are commonly evaluated in contexts not related to games, or their facial cues are derived from models not designed for the analysis of emotions during interaction with games. We present a method for automated analysis of facial cues from videos as a potential tool for detecting stress and boredom of players behaving naturally while playing games. Computer vision is used to automatically and unobtrusively extract 7 facial features aimed at detecting the activity of a set of facial muscles. Features are mainly based on the Euclidean distances between facial landmarks and do not rely on predefined facial expressions, training of a model, or the use of facial standards. An empirical evaluation was conducted on video recordings of an experiment involving games as emotion elicitation sources. Results show statistically significant differences in the values of facial features during boring and stressful periods of gameplay for 5 of the 7 features. We believe our approach is more user-tailored, convenient, and better suited for contexts involving games.
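
    A minimal sketch of landmark-distance features of the kind described here, using the common 68-point facial-landmark annotation (e.g. as produced by dlib). The two distances shown (mouth opening and brow-to-eye gap) are illustrative stand-ins, not the seven features defined in the paper.

    ```python
    # Minimal sketch: scale-normalised Euclidean distances between facial landmarks,
    # summarised per video as mean and variance across frames.
    import numpy as np

    def landmark_features(pts):
        """pts: array of shape (68, 2) with landmark coordinates for one frame."""
        pts = np.asarray(pts, dtype=float)
        # normalise by inter-ocular distance so features are scale-invariant
        iod = np.linalg.norm(pts[36] - pts[45])
        mouth_opening = np.linalg.norm(pts[62] - pts[66]) / iod
        brow_eye_gap = np.linalg.norm(pts[19] - pts[37]) / iod
        return mouth_opening, brow_eye_gap

    def summarize(frames_of_pts):
        """Per-video feature vector: mean and variance of each distance across frames."""
        vals = np.array([landmark_features(p) for p in frames_of_pts])
        return vals.mean(axis=0), vals.var(axis=0)
    ```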

  12. Automated tracking of whiskers in videos of head fixed rodents.

    Science.gov (United States)

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.

  13. Automated mapping of the intertidal beach from video images

    NARCIS (Netherlands)

    Uunk, L.; Uunk, L.; Wijnberg, Kathelijne Mariken; Morelissen, R.; Morelissen, R.

    2010-01-01

    This paper presents a fully automated procedure to derive the intertidal beach bathymetry on a daily basis from video images of low-sloping beaches that are characterised by the intermittent emergence of intertidal bars. Bathymetry data are obtained by automated and repeated mapping of shorelines

  14. Monochromatic blue light entrains diel activity cycles in the Norway lobster, Nephrops norvegicus (L.), as measured by automated video-image analysis

    Directory of Open Access Journals (Sweden)

    Jacopo Aguzzi

    2009-12-01

    Full Text Available There is growing interest in developing automated, non-invasive techniques for long-lasting, laboratory-based monitoring of behaviour in organisms from deep-water continental margins which are of ecological and commercial importance. We monitored the burrow emergence rhythms in the Norway lobster, Nephrops norvegicus, which included: (a) characterising the regulation of behavioural activity outside the burrow under monochromatic blue light-darkness (LD) cycles of 0.1 lx, recreating slope photic conditions (i.e. 200-300 m depth), and constant darkness (DD), which is necessary for the study of the circadian system; (b) testing the performance of a newly designed digital video-image analysis system for tracking locomotor activity. We used infrared USB web cameras and customised software (in Matlab 7.1) to acquire and process digital frames of eight animals at a rate of one frame per minute under consecutive photoperiod stages for nine days each: LD, DD, and LD (subdivided into two stages, LD1 and LD2, for analysis purposes). The automated analysis allowed the production of time series of locomotor activity based on movements of the animals' centroids. Data were studied with periodogram, waveform, and Fourier analyses. For the first time, we report robust diurnal burrow emergence rhythms during the LD period, which became weak in DD. Our results fit with field data accounting for midday peaks in catches at the depth of slopes. The comparison of the present locomotor pattern with those recorded at different light intensities clarifies the regulation of the clock of N. norvegicus at different depths.
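
    A minimal sketch of the periodogram step mentioned above: finding the dominant period in a locomotor-activity series sampled once per minute, here via a plain FFT periodogram. The synthetic series has a built-in 24-h rhythm; real input would be the per-minute centroid displacement of one animal.

    ```python
    # Minimal sketch: FFT periodogram of a per-minute activity series to find the
    # dominant period (in hours) of a diel rhythm.
    import numpy as np

    def dominant_period_hours(activity, samples_per_hour=60):
        a = np.asarray(activity, dtype=float)
        a = a - a.mean()
        power = np.abs(np.fft.rfft(a)) ** 2
        freqs = np.fft.rfftfreq(a.size, d=1.0 / samples_per_hour)  # cycles per hour
        k = np.argmax(power[1:]) + 1                               # skip the DC bin
        return 1.0 / freqs[k]

    if __name__ == "__main__":
        minutes = np.arange(9 * 24 * 60)                           # nine days, 1-min bins
        rhythm = 1 + np.cos(2 * np.pi * minutes / (24 * 60))       # 24-h cycle
        activity = rhythm + 0.3 * np.random.rand(minutes.size)
        print("dominant period: %.1f h" % dominant_period_hours(activity))
    ```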

  15. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of single and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms to analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  16. Gait Analysis by Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    2009-01-01

    and the calcaneus angle during gait. In the introductory phase of the project the task has been to select, purchase and draw up hardware, select and purchase software concerning video streaming and to develop special software concerning automated registration of the position of the foot during gait by Multi Video...

  17. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    Science.gov (United States)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization, change detection and 3D reconstruction. The on-going work is to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IED). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.

  18. Significantly improved precision of cell migration analysis in time-lapse video microscopy through use of a fully automated tracking system

    Directory of Open Access Journals (Sweden)

    Seufferlein Thomas

    2010-04-01

    Full Text Available Abstract Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells leads to mis-calculation of migration rates of up to 410%. In order to provide for objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automated multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures.
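
    For illustration, the migration measures whose manual estimation is discussed above can be computed from a tracked position sequence as sketched below; the input format and frame interval are assumptions, not the paper's specification.

    ```python
    # Minimal sketch: mean speed (path length over time) and directionality
    # (net displacement over path length) for one tracked cell.
    import numpy as np

    def migration_metrics(track_xy, minutes_per_frame=10.0):
        """track_xy: (n, 2) array of cell centre positions in micrometres."""
        track_xy = np.asarray(track_xy, dtype=float)
        steps = np.linalg.norm(np.diff(track_xy, axis=0), axis=1)
        path_length = steps.sum()
        net_displacement = np.linalg.norm(track_xy[-1] - track_xy[0])
        total_minutes = minutes_per_frame * (len(track_xy) - 1)
        speed = path_length / total_minutes                     # um / min
        directionality = net_displacement / path_length if path_length else 0.0
        return speed, directionality
    ```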

  19. High-throughput phenotyping of plant resistance to aphids by automated video tracking.

    Science.gov (United States)

    Kloth, Karen J; Ten Broeke, Cindy Jm; Thoen, Manus Pm; Hanhart-van den Brink, Marianne; Wiegers, Gerrie L; Krips, Olga E; Noldus, Lucas Pjj; Dicke, Marcel; Jongsma, Maarten A

    2015-01-01

    Piercing-sucking insects are major vectors of plant viruses causing significant yield losses in crops. Functional genomics of plant resistance to these insects would greatly benefit from the availability of high-throughput, quantitative phenotyping methods. We have developed an automated video tracking platform that quantifies aphid feeding behaviour on leaf discs to assess the level of plant resistance. Through the analysis of aphid movement, the start and duration of plant penetrations by aphids were estimated. As a case study, video tracking confirmed the near-complete resistance of lettuce cultivar 'Corbana' against Nasonovia ribisnigri (Mosely), biotype Nr:0, and revealed quantitative resistance in Arabidopsis accession Co-2 against Myzus persicae (Sulzer). The video tracking platform was benchmarked against Electrical Penetration Graph (EPG) recordings and aphid population development assays. The use of leaf discs instead of intact plants reduced the intensity of the resistance effect in video tracking, but sufficiently replicated experiments resulted in similar conclusions as EPG recordings and aphid population assays. One video tracking platform could screen 100 samples in parallel. Automated video tracking can be used to screen large plant populations for resistance to aphids and other piercing-sucking insects.

  20. Automated interactive video playback for studies of animal communication.

    Science.gov (United States)

    Butkowski, Trisha; Yan, Wei; Gray, Aaron M; Cui, Rongfeng; Verzijden, Machteld N; Rosenthal, Gil G

    2011-02-09

    Video playback is a widely-used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.

  1. Magnetic Braking: A Video Analysis

    Science.gov (United States)

    Molina-Bolivar, J. A.; Abella-Palacios, A. J.

    2012-01-01

    This paper presents a laboratory exercise that introduces students to the use of video analysis software and the Lenz's law demonstration. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in…

  2. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions. The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  3. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner.

  4. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we point out formally the relation between verifiability and accountability. We validate our definitions with the automatic verification of three protocols: a secure exam protocol, Google’s Certificate Transparency, and an improved version of Bingo Voting. We find through automated verification that all three protocols satisfy verifiability while only the first two protocols meet accountability.

  5. Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-25

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...

  6. Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.

    Science.gov (United States)

    2009-12-01

    The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...

  7. Automated video feature extraction : workshop summary report October 10-11 2012.

    Science.gov (United States)

    2012-12-01

    This report summarizes a 2-day workshop on automated video feature extraction. Discussion focused on the Naturalistic Driving Study, funded by the second Strategic Highway Research Program, and also involved the companion roadway inventory dataset.

  8. Sunglass detection method for automation of video surveillance system

    Science.gov (United States)

    Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad

    2018-04-01

    Wearing sunglasses to hide the face from surveillance cameras is a common activity in criminal incidents. Therefore, sunglass detection from surveillance video has become a demanding issue in the automation of security systems. In this paper we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature using facial height and width has been employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio. A threshold value of the covered-area percentage is used to classify a glass-wearing face. Two different types of glasses have been considered, i.e. eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses in two different illumination conditions, i.e. room illumination as well as in the presence of sunlight. In addition, due to the multi-level checking in the facial region, this method has 100% accuracy in detecting sunglasses. However, in an exceptional case where fabric surrounding the face has a similar color to skin, the correct detection rate was found to be 93.33% for eyeglasses.
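
    A minimal sketch of the covered-area check described in this abstract: within a detected face region, count non-skin pixels in the upper (eye) band and compare the covered fraction against a threshold. The skin-colour bounds and the 50% threshold are illustrative assumptions, not the paper's values.

    ```python
    # Minimal sketch: estimate how much of the eye region of a face crop is not
    # skin-coloured, and flag dark glasses if the covered fraction is high.
    import cv2
    import numpy as np

    def eye_region_covered_fraction(face_bgr):
        h, w = face_bgr.shape[:2]
        eye_region = face_bgr[int(0.2 * h):int(0.5 * h), :]     # upper-middle band of the face
        ycrcb = cv2.cvtColor(eye_region, cv2.COLOR_BGR2YCrCb)
        # commonly used Cr/Cb bounds for skin; pixels outside them count as "covered"
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        return 1.0 - cv2.countNonZero(skin) / float(skin.size)

    def wearing_dark_glasses(face_bgr, threshold=0.5):
        return eye_region_covered_fraction(face_bgr) > threshold
    ```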

  9. The role of optical flow in automated quality assessment of full-motion video

    Science.gov (United States)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. These can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with their own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art approaches) on motion-based automated video quality assessment algorithms.
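
    As an illustration of an optical-flow cue for quality assessment (one possible choice, not the authors' specific pipeline), the sketch below computes dense Farneback flow between consecutive frames with OpenCV and summarises each frame pair by its mean flow magnitude; erratic or near-zero flow statistics on footage known to contain motion can indicate unusable video. The file name is illustrative.

    ```python
    # Minimal sketch: dense optical flow (Farneback) as a per-frame motion statistic.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("fmv_clip.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        magnitudes.append(mag.mean())
        prev_gray = gray
    cap.release()
    print("mean flow magnitude per frame:", np.mean(magnitudes))
    ```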

  10. Automation of pharmaceutical warehouse using groups robots with remote climate control and video surveillance

    OpenAIRE

    Zhuravska, I. M.; Popel, M. I.

    2015-01-01

    In this paper, we present a comprehensive solution for automating a pharmaceutical warehouse, including the implementation of climate control, video surveillance with remote access to video, and robotic selection of medicines with optimization of the robot motion. We describe all the elements of the local area network (LAN) necessary to solve these problems.

  11. Automated Identification and Reconstruction of YouTube Video Access

    Directory of Open Access Journals (Sweden)

    Jonathan Patterson

    2012-06-01

    Full Text Available YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user’s interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.

  12. Are signalized intersections with cycle tracks safer? A case-control study based on automated surrogate safety analysis using video data.

    Science.gov (United States)

    Zangenehpour, Sohail; Strauss, Jillian; Miranda-Moreno, Luis F; Saunier, Nicolas

    2016-01-01

    Cities in North America have been building bicycle infrastructure, in particular cycle tracks, with the intention of promoting urban cycling and improving cyclist safety. These facilities have been built and expanded but very little research has been done to investigate the safety impacts of cycle tracks, in particular at intersections, where cyclists interact with turning motor-vehicles. Some safety research has looked at injury data and most have reached the conclusion that cycle tracks have positive effects of cyclist safety. The objective of this work is to investigate the safety effects of cycle tracks at signalized intersections using a case-control study. For this purpose, a video-based method is proposed for analyzing the post-encroachment time as a surrogate measure of the severity of the interactions between cyclists and turning vehicles travelling in the same direction. Using the city of Montreal as the case study, a sample of intersections with and without cycle tracks on the right and left sides of the road were carefully selected accounting for intersection geometry and traffic volumes. More than 90h of video were collected from 23 intersections and processed to obtain cyclist and motor-vehicle trajectories and interactions. After cyclist and motor-vehicle interactions were defined, ordered logit models with random effects were developed to evaluate the safety effects of cycle tracks at intersections. Based on the extracted data from the recorded videos, it was found that intersection approaches with cycle tracks on the right are safer than intersection approaches with no cycle track. However, intersections with cycle tracks on the left compared to no cycle tracks seem to be significantly safer. Results also identify that the likelihood of a cyclist being involved in a dangerous interaction increases with increasing turning vehicle flow and decreases as the size of the cyclist group arriving at the intersection increases. The results highlight the
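
    A minimal sketch of the post-encroachment-time (PET) measure used above: the elapsed time between the first road user leaving a conflict zone and the second one entering it. Trajectories are assumed to be (t, x, y) samples already extracted from video, and the conflict zone is simplified to an axis-aligned box.

    ```python
    # Minimal sketch: post-encroachment time between a cyclist and a turning vehicle
    # over a simple rectangular conflict zone.
    import numpy as np

    def times_in_zone(traj, zone):
        """traj: (n, 3) array of (t, x, y); zone: (xmin, xmax, ymin, ymax)."""
        t, x, y = np.asarray(traj, dtype=float).T
        inside = (x >= zone[0]) & (x <= zone[1]) & (y >= zone[2]) & (y <= zone[3])
        return t[inside]

    def post_encroachment_time(cyclist, vehicle, zone):
        tc, tv = times_in_zone(cyclist, zone), times_in_zone(vehicle, zone)
        if tc.size == 0 or tv.size == 0:
            return None                      # no interaction over this zone
        if tc.max() <= tv.min():             # cyclist clears the zone first
            return tv.min() - tc.max()
        if tv.max() <= tc.min():             # vehicle clears the zone first
            return tc.min() - tv.max()
        return 0.0                           # simultaneous occupancy: a conflict
    ```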

  13. Video analysis of rolling cylinders

    Science.gov (United States)

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-03-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s⁻¹, and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined plane without slipping. From the experiment, the acceleration does not depend on the cylinder mass, as indicated by the theory. For the wood-steel surface, we found that the coefficient of static friction was equal to 0.131 and the critical angle for the solid cylinder was 21.45°. The critical angle for the hollow cylinder depends on the inner and outer radii of the cylinder. Motion paths of a point on the hollow cylinder at small and large angles were shown to elucidate the pure rolling condition. Finally, we demonstrated that total mechanical energy was conserved during the pure rolling motion. This confirms that work done by the friction force is zero. We will use these results to design an interactive lecture demonstration on rolling without slipping.
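
    The critical angle quoted above follows from the no-slip condition: rolling without slipping requires a friction force f = m g sin(theta) * I / (I + m r^2), so slipping begins when tan(theta_c) = mu * (1 + m r^2 / I). The short check below reproduces the reported 21.45° for a solid cylinder with mu = 0.131; the hollow-cylinder radii are illustrative.

    ```python
    # Minimal check of the no-slip critical angle for solid and hollow cylinders.
    import math

    def critical_angle_solid(mu):
        # solid cylinder: I = m*r**2/2, so tan(theta_c) = 3*mu
        return math.degrees(math.atan(3.0 * mu))

    def critical_angle_hollow(mu, r_inner, r_outer):
        # thick-walled cylinder: I = m*(r_inner**2 + r_outer**2)/2, rolling on r_outer
        ratio = 2.0 * r_outer**2 / (r_inner**2 + r_outer**2)
        return math.degrees(math.atan(mu * (1.0 + ratio)))

    print(critical_angle_solid(0.131))             # ~21.5 degrees, matching the abstract
    print(critical_angle_hollow(0.131, 0.8, 1.0))  # smaller than the solid-cylinder angle
    ```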

  14. Automated benthic counting of living and non-living components in Ngedarrak Reef, Palau via subsurface underwater video.

    Science.gov (United States)

    Marcos, Ma Shiela Angeli; David, Laura; Peñaflor, Eileen; Ticzon, Victor; Soriano, Maricor

    2008-10-01

    We introduce an automated benthic counting system for rapid reef assessment that applies computer vision to subsurface underwater reef video. Video acquisition was executed by lowering a submersible bullet-type camera from a motor boat while moving across the reef area. A GPS and echo sounder were linked to the video recorder to record bathymetry and location points. Analysis of living and non-living components was implemented through image color and texture feature extraction from the reef video frames and classification via Linear Discriminant Analysis. Compared to common rapid reef assessment protocols, our system can perform fine-scale data acquisition and processing in one day. Reef video was acquired in Ngedarrak Reef, Koror, Republic of Palau. Overall classification success ranges from 60% to 77% for depths of 1 to 3 m. The development of an automated rapid reef classification system is most promising for reef studies that need fast and frequent data acquisition of percent cover of living and nonliving components.
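
    A minimal sketch of the classification stage described above: colour and texture features per image patch, classified with Linear Discriminant Analysis via scikit-learn. The specific features (mean HSV channels, grey-level standard deviation, Laplacian energy) are illustrative stand-ins for the paper's feature set.

    ```python
    # Minimal sketch: patch-level colour/texture features + LDA classification,
    # with percent living cover as the fraction of patches predicted "living".
    import cv2
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def patch_features(patch_bgr):
        hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
        gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        return [hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean(),
                gray.std(), np.abs(lap).mean()]

    def train_classifier(patches, labels):
        """patches: list of BGR patches; labels: e.g. 'living' / 'non-living'."""
        X = np.array([patch_features(p) for p in patches])
        return LinearDiscriminantAnalysis().fit(X, labels)

    def percent_living(clf, frame_patches):
        preds = clf.predict(np.array([patch_features(p) for p in frame_patches]))
        return 100.0 * np.mean(preds == "living")
    ```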

  15. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.

  16. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Directory of Open Access Journals (Sweden)

    Fabrice P Cordelières

    Full Text Available Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate or an inappropriate acquisition of migratory capacities can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is user-friendly software. Compared to manual tracking, it saves a considerable amount of time to generate and analyze the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major advantage of iTrack4U is its standardization and the lack of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.

  17. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Science.gov (United States)

    Cordelières, Fabrice P; Petit, Valérie; Kumasaka, Mayuko; Debeir, Olivier; Letort, Véronique; Gallagher, Stuart J; Larue, Lionel

    2013-01-01

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate or an inappropriate acquisition of migratory capacities can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is user-friendly software. Compared to manual tracking, it saves a considerable amount of time to generate and analyze the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major advantage of iTrack4U is its standardization and the lack of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.

  18. An automated activation analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Hensley, W.K.; Denton, M.M.; Garcia, S.R.

    1982-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey will be described. (author)

  19. Automated activation-analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Hensley, W.K.; Denton, M.M.; Garcia, S.R.

    1981-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  20. Automated activation-analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Garcia, S.R.; Denton, M.M.

    1982-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day

  1. Testing music selection automation possibilities for video ads

    Directory of Open Access Journals (Sweden)

    Wiesener Oliver

    2017-09-01

    Full Text Available The importance of video ads on social media platforms can be measured by the number of views. For instance, Samsung’s commercial ad for one of its new smartphones reached more than 46 million viewers on YouTube. Video ads address users both visually and aurally. Often, users’ visual attention is occupied by other screens rather than the screen showing the video ad, which is referred to as the second-screen syndrome. Therefore, the audio channel seems to gain in importance. To win back the visual attention of users who are distracted by other visual impulses, it appears reasonable to adapt the music to the target group. Additionally, it appears useful to adapt the music to the content of the video. Thus, the overall success of a video ad could be improved by increasing the attention of the users. Humans typically decide which music is to be used in a video ad. If there is a correlation between music, products and target groups, a digitization of the music selection process appears to be possible. Since the digitization progress in the music sector is currently focused mainly on music composing, this article strives to take a first step towards the digitization of music selection.

  2. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers belong to the Electric Power System equipment whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to clear a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, about their failures, testing and repair, advanced software developments of computer technologies, and a specific automated information system (AIS). A new AIS with the AISV logo was developed at the "Reliability of power equipment" department of the AzRDSI of Energy. The main features of AISV are: to provide data security and database accuracy; to carry out systematic control of breakers' conformity with operating conditions; to estimate the value of individual reliability and the characteristics of its change for a given combination of characteristics; and to provide the personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for its realization.

  3. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  4. Automated Analysis of Corpora Callosa

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.

    2003-01-01

    This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study shows that fully automated analysis and segmentation of the corpus callosum are feasible.

  5. Automated surgical step recognition in normalized cataract surgery videos.

    Science.gov (United States)

    Charrière, Katia; Quellec, Gwénolé; Lamard, Mathieu; Coatrieux, Gouenou; Cochener, Béatrice; Cazuguel, Guy

    2014-01-01

    Huge amounts of surgical data are recorded during video-monitored surgery. Content-based video retrieval systems intend to reuse those data for computer-aided surgery. In this paper, we focus on real-time recognition of cataract surgery steps: the goal is to retrieve from a database surgery videos that were recorded during the same surgery step. The proposed system relies on motion features for video characterization. Motion features are usually impacted by eye motion or zoom level variations, which are not necessarily relevant for surgery step recognition. Those problems certainly limit the performance of the retrieval system. We therefore propose to refine motion feature extraction by applying pre-processing steps based on a novel pupil center and scale tracking method. Those pre-processing steps are evaluated for two different motion features. In this paper, a similarity measure adapted from Piciarelli's video surveillance system is evaluated for the first time on a surgery dataset. This similarity measure provides good results, and for both motion features the proposed pre-processing steps improved the retrieval performance of the system significantly.
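
    The motion features referred to above can be illustrated with dense optical flow. The following is a minimal sketch, not the authors' pipeline: it assumes OpenCV is available and that a local file named surgery.avi exists, and it summarizes Farnebäck flow into a few per-frame statistics.

    ```python
    # Illustrative sketch only: per-frame global motion statistics from dense optical
    # flow. The file name "surgery.avi" is an assumption; this is not the paper's method.
    import cv2
    import numpy as np

    def global_motion_features(path="surgery.avi"):
        cap = cv2.VideoCapture(path)
        ok, prev = cap.read()
        if not ok:
            raise IOError(f"cannot read {path}")
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        features = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Mean/std of flow magnitude and mean direction as crude motion descriptors.
            features.append((float(mag.mean()), float(mag.std()), float(ang.mean())))
            prev_gray = gray
        cap.release()
        return np.array(features)
    ```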

  6. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Full Text Available Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is a comprehensive and well-established open source software suite capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command line interface, as well as a set of developer libraries that can be incorporated into applications.
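
    As a hedged illustration of the kind of programmable batch processing mentioned above, the sketch below drives the ffmpeg command line from Python; the directory names and encoding settings are assumptions for demonstration, not the article's workflow.

    ```python
    # Batch-transcode files to MP4 (H.264 + AAC) by invoking the ffmpeg CLI.
    # Directory names and codec settings are illustrative assumptions.
    import pathlib
    import subprocess

    def transcode_to_mp4(src_dir="incoming", dst_dir="access_copies"):
        out = pathlib.Path(dst_dir)
        out.mkdir(parents=True, exist_ok=True)
        for src in sorted(pathlib.Path(src_dir).glob("*.*")):
            dst = out / (src.stem + ".mp4")
            cmd = [
                "ffmpeg", "-y", "-i", str(src),
                "-c:v", "libx264", "-crf", "23", "-preset", "medium",
                "-c:a", "aac", "-b:a", "128k",
                str(dst),
            ]
            subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        transcode_to_mp4()
    ```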

  7. Automated method of processing video data from track detectors

    Science.gov (United States)

    Aleksandrov, A. B.; Goncharova, L. A.; Davydov, D. A.; Publichenko, P. A.; Roganova, T. M.; Polukhina, N. G.; Feinberg, E. L.

    2007-10-01

    New automated methods significantly simplify and accelerate the processing of data from emulsion detectors. In addition to acceleration, automation of measurements allows large files of experimental data to be processed and their statistics to be made sufficient. It also gives impetus to the development of projects for new experiments with large-volume targets and emulsions and large-area solid-state track detectors. In this regard, the problem of increasing the number of scientists with the required level of training, capable of operating automated technical equipment of this class, becomes urgent. Every year, ten Moscow students master the new methods while working at the P. N. Lebedev Institute of Physics of the Russian Academy of Sciences with the PAVIKOM fully automated measuring complex [1-3]. Most students now engaged in high-energy physics gain a notion of only outdated manual methods of processing data from track detectors. In 2005, a new practical exercise on determining the energy of neutrons transmitted through a nuclear emulsion was prepared on the basis of the PAVIKOM complex and physical experimental work of the Physical Department of Moscow State University. This practical exercise makes it possible to acquaint students with the initial skills used in automated processing of data from track detectors and can be included in the educational process for students of physics departments.

  8. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on the Internet, automatic video analysis has gained a lot of attention from academia as well as from companies such as Facebook, Twitter and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  9. Automating Commercial Video Game Development using Computational Intelligence

    OpenAIRE

    Tse G. Tan; Jason Teo; Patricia Anthony

    2011-01-01

    Problem statement: The retail sales of computer and video games have grown enormously during the last few years, not just in the United States (US) but all over the world. This is the reason a lot of game developers and academic researchers have focused on game-related technologies, such as graphics, audio, physics and Artificial Intelligence (AI), with the goal of creating newer and more fun games. In recent years, there has been an increasing interest in game AI for pro...

  10. Automated Video-Based Traffic Count Analysis.

    Science.gov (United States)

    2016-01-01

    The goal of this effort has been to develop techniques that could be applied to the detection and tracking of vehicles in overhead footage of intersections. To that end we have developed and published techniques for vehicle tracking based on dete...

  11. RotoTexture: automated tools for texturing raw video.

    Science.gov (United States)

    Fang, Hui; Hart, John C

    2006-01-01

    We propose a video editing system that allows a user to apply a time-coherent texture to a surface depicted in the raw video from a single uncalibrated camera, including the surface texture mapping of a texture image and the surface texture synthesis from a texture swatch. Our system avoids the construction of a 3D shape model and instead uses the recovered normal field to deform the texture so that it plausibly adheres to the undulations of the depicted surface. The texture mapping method uses the nonlinear least-squares optimization of a spring model to control the behavior of the texture image as it is deformed to match the evolving normal field through the video. The texture synthesis method uses a coarse optical flow to advect clusters of pixels corresponding to patches of similarly oriented surface points. These clusters are organized into a minimum advection tree to account for the dynamic visibility of clusters. We take a rather crude approach to normal recovery and optical flow estimation, yet the results are robust and plausible for nearly diffuse surfaces such as faces and t-shirts.

  12. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2005-01-01

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many) instances of a scenario. The tool is based on control flow analysis of the process calculus LySa and is applied to the Bauer, Berson, and Feiertag protocol, where it reveals a previously undocumented problem, which occurs in some scenarios but not in others.

  13. A system for endobronchial video analysis

    Science.gov (United States)

    Byrnes, Patrick D.; Higgins, William E.

    2017-03-01

    Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.

  14. Video micro analysis in music therapy research

    DEFF Research Database (Denmark)

    Holck, Ulla; Oldfield, Amelia; Plahl, Christine

    2004-01-01

    Three music therapy researchers from three different countries who have recently completed their PhD theses will each briefly discuss the role of video analysis in their investigations. All three of these research projects have involved music therapy work with children, some of whom were...... and qualitative approaches to data collection. In addition, participants will be encouraged to reflect on what types of knowledge can be gained from video analyses and to explore the general relevance of video analysis in music therapy research....

  15. Video content analysis of surgical procedures.

    Science.gov (United States)

    Loukas, Constantinos

    2018-02-01

    In addition to its therapeutic benefits, minimally invasive surgery offers the potential for video recording of the operation. The videos may be archived and used later for reasons such as cognitive training, skills assessment, and workflow analysis. Methods from the major field of video content analysis and representation are increasingly applied in the surgical domain. In this paper, we review recent developments and analyze future directions in the field of content-based video analysis of surgical operations. The review was obtained from PubMed and Google Scholar search on combinations of the following keywords: 'surgery', 'video', 'phase', 'task', 'skills', 'event', 'shot', 'analysis', 'retrieval', 'detection', 'classification', and 'recognition'. The collected articles were categorized and reviewed based on the technical goal sought, type of surgery performed, and structure of the operation. A total of 81 articles were included. The publication activity is constantly increasing; more than 50% of these articles were published in the last 3 years. Significant research has been performed for video task detection and retrieval in eye surgery. In endoscopic surgery, the research activity is more diverse: gesture/task classification, skills assessment, tool type recognition, shot/event detection and retrieval. Recent works employ deep neural networks for phase and tool recognition as well as shot detection. Content-based video analysis of surgical operations is a rapidly expanding field. Several future prospects for research exist including, inter alia, shot boundary detection, keyframe extraction, video summarization, pattern discovery, and video annotation. The development of publicly available benchmark datasets to evaluate and compare task-specific algorithms is essential.

  16. Automated analysis of gastric emptying

    International Nuclear Information System (INIS)

    Abutaleb, A.; Frey, D.; Spicer, K.; Spivey, M.; Buckles, D.

    1986-01-01

    The authors devised a novel method to automate the analysis of nuclear gastric emptying studies. Many previous methods have been used to measure gastric emptying, but they are cumbersome and require continuing intervention by the operator. Two specific problems that occur are related to patient movement between images and changes in the location of the radioactive material within the stomach. Their method can be used with either dual or single phase studies. For dual phase studies the authors use In-111 labeled water and Tc-99m SC (sulfur colloid) labeled scrambled eggs. For single phase studies either the liquid or solid phase material is used.

  17. Automated analysis of complex data

    Science.gov (United States)

    Saintamant, Robert; Cohen, Paul R.

    1994-01-01

    We have examined some of the issues involved in automating exploratory data analysis, in particular the tradeoff between control and opportunism. We have proposed an opportunistic planning solution for this tradeoff, and we have implemented a prototype, Igor, to test the approach. Our experience in developing Igor was surprisingly smooth. In contrast to earlier versions that relied on rule representation, it was straightforward to increment Igor's knowledge base without causing the search space to explode. The planning representation appears to be both general and powerful, with high level strategic knowledge provided by goals and plans, and the hooks for domain-specific knowledge are provided by monitors and focusing heuristics.

  18. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help to assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in the criminal investigation utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as of reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfills formal aspects of the image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation of scientists from the Institute of Criminalistics, National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Glyph-Based Video Visualization for Semen Analysis

    KAUST Repository

    Duffy, Brian

    2015-08-01

    © 2013 IEEE. The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.

  20. Joint Wavelet Video Denoising and Motion Activity Detection in Multimodal Human Activity Analysis: Application to Video-Assisted Bioacoustic/Psychophysiological Monitoring

    Directory of Open Access Journals (Sweden)

    G. V. Papanikolaou

    2007-12-01

    Full Text Available The current work focuses on the design and implementation of an indoor surveillance application for long-term automated analysis of human activity in a video-assisted biomedical monitoring system. Video processing is necessary to overcome noise-related problems caused by suboptimal video capturing conditions, due to poor lighting or even complete darkness during overnight recordings. Modified wavelet-domain spatiotemporal Wiener filtering and motion-detection algorithms are employed to facilitate video enhancement, motion-activity-based indexing and summarization. Structural aspects for validation of the motion detection results are also used. The proposed system has already been deployed in the monitoring of long-term abdominal sounds, for surveillance automation, motion-artefact detection and connection with other psychophysiological parameters. However, it can be applied to any video-assisted biomedical monitoring or other surveillance application with similar demands.
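
    A minimal sketch of wavelet-domain denoising is given below; it uses per-frame soft thresholding with PyWavelets as a simpler stand-in for the modified spatiotemporal Wiener filter described in the record, and it assumes frames are supplied as grayscale NumPy arrays.

    ```python
    # Minimal sketch, NOT the paper's modified Wiener filter: per-frame 2-D wavelet
    # soft thresholding as a simple stand-in for wavelet-domain video denoising.
    import numpy as np
    import pywt

    def denoise_frame(frame, wavelet="db2", level=2):
        coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
        # Robust noise estimate from the finest diagonal detail band (MAD / 0.6745).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(frame.size))   # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)
    ```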

  1. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Havlůj, F.; Hejzlar, J.; Vočka, R.

    2013-01-01

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify, that all the necessary calculations were performed. Such a procedure involves many steps and is very time consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces his task to defining fuel pin types, enrichments, assembly maps and operational parameters all through a very nice and user-friendly GUI. The second part in reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable html format. Using this set of tools for reload safety analysis simplifies

  2. Integrating Online and Offline Three-Dimensional Deep Learning for Automated Polyp Detection in Colonoscopy Videos.

    Science.gov (United States)

    Lequan Yu; Hao Chen; Qi Dou; Jing Qin; Pheng Ann Heng

    2017-01-01

    Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way for colorectal cancer prevention and diagnosis. Traditional manual screening is time consuming, operator dependent, and error prone; hence, an automated detection approach is highly demanded in clinical practice. However, automated polyp detection is very challenging due to high intraclass variations in polyp size, color, shape, and texture, and low interclass variations between polyps and hard mimics. In this paper, we propose a novel offline and online three-dimensional (3-D) deep learning integration framework by leveraging the 3-D fully convolutional network (3D-FCN) to tackle this challenging problem. Compared with previous methods employing hand-crafted features or a 2-D convolutional neural network, the 3D-FCN is capable of learning more representative spatio-temporal features from colonoscopy videos, and hence has more powerful discrimination capability. More importantly, we propose a novel online learning scheme to deal with the problem of limited training data by harnessing the specific information of an input video in the learning process. We integrate offline and online learning to effectively reduce the number of false positives generated by the offline network and further improve the detection performance. Extensive experiments on the dataset of the MICCAI 2015 Challenge on Polyp Detection demonstrated the better performance of our method when compared with other competitors.
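
    For orientation only, a toy 3-D fully convolutional network over short video clips is sketched below in PyTorch; the layer counts and channel sizes are invented for illustration and do not reproduce the 3D-FCN described in the paper.

    ```python
    # Toy 3-D fully convolutional network over clips of shape (N, C, T, H, W).
    # Architecture details are illustrative assumptions, not the paper's 3D-FCN.
    import torch
    import torch.nn as nn

    class Tiny3DFCN(nn.Module):
        def __init__(self, in_ch=3, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
            # A 1x1x1 convolution keeps the network fully convolutional and yields a
            # coarse spatio-temporal score map rather than a single clip-level label.
            self.classifier = nn.Conv3d(32, n_classes, kernel_size=1)

        def forward(self, clip):
            return self.classifier(self.features(clip))

    if __name__ == "__main__":
        scores = Tiny3DFCN()(torch.randn(1, 3, 16, 64, 64))
        print(scores.shape)   # torch.Size([1, 2, 4, 16, 16])
    ```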

  3. Autoradiography and automated image analysis

    International Nuclear Information System (INIS)

    Vardy, P.H.; Willard, A.G.

    1982-01-01

    Limitations with automated image analysis and the solution of problems encountered are discussed. With transmitted light, unstained plastic sections with planar profiles should be used. Stains potentiate signal so that television registers grains as falsely larger areas of low light intensity. Unfocussed grains in paraffin sections will not be seen by image analysers due to change in darkness and size. With incident illumination, the use of crossed polars, oil objectives and an oil filled light trap continuous with the base of the slide will reduce glare. However this procedure so enormously attenuates the light reflected by silver grains, that detection may be impossible. Autoradiographs should then be photographed and the negative images of silver grains on film analysed automatically using transmitted light

  4. Parts-based detection of AK-47s for forensic video analysis

    OpenAIRE

    Jones, Justin

    2010-01-01

    Approved for public release; distribution is unlimited. Law enforcement, military personnel, and forensic analysts are increasingly reliant on imaging systems to perform in a hostile environment and require a robust method to efficiently locate objects of interest in videos and still images. Current approaches require a full-time operator to monitor a surveillance video or to sift a hard drive for suspicious content. In this thesis, we demonstrate the effectiveness of automated analysis tools...

  5. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  6. An Ethnographic Approach to Video Analysis

    DEFF Research Database (Denmark)

    Holck, Ulla

    2007-01-01

    a short introduction to the ethnographic approach, the workshop participants will have a chance to try out the method. First through a common exercise and then applied to video recordings of music therapy with children with severe communicative limitations. Focus will be on patterns of interaction......, followed by a discussion of their significance for the therapeutic interaction. Literature: Holck, U, Oldfield, A. and Plahl, C. (2005) Video Micro Analysis in Music Therapy Research, a Research Workshop. In: Aldridge, D., Fachner, J. & Erkkilä, J. (Eds) Many Faces of Music Therapy - Proceedings of the 6th...

  7. Video Game Characters. Theory and Analysis

    Directory of Open Access Journals (Sweden)

    Felix Schröter

    2014-06-01

    Full Text Available This essay develops a method for the analysis of video game characters based on a theoretical understanding of their medium-specific representation and the mental processes involved in their intersubjective construction by video game players. We propose to distinguish, first, between narration, simulation, and communication as three modes of representation particularly salient for contemporary video games and the characters they represent, second, between narrative, ludic, and social experience as three ways in which players perceive video game characters and their representations, and, third, between three dimensions of video game characters as ‘intersubjective constructs’, which usually are to be analyzed not only as fictional beings with certain diegetic properties but also as game pieces with certain ludic properties and, in those cases in which they function as avatars in the social space of a multiplayer game, as representations of other players. Having established these basic distinctions, we proceed to analyze their realization and interrelation by reference to the character of Martin Walker from the third-person shooter Spec Ops: The Line (Yager Development 2012), the highly customizable player-controlled characters from the role-playing game The Elder Scrolls V: Skyrim (Bethesda 2011), and the complex multidimensional characters in the massively multiplayer online role-playing game Star Wars: The Old Republic (BioWare 2011-2014).

  8. Automatic Soccer Video Analysis and Summarization

    Science.gov (United States)

    Ekin, Ahmet; Tekalp, A. Murat

    2003-01-01

    We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level soccer video processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game, ii) all goals in a game, and iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust for soccer video processing. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g. goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.
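
    Two of the low-level steps named above, dominant color region detection and shot boundary detection, can be sketched as follows. This is a hedged illustration with OpenCV, not the paper's algorithms; the hue window and histogram-distance threshold are assumptions.

    ```python
    # Illustrative sketch: dominant-color (grass) masking via the HSV hue histogram
    # peak, and a crude histogram-difference shot-boundary test. Thresholds are assumed.
    import cv2
    import numpy as np

    def dominant_color_mask(frame_bgr, hue_window=10):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        hue = hsv[..., 0]
        hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
        peak = int(hist.argmax())            # dominant hue bin (the grass in soccer)
        return ((hue >= peak - hue_window) & (hue <= peak + hue_window)).astype(np.uint8)

    def is_shot_boundary(prev_bgr, cur_bgr, thresh=0.5):
        h1 = cv2.calcHist([prev_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        h2 = cv2.calcHist([cur_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        cv2.normalize(h1, h1)
        cv2.normalize(h2, h2)
        # A Bhattacharyya distance near 1 indicates an abrupt change in color content.
        return cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA) > thresh
    ```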

  9. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which combined with automation, underpin the emerging concept of the "smart grid". This book is supported by theoretical concepts with real-world applications and MATLAB exercises.

  10. Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats

    Science.gov (United States)

    Hayman, David T.S.; Cryan, Paul; Fricker, Paul D.; Dannemiller, Nicholas G.

    2017-01-01

    Understanding natural behaviours is essential to determining how animals deal with new threats (e.g. emerging diseases). However, natural behaviours of animals with cryptic lifestyles, like hibernating bats, are often poorly characterized. White-nose syndrome (WNS) is an unprecedented disease threatening multiple species of hibernating bats, and pathogen-induced changes to host behaviour may contribute to mortality. To better understand the behaviours of hibernating bats and how they might relate to WNS, we developed new ways of studying hibernation across entire seasons. We used thermal-imaging video surveillance cameras to observe little brown bats (Myotis lucifugus) and Indiana bats (M. sodalis) in two caves over multiple winters. We developed new, sharable software to test for autocorrelation and periodicity of arousal signals in recorded video. We processed 740 days (17,760 hr) of video at a rate of >1,000 hr of video imagery in less than 1 hr using a desktop computer, with sufficient resolution to detect increases in arousals during midwinter in both species and clear signals of daily arousal periodicity in infected M. sodalis. Our unexpected finding of periodic synchronous group arousals in hibernating bats demonstrates the potential of video methods and suggests that some bats may have innate behavioural strategies for coping with WNS. Surveillance video and accessible analysis software now make it practical to investigate long-term behaviours of hibernating bats and other hard-to-study animals.
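
    The kind of periodicity test mentioned above (autocorrelation and periodogram analysis of arousal signals) can be sketched as below; the hourly activity series is synthetic and the code is an illustration with SciPy, not the study's sharable software.

    ```python
    # Hedged sketch: look for daily periodicity in an hourly activity-count series
    # with a periodogram and lagged autocorrelation. The series below is synthetic.
    import numpy as np
    from scipy.signal import periodogram

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 30)                                  # 30 days of hourly counts
    activity = 5 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

    freqs, power = periodogram(activity - activity.mean(), fs=1.0)   # fs = 1 sample/hour
    dominant_period_h = 1.0 / freqs[1:][power[1:].argmax()]
    print(f"dominant period ~ {dominant_period_h:.1f} h")            # expect ~24 h

    def autocorr(x, lag):
        x = x - x.mean()
        return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

    print(f"autocorrelation at 24 h lag: {autocorr(activity, 24):.2f}")
    ```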

  11. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  12. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high-performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis and in generating streams for testing and compliance purposes. Moreover, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as an input for performance modeling of networks. The contribution of this paper comprises the theoretical definition of the distribution that appears most relevant to the video trace in terms of its statistical properties, and the identification of the best-fitting distribution using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated with the Scalable Video Codec (SVC) video compression technique for three different movies.
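
    The distribution-fitting step described above (selecting the best statistical model for frame sizes via a hypothesis test) can be sketched with SciPy as follows; the frame-size data here are synthetic and the candidate set is an assumption, not the paper's.

    ```python
    # Hedged sketch: fit candidate distributions to frame sizes and rank them by the
    # Kolmogorov-Smirnov statistic. The "trace" below is synthetic, not SVC data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    frame_sizes = rng.lognormal(mean=8.0, sigma=0.6, size=5000)   # stand-in for a trace

    candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
                  "weibull_min": stats.weibull_min}

    results = []
    for name, dist in candidates.items():
        params = dist.fit(frame_sizes)                 # maximum-likelihood fit
        ks_stat, p_value = stats.kstest(frame_sizes, name, args=params)
        results.append((ks_stat, p_value, name))

    for ks_stat, p_value, name in sorted(results):     # smallest KS statistic first
        print(f"{name:12s}  KS={ks_stat:.4f}  p={p_value:.3g}")
    ```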

  13. Automated analysis of slitless spectra. II. Quasars

    International Nuclear Information System (INIS)

    Edwards, G.; Beauchemin, M.; Borra, F.

    1988-01-01

    Automated software has been developed to process slitless spectra. The software, described in a previous paper, automatically separates stars from extended objects and quasars from stars. This paper describes the quasar search techniques and discusses the results. The performance of the software is compared and calibrated with a plate taken in a region of SA 57 that has been extensively surveyed by others using a variety of techniques: the proposed automated software performs very well. It is found that an eye search of the same plate is less complete than the automated search: surveys that rely on eye searches suffer from incompleteness starting at least a magnitude brighter than the plate limit. It is shown how the complete automated analysis of a plate and computer simulations are used to calibrate and understand the characteristics of the present data. 20 references

  14. Gemvid, an open source, modular, automated activity recording system for rats using digital video

    Directory of Open Access Journals (Sweden)

    Leprince Pierre

    2006-08-01

    Full Text Available Abstract Background Measurement of locomotor activity is a valuable tool for analysing factors influencing behaviour and for investigating brain function. Several methods have been described in the literature for measuring the amount of animal movement but most are flawed or expensive. Here, we describe an open source, modular, low-cost, user-friendly, highly sensitive, non-invasive system that records all the movements of a rat in its cage. Methods Our activity monitoring system quantifies overall free movements of rodents without any markers, using a commercially available CCTV and a newly designed motion detection software developed on a GNU/Linux-operating computer. The operating principle is that the amount of overall movement of an object can be expressed by the difference in total area occupied by the object in two consecutive picture frames. The application is based on software modules that allow the system to be used in a high-throughput workflow. Documentation, example files, source code and binary files can be freely downloaded from the project website at http://bioinformatics.org/gemvid/. Results In a series of experiments with objects of pre-defined oscillation frequencies and movements, we documented the sensitivity, reproducibility and stability of our system. We also compared data obtained with our system and data obtained with an Actiwatch device. Finally, to validate the system, results obtained from the automated observation of 6 rats during 7 days in a regular light cycle are presented and are accompanied by a stability test. The validity of this system is further demonstrated through the observation of 2 rats in constant dark conditions that displayed the expected free running of their circadian rhythm. Conclusion The present study describes a system that relies on video frame differences to automatically quantify overall free movements of a rodent without any markers. It allows the monitoring of rats in their own
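
    The stated operating principle, that overall movement can be expressed as the change in area occupied by the animal between consecutive frames, can be sketched with OpenCV as follows; the intensity threshold is an assumption, not the value used by Gemvid.

    ```python
    # Sketch of the stated principle: movement per frame = number of pixels whose
    # intensity changes between consecutive frames. The threshold is illustrative.
    import cv2
    import numpy as np

    def movement_per_frame(path, diff_thresh=25):
        cap = cv2.VideoCapture(path)
        ok, prev = cap.read()
        if not ok:
            raise IOError(f"cannot read {path}")
        prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        scores = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            changed = cv2.absdiff(gray, prev) > diff_thresh
            scores.append(int(changed.sum()))      # changed area, in pixels
            prev = gray
        cap.release()
        return np.array(scores)
    ```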

  15. Semi-automated detection of fractional shortening in zebrafish embryo heart videos

    Directory of Open Access Journals (Sweden)

    Nasrat Sara

    2016-09-01

    Full Text Available Quantifying cardiac functions in model organisms like embryonic zebrafish is of high importance in small molecule screens for new therapeutic compounds. One relevant cardiac parameter is the fractional shortening (FS. A method for semi-automatic quantification of FS in video recordings of zebrafish embryo hearts is presented. The software provides automated visual information about the end-systolic and end-diastolic stages of the heart by displaying corresponding colored lines into a Motion-mode display. After manually marking the ventricle diameters in frames of end-systolic and end-diastolic stages, the FS is calculated. The software was evaluated by comparing the results of the determination of FS with results obtained from another established method. Correlations of 0.96 < r < 0.99 between the two methods were found indicating that the new software provides comparable results for the determination of the FS.
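
    Once the end-diastolic and end-systolic ventricle diameters are marked, the fractional shortening follows from the conventional definition FS = (Dd - Ds) / Dd x 100; a minimal sketch is given below (function and variable names are ours, not the software's).

    ```python
    # Fractional shortening from marked ventricle diameters, using the conventional
    # definition FS = (Dd - Ds) / Dd * 100. Function and argument names are ours.
    def fractional_shortening(end_diastolic_diameter, end_systolic_diameter):
        if end_diastolic_diameter <= 0:
            raise ValueError("end-diastolic diameter must be positive")
        return 100.0 * (end_diastolic_diameter - end_systolic_diameter) / end_diastolic_diameter

    print(fractional_shortening(180.0, 120.0))   # diameters in pixels -> 33.3 (%)
    ```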

  16. Automated Technology for Verification and Analysis

    DEFF Research Database (Denmark)

    This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of-the-art research on theoretical and practical aspects of automated analysis, verification, and synthesis. Among 74 research papers and 10 tool papers submitted to ATVA 2009, the Program Committee accepted 23 as regular papers and 3 as tool papers. In all, 33 experts from 17 countries worked hard to make sure...

  17. Computer-automated neutron activation analysis system

    International Nuclear Information System (INIS)

    Minor, M.M.; Garcia, S.R.

    1983-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. 5 references

  18. Systems Analysis as a Prelude to Library Automation

    Science.gov (United States)

    Carter, Ruth C.

    1973-01-01

    Systems analysis, as a prelude to library automation, is an inevitable commonplace fact of life in libraries. Maturation of library automation and the systems analysis which precedes its implementation is observed in this article. (55 references) (Author/TW)

  19. Studying the movement behavior of benthic macroinvertebrates with automated video tracking.

    Science.gov (United States)

    Augusiak, Jacqueline; Van den Brink, Paul J

    2015-04-01

    Quantifying and understanding movement is critical for a wide range of questions in basic and applied ecology. Movement ecology is also fostered by technological advances that allow automated tracking for a wide range of animal species. However, for aquatic macroinvertebrates, such detailed methods do not yet exist. We developed a video tracking method for two different species of benthic macroinvertebrates, the crawling isopod Asellus aquaticus and the swimming fresh water amphipod Gammarus pulex. We tested the effects of different light sources and marking techniques on their movement behavior to establish the possibilities and limitations of the experimental protocol and to ensure that the basic handling of test specimens would not bias conclusions drawn from movement path analyses. To demonstrate the versatility of our method, we studied the influence of varying population densities on different movement parameters related to resting behavior, directionality, and step lengths. We found that our method allows studying species with different modes of dispersal and under different conditions. For example, we found that gammarids spend more time moving at higher population densities, while asellids rest more under similar conditions. At the same time, in response to higher densities, gammarids mostly decreased average step lengths, whereas asellids did not. Gammarids, however, were also more sensitive to general handling and marking than asellids. Our protocol for marking and video tracking can be easily adopted for other species of aquatic macroinvertebrates or testing conditions, for example, presence or absence of food sources, shelter, or predator cues. Nevertheless, limitations with regard to the marking protocol, material, and a species' physical build need to be considered and tested before a wider application, particularly for swimming species. Data obtained with this approach can deepen the understanding of population dynamics on larger spatial scales and
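
    Movement parameters of the kind analyzed above (step lengths, resting behaviour) can be derived from tracked positions; the sketch below assumes a trajectory of (x, y) coordinates in millimetres sampled at a known frame rate, with an arbitrary speed threshold for "resting".

    ```python
    # Hedged sketch: step lengths and a simple resting fraction from a tracked
    # trajectory. Units, frame rate and the resting threshold are assumptions.
    import numpy as np

    def movement_parameters(xy, fps=25.0, rest_speed_mm_s=0.5):
        xy = np.asarray(xy, dtype=float)                    # shape (n_frames, 2), in mm
        steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        speeds = steps * fps                                # mm per second
        return {
            "mean_step_length_mm": float(steps.mean()),
            "total_path_length_mm": float(steps.sum()),
            "resting_fraction": float((speeds < rest_speed_mm_s).mean()),
        }

    track = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.05), (0.6, 0.4)]
    print(movement_parameters(track))
    ```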

  20. Experiments and video analysis in classical mechanics

    CERN Document Server

    de Jesus, Vitor L B

    2017-01-01

    This book is an experimental physics textbook on classical mechanics focusing on the development of experimental skills by means of discussion of different aspects of the experimental setup and the assessment of common issues such as accuracy and graphical representation. The most important topics of an experimental physics course on mechanics are covered and the main concepts are explored in detail. Each chapter didactically connects the experiment and the theoretical models available to explain it. Real data from the proposed experiments are presented and a clear discussion over the theoretical models is given. Special attention is also dedicated to the experimental uncertainty of measurements and graphical representation of the results. In many of the experiments, the application of video analysis is proposed and compared with traditional methods.

  1. Automated assessment of Pavlovian conditioned freezing and shock reactivity in mice using the VideoFreeze system

    Directory of Open Access Journals (Sweden)

    Stephan G Anagnostaras

    2010-09-01

    Full Text Available The Pavlovian conditioned freezing paradigm has become a prominent mouse and rat model of learning and memory, as well as of pathological fear. Due to its efficiency, reproducibility, and well-defined neurobiology, the paradigm has become widely adopted in large-scale genetic and pharmacological screens. However, one major shortcoming of the use of freezing behavior has been that it has required the use of tedious hand scoring, or a variety of proprietary automated methods that are often poorly validated or difficult to obtain and implement. Here we report an extensive validation of the Video Freeze system in mice, a turn-key all-inclusive system for fear conditioning in small animals. Using digital video and near-infrared lighting, the system achieved outstanding performance in scoring both freezing and movement. Given the large-scale adoption of the conditioned freezing paradigm, we encourage similar validation of other automated systems for scoring freezing, or other behaviors.
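
    Automated freezing scores of this kind are typically derived by thresholding a per-frame motion index and requiring a minimum bout duration; the sketch below illustrates that general idea with invented parameter values and is not the VideoFreeze algorithm.

    ```python
    # Hedged sketch of threshold-based freezing scoring: frames whose motion index
    # falls below a threshold count as freezing only within sufficiently long bouts.
    import numpy as np

    def percent_freezing(motion_index, threshold=18.0, min_bout_frames=30):
        below = np.asarray(motion_index) < threshold
        freezing = np.zeros(below.shape, dtype=bool)
        start = None
        for i, flag in enumerate(below):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_bout_frames:
                    freezing[start:i] = True
                start = None
        if start is not None and below.size - start >= min_bout_frames:
            freezing[start:] = True
        return 100.0 * freezing.mean()
    ```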

  2. Descriptive analysis of YouTube music therapy videos.

    Science.gov (United States)

    Gooding, Lori F; Gregory, Dianne

    2011-01-01

    The purpose of this study was to conduct a descriptive analysis of music therapy-related videos on YouTube. Preliminary searches using the keywords music therapy, music therapy session, and "music therapy session" resulted in listings of 5000, 767, and 59 videos respectively. The narrowed down listing of 59 videos was divided between two investigators and reviewed in order to determine their relationship to actual music therapy practice. A total of 32 videos were determined to be depictions of music therapy sessions. These videos were analyzed using a 16-item investigator-created rubric that examined both video specific information and therapy specific information. Results of the analysis indicated that audio and visual quality was adequate, while narrative descriptions and identification information were ineffective in the majority of the videos. The top 5 videos (based on the highest number of viewings in the sample) were selected for further analysis in order to investigate demonstration of the Professional Level of Practice Competencies set forth in the American Music Therapy Association (AMTA) Professional Competencies (AMTA, 2008). Four of the five videos met basic competency criteria, with the quality of the fifth video precluding evaluation of content. Of particular interest is the fact that none of the videos included credentialing information. Results of this study suggest the need to consider ways to ensure accurate dissemination of music therapy-related information in the YouTube environment, ethical standards when posting music therapy session videos, and the possibility of creating AMTA standards for posting music therapy related video.

  3. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...... for MPEG-2 and H.264/AVC....

  4. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  5. Automated information retrieval system for radioactivation analysis

    International Nuclear Information System (INIS)

    Lambrev, V.G.; Bochkov, P.E.; Gorokhov, S.A.; Nekrasov, V.V.; Tolstikova, L.I.

    1981-01-01

    An automated information retrieval system for radioactivation analysis has been developed. An ES-1022 computer and problem-oriented software, ''The description information search system'', were used for the purpose. The main aspects and sources of forming the system's information fund and the characteristics of the system's information retrieval language are reported, and examples of question-answer dialogue are given. Two modes can be used: selective information distribution and retrospective search.

  6. Automated Program Analysis for Cybersecurity (APAC)

    Science.gov (United States)

    2016-07-14

    AUTOMATED PROGRAM ANALYSIS FOR CYBERSECURITY (APAC). Five Directions, Inc., July 2016. Final technical report under contract FA8750-14-C-0050; approved for public release.

  7. A configurational analysis of success factors in crowdfunding video campaigns

    DEFF Research Database (Denmark)

    Lomberg, Carina; Li-Ying, Jason; Alkærsig, Lars

    Recent discussions on success factors of crowdfunding campaigns highlight a plenitude of diverse factors that stem from different, partly contradicting theories. We focus on campaign videos and assume there is more than one way of creating a successful crowdfunding video. We generate data on 1000 randomly chosen Kickstarter projects from the technology and design domains and analyze the 715 campaigns that contain a video, applying a fuzzy-set configurational analysis. Our results suggest that there are indeed several configurations of elements in videos that are correlated with different levels of success (equifinality) and that conditions leading to success are conceptually different from those leading to failure (causal asymmetry).

  8. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT coefficients after intra-prediction and deblocking is modeled. To obtain VQA features for H.264/AVC, we propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression.

  9. Reliability and validity of an automated computerized visual acuity and stereoacuity test in children using an interactive video game.

    Science.gov (United States)

    Ma, Dae Joong; Yang, Hee Kyung; Hwang, Jeong-Min

    2013-07-01

    To evaluate the test-retest reliability and validity of the new automated computerized distance visual acuity and stereoacuity test for children, which uses an interactive video game. Retrospective, observational case series. A total of 102 children aged between 3 and 7 years underwent the Snellen visual acuity test, the Distance Randot Stereotest, and the new automated computerized distance visual acuity and stereoacuity test. The test-retest reliability and validity of the automated computerized tests were assessed and compared with the Snellen visual acuity test and the Distance Randot Stereotest with frequency distributions of the differences, Bland-Altman plots, and Deming regression. The automated computerized distance visual acuity test had high test-retest reliability (95% limits of agreement ±0.18 logMAR, 90.0% of the differences within 0.2 logMAR) and acceptable validity as compared with the Snellen visual acuity chart (95% limits of agreement ±0.27 logMAR, 81.3% of the differences within 0.2 logMAR). The automated computerized distance stereoacuity test had high test-retest reliability (95% limits of agreement ±0.29 log arc second, 95.1% of the differences within 0.3 log arc second) and acceptable validity as compared with the Distance Randot Stereotest (95% limits of agreement ±0.35 log arc second, 93.9% of the differences within 0.3 log arc second). The new automated computerized distance visual acuity and stereoacuity test, which uses an interactive video game, has good reliability and acceptable validity compared with the Snellen visual acuity chart and the Distance Randot Stereotest. Copyright © 2013 Elsevier Inc. All rights reserved.
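
    The 95% limits of agreement quoted above are conventionally computed as the mean of the paired differences plus or minus 1.96 standard deviations; a small sketch with made-up logMAR scores (not study data):

    ```python
    # Hedged sketch: 95% limits of agreement (Bland-Altman) for paired test-retest
    # logMAR scores. The example values are made up, not data from the study.
    import numpy as np

    def limits_of_agreement(a, b):
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias - half_width, bias + half_width

    test   = [0.10, 0.20, 0.30, 0.00, 0.40, 0.20]
    retest = [0.10, 0.30, 0.20, 0.10, 0.40, 0.10]
    print(limits_of_agreement(test, retest))
    ```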

  10. A New Colorimetrically-Calibrated Automated Video-Imaging Protocol for Day-Night Fish Counting at the OBSEA Coastal Cabled Observatory

    Directory of Open Access Journals (Sweden)

    Joaquín del Río

    2013-10-01

    Full Text Available Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such sampling at a high frequency over unlimited periods of time. Unfortunately, automation for the extraction of biological information (i.e., animals’ visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured between 400 to 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel, endowed with a 9-colour calibration chart, and calibrated using the recently implemented “3D Thin-Plate Spline” warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of images, 500 images as a training set, manually selected since acquired under optimum visibility conditions. All images plus those for the training set were ordered together through Principal Component Analysis, allowing the selection of 614 images (67.6%) out of 908 in total, corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes’ bodies. Time series in manual and visual counts were compared together for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were

  11. A new colorimetrically-calibrated automated video-imaging protocol for day-night fish counting at the OBSEA coastal cabled observatory.

    Science.gov (United States)

    del Río, Joaquín; Aguzzi, Jacopo; Costa, Corrado; Menesatti, Paolo; Sbragaglia, Valerio; Nogueras, Marc; Sarda, Francesc; Manuèl, Antoni

    2013-10-30

    Field measurements of the swimming activity rhythms of fishes are scant due to the difficulty of counting individuals at a high frequency over a long period of time. Cabled observatory video monitoring allows such a sampling at a high frequency over unlimited periods of time. Unfortunately, automation for the extraction of biological information (i.e., animals' visual counts per unit of time) is still a major bottleneck. In this study, we describe a new automated video-imaging protocol for the 24-h continuous counting of fishes in colorimetrically calibrated time-lapse photographic outputs, taken by a shallow water (20 m depth) cabled video-platform, the OBSEA. The spectral reflectance value for each patch was measured from 400 to 700 nm and then converted into standard RGB, used as a reference for all subsequent calibrations. All the images were acquired within a standardized Region Of Interest (ROI), represented by a 2 × 2 m methacrylate panel, endowed with a 9-colour calibration chart, and calibrated using the recently implemented "3D Thin-Plate Spline" warping approach in order to numerically define color by its coordinates in n-dimensional space. That operation was repeated on a subset of images, 500 images as a training set, manually selected since acquired under optimum visibility conditions. All images plus those for the training set were ordered together through Principal Component Analysis allowing the selection of 614 images (67.6%) out of 908 as a total corresponding to 18 days (at 30 min frequency). The Roberts operator (used in image processing and computer vision for edge detection) was used to highlight regions of high spatial colour gradient corresponding to fishes' bodies. Time series in manual and visual counts were compared together for efficiency evaluation. Periodogram and waveform analysis outputs provided very similar results, although quantified parameters in relation to the strength of respective rhythms were different. Results
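
    The Roberts cross operator mentioned above is a pair of 2 × 2 diagonal-difference kernels. The following Python sketch (not the authors' implementation) shows how it can highlight high-gradient regions in a greyscale frame; the threshold and the synthetic frame are purely illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def roberts_gradient(gray):
        """Roberts cross edge magnitude for a 2-D greyscale image."""
        kx = np.array([[1.0, 0.0], [0.0, -1.0]])   # diagonal difference
        ky = np.array([[0.0, 1.0], [-1.0, 0.0]])   # anti-diagonal difference
        gx = convolve(gray.astype(float), kx)
        gy = convolve(gray.astype(float), ky)
        return np.hypot(gx, gy)

    # Example: highlight high-gradient regions in a synthetic frame.
    frame = np.zeros((64, 64))
    frame[20:40, 20:40] = 1.0                      # a bright "fish-like" blob
    edges = roberts_gradient(frame) > 0.5          # threshold chosen for illustration
    print(edges.sum(), "edge pixels")
    ```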

  12. Video Traffic Analysis for Abnormal Event Detection

    Science.gov (United States)

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deploymen...

  13. Video traffic analysis for abnormal event detection.

    Science.gov (United States)

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deplo...

  14. Automating risk analysis of software design models.

    Science.gov (United States)

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased number of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling, two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  15. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

    Full Text Available The growth of the internet and networked systems has exposed software to an increased number of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling, two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  16. Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth

    Science.gov (United States)

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-01-01

    Because video data are complex and are composed of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
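
    CL-Quant is proprietary, but the third recipe described above (counting colony pixels in each frame) can be illustrated with a minimal Python sketch. It assumes the frames have already been segmented into binary colony masks; the masks below are hypothetical and only stand in for real time-lapse data.

    ```python
    import numpy as np

    def colony_area_over_time(masks, pixel_area_um2=1.0):
        """Colony area per frame from binary segmentation masks.

        masks: iterable of 2-D boolean arrays (True = colony pixel),
        one per time-lapse frame. Returns a list of areas.
        """
        return [float(m.sum()) * pixel_area_um2 for m in masks]

    # Hypothetical 3-frame time lapse of a growing colony.
    frames = [np.zeros((100, 100), dtype=bool) for _ in range(3)]
    frames[0][40:60, 40:60] = True
    frames[1][35:65, 35:65] = True
    frames[2][30:70, 30:70] = True
    print(colony_area_over_time(frames))   # increasing pixel counts -> growth rate
    ```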

  17. Automating Trend Analysis for Spacecraft Constellations

    Science.gov (United States)

    Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)

    2001-01-01

    Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers. While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogeneous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive. It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to

  18. Automated reasoning applications to design analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Given the necessary relationships and definitions of design functions and components, validation of system incarnation (the physical product of design) and sneak function analysis can be achieved via automated reasoners. The relationships and definitions must define the design specification and incarnation functionally. For the design specification, the hierarchical functional representation is based on physics and engineering principles and bounded by design objectives and constraints. The relationships and definitions of the design incarnation are manifested as element functional definitions, state relationship to functions, functional relationship to direction, element connectivity, and functional hierarchical configuration

  19. Automated quantification and analysis of mandibular asymmetry

    DEFF Research Database (Denmark)

    Darvann, T. A.; Hermann, N. V.; Larsen, P.

    2010-01-01

    We present an automated method of spatially detailed 3D asymmetry quantification in mandibles extracted from CT and apply it to a population of infants with unilateral coronal synostosis (UCS). An atlas-based method employing non-rigid registration of surfaces is used for determining deformation ...... after mirroring the mandible across the MSP. A principal components analysis of asymmetry characterizes the major types of asymmetry in the population, and successfully separates the asymmetric UCS mandibles from a number of less asymmetric mandibles from a control population....

  20. Automated metabolic gas analysis systems: a review.

    Science.gov (United States)

    Macfarlane, D J

    2001-01-01

    The use of automated metabolic gas analysis systems or metabolic measurement carts (MMC) in exercise studies is common throughout the industrialised world. They have become essential tools for diagnosing many hospital patients, especially those with cardiorespiratory disease. Moreover, the measurement of maximal oxygen uptake (VO2max) is routine for many athletes in fitness laboratories and has become a de facto standard in spite of its limitations. The development of metabolic carts has also facilitated the noninvasive determination of the lactate threshold and cardiac output, respiratory gas exchange kinetics, as well as studies of outdoor activities via small portable systems that often use telemetry. Although the fundamental principles behind the measurement of oxygen uptake (VO2) and carbon dioxide production (VCO2) have not changed, the techniques used have, and indeed, some have almost turned through a full circle. Early scientists often employed a manual Douglas bag method together with separate chemical analyses, but the need for faster and more efficient techniques fuelled the development of semi- and fully automated systems by private and commercial institutions. Yet, recently some scientists are returning to the traditional Douglas bag or Tissot-spirometer methods, or are using less complex automated systems to not only save capital costs, but also to have greater control over the measurement process. Over the last 40 years, a considerable number of automated systems have been developed, with over a dozen commercial manufacturers producing in excess of 20 different automated systems. The validity and reliability of all these different systems is not well known, with relatively few independent studies having been published in this area. For comparative studies to be possible and to facilitate greater consistency of measurements in test-retest or longitudinal studies of individuals, further knowledge about the performance characteristics of these
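
    For context, the oxygen uptake such systems report can be computed from a classical Douglas-bag collection using the Haldane transformation (nitrogen assumed not exchanged). The Python sketch below is a hedged illustration of that textbook calculation, not the algorithm of any particular commercial cart; the expired-gas values are hypothetical.

    ```python
    def vo2_douglas_bag(ve_stpd_lmin, feo2, feco2, fio2=0.2093, fico2=0.0003):
        """Oxygen uptake (L/min) from a Douglas-bag collection.

        Uses the Haldane transformation (nitrogen assumed not exchanged)
        to recover inspired ventilation from expired ventilation:
            VI = VE * FeN2 / FiN2,   VO2 = VI * FiO2 - VE * FeO2.
        """
        fen2 = 1.0 - feo2 - feco2
        fin2 = 1.0 - fio2 - fico2
        vi = ve_stpd_lmin * fen2 / fin2
        return vi * fio2 - ve_stpd_lmin * feo2

    # Hypothetical expired-gas sample during moderate exercise.
    print(round(vo2_douglas_bag(ve_stpd_lmin=60.0, feo2=0.1650, feco2=0.0450), 2))
    ```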

  1. SAPPHIRE: a toolkit for building efficient stream programs for medical video analysis.

    Science.gov (United States)

    Stanek, Sean R; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; Nawarathna, Ruwan D; Muthukudage, Jayantha; de Groen, Piet C

    2013-12-01

    This paper describes the design and implementation of SAPPHIRE--a novel middleware and software development kit for stream programming on a heterogeneous system of multi-core multi-CPUs with optional hardware accelerators such as graphics processing unit (GPU). A stream program consists of a set of tasks where the same tasks are repeated over multiple iterations of data (e.g., video frames). Examples of such programs are video analysis applications for computer-aided diagnosis and computer-assisted surgeries. Our design goal is to reduce the implementation efforts and ease collaborative software development of stream programs while supporting efficient execution of the programs on the target hardware. To validate the toolkit, we implemented EM-Automated-RT software with the toolkit and reported our experience. EM-Automated-RT performs real-time video analysis for quality of a colonoscopy procedure and provides visual feedback to assist the endoscopist to achieve optimal inspection of the colon during the procedure. The software has been deployed in a hospital setting to conduct a clinical trial. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Discriminative Non-Linear Stationary Subspace Analysis for Video Classification.

    Science.gov (United States)

    Baktashmotlagh, Mahsa; Harandi, Mehrtash; Lovell, Brian C; Salzmann, Mathieu

    2014-12-01

    Low-dimensional representations are key to the success of many video classification algorithms. However, the commonly-used dimensionality reduction techniques fail to account for the fact that only part of the signal is shared across all the videos in one class. As a consequence, the resulting representations contain instance-specific information, which introduces noise in the classification process. In this paper, we introduce non-linear stationary subspace analysis: a method that overcomes this issue by explicitly separating the stationary parts of the video signal (i.e., the parts shared across all videos in one class), from its non-stationary parts (i.e., the parts specific to individual videos). Our method also encourages the new representation to be discriminative, thus accounting for the underlying classification problem. We demonstrate the effectiveness of our approach on dynamic texture recognition, scene classification and action recognition.

  3. [Automation of chemical analysis in enology].

    Science.gov (United States)

    Dubernet, M

    1978-01-01

    Automated assays were introduced into oenology laboratories only recently. The first research on automating the usual manual analyses was completed by the I.N.R.A. Station of Dijon between 1969 and 1972. Further work followed, and in 1974 the first automatic analyser appeared in routine laboratories. In all cases the continuous-flow method was used. The first assays carried out were volatile acidity, residual sugars and total SO2, at a throughput of 30 samples per hour. An original method for free SO2 was then proposed. At present, about a dozen laboratories in France use these assays. Automating the ethanol assay, which is very important in oenology, is very difficult; a new method using a thermometric analyzer is being tested. Research on many other assays, such as tartaric, malic and lactic acids, glucose, fructose and glycerol, has been carried out, notably by the I.N.R.A. Station in Narbonne, but these assays are not yet routine and at present no laboratory applies them. The equipment costs and amortisation, the change-over from traditional to automated methods, and the level of knowledge required of operators are now well understood. The reproducibility and accuracy of continuous-flow automatic assays allow sufficiently large laboratories to carry out the increasing number of analyses needed for wine quality control.

  4. Integration of video and radiation analysis data

    Energy Technology Data Exchange (ETDEWEB)

    Menlove, H.O.; Howell, J.A.; Rodriguez, C.A.; Eccleston, G.W.; Beddingfield, D.; Smith, J.E. [Los Alamos National Lab., NM (United States); Baumgart, C.W. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States)

    1994-08-01

    We have introduced a new method to integrate spatial (digital video) and time (radiation monitoring) information. This technology is based on pattern recognition by neural networks, provides significant capability to analyze complex data, and has the ability to learn and adapt to changing situations. This technique could significantly reduce the frequency of inspection visits to key facilities without a loss of safeguards effectiveness.

  5. Automated extraction of temporal motor activity signals from video recordings of neonatal seizures based on adaptive block matching.

    Science.gov (United States)

    Karayiannis, Nicolaos B; Sami, Abdul; Frost, James D; Wise, Merrill S; Mizrahi, Eli M

    2005-04-01

    This paper presents an automated procedure developed to extract quantitative information from video recordings of neonatal seizures in the form of motor activity signals. This procedure relies on optical flow computation to select anatomical sites located on the infants' body parts. Motor activity signals are extracted by tracking selected anatomical sites during the seizure using adaptive block matching. A block of pixels is tracked throughout a sequence of frames by searching for the most similar block of pixels in subsequent frames; this search is facilitated by employing various update strategies to account for the changing appearance of the block. The proposed procedure is used to extract temporal motor activity signals from video recordings of neonatal seizures and other events not associated with seizures.
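
    The core block-matching step described above can be sketched as an exhaustive search for the most similar block in the next frame. The Python sketch below is a simplified illustration with a fixed template; the adaptive template-update strategies of the published procedure are omitted, and the frames and block coordinates are hypothetical.

    ```python
    import numpy as np

    def match_block(prev, nxt, top, left, size=16, search=8):
        """Track a size x size block from frame `prev` to frame `nxt`.

        Exhaustive search in a +/- `search` pixel window; returns the
        (top, left) of the best-matching block by sum of absolute differences.
        """
        template = prev[top:top + size, left:left + size].astype(float)
        best, best_pos = np.inf, (top, left)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > nxt.shape[0] or x + size > nxt.shape[1]:
                    continue
                cand = nxt[y:y + size, x:x + size].astype(float)
                sad = np.abs(template - cand).sum()
                if sad < best:
                    best, best_pos = sad, (y, x)
        return best_pos

    # Hypothetical pair of frames in which a bright patch moves 3 px to the right.
    f0 = np.zeros((64, 64)); f0[20:36, 20:36] = 255
    f1 = np.zeros((64, 64)); f1[20:36, 23:39] = 255
    print(match_block(f0, f1, 20, 20))   # -> (20, 23)
    ```

    In the adaptive variant described in the abstract, the template would additionally be updated from frame to frame to account for the changing appearance of the tracked body part.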

  6. Forensic analysis of video steganography tools

    Directory of Open Access Journals (Sweden)

    Thomas Sloan

    2015-05-01

    Full Text Available Steganography is the art and science of concealing information in such a way that only the sender and intended recipient of a message should be aware of its presence. Digital steganography has been used in the past on a variety of media including executable files, audio, text, games and, notably, images. Additionally, there is increasing research interest towards the use of video as a medium for steganography, due to its pervasive nature and diverse embedding capabilities. In this work, we examine the embedding algorithms and other security characteristics of several video steganography tools. We show that all of them feature basic and severe security weaknesses. This is potentially a very serious threat to the security, privacy and anonymity of their users. It is important to highlight that most steganography users have perfectly legal and ethical reasons to employ it. Some common scenarios would include citizens in oppressive regimes whose freedom of speech is compromised, people trying to avoid massive surveillance or censorship, political activists, whistle blowers, journalists, etc. As a result of our findings, we strongly recommend ceasing any use of these tools and removing any contents that may have been hidden, as well as any carriers stored, exchanged and/or uploaded online. For many of these tools, carrier files will be trivial to detect, potentially compromising any hidden data and the parties involved in the communication. We finish this work by presenting our steganalytic results, which highlight a very poor current state of the art in practical video steganography tools. There is unfortunately a complete lack of secure and publicly available tools, and even commercial tools offer very poor security. We therefore encourage the steganography community to work towards the development of more secure and accessible video steganography tools, and make them available for the general public. The results presented in this work can also be seen as a useful

  7. An investigation of automated activation analysis

    International Nuclear Information System (INIS)

    Kuykendall, William E. Jr.; Wainerdi, Richard E.

    1962-01-01

    A study has been made of the possibility of applying computer techniques to the resolution of data from the complex gamma-ray spectra obtained in non-destructive activation analysis. The primary objective has been to use computer data-handling techniques to allow the existing analytical method to be used for rapid, routine, sensitive and economical elemental analyses. The necessary conditions for the satisfactory application of automated activation analysis have been evaluated and a computer programme has been completed which will process the data from samples containing a large number of different elements. To illustrate the speed of the handling sequence, the data from a sample containing four component elements can be processed in a matter of minutes, with the speed of processing limited primarily by the speed of the output printer. (author) [fr

  8. Specdata: Automated Analysis Software for Broadband Spectra

    Science.gov (United States)

    Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.

    2017-06-01

    With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying multi-component mixtures that might result, for example, with the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open source, interactive tool which is designed to simplify and greatly accelerate the spectral analysis and discovery. Our software tool combines both automated and manual components that free the user from computation, while giving him/her considerable flexibility to assign, manipulate, interpret and export their analysis. The automated - and key - component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features, and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, list of frequencies...), or those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, the control is then handed over to the user who can choose to accept, decline or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user to maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit or update catalog or experiment entries.
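
    The database query step described above amounts to matching measured peak frequencies against catalogued transition frequencies within a tolerance. The Python sketch below is a minimal illustration of that idea, not SPECdata's actual code; the species names, frequencies and tolerance are hypothetical.

    ```python
    def assign_lines(peaks_mhz, catalog, tolerance_mhz=0.1):
        """Assign experimental peaks to catalogued transitions within a tolerance.

        peaks_mhz: list of measured line frequencies;
        catalog: dict mapping species name -> list of predicted frequencies.
        Returns {peak: (species, catalog_frequency)} for matched peaks.
        """
        assignments = {}
        for peak in peaks_mhz:
            best = None
            for species, freqs in catalog.items():
                for f in freqs:
                    if abs(f - peak) <= tolerance_mhz and (
                        best is None or abs(f - peak) < abs(best[1] - peak)
                    ):
                        best = (species, f)
            if best:
                assignments[peak] = best
        return assignments

    # Hypothetical peaks and an in-house catalogue of two known species.
    catalog = {"HC3N": [9098.33, 18196.31], "OCS": [12162.97, 24325.92]}
    print(assign_lines([9098.35, 12162.90, 15000.00], catalog))
    ```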

  9. Automated textual descriptions for a wide range of video events with 48 human actions

    NARCIS (Netherlands)

    Hanckmann, P.; Schutte, K.; Burghouts, G.J.

    2012-01-01

    Presented is a hybrid method to generate textual descriptions of video based on actions. The method includes an action classifier and a description generator. The aim for the action classifier is to detect and classify the actions in the video, such that they can be used as verbs for the description

  10. Video Analysis in Multi-Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Key, Everett Kiusan [Univ. of Washington, Seattle, WA (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra Lu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warren, Will [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-27

    This project was performed by a recently graduated high school student at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher is using DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. Distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  11. Management issues in automated audit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.

    1994-03-01

    This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.

  12. Automated image analysis of the pathological lung in CT

    NARCIS (Netherlands)

    Sluimer, Ingrid Christine

    2005-01-01

    The general objective of the thesis is automation of the analysis of the pathological lung from CT images. Specifically, we aim for automated detection and classification of abnormalities in the lung parenchyma. We first provide a review of computer analysis techniques applied to CT of the

  13. An intelligent crowdsourcing system for forensic analysis of surveillance video

    Science.gov (United States)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed to do video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video that includes the video recorded as a part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.

  14. ASteCA: Automated Stellar Cluster Analysis

    Science.gov (United States)

    Perren, G. I.; Vázquez, R. A.; Piatti, A. E.

    2015-04-01

    We present the Automated Stellar Cluster Analysis package (ASteCA), a suite of tools designed to fully automate the standard tests applied on stellar clusters to determine their basic parameters. The set of functions included in the code makes use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing through a statistical estimator its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with its uncertainties. To validate the code we applied it to a large set of over 400 synthetic MASSCLEAN clusters with varying degrees of field star contamination as well as a smaller set of 20 observed Milky Way open clusters (Berkeley 7, Bochum 11, Czernik 26, Czernik 30, Haffner 11, Haffner 19, NGC 133, NGC 2236, NGC 2264, NGC 2324, NGC 2421, NGC 2627, NGC 6231, NGC 6383, NGC 6705, Ruprecht 1, Tombaugh 1, Trumpler 1, Trumpler 5 and Trumpler 14) studied in the literature. The results show that ASteCA is able to recover cluster parameters with an acceptable precision even for those clusters affected by substantial field star contamination. ASteCA is written in Python and is made available as an open source code which can be downloaded ready to be used from its official site.

  15. Automated quantitative analysis of coordinated locomotor behaviour in rats.

    Science.gov (United States)

    Tanger, H J; Vanwersch, R A; Wolthuis, O L

    1984-03-01

    Disturbances of motor coordination are usually difficult to quantify. Therefore, a method was developed for the automated quantitative analysis of the movements of the dyed paws of stepping rats, registered by a colour TV camera. The signals from the TV-video system were converted by an electronic interface into voltages proportional to the X- and Y-coordinates of the paws, from which a desktop computer calculated the movements of these paws in time and distance. Application 1 analysed the steps of a rat walking in a hollow rotating wheel. The results showed low variability of the walking pattern, the method was insensitive to low doses of alcohol, but was suitable to quantify overt, e.g. neurotoxic, locomotor disturbances or recovery thereof. In application 2 hurdles were placed in a similar hollow wheel and the rats were trained to step from the top of one hurdle to another. Physostigmine-induced disturbances of this acquired complex motor task could be detected at doses far below those that cause overt symptoms.

  16. Integration of video and radiation analysis data

    International Nuclear Information System (INIS)

    Menlove, H.O.; Howell, J.A.; Rodriguez, C.A.; Eccleston, G.W.; Beddingfield, D.; Smith, J.E.; Baumgart, C.W.

    1995-01-01

    For the past several years, the integration of containment and surveillance (C/S) with nondestructive assay (NDA) sensors for monitoring the movement of nuclear material has focused on the hardware and communications protocols in the transmission network. Little progress has been made in methods to utilize the combined C/S and NDA data for safeguards and to reduce the inspector time spent in nuclear facilities. One of the fundamental problems in the integration of the combined data is that the two methods operate in different dimensions. The C/S video data is spatial in nature; whereas, the NDA sensors provide radiation levels versus time data. The authors have introduced a new method to integrate spatial (digital video) with time (radiation monitoring) information. This technology is based on pattern recognition by neural networks, provides significant capability to analyze complex data, and has the ability to learn and adapt to changing situations. This technique has the potential of significantly reducing the frequency of inspection visits to key facilities without a loss of safeguards effectiveness

  17. Integration of video and radiation analysis data

    Energy Technology Data Exchange (ETDEWEB)

    Menlove, H.O.; Howell, J.A.; Rodriguez, C.A.; Eccleston, G.W.; Beddingfield, D.; Smith, J.E. [Los Alamos National Lab., NM (United States); Baumgart, C.W. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States)

    1995-12-31

    For the past several years, the integration of containment and surveillance (C/S) with nondestructive assay (NDA) sensors for monitoring the movement of nuclear material has focused on the hardware and communications protocols in the transmission network. Little progress has been made in methods to utilize the combined C/S and NDA data for safeguards and to reduce the inspector time spent in nuclear facilities. One of the fundamental problems in the integration of the combined data is that the two methods operate in different dimensions. The C/S video data is spatial in nature; whereas, the NDA sensors provide radiation levels versus time data. The authors have introduced a new method to integrate spatial (digital video) with time (radiation monitoring) information. This technology is based on pattern recognition by neural networks, provides significant capability to analyze complex data, and has the ability to learn and adapt to changing situations. This technique has the potential of significantly reducing the frequency of inspection visits to key facilities without a loss of safeguards effectiveness.

  18. Learning based on library automation in mobile devices: The video production by students of Universidade Federal do Cariri Library Science Undergraduate Degree

    Directory of Open Access Journals (Sweden)

    David Vernon VIEIRA

    Full Text Available Abstract Video production for learning has become increasingly common over the last few years, especially when it involves applying hardware and software to library automation environments. In Library Science undergraduate degrees, the need for practical learning about the requirements of library automation demands that teachers develop educational content that enables students to learn through videos and so deepen their knowledge of information technology. This article therefore discusses the possibilities of learning through mobile devices in education, reporting an experience with students who entered the Bachelor Degree in Library Science at the Universidade Federal do Cariri (Federal University of Cariri), in the state of Ceará, Brazil, in March 2015 (class 2015.1). The literature review covers articles published in scientific journals and conference proceedings, as well as books in English, Portuguese and Spanish on the subject. The methodology, combining quantitative and qualitative approaches, consists of an exploratory study in which data were collected through an online survey about the students' experience of producing library automation videos in that course. The learning experience of using mobile devices to record the technological environments of libraries resulted in 25 videos covering aspects of library automation, with the students participating actively in the production of the videos and their publication on the Internet.

  19. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more

  20. Violence and weapon carrying in music videos. A content analysis.

    Science.gov (United States)

    DuRant, R H; Rich, M; Emans, S J; Rome, E S; Allred, E; Woods, E R

    1997-05-01

    The positive portrayal of violence and weapon carrying in televised music videos is thought to have a considerable influence on the normative expectations of adolescents about these behaviors. To perform a content analysis of the depictions of violence and weapon carrying in music videos, including 5 genres of music (rock, rap, adult contemporary, rhythm and blues, and country), from 4 television networks and to analyze the degree of sexuality or eroticism portrayed in each video and its association with violence and weapon carrying, as an indicator of the desirability of violent behaviors. Five hundred eighteen videos were recorded during randomly selected days and times of the day from the Music Television, Video Hits One, Black Entertainment Television, and Country Music Television networks. Four female and 4 male observers aged 17 to 24 years were trained to use a standardized content analysis instrument. Interobserver reliability testing resulted in a mean (+/- SD) percentage agreement of 89.25% +/- 7.10% and a mean (+/- SD) kappa of 0.73 +/- 0.20. All videos were observed by rotating 2-person, male-female teams that were required to reach agreement on each behavior that was scored. Music genre and network differences in behaviors were analyzed with chi 2 tests. A higher percentage (22.4%) of Music Television videos portrayed overt violence than Video Hits One (11.8%), Country Music Television (11.8%), and Black Entertainment Television (11.5%) videos (P = .02). Rap (20.4%) had the highest portrayal of violence, followed by rock (19.8%), country (10.8%), adult contemporary (9.7%), and rhythm and blues (5.9%) (P = .006). Weapon carrying was higher on Music Television (25.0%) than on Black Entertainment Television (11.5%), Video Hits One (8.4%), and Country Music Television (6.9%) (P ...). Since music videos are between 3 and 4 minutes long, these data indicate that even modest levels of viewing may result in substantial exposure to violence and weapon carrying, which is
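
    The interobserver reliability above is summarized with percentage agreement and kappa. As a minimal illustration of how Cohen's kappa is computed for two raters coding a binary behaviour (a simplified stand-in for the study's rotating observer teams), here is a hedged Python sketch with hypothetical ratings.

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters over the same items (any label set)."""
        n = len(rater_a)
        po = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed agreement
        ca, cb = Counter(rater_a), Counter(rater_b)
        labels = set(ca) | set(cb)
        pe = sum((ca[l] / n) * (cb[l] / n) for l in labels)          # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical coding of "violence present" in 10 videos by two observers.
    a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
    b = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1]
    print(round(cohens_kappa(a, b), 2))
    ```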

  1. Use of Video Analysis System for Working Posture Evaluations

    Science.gov (United States)

    McKay, Timothy D.; Whitmore, Mihriban

    1994-01-01

    In a work environment, it is important to identify and quantify the relationship among work activities, working posture, and workplace design. Working posture may impact the physical comfort and well-being of individuals, as well as performance. The Posture Video Analysis Tool (PVAT) is an interactive menu and button driven software prototype written in Supercard (trademark). Human Factors analysts are provided with a predefined set of options typically associated with postural assessments and human performance issues. Once options have been selected, the program is used to evaluate working posture and dynamic tasks from video footage. PVAT has been used to evaluate postures from Orbiter missions, as well as from experimental testing of prototype glove box designs. PVAT can be used for video analysis in a number of industries, with little or no modification. It can contribute to various aspects of workplace design such as training, task allocations, procedural analyses, and hardware usability evaluations. The major advantage of the video analysis approach is the ability to gather data, non-intrusively, in restricted-access environments, such as emergency and operation rooms, contaminated areas, and control rooms. Video analysis also provides the opportunity to conduct preliminary evaluations of existing work areas.

  2. Resampling method for balancing training data in video analysis

    Science.gov (United States)

    Giritharan, Balathasan; Yuan, Xiaohui

    2010-03-01

    Reviewing videos from medical procedures is tedious work that requires concentration for extended hours and usually screens thousands of frames to find only a few positive cases that indicate probable presence of disease. Computational classification algorithms are sought to automate the reviewing process. The class imbalance problem becomes challenging when the learning process is driven by relatively few minority class samples. The learning algorithms using imbalanced data sets generally result in a large number of false negatives. In this article, we present an efficient rebalancing method for finding video frames that contain bleeding lesions. The majority class generally has clusters of data within it. Here we cluster the majority class and under-sample each cluster based on its variance so that useful examples would not be lost during the under-sampling process. The balance of bleeding to non-bleeding frames is restored by the proposed cluster-based under-sampling and oversampling using Synthetic Minority Over-sampling Technique (SMOTE). Experiments were conducted using synthetic data and videos manually annotated by medical specialists for obscure bleeding detection. Our method achieved a high average sensitivity and specificity.
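
    A simplified sketch of this rebalancing idea, clustering the majority class, under-sampling each cluster, then oversampling the minority class, is shown below in Python. It uses k-means, fixed per-cluster quotas rather than the paper's variance-based sampling, and imbalanced-learn's SMOTE as stand-ins; the feature data are synthetic and the library is assumed to be installed.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from imblearn.over_sampling import SMOTE   # assumes imbalanced-learn is installed

    def rebalance(X_majority, X_minority, n_clusters=5, keep_per_cluster=20, seed=0):
        """Cluster-based under-sampling of the majority class followed by SMOTE.

        Each k-means cluster of the majority class contributes at most
        `keep_per_cluster` examples, then SMOTE restores class balance.
        """
        rng = np.random.default_rng(seed)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_majority)
        kept = []
        for c in range(n_clusters):
            idx = np.flatnonzero(labels == c)
            kept.append(rng.choice(idx, size=min(keep_per_cluster, idx.size), replace=False))
        X_maj_small = X_majority[np.concatenate(kept)]
        X = np.vstack([X_maj_small, X_minority])
        y = np.array([0] * len(X_maj_small) + [1] * len(X_minority))
        return SMOTE(random_state=seed).fit_resample(X, y)

    # Hypothetical frame features: 500 non-bleeding vs 30 bleeding frames.
    X_bal, y_bal = rebalance(np.random.rand(500, 8), np.random.rand(30, 8) + 1.0)
    print(np.bincount(y_bal))   # balanced class counts after resampling
    ```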

  3. Automated Image Analysis Corrosion Working Group Update: February 1, 2018

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-01

    These are slides for the automated image analysis corrosion working group update. The overall goals were: automate the detection and quantification of features in images (faster, more accurate), how to do this (obtain data, analyze data), focus on Laser Scanning Confocal Microscope (LCM) data (laser intensity, laser height/depth, optical RGB, optical plus laser RGB).

  4. Automated Aesthetic Analysis of Photographic Images.

    Science.gov (United States)

    Aydın, Tunç Ozan; Smolic, Aljoscha; Gross, Markus

    2015-01-01

    We present a perceptually calibrated system for automatic aesthetic evaluation of photographic images. Our work builds upon the concepts of no-reference image quality assessment, with the main difference being our focus on rating image aesthetic attributes rather than detecting image distortions. In contrast to recent attempts at highly subjective aesthetic judgment problems such as binary aesthetic classification and the prediction of an image's overall aesthetics rating, our method aims at providing a reliable objective basis of comparison between aesthetic properties of different photographs. To that end our system computes perceptually calibrated ratings for a set of fundamental and meaningful aesthetic attributes that together form an "aesthetic signature" of an image. We show that aesthetic signatures can still be used to improve upon the current state-of-the-art in automatic aesthetic judgment, but also enable interesting new photo editing applications such as automated aesthetic analysis, HDR tone mapping evaluation, and providing aesthetic feedback during multi-scale contrast manipulation.

  5. Automated and connected vehicle implications and analysis.

    Science.gov (United States)

    2017-05-01

    Automated and connected vehicles (ACV) and, in particular, autonomous vehicles have captured : the interest of the public, industry and transportation authorities. ACVs can significantly reduce : accidents, fuel consumption, pollution and the costs o...

  6. Automated counting and analysis of etched tracks in CR-39 plastic

    International Nuclear Information System (INIS)

    Majborn, B.

    1986-01-01

    An image analysis system has been set up which is capable of automated counting and analysis of etched nuclear particle tracks in plastic. The system is composed of an optical microscope, CCD camera, frame grabber, personal computer, monitor, and printer. The frame grabber acquires and displays images at video rate. It has a spatial resolution of 512 x 512 pixels with 8 bits of digitisation corresponding to 256 grey levels. The software has been developed for general image processing and adapted for the present purpose. Comparisons of automated and visual microscope counting of tracks in chemically etched CR-39 detectors are presented with emphasis on results of interest for practical radon measurements or neutron dosimetry, e.g. calibration factors, background track densities and variations in background. (author)
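
    A minimal sketch of how etched tracks might be counted in such an 8-bit image, by grey-level thresholding followed by connected-component labelling, is given below. This illustrates the general approach rather than the system's actual software; the threshold, size filter and synthetic image are hypothetical.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_tracks(gray, threshold=100, min_pixels=5):
        """Count dark etched tracks in an 8-bit greyscale image.

        Pixels darker than `threshold` (0-255) are taken as track material;
        connected components smaller than `min_pixels` are rejected as noise.
        """
        binary = gray < threshold
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        return int(np.count_nonzero(np.asarray(sizes) >= min_pixels))

    # Synthetic 512 x 512 field with three dark "tracks" on a bright background.
    img = np.full((512, 512), 200, dtype=np.uint8)
    img[100:110, 100:110] = 30
    img[300:306, 200:206] = 40
    img[400:420, 400:404] = 20
    print(count_tracks(img))   # -> 3
    ```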

  7. Quantitative assessment of human motion using video motion analysis

    Science.gov (United States)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  8. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  9. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional

  10. Studying the movement behaviour of benthic macroinvertebrates with automated video tracking

    NARCIS (Netherlands)

    Augusiak, J.A.; Brink, van den P.J.

    2015-01-01

    Quantifying and understanding movement is critical for a wide range of questions in basic and applied ecology. Movement ecology is also fostered by technological advances that allow automated tracking for a wide range of animal species. However, for aquatic macroinvertebrates, such detailed methods

  11. Measuring energy expenditure in sports by thermal video analysis

    DEFF Research Database (Denmark)

    Gade, Rikke; Larsen, Ryan Godsk; Moeslund, Thomas B.

    2017-01-01

    Estimation of human energy expenditure in sports and exercise contributes to performance analyses and tracking of physical activity levels. The focus of this work is to develop a video-based method for estimation of energy expenditure in athletes. We propose a method using thermal video analysis...... of oxygen uptake. These initial experiments indicate a correlation between estimated step frequency and oxygen uptake. Based on the preliminary results we conclude that the proposed method has potential as a future non-invasive approach to estimate energy expenditure during sports....

  12. Drinking during marathon running in extreme heat: a video analysis ...

    African Journals Online (AJOL)

    Objective. To assess the drinking behaviours of top competitors during an Olympic marathon. Methods. Retrospective video analysis of the top four finishers in both the male and female 2004 Athens Olympic marathons plus the pre-race favourite in the female race in order to assess total time spent drinking. One male and ...

  13. Video Analysis of Musculoskeletal Injuries in Nigerian and English ...

    African Journals Online (AJOL)

    Video Analysis of Musculoskeletal Injuries in Nigerian and English Professional Soccer Leagues: A Comparative Study. ... The knee and the ankle were the most common injured parts. Most injuries were caused by tackling ... Keywords: Soccer Players, Nigerian Premier League, English Premier League. Musculoskeletal ...

  14. A video-polygraphic analysis of the cataplectic attack

    DEFF Research Database (Denmark)

    Rubboli, G; d'Orsi, G; Zaniboni, A

    2000-01-01

    OBJECTIVES AND METHODS: To perform a video-polygraphic analysis of 11 cataplectic attacks in a 39-year-old narcoleptic patient, correlating clinical manifestations with polygraphic findings. Polygraphic recordings monitored EEG, EMG activity from several cranial, trunk, upper and lower limbs musc...

  15. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
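
    As a minimal illustration of analysing pairwise preference data with binary logistic regression (a Bradley-Terry-style analogue of Thurstone scaling, not the authors' exact analysis), the Python sketch below fits scale values for a few hypothetical enhancement levels; the comparison data are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def preference_scale(pairs, outcomes, n_items):
        """Estimate a perceptual preference scale from pairwise comparisons.

        pairs: list of (i, j) item indices shown together;
        outcomes: 1 if item i was preferred, 0 if item j was preferred.
        Each trial becomes a +1/-1 design row, so the fitted coefficients
        act as scale values; item 0 is pinned to 0 as the reference level.
        """
        X = np.zeros((len(pairs), n_items))
        for row, (i, j) in enumerate(pairs):
            X[row, i], X[row, j] = 1.0, -1.0
        X = X[:, 1:]                                     # drop item 0 -> reference
        model = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000)
        model.fit(X, outcomes)                           # large C ~ unpenalised fit
        return np.concatenate([[0.0], model.coef_.ravel()])

    # Hypothetical preferences among 3 enhancement levels (0 = original video).
    pairs = [(0, 1), (0, 1), (1, 2), (1, 2), (0, 2), (0, 2), (2, 1), (1, 0)]
    wins = [0, 0, 0, 1, 0, 0, 1, 1]
    print(preference_scale(pairs, wins, n_items=3))
    ```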

  16. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.

  17. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper studies the transmission of a Digital Video Broadcasting system streaming video at 640 × 480 resolution under different IQ rates and modulation schemes. Distortion often occurs during video transmission, so the received video has poor quality. Key-frame selection algorithms adapt flexibly to changes in the video, but they discard the temporal information of the video sequence. To minimize the distortion between the original and received video, we add a sequential distortion minimization algorithm whose aim is to reconstruct, sequentially, a video close to the original without significant loss of content. The reliability of the video transmission was assessed from the constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also compared with and without SEDIM (Sequential Distortion Minimization Method). The experiments showed that the average PSNR (Peak Signal to Noise Ratio) of the transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison show that the proposed method performs well. A USRP board was used as the RF front-end at 2.2 GHz.
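
    For reference, the PSNR figures quoted above follow from the standard definition, 10 · log10(peak² / MSE). Below is a minimal Python sketch with a synthetic 640 × 480 frame; the noise level is illustrative only and unrelated to the study's data.

    ```python
    import numpy as np

    def psnr(original, received, peak=255.0):
        """Peak signal-to-noise ratio (dB) between two same-sized 8-bit frames."""
        mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
        if mse == 0:
            return float("inf")          # identical frames
        return 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical 640 x 480 frame corrupted by mild noise.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
    print(f"{psnr(frame, noisy):.1f} dB")
    ```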

  18. Support-vector-machine tree-based domain knowledge learning toward automated sports video classification

    Science.gov (United States)

    Xiao, Guoqiang; Jiang, Yang; Song, Gang; Jiang, Jianmin

    2010-12-01

    We propose a support-vector-machine (SVM) tree to hierarchically learn from domain knowledge represented by low-level features toward automatic classification of sports videos. The proposed SVM tree adopts a binary tree structure to exploit the nature of SVM's binary classification, where each internal node is a single SVM learning unit, and each external node represents the classified output type. Such an SVM tree presents a number of advantages, which include: 1. low computing cost; 2. integrated learning and classification while preserving individual SVM's learning strength; and 3. flexibility in both structure and learning modules, where different numbers of nodes and features can be added to address specific learning requirements, and various learning models can be added as individual nodes, such as neural networks, AdaBoost, hidden Markov models, dynamic Bayesian networks, etc. Experiments show that the proposed SVM tree achieves good performance in sports video classification.
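
    A minimal sketch of the binary-tree idea, in which each internal node is a single binary SVM splitting the remaining classes into two groups, is given below in Python. The hard-coded two-level tree, class names and synthetic features are hypothetical; this is not the authors' learned configuration or feature set.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    class SVMTreeNode:
        """Internal node: one binary SVM deciding between a left and right class group."""
        def __init__(self, left_classes, right_classes):
            self.left, self.right = set(left_classes), set(right_classes)
            self.svm = SVC(kernel="rbf")
            self.left_child = self.right_child = None   # None -> leaf reached

        def fit(self, X, y):
            mask = np.isin(y, list(self.left | self.right))
            side = np.isin(y[mask], list(self.left)).astype(int)   # 1 = left group
            self.svm.fit(X[mask], side)
            return self

        def predict_one(self, x):
            go_left = self.svm.predict(x.reshape(1, -1))[0] == 1
            child = self.left_child if go_left else self.right_child
            if child is None:                           # leaf: group holds one class
                return next(iter(self.left if go_left else self.right))
            return child.predict_one(x)

    # Hypothetical 4-class problem (0=soccer, 1=tennis, 2=swimming, 3=skiing).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 16)) + np.repeat(np.arange(4), 100)[:, None]
    y = np.repeat(np.arange(4), 100)
    root = SVMTreeNode([0, 1], [2, 3]).fit(X, y)
    root.left_child = SVMTreeNode([0], [1]).fit(X, y)
    root.right_child = SVMTreeNode([2], [3]).fit(X, y)
    print(root.predict_one(X[250]))   # sample drawn from class 2
    ```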

  19. Automated Gait Analysis Through Hues and Areas (AGATHA): a method to characterize the spatiotemporal pattern of rat gait

    Science.gov (United States)

    Kloefkorn, Heidi E.; Pettengill, Travis R.; Turner, Sara M. F.; Streeter, Kristi A.; Gonzalez-Rothi, Elisa J.; Fuller, David D.; Allen, Kyle D.

    2016-01-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns. PMID:27554674

  20. Automated Gait Analysis Through Hues and Areas (AGATHA): A Method to Characterize the Spatiotemporal Pattern of Rat Gait.

    Science.gov (United States)

    Kloefkorn, Heidi E; Pettengill, Travis R; Turner, Sara M F; Streeter, Kristi A; Gonzalez-Rothi, Elisa J; Fuller, David D; Allen, Kyle D

    2017-03-01

    While rodent gait analysis can quantify the behavioral consequences of disease, significant methodological differences exist between analysis platforms and little validation has been performed to understand or mitigate these sources of variance. By providing the algorithms used to quantify gait, open-source gait analysis software can be validated and used to explore methodological differences. Our group is introducing, for the first time, a fully-automated, open-source method for the characterization of rodent spatiotemporal gait patterns, termed Automated Gait Analysis Through Hues and Areas (AGATHA). This study describes how AGATHA identifies gait events, validates AGATHA relative to manual digitization methods, and utilizes AGATHA to detect gait compensations in orthopaedic and spinal cord injury models. To validate AGATHA against manual digitization, results from videos of rodent gait, recorded at 1000 frames per second (fps), were compared. To assess one common source of variance (the effects of video frame rate), these 1000 fps videos were re-sampled to mimic several lower fps and compared again. While spatial variables were indistinguishable between AGATHA and manual digitization, low video frame rates resulted in temporal errors for both methods. At frame rates over 125 fps, AGATHA achieved a comparable accuracy and precision to manual digitization for all gait variables. Moreover, AGATHA detected unique gait changes in each injury model. These data demonstrate AGATHA is an accurate and precise platform for the analysis of rodent spatiotemporal gait patterns.

  1. Automated haematology analysis to diagnose malaria

    NARCIS (Netherlands)

    Campuzano-Zuluaga, Germán; Hänscheid, Thomas; Grobusch, Martin P.

    2010-01-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO

  2. Automation of radionuclide analysis in nuclear industry

    International Nuclear Information System (INIS)

    Gostilo, V.; Sokolov, A.; Kuzmenko, V.; Kondratjev, V.

    2009-01-01

    The development results for automated precision HPGe spectrometers and systems for radionuclide analyses in the nuclear industry and environmental monitoring are presented. An automated HPGe spectrometer for radionuclide monitoring of the coolant in the primary circuit of NPPs is intended for technological monitoring of radionuclide specific activity in liquid and gaseous flows in on-line mode. An automated spectrometer based on a flowing HPGe detector with a through channel is intended for controlling the uniformity of uranium and/or plutonium distribution in fresh fuel elements transferred through the detector, as well as for on-line control of fluid and gas flows with low activity. An automated monitoring system for radionuclide volumetric activity in the outlet channels of NPPs is intended for radionuclide monitoring of water reservoirs in regions of nuclear weapons testing, near nuclear storage facilities, nuclear power plants and other nuclear energy installations. An autonomous HPGe spectrometer for deep-water radionuclide monitoring is applicable to the registration of gamma radionuclides distributed at water depths up to 3000 m (radioactive waste storage, wrecks of atomic ships, lost nuclear charges, atomic industry technological waste releases, etc.). (authors)

  3. Video analysis of injuries and incidents in Norwegian professional football.

    Science.gov (United States)

    Andersen, T E; Tenga, A; Engebretsen, L; Bahr, R

    2004-10-01

    This study describes the characteristics of injuries and high risk situations in the Norwegian professional football league during one competitive season using Football Incident Analysis (FIA), a video based method. Videotapes and injury information were collected prospectively for 174 of 182 (96%) regular league matches during the 2000 season. Incidents where the match was interrupted due to an assumed injury were analysed using FIA to examine the characteristics of the playing situation causing the incident. Club medical staff prospectively recorded all acute injuries on a specific injury questionnaire. Each incident identified on the videotapes was cross referenced with the injury report. During the 174 matches, 425 incidents were recorded and 121 acute injuries were reported. Of these 121 injuries, 52 (43%) were identified on video including all head injuries, 58% of knee injuries, 56% of ankle injuries, and 29% of thigh injuries. Strikers were more susceptible to injury than other players and although most of the incidents and injuries resulted from duels, no single classic injury situation typical for football injuries or incidents could be recognised. However, in most cases the exposed player seemed to be unaware of the opponent challenging him for ball possession. This study shows that in spite of a thorough video analysis less than half of the injuries are identified on video. It is difficult to identify typical patterns in the playing events leading to incidents and injuries, but players seemed to be unaware of the opponent challenging them for ball possession.

  4. Multiple-Instance Learning for Medical Image and Video Analysis.

    Science.gov (United States)

    Quellec, Gwenole; Cazuguel, Guy; Cochener, Beatrice; Lamard, Mathieu

    2017-01-01

    Multiple-instance learning (MIL) is a recent machine-learning paradigm that is particularly well suited to medical image and video analysis (MIVA) tasks. Based solely on class labels assigned globally to images or videos, MIL algorithms learn to detect relevant patterns locally in images or videos. These patterns are then used for classification at a global level. Because supervision relies on global labels, manual segmentations are not needed to train MIL algorithms, unlike traditional single-instance learning (SIL) algorithms. Consequently, these solutions are attracting increasing interest from the MIVA community: since the term was coined by Dietterich et al. in 1997, 73 research papers about MIL have been published in the MIVA literature. This paper reviews the existing strategies for modeling MIVA tasks as MIL problems, recommends general-purpose MIL algorithms for each type of MIVA tasks, and discusses MIVA-specific MIL algorithms. Various experiments performed in medical image and video datasets are compiled in order to back up these discussions. This meta-analysis shows that, besides being more convenient than SIL solutions, MIL algorithms are also more accurate in many cases. In other words, MIL is the ideal solution for many MIVA tasks. Recent trends are discussed, and future directions are proposed for this emerging paradigm.
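    A minimal sketch of the core MIL idea summarized above: a bag (an image or video) is classified from instance-level scores without any local annotation. The max-score aggregation and the generic instance scorer are illustrative assumptions, not a particular algorithm from the review.

    import numpy as np

    def bag_score(instances: np.ndarray, instance_scorer) -> float:
        # A bag is as suspicious as its most suspicious instance (max pooling).
        return float(max(instance_scorer(x) for x in instances))

    def predict_bag(instances: np.ndarray, instance_scorer, threshold: float = 0.5) -> int:
        # Positive bag label if any instance scores above the threshold.
        return int(bag_score(instances, instance_scorer) >= threshold)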

  5. Automated Steel Cleanliness Analysis Tool (ASCAT)

    Energy Technology Data Exchange (ETDEWEB)

    Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)

    2005-12-30

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCATTM) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steelmaking process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, is crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment

  6. Automated Steel Cleanliness Analysis Tool (ASCAT)

    International Nuclear Information System (INIS)

    Gary Casuccio; Michael Potter; Fred Schwerer; Richard J. Fruehan; Dr. Scott Story

    2005-01-01

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCATTM) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steelmaking process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, is crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment/steel cleanliness; slab, billet

  7. Using video analysis for concussion surveillance in Australian football.

    Science.gov (United States)

    Makdissi, Michael; Davis, Gavin

    2016-12-01

    The objectives of the study were to assess the relationship between various player and game factors and risk of concussion, and to assess the reliability of video analysis for mechanistic assessment of concussion in Australian football. Prospective cohort study. All impacts and collisions resulting in concussion were identified during the 2011 Australian Football League season. An extensive list of factors for assessment was created based upon previous analysis of concussion in the Australian Football League and expert opinions. The authors independently reviewed the video clips, and correlation for each factor was examined. A total of 82 concussions were reported in 194 games (rate: 8.7 concussions per 1000 match hours; 95% confidence interval: 6.9-10.5). Player demographics and game variables such as venue, timing of the game (day, night or twilight), quarter, travel status (home or interstate) or score margin did not demonstrate a significant relationship with risk of concussion, although a higher percentage of concussions occurred in the first 5 min of game time of the quarter (36.6%) compared with the last 5 min (20.7%). Variables with good inter-rater agreement included position on the ground, circumstances of the injury and cause of the impact. The remainder of the variables assessed had fair to poor inter-rater agreement. Common problems included insufficient or poor-quality video and interpretation issues related to the definitions used. Clear definitions and good-quality video from multiple camera angles are required to improve the utility of video analysis for concussion surveillance in Australian football. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  8. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand" requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  9. A video-polygraphic analysis of the cataplectic attack

    DEFF Research Database (Denmark)

    Rubboli, G; d'Orsi, G; Zaniboni, A

    2000-01-01

    OBJECTIVES AND METHODS: To perform a video-polygraphic analysis of 11 cataplectic attacks in a 39-year-old narcoleptic patient, correlating clinical manifestations with polygraphic findings. Polygraphic recordings monitored EEG, EMG activity from several cranial, trunk, upper and lower limbs...... muscles, eye movements, EKG, thoracic respiration. RESULTS: Eleven attacks were recorded, all of them lasting less than 1 min and ending with the fall of the patient to the ground. We identified, based on the video-polygraphic analysis of the episodes, 3 phases: initial phase, characterized essentially...... with bradycardia, that was maximal during the atonic phase. CONCLUSIONS: Analysis of the muscular phenomena that characterize cataplectic attacks in a standing patient suggests that the cataplectic fall occurs with a pattern that might result from the interaction between neuronal networks mediating muscular atonia...

  10. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Full Text Available Abstract Background In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison to manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determinations of cell front lines, with the advantages of full automation, objectivity, and speed.

  11. Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development

    Science.gov (United States)

    Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars

    2018-01-01

    In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis…

  12. Power Analysis of an Automated Dynamic Cone Penetrometer

    Science.gov (United States)

    2015-09-01

    ARL-TR-7494, SEP 2015, US Army Research Laboratory. Power Analysis of an Automated Dynamic Cone Penetrometer, by C Wesley Tipton IV and Donald H Porschet, Sensors and Electron Devices Directorate, ARL.

  13. A video automated system for nuclear data in the ENDF format

    International Nuclear Information System (INIS)

    Oliveira Silva, O. de; Corcuera, R.P.; Ferreira, P.A.; Moraes Cunha, M. de

    1992-01-01

    This paper presents a video catalogue for libraries in the ENDF-5 or ENDF-6 format (Evaluated Nuclear Data File) which can be run on an IBM-PC computer. This user-friendly catalogue is of interest to nuclear and reactor physics researchers. The input is the filename of ENDF data and the output is two files giving: a) the list of materials with corresponding laboratory, author and date of evaluation; b) information on the MF and MT numbers for each material. The program is written in the C language, whose capability of providing windows and interrupts, along with its speed and portability, has been greatly exploited. The system allows output of options (a) and (b) either on screen, printer or hard disk. (author)

  14. Learning skill-defining latent space in video-based analysis of surgical expertise - a multi-stream fusion approach.

    Science.gov (United States)

    Chen, Lin; Zhang, Qiang; Tian, Qiongjie; Li, Baoxin

    2013-01-01

    In recent years, surgical simulation has emerged at the forefront of new technologies for improving the education and training of surgical residents. To objectively evaluate the surgical skills of the trainees and reduce the training cost, an automated method for rating the performance of the operator is critical. However, automated evaluation of surgical skills in a video-based system, e.g., the FLS trainer box, is still a challenging task, both due to the lack of reliable visual features and the lack of analysis tools that bridge the semantic gap between the low-level visual features and the high-level surgical skills. This study attempts to find a latent space for the visual features for supporting more meaningful analysis of surgical skills. The approach employs multi-modality fusion and Canonical Correlation Analysis as the key techniques. Experiments were designed to evaluate the proposed approach. The results suggest that this is a promising direction.
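    A minimal sketch of the multi-stream fusion step using Canonical Correlation Analysis; the two feature matrices and the number of components are assumptions for illustration, not the features used in the study.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X_motion = rng.normal(size=(40, 30))       # one visual feature stream (assumed)
    X_appearance = rng.normal(size=(40, 25))   # a second feature stream (assumed)

    cca = CCA(n_components=5)
    Z_motion, Z_appearance = cca.fit_transform(X_motion, X_appearance)
    # Concatenating the projected views yields a fused latent representation
    # that a downstream skill classifier could operate on.
    Z_fused = np.hstack([Z_motion, Z_appearance])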

  15. Video Analysis of Eddy Structures from Explosive Volcanic Eruptions

    Science.gov (United States)

    Fisher, M. A.; Kobs-Nawotniak, S. E.

    2013-12-01

    We present a method of analyzing turbulent eddy structures in explosive volcanic eruptions using high-definition video. Film from the eruption of Sakurajima on 25 September 2011 was analyzed using a modified version of FlowJ, a Java-based toolbox released by the National Institutes of Health. Using the Lucas and Kanade algorithm with a Gaussian derivative gradient, it tracks the change in pixel position over a 23-image buffer to determine the optical flow. This technique assumes that the optical flow, which is the apparent motion of the pixels, is equivalent to the actual flow field. We calculated three flow fields per second for the duration of the video. FlowJ outputs flow fields in pixels per frame that were then converted to meters per second in Matlab using a known distance and video rate. We constructed a low-pass filter using proper orthogonal decomposition (POD) and critical point analysis to identify the underlying eddy structure with boundaries determined by tracing the flow lines. We calculated the area of each eddy and noted its position over a series of velocity fields. The changes in shape and position were tracked to determine the eddy growth rate and overall eddy rising velocity. The eddies grow in size 1.5 times quicker than they rise vertically. Presently, this method is most successful in high-contrast videos when there is little to no effect of wind on the plumes. Additionally, the pixel movement from the video images represents a 2D flow with no depth, while the actual flow is three-dimensional; we are continuing to develop an algorithm that will allow 3D reprojection of the 2D data. Flow in the y-direction lessens the overall velocity magnitude as the true flow motion has a larger y-direction component. POD, which only uses the pattern of the flow, and analysis of the critical points (points where flow is zero) is used to determine the shape of the eddies. The method allows for video recorded at remote distances to be used to study eruption dynamics
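    A minimal sketch of estimating a dense flow field between two plume frames and scaling it to metres per second; OpenCV's Farneback method stands in here for FlowJ's Lucas-Kanade implementation, and the pixel scale and frame rate are assumptions.

    import cv2
    import numpy as np

    def flow_field_mps(prev_gray: np.ndarray, next_gray: np.ndarray,
                       metres_per_pixel: float, fps: float) -> np.ndarray:
        # Dense optical flow in pixels/frame, converted to m/s in x and y.
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        return flow * metres_per_pixel * fps  # shape (H, W, 2): vx, vy per pixel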

  16. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in a video stream is one of the most interesting problems in computer vision. In fact, in most cases it would be desirable to have a fire detection algorithm implemented in ordinary industrial cameras and/or to be able to replace standard industrial cameras with ones implementing the fire detection algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time with statistical analysis of their trajectories. False alarms are minimized by combining multiple detection criteria: pixel brightness, trajectories of suspicious regions for evaluating characteristic fire flickering, and persistence of the alarm state in a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.
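    A minimal sketch of the first stage only (flagging suspiciously bright regions that would then be tracked over time); the brightness threshold and morphological clean-up are illustrative assumptions, and the trajectory statistics and flicker analysis of the paper are not reproduced.

    import cv2
    import numpy as np

    def candidate_fire_mask(frame_bgr: np.ndarray, brightness_threshold: int = 200) -> np.ndarray:
        # Binary mask of bright regions that are candidates for fire tracking.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
        # Remove single-pixel noise before tracking regions across frames.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)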

  17. Automated analysis of brachial ultrasound time series

    Science.gov (United States)

    Liang, Weidong; Browning, Roger L.; Lauer, Ronald M.; Sonka, Milan

    1998-07-01

    Atherosclerosis begins in childhood with the accumulation of lipid in the intima of arteries to form fatty streaks, and advances through adult life, when occlusive vascular disease may result in coronary heart disease, stroke and peripheral vascular disease. Non-invasive B-mode ultrasound has been found useful in studying risk factors in the symptom-free population. A large amount of data is acquired from continuous imaging of the vessels in a large study population. A high-quality brachial vessel diameter measurement method is necessary so that accurate diameters can be measured consistently in all frames in a sequence, across different observers. Though a human expert has the advantage over automated computer methods in recognizing noise during diameter measurement, manual measurement suffers from inter- and intra-observer variability. It is also time-consuming. An automated measurement method is presented in this paper which utilizes quality assurance approaches to adapt to specific image features, to recognize and minimize the noise effect. Experimental results showed the method's potential for clinical usage in epidemiological studies.

  18. Initial development of an automated task analysis profiling system

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1984-01-01

    A program for automated task analysis is described. Called TAPS (task analysis profiling system), the program accepts normal English prose and outputs skills, knowledges, attitudes, and abilities (SKAAs) along with specific guidance and recommended ability measurement tests for nuclear power plant operators. A new method for defining SKAAs is presented along with a sample program output

  19. Analysis of Trinity Power Metrics for Automated Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for analysis, tools used, the methodology, work performed during the summer, and future work planned.

  20. Development of automated system of heavy water analysis

    International Nuclear Information System (INIS)

    Fedorchenko, O.A.; Novozhilov, V.A.; Trenin, V.D.

    1993-01-01

    Application of traditional methods of qualitative and quantitative control of coolant (moderator) for the analysis of heavy water with high tritium content presents many difficulties and an inevitable accumulation of wastes that many facilities will not accept. This report describes an automated system for heavy water sampling and analysis

  1. Video analysis of concussion injury mechanism in under-18 rugby

    Science.gov (United States)

    Hendricks, Sharief; O'Connor, Sam; Lambert, Michael; Brown, James C; Burger, Nicholas; Mc Fie, Sarah; Readhead, Clint; Viljoen, Wayne

    2016-01-01

    Background Understanding the mechanism of injury is necessary for the development of effective injury prevention strategies. Video analysis of injuries provides valuable information on the playing situation and athlete-movement patterns, which can be used to formulate these strategies. Therefore, we conducted a video analysis of the mechanism of concussion injury in junior-level rugby union and compared it with a representative and matched non-injury sample. Methods Injury reports for 18 concussion events were collected from the 2011 to 2013 under-18 Craven Week tournaments. Also, video footage was recorded for all 3 years. On the basis of the injury events, a representative ‘control’ sample of matched non-injury events in the same players was identified. The video footage, which had been recorded at each tournament, was then retrospectively analysed and coded. 10 injury events (5 tackle, 4 ruck, 1 aerial collision) and 83 non-injury events were analysed. Results All concussions were a result of contact with an opponent and 60% of players were unaware of the impending contact. For the measurement of head position on contact, 43% had a ‘down’ position, 29% the ‘up and forward’ and 29% the ‘away’ position (n=7). The speed of the injured tackler was observed as ‘slow’ in 60% of injurious tackles (n=5). In 3 of the 4 rucks in which injury occurred (75%), the concussed player was acting defensively either in the capacity of ‘support’ (n=2) or as the ‘jackal’ (n=1). Conclusions Training interventions aimed at improving peripheral vision, strengthening of the cervical muscles, targeted conditioning programmes to reduce the effects of fatigue, and emphasising safe and effective playing techniques have the potential to reduce the risk of sustaining a concussion injury. PMID:27900149

  2. Video analysis of concussion injury mechanism in under-18 rugby.

    Science.gov (United States)

    Hendricks, Sharief; O'Connor, Sam; Lambert, Michael; Brown, James C; Burger, Nicholas; Mc Fie, Sarah; Readhead, Clint; Viljoen, Wayne

    2016-01-01

    Understanding the mechanism of injury is necessary for the development of effective injury prevention strategies. Video analysis of injuries provides valuable information on the playing situation and athlete-movement patterns, which can be used to formulate these strategies. Therefore, we conducted a video analysis of the mechanism of concussion injury in junior-level rugby union and compared it with a representative and matched non-injury sample. Injury reports for 18 concussion events were collected from the 2011 to 2013 under-18 Craven Week tournaments. Also, video footage was recorded for all 3 years. On the basis of the injury events, a representative 'control' sample of matched non-injury events in the same players was identified. The video footage, which had been recorded at each tournament, was then retrospectively analysed and coded. 10 injury events (5 tackle, 4 ruck, 1 aerial collision) and 83 non-injury events were analysed. All concussions were a result of contact with an opponent and 60% of players were unaware of the impending contact. For the measurement of head position on contact, 43% had a 'down' position, 29% the 'up and forward' and 29% the 'away' position (n=7). The speed of the injured tackler was observed as 'slow' in 60% of injurious tackles (n=5). In 3 of the 4 rucks in which injury occurred (75%), the concussed player was acting defensively either in the capacity of 'support' (n=2) or as the 'jackal' (n=1). Training interventions aimed at improving peripheral vision, strengthening of the cervical muscles, targeted conditioning programmes to reduce the effects of fatigue, and emphasising safe and effective playing techniques have the potential to reduce the risk of sustaining a concussion injury.

  3. Flow injection analysis: Emerging tool for laboratory automation in radiochemistry

    International Nuclear Information System (INIS)

    Egorov, O.; Ruzicka, J.; Grate, J.W.; Janata, J.

    1996-01-01

    Automation of routine and serial assays is a common practice of the modern analytical laboratory, while it is virtually nonexistent in the field of radiochemistry. Flow injection analysis (FIA) is a general solution handling methodology that has been extensively used for automation of routine assays in many areas of analytical chemistry. Reproducible automated solution handling and on-line separation capabilities are among several distinctive features that make FI a very promising, yet underutilized tool for automation in analytical radiochemistry. The potential of the technique is demonstrated through the development of an automated 90Sr analyzer and its application in the analysis of tank waste samples from the Hanford site. Sequential injection (SI), the latest generation of FIA, is used to rapidly separate 90Sr from interfering radionuclides and deliver the separated Sr zone to a flow-through liquid scintillation detector. The separation is performed on a mini column containing Sr-specific sorbent extraction material, which selectively retains Sr under acidic conditions. The 90Sr is eluted with water, mixed with scintillation cocktail, and sent through the flow cell of a flow-through counter, where 90Sr radioactivity is detected as a transient signal. Both peak area and peak height can be used for quantification of sample radioactivity. Alternatively, stopped-flow detection can be performed to improve detection precision for low-activity samples. The authors' current research activities are focused on expansion of radiochemical applications of FIA methodology, with an ultimate goal of creating a set of automated methods that will cover the basic needs of radiochemical analysis at the Hanford site. The results of preliminary experiments indicate that FIA is a highly suitable technique for the automation of chemically more challenging separations, such as separation of actinide elements

  4. HITCal: a software tool for analysis of video head impulse test responses.

    Science.gov (United States)

    Rey-Martinez, Jorge; Batuecas-Caletrio, Angel; Matiño, Eusebi; Perez Fernandez, Nicolás

    2015-09-01

    The developed software (HITCal) may be a useful tool in the analysis and measurement of saccadic video head impulse test (vHIT) responses, and based on the experience obtained during its use, the authors suggest that HITCal is an excellent method for enhanced exploration of vHIT outputs. The objective was to develop a (software) method to analyze and explore vHIT responses, mainly saccades. HITCal was written using a computational development program; the function to access a vHIT file was programmed; extended head impulse exploration and measurement tools were created; and an automated saccade analysis was developed using an experimental algorithm. For pre-release HITCal laboratory tests, a database of head impulse tests (HITs) was created with data collected retrospectively in three reference centers. This HIT database was evaluated by humans and was also computed with HITCal. The authors have successfully built HITCal, and it has been released as open-source software; the developed software was fully operative and all the proposed characteristics were incorporated in the released version. The automated saccade algorithm implemented in HITCal has good concordance with the assessment by human observers (Cohen's kappa coefficient = 0.7).
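    A minimal sketch of the agreement check quoted above, computing Cohen's kappa between human- and algorithm-flagged saccades; the binary labels are illustrative assumptions, not data from the study.

    from sklearn.metrics import cohen_kappa_score

    human     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = saccade marked by an observer (assumed)
    algorithm = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # 1 = saccade flagged by the software (assumed)
    # Prints the chance-corrected agreement between the two raters; the study
    # above reports kappa = 0.7 between HITCal and human observers.
    print(cohen_kappa_score(human, algorithm))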

  5. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of Business and Information Technologies elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered the research of automated analysis methods, for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; then, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  6. Automation of reactor neutron activation analysis

    International Nuclear Information System (INIS)

    Pavlov, S.S.; Dmitriev, A.Yu.; Frontasyeva, M.V.

    2013-01-01

    The present status of the development of a software package designed for automation of NAA at the IBR-2 reactor of FLNP, JINR, Dubna, is reported. Following decisions adopted at the CRP Meeting in Delft, August 27-31, 2012, the missing tool - a sample changer - will be installed for NAA in compliance with the peculiar features of the radioanalytical laboratory REGATA at the IBR-2 reactor. The details of the design are presented. The software for operation with the sample changer consists of two parts. The first part is a user interface and the second one is a program to control the sample changer. The second part will be developed after installing the tool.

  7. Automated sensitivity analysis using the GRESS language

    International Nuclear Information System (INIS)

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies

  8. Learning Methods for Dynamic Topic Modeling in Automated Behavior Analysis.

    Science.gov (United States)

    Isupova, Olga; Kuzin, Danil; Mihaylova, Lyudmila

    2017-09-27

    Semisupervised and unsupervised systems provide operators with invaluable support and can tremendously reduce the operators' load. In the light of the necessity to process large volumes of video data and provide autonomous decisions, this paper proposes new learning algorithms for activity analysis in video. The activities and behaviors are described by a dynamic topic model. Two novel learning algorithms based on the expectation maximization approach and variational Bayes inference are proposed. Theoretical derivations of the posterior estimates of model parameters are given. The designed learning algorithms are compared with the Gibbs sampling inference scheme introduced earlier in the literature. A detailed comparison of the learning algorithms is presented on real video data. We also propose an anomaly localization procedure, elegantly embedded in the topic modeling framework. It is shown that the developed learning algorithms can achieve 95% success rate. The proposed framework can be applied to a number of areas, including transportation systems, security, and surveillance.

  9. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    Science.gov (United States)

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at fish lateral position/mean cross-sectional velocity), as well as number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.
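    A minimal sketch of two of the spatial behaviour metrics named above, computed from a digitized (N, 2) position track; the path-complexity definition used here (path length divided by net displacement) is an assumption, as the paper's exact formula is not given.

    import numpy as np

    def ground_speed(xy: np.ndarray, fps: float) -> np.ndarray:
        # Instantaneous ground speed (track units per second) between frames.
        step_lengths = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        return step_lengths * fps

    def path_complexity(xy: np.ndarray) -> float:
        # Distance travelled divided by straight-line displacement (>= 1).
        travelled = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
        net = np.linalg.norm(xy[-1] - xy[0])
        return float(travelled / net) if net > 0 else float("inf")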

  10. YouTube™ as a Source of Instructional Videos on Bowel Preparation: a Content Analysis.

    Science.gov (United States)

    Ajumobi, Adewale B; Malakouti, Mazyar; Bullen, Alexander; Ahaneku, Hycienth; Lunsford, Tisha N

    2016-12-01

    Instructional videos on bowel preparation have been shown to improve bowel preparation scores during colonoscopy. YouTube™ is one of the most frequently visited websites on the internet and contains videos on bowel preparation. In an era where patients are increasingly turning to social media for guidance on their health, the content of these videos merits further investigation. We assessed the content of bowel preparation videos available on YouTube™ to determine the proportion of YouTube™ videos on bowel preparation that are high-content videos and the characteristics of these videos. YouTube™ videos were assessed for the following content: (1) definition of bowel preparation, (2) importance of bowel preparation, (3) instructions on home medications, (4) name of bowel cleansing agent (BCA), (5) instructions on when to start taking BCA, (6) instructions on volume and frequency of BCA intake, (7) diet instructions, (8) instructions on fluid intake, (9) adverse events associated with BCA, and (10) rectal effluent. Each content parameter was given 1 point for a total of 10 points. Videos with ≥5 points were considered by our group to be high-content videos. Videos with ≤4 points were considered low-content videos. Forty-nine (59%) videos were low-content videos while 34 (41%) were high-content videos. There was no association between number of views, number of comments, thumbs up, thumbs down or engagement score, and videos deemed high-content. Multiple regression analysis revealed bowel preparation videos on YouTube™ with length >4 minutes and non-patient authorship to be associated with high-content videos.

  11. Accurate automated apnea analysis in preterm infants.

    Science.gov (United States)

    Vergales, Brooke D; Paget-Brown, Alix O; Lee, Hoshik; Guin, Lauren E; Smoot, Terri J; Rusin, Craig G; Clark, Matthew T; Delos, John B; Fairchild, Karen D; Lake, Douglas E; Moorman, Randall; Kattwinkel, John

    2014-02-01

    In 2006 the apnea of prematurity (AOP) consensus group identified inaccurate counting of apnea episodes as a major barrier to progress in AOP research. We compare nursing records of AOP to events detected by a clinically validated computer algorithm that detects apnea from standard bedside monitors. Waveform, vital sign, and alarm data were collected continuously from all very low-birth-weight infants admitted over a 25-month period, analyzed for central apnea, bradycardia, and desaturation (ABD) events, and compared with nursing documentation collected from charts. Our algorithm defined apnea as a respiratory pause > 10 seconds if accompanied by bradycardia and desaturation. Of the 3,019 nurse-recorded events, only 68% had any algorithm-detected ABD event. Of the 5,275 algorithm-detected prolonged apnea events > 30 seconds, only 26% had nurse-recorded documentation within 1 hour. Monitor alarms sounded in only 74% of algorithm-detected prolonged apnea events > 10 seconds. There were 8,190,418 monitor alarms of any description throughout the neonatal intensive care unit during the 747 days analyzed, or one alarm every 2 to 3 minutes per nurse. An automated computer algorithm for continuous ABD quantitation is a far more reliable tool than the medical record to address the important research questions identified by the 2006 AOP consensus group.
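    A minimal sketch in the spirit of the rule described above, flagging a candidate apnea-bradycardia-desaturation (ABD) event when a breathing pause longer than 10 s coincides with low heart rate and low oxygen saturation; the signal representation, sampling rate, and thresholds are illustrative assumptions, not the validated algorithm.

    import numpy as np

    def abd_events(breathing, heart_rate, spo2, fs=1.0,
                   apnea_s=10.0, hr_threshold=100, spo2_threshold=80):
        # Return (start, end) sample indices of candidate ABD events.
        breathing, heart_rate, spo2 = map(np.asarray, (breathing, heart_rate, spo2))
        quiet = breathing < 0.1  # near-zero respiratory effort (assumed signal scale)
        events, start = [], None
        for i, q in enumerate(np.append(quiet, False)):  # sentinel closes a trailing pause
            if q and start is None:
                start = i
            elif not q and start is not None:
                long_enough = (i - start) / fs > apnea_s
                if long_enough and heart_rate[start:i].min() < hr_threshold \
                        and spo2[start:i].min() < spo2_threshold:
                    events.append((start, i))
                start = None
        return events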

  12. Feature Weighting via Optimal Thresholding for Video Analysis (Open Access)

    Science.gov (United States)

    2014-03-03

    board trick (AaBT)”, “Feeding an animal (FaA)”, “Landing a fish (LaF)”, “Wedding ceremony (WC)”, “Working on a woodworking project (WoaWP)”, “Making a...we relax it to a convex optimization problem, which is the lower bound of the original MIP problem. We then apply cutting plane algorithm to...the content analysis based on both acoustic and visual features. Since the authors have not provided the original videos of the dataset, we use the

  13. Advancements in Automated Circuit Grouping for Intellectual Property Trust Analysis

    Science.gov (United States)

    2017-03-20

    Advancements in Automated Circuit Grouping for Intellectual Property Trust Analysis. James Inge, Matthew Kwiec, Stephen Baka, John Hallman...module, a custom on-chip memory module, a custom arithmetic logic unit module, and a custom Ethernet frame check sequence generator module. Though

  14. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla

    2016-01-01

    PURPOSE: The aim of this study was to develop automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic

  15. An Automated Data Analysis Tool for Livestock Market Data

    Science.gov (United States)

    Williams, Galen S.; Raper, Kellie Curry

    2011-01-01

    This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…

  16. Automated procedure for performing computer security risk analysis

    International Nuclear Information System (INIS)

    Smith, S.T.; Lim, J.J.

    1984-05-01

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures

  17. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.

  18. Cloud-based scalable object detection and classification in video streams

    OpenAIRE

    Yaseen, Muhammad Usman; Anjum, Ashiq; Rana, Omer; Hill, Richard

    2017-01-01

    Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record an image/video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for useful content can be time-consuming and error-prone; it therefore requires automated analysis to extract useful information and metadata. Existing video analysis systems lack automation and scalability and operate under a supervised learning domain, requi...

  19. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide a higher-resolution temporal and spatial dataset for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
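    A minimal sketch of the evaluation described above: a nonlinear (RBF-kernel) SVM scored with leave-one-out cross-validation; the feature matrix and labels are random stand-ins, not the motion-capture dataset itself.

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(98, 20))        # 98 trials x 20 motion features (assumed)
    y = rng.integers(0, 2, size=98)      # gender labels (assumed)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    print(f"Leave-one-out accuracy: {scores.mean():.2%}")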

  20. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the rapid evolution of the Internet has led to a massive rise in video technology and, significantly, in video consumption over the Internet, which now constitutes the bulk of data traffic. Because video accounts for so much of the data on the World Wide Web, reducing the burden on the Internet and the bandwidth consumed by video makes it easier for users to access video data. For this purpose, many video codecs have been developed, such as HEVC/H.265 and V9, although comparing such codecs raises the question of which offers the better technology in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression and in video applications, e.g. ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and V9 video compression techniques on subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents the experimental approach of dividing the video file into several segments for compression and reassembling them, to improve the efficiency of video compression on the web as well as in offline mode.
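    A minimal sketch of the segmentation idea using ffmpeg's segment muxer from Python, assuming ffmpeg is available on the system path; the input filename, segment length, and stream-copy settings are illustrative assumptions, not the authors' pipeline.

    import subprocess

    def split_into_segments(input_path: str, segment_seconds: int = 10) -> None:
        # Split a video into fixed-length segments (stream copy, no re-encoding);
        # each segment can then be compressed independently and re-joined.
        subprocess.run([
            "ffmpeg", "-i", input_path,
            "-c", "copy", "-f", "segment",
            "-segment_time", str(segment_seconds),
            "-reset_timestamps", "1",
            "segment_%03d.mp4",
        ], check=True)

    split_into_segments("input.mp4")  # hypothetical filename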

  1. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications on management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (lung imaging database consortium) study.

  2. Automated analysis and design of complex structures

    International Nuclear Information System (INIS)

    Wilson, E.L.

    1977-01-01

    The present application of optimum design appears to be restricted to components of the structure rather than to the total structural system. Since design normally involves many analyses of the system, any improvement in the efficiency of the basic methods of analysis will allow more complicated systems to be designed by optimum methods. The evaluation of the risk and reliability of a structural system can be extremely important. Reliability studies have been made of many non-structural systems for which the individual components have been extensively tested and the service environment is known. For such systems the reliability studies are valid. For most structural systems, however, the properties of the components can only be estimated, and statistical data associated with the potential loads are often minimal. Also, a potentially critical loading condition may be completely neglected in the study. For these reasons, and because of the problems previously noted with the reliability of both linear and nonlinear analysis computer programs, it appears premature to place significant value on such studies for complex structures. With these comments as background, the purpose of this paper is to discuss the following: the relationship of analysis to design; new methods of analysis; new or improved finite elements; the effect of minicomputers on structural analysis methods; the use of systems of microprocessors for nonlinear structural analysis; and the role of interactive graphics systems in future analysis and design. This discussion focuses on the impact of new, inexpensive computer hardware on design and analysis methods.

  3. Automated haematology analysis to diagnose malaria

    Directory of Open Access Journals (Sweden)

    Grobusch Martin P

    2010-11-01

    Full Text Available Abstract For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for adequate treatment and for minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and clinical context where the

  4. Video Games and Youth Violence: A Prospective Analysis in Adolescents

    Science.gov (United States)

    Ferguson, Christopher J.

    2011-01-01

    The potential influence of violent video games on youth violence remains an issue of concern for psychologists, policymakers and the general public. Although several prospective studies of video game violence effects have been conducted, none have employed well validated measures of youth violence, nor considered video game violence effects in…

  5. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a BitstreamBased (BB) method or as a Pixel-Based (PB). It extracts or estimates the t...

  6. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.

  7. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  8. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open the possibility to do population studies on large samples of stars, and such population studies demand a consistent analysis. By consistent analysis we understand an analysis that can be performed without the need to make any subjective choices on e.g. mode identification, and an analysis where the uncertainties …

  9. Player-Driven Video Analysis to Enhance Reflective Soccer Practice in Talent Development

    DEFF Research Database (Denmark)

    Hjort, Anders; Henriksen, Kristoffer; Elbæk, Lars

    2018-01-01

    In the present article, we investigate the introduction of a cloud-based video analysis platform called Player Universe (PU). Video analysis is not a new performance-enhancing element in sports, but PU is innovative in how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis process, in the sense that they are encouraged to tag game actions in video-documented soccer matches. Following this, players can get virtual feedback from their coach. Findings show that PU can improve youth soccer players' reflection skills through … and enhance reflective learning for better in-game performance.

  10. Introducing Player-Driven Video Analysis to Enhance Reflective Soccer Practice

    DEFF Research Database (Denmark)

    Hjort, Anders; Elbæk, Lars; Henriksen, Kristoffer

    2017-01-01

    In the present study, we investigated the introduction of a cloud-based video analysis platform called Player Universe (PU) in a Danish football club. Video analysis is not a new performance-enhancing element in sport, but PU is innovative in the way players and coaches produce footage and how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis process, in the sense that they are encouraged to tag game actions in video-documented football matches. Following this, players can get virtual feedback from their coach. The philosophy … motivate and enhance reflective learning for better in-game performance.

  11. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  12. Automated genome sequence analysis and annotation.

    Science.gov (United States)

    Andrade, M A; Brown, N P; Leroy, C; Hoersch, S; de Daruvar, A; Reich, C; Franchini, A; Tamames, J; Valencia, A; Ouzounis, C; Sander, C

    1999-05-01

    Large-scale genome projects generate a rapidly increasing number of sequences, most of them biochemically uncharacterized. Research in bioinformatics contributes to the development of methods for the computational characterization of these sequences. However, the installation and application of these methods require experience and are time consuming. We present here an automatic system for preliminary functional annotation of protein sequences that has been applied to the analysis of sets of sequences from complete genomes, both to refine overall performance and to make new discoveries comparable to those made by human experts. The GeneQuiz system includes a Web-based browser that allows examination of the evidence leading to an automatic annotation and offers additional information, views of the results, and links to biological databases that complement the automatic analysis. System structure and operating principles concerning the use of multiple sequence databases, underlying sequence analysis tools, lexical analyses of database annotations and decision criteria for functional assignments are detailed. The system makes automatic quality assessments of results based on prior experience with the underlying sequence analysis tools; overall error rates in functional assignment are estimated at 2.5-5% for cases annotated with highest reliability ('clear' cases). Sources of over-interpretation of results are discussed with proposals for improvement. A conservative definition for reporting 'new findings' that takes account of database maturity is presented along with examples of possible kinds of discoveries (new function, family and superfamily) made by the system. System performance in relation to sequence database coverage, database dynamics and database search methods is analysed, demonstrating the inherent advantages of an integrated automatic approach using multiple databases and search methods applied in an objective and repeatable manner. The GeneQuiz system

  13. Automated Acquisition and Analysis of Digital Radiographic Images

    International Nuclear Information System (INIS)

    Poland, R.

    1999-01-01

    Engineers at the Savannah River Technology Center have designed, built, and installed a fully automated small field-of-view, lens-coupled, digital radiography imaging system. The system is installed in one of the Savannah River Site's production facilities to be used for the evaluation of production components. Custom software routines developed for the system automatically acquire, enhance, and diagnostically evaluate critical geometric features of various components that have been captured radiographically. Resolution of the digital radiograms and accuracy of the acquired measurements approach 0.001 inches. To date, there has been zero deviation in measurement repeatability. The automated image acquisition methodology will be discussed, unique enhancement algorithms will be explained, and the automated routines for measuring the critical component features will be presented. An additional feature discussed is the independent nature of the modular software components, which allows images to be automatically acquired, processed, and evaluated by the computer in the background while the operator reviews other images on the monitor. System components were also key in attaining the required image resolution. System factors such as scintillator selection, x-ray source energy, optical components and layout, as well as geometric unsharpness issues are considered in the paper. Finally, the paper examines the numerous quality improvement factors and cost saving advantages that will be realized at the Savannah River Site due to the implementation of the Automated Pinch Weld Analysis System (APWAS)

  14. Automated microscopic characterization of metallic ores with image analysis: a key to improve ore processing. I: test of the methodology

    International Nuclear Information System (INIS)

    Berrezueta, E.; Castroviejo, R.

    2007-01-01

    Ore microscopy has traditionally been an important support for the control of ore processing, but the volume of present-day processes is beyond the reach of human operators. Automation is therefore compulsory, but its development through digital image analysis, DIA, is limited by various problems, such as the similarity in reflectance values of some important ores, their anisotropism, and the performance of instruments and methods. The results presented show that automated identification and quantification by DIA are possible through multiband (RGB) determinations with a research 3CCD video camera on a reflected-light microscope. These results were obtained by systematic measurement of selected ores accounting for most of the industrial applications. Polarized light is avoided, so the effects of anisotropism can be neglected. Quality control at various stages and statistical analysis are important, as is the application of complementary criteria (e.g. metallogenetic). The sequential methodology is described and illustrated through practical examples. (Author)

  15. Tank Farm Operations Surveillance Automation Analysis

    International Nuclear Information System (INIS)

    MARQUEZ, D.L.

    2000-01-01

    The Nuclear Operations Project Services identified the need to improve manual tank farm surveillance data collection, review, distribution and storage practices, often referred to as Operator Rounds. This document provides a feasibility analysis of improving the manual data collection methods by using handheld computer units, barcode technology, a database for storage and acquisition, associated software, and operational procedures to increase the efficiency of Operator Rounds associated with surveillance activities

  16. Automated software analysis of nuclear core discharge data

    International Nuclear Information System (INIS)

    Larson, T.W.; Halbig, J.K.; Howell, J.A.; Eccleston, G.W.; Klosterbuer, S.F.

    1993-03-01

    Monitoring the fueling process of an on-load nuclear reactor is a full-time job for nuclear safeguarding agencies. Nuclear core discharge monitors (CDMS) can provide continuous, unattended recording of the reactor's fueling activity for later, qualitative review by a safeguards inspector. A quantitative analysis of this collected data could prove to be a great asset to inspectors because more information can be extracted from the data and the analysis time can be reduced considerably. This paper presents a prototype for an automated software analysis system capable of identifying when fuel bundle pushes occurred and monitoring the power level of the reactor. Neural network models were developed for calculating the region on the reactor face from which the fuel was discharged and predicting the burnup. These models were created and tested using actual data collected from a CDM system at an on-load reactor facility. Collectively, these automated quantitative analysis programs could help safeguarding agencies to gain a better perspective on the complete picture of the fueling activity of an on-load nuclear reactor. This type of system can provide a cost-effective solution for automated monitoring of on-load reactors significantly reducing time and effort

  17. Automated optics inspection analysis for NIF

    International Nuclear Information System (INIS)

    Kegelmeyer, Laura M.; Clark, Raelyn; Leach, Richard R.; McGuigan, David; Kamm, Victoria Miller; Potter, Daniel; Salmon, J. Thad; Senecal, Joshua; Conder, Alan; Nostrand, Mike; Whitman, Pamela K.

    2012-01-01

    The National Ignition Facility (NIF) is a high-energy laser facility comprised of 192 beamlines that house thousands of optics. These optics guide, amplify and tightly focus light onto a tiny target for fusion ignition research and high energy density physics experiments. The condition of these optics is key to the economic, efficient and maximally energetic performance of the laser. Our goal, and novel achievement, is to find on the optics any imperfections while they are tens of microns in size, track them through time to see if they grow and if so, remove the optic and repair the single site so the entire optic can then be re-installed for further use on the laser. This paper gives an overview of the image analysis used for detecting, measuring, and tracking sites of interest on an optic while it is installed on the beamline via in situ inspection and after it has been removed for maintenance. In this way, the condition of each optic is monitored throughout the optic's lifetime. This overview paper will summarize key algorithms and technical developments for custom image analysis and processing and highlight recent improvements. (Associated papers will include more details on these issues.) We will also discuss the use of OI Analysis for daily operation of the NIF laser and its extension to inspection of NIF targets.

  18. Micro photometer's automation for quantitative spectrograph analysis

    International Nuclear Information System (INIS)

    Gutierrez E, C.Y.A.

    1996-01-01

    A microphotometer is used to increase the sharpness of dark spectral lines. By analyzing these lines, the content of a sample and its concentration can be determined; this analysis is known as Quantitative Spectrographic Analysis. Quantitative Spectrographic Analysis is carried out in three steps, as follows. 1. Emulsion calibration. This consists of gauging a photographic emulsion to determine the intensity variations in terms of the incident radiation. For the emulsion calibration procedure, a least-squares fit is applied to the data obtained in order to produce a graph. It is then possible to determine the density of a dark spectral line against the incident light intensity shown by the microphotometer. 2. Working curves. The values of known concentration of an element are plotted against incident light intensity. Since the sample contains several elements, it is necessary to find a working curve for each of them. 3. Analytical results. The calibration curve and working curves are compared and the concentration of the studied element is determined. Automatic data acquisition, calculation and production of results are done by means of a computer (PC) and a computer program. The signal conditioning circuits deliver TTL (Transistor-Transistor Logic) levels to make communication between the microphotometer and the computer possible. Data calculation is done using a computer program

  19. Automated reasoning applications to design validation and sneak function analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Argonne National Laboratory (ANL) is actively involved in the LMFBR Man-Machine Integration (MMI) Safety Program. The objective of this program is to enhance the operational safety and reliability of fast-breeder reactors by optimum integration of men and machines through the application of human factors principles and control engineering to the design, operation, and the control environment. ANL is developing methods to apply automated reasoning and computerization in the validation and sneak function analysis process. This project provides the element definitions and relations necessary for an automated reasoner (AR) to reason about design validation and sneak function analysis. This project also provides a demonstration of this AR application on an Experimental Breeder Reactor-II (EBR-II) system, the Argonne Cooling System

  20. Automated analysis of damages for radiation in plastics surfaces

    International Nuclear Information System (INIS)

    Andrade, C.; Camacho M, E.; Tavera, L.; Balcazar, M.

    1990-02-01

    Analysis of damage done by radiation in a polymer characterized by the optical properties of polished surfaces, uniformity and chemical resistance like those of acrylic; resistant up to a temperature of 150 degrees centigrade, and weighing approximately half as much as glass. An objective of this work is the development of a method that analyzes, in automated form, the surface damage induced by radiation in plastic materials by means of an image analyser. (Author)

  1. Experience based ageing analysis of NPP protection automation in Finland

    International Nuclear Information System (INIS)

    Simola, K.

    2000-01-01

    This paper describes three successive studies on the ageing of protection automation in nuclear power plants. These studies were aimed at developing a methodology for an experience-based ageing analysis and applying it to identify the most critical components from the ageing and safety points of view. The analyses also resulted in suggestions for improving data collection systems for the purpose of further ageing analyses. (author)

  2. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to take videos that are distributed through mobile phones and of determining the original version of a mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of their sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals of mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating the use of mobile phone videos as legal evidence through differences in the delay times of their sound input signals. © 2017 American Academy of Forensic Sciences.

  3. Automated Analysis of Security in Networking Systems

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2004-01-01

    It has for a long time been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis … attacks, and attacks launched by insiders. Finally, the perspectives for the application of the analysis techniques are discussed, thereby coming a small step closer to providing developers with easy-to-use tools for validating the security of networking applications.

  4. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  5. Multimodal Semantic Analysis and Annotation for Basketball Video

    Directory of Open Access Journals (Sweden)

    Liu Song

    2006-01-01

    Full Text Available This paper presents a new multiple-modality method for extracting semantic information from basketball video. The visual, motion, and audio information are extracted from the video to first generate low-level video segmentation and classification. Domain knowledge is then exploited for detecting interesting events in the basketball video. For video, both visual and motion prediction information are utilized in the shot and scene boundary detection algorithm, followed by scene classification. For audio, keysounds are sets of specific audio sounds related to semantic events, and a classification method based on a hidden Markov model (HMM) is used for audio keysound identification. Subsequently, by analyzing the multimodal information, the positions of potential semantic events, such as "foul" and "shot at the basket," are located with additional domain knowledge. Finally, a video annotation is generated according to MPEG-7 multimedia description schemes (MDSs). Experimental results demonstrate the effectiveness of the proposed method.
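    To make the HMM-based keysound identification step concrete, here is a hedged sketch using the hmmlearn package: one Gaussian HMM per keysound class, with classification by maximum log-likelihood. The feature representation (e.g. MFCC frames) and class labels are assumptions for illustration, not the paper's exact configuration.

        # Train one Gaussian HMM per audio keysound class and classify a clip by
        # the model that assigns it the highest log-likelihood.
        import numpy as np
        from hmmlearn import hmm

        def train_keysound_models(training_clips, n_states=4):
            """training_clips: dict mapping class name -> list of (n_frames, n_features) arrays."""
            models = {}
            for label, clips in training_clips.items():
                X = np.vstack(clips)                       # concatenate all clips of this class
                lengths = [len(c) for c in clips]          # remember clip boundaries
                m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
                m.fit(X, lengths)
                models[label] = m
            return models

        def classify_clip(models, clip):
            """Return the keysound label whose HMM gives the highest log-likelihood for one clip."""
            return max(models, key=lambda label: models[label].score(clip))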

  6. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Full Text Available Introduction: The most common causes of diagnostic error are related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories now have modern equipment that provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides that are concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 blood slides with hematological parameters were analyzed. Automated analysis was performed on last-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. Microscopy was performed simultaneously by two expert microscopists. Results: The data showed that only 42.70% of the slides were concordant, compared with 57.30% discordant. The main findings among the discordant slides were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of an individual and cannot be explained because they have not been investigated, which may compromise the final diagnosis. Conclusion: Qualitative microscopic analysis must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  7. Transana Video Analysis Software as a Tool for Consultation: Applications to Improving PTA Meeting Leadership

    Science.gov (United States)

    Rush, Craig

    2012-01-01

    The chief aim of this article is to illustrate the potential of using Transana, a qualitative video analysis tool, for effective and efficient school-based consultation. In this illustrative study, the Transana program facilitated analysis of excerpts of video from a representative sample of Parent Teacher Association (PTA) meetings over the…

  8. Semi-automated retinal vessel analysis in nonmydriatic fundus photography.

    Science.gov (United States)

    Schuster, Alexander Karl-Georg; Fischer, Joachim Ernst; Vossmerbaeumer, Urs

    2014-02-01

    Funduscopic assessment of the retinal vessels may be used to assess the health status of the microcirculation and as a component in the evaluation of cardiovascular risk factors. Typically, the evaluation is restricted to morphological appreciation without strict quantification. Our purpose was to develop and validate a software tool for semi-automated quantitative analysis of the retinal vasculature in nonmydriatic fundus photography. MATLAB software was used to develop a semi-automated image recognition and analysis tool for the determination of the arterial-venous (A/V) ratio in the central vessel equivalent on 45° digital fundus photographs. Validity and reproducibility of the results were ascertained using nonmydriatic photographs of 50 eyes from 25 subjects recorded with a 3D OCT device (Topcon Corp.). Two hundred and thirty-three eyes of 121 healthy subjects were evaluated to define normative values. A software tool was developed using image thresholds for vessel recognition and vessel width calculation in a semi-automated three-step procedure: vessel recognition on the photograph and artery/vein designation, width measurement, and calculation of central retinal vessel equivalents. The mean vessel recognition rate was 78%, the vessel class designation rate 75%, and reproducibility between 0.78 and 0.91. The mean A/V ratio was 0.84. Application to a healthy norm cohort showed high congruence with previously published manual methods. Processing time per image was one minute. Quantitative geometric assessment of the retinal vasculature may be performed in a semi-automated manner using dedicated software tools. Yielding reproducible numerical data within a short time, this may add value to mere morphological estimates in the clinical evaluation of fundus photographs. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  9. Toward an Analysis of Video Games for Mathematics Education

    Science.gov (United States)

    Offenholley, Kathleen

    2011-01-01

    Video games have tremendous potential in mathematics education, yet there is a push to simply add mathematics to a video game without regard to whether the game structure suits the mathematics, and without regard to the level of mathematical thought being learned in the game. Are students practicing facts, or are they problem-solving? This paper…

  10. Overhead spine arch analysis of dairy cows from three-dimensional video

    Science.gov (United States)

    Abdul Jabbar, K.; Hansen, M. F.; Smith, M. L.; Smith, L. N.

    2017-02-01

    We present a spine arch analysis method for dairy cows using overhead 3D video data. The method is aimed at early-stage lameness detection, which is important in order to allow early treatment and thus reduce animal suffering and minimize the high forecasted financial losses caused by lameness. Our physical data collection setup is non-intrusive, covert and designed to allow full automation; therefore, it could be implemented on a large scale or on a daily basis with high accuracy. We track the animal's spine using the shape index and curvedness measure from the 3D surface as she walks freely under the 3D camera. Our spinal analysis focuses on the thoracic vertebrae region, where we found most of the arching caused by lameness. A cubic polynomial is fitted to analyze the arch and estimate locomotion soundness. We have found more accurate results by eliminating the effect of regular neck/head movements from the arch. Using a 22-cow data set, we achieve an early-stage lameness detection accuracy of 95.4%.
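    A minimal sketch of the cubic-polynomial arch analysis described above, assuming the thoracic spine line has already been extracted from a 3D frame as (position, height) samples; the synthetic profile and the "arch score" definition are illustrative, not the authors' exact measure.

        # Fit a cubic polynomial to the spine height profile and report the peak
        # deviation from a straight back as a simple arch score.
        import numpy as np

        def arch_score(x, z):
            """x: positions along the spine (head to tail), z: surface height at each position."""
            coeffs = np.polyfit(x, z, deg=3)          # cubic fit to the spine profile
            fitted = np.polyval(coeffs, x)
            baseline = np.interp(x, [x[0], x[-1]], [fitted[0], fitted[-1]])
            return float(np.max(fitted - baseline))   # peak arch height above a flat back

        x = np.linspace(0.0, 1.0, 50)                               # normalised position along the back
        z = 0.02 * np.sin(np.pi * x) + 0.001 * np.random.randn(50)  # synthetic arched profile
        print(f"arch score: {arch_score(x, z):.4f} (arbitrary units)")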

  11. Automated Frequency Domain Decomposition for Operational Modal Analysis

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, Palle; Jacobsen, Niels-Jørgen

    2007-01-01

    The Frequency Domain Decomposition (FDD) technique is known as one of the most user friendly and powerful techniques for operational modal analysis of structures. However, the classical implementation of the technique requires some user interaction. The present paper describes an algorithm for automated FDD, thus a version of FDD where no user interaction is required. Such an algorithm can be used for obtaining a default estimate of modal parameters in commercial software for operational modal analysis - or even more important - it can be used as the modal information engine in a system …
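    The core of the FDD computation, plus a very simplified stand-in for the automated peak picking, can be sketched in Python as follows; the function names, the prominence threshold, and the use of SciPy are assumptions made for illustration rather than the algorithm of the paper.

        # Build the cross-spectral density matrix of the measured channels at every
        # frequency, take its SVD, and pick peaks in the first singular value as
        # candidate modes.
        import numpy as np
        from scipy.signal import csd, find_peaks

        def fdd_first_singular_values(data, fs, nperseg=1024):
            """data: (n_channels, n_samples) response measurements sampled at fs [Hz]."""
            n_ch = data.shape[0]
            freqs, G = None, None
            for i in range(n_ch):
                for j in range(n_ch):
                    f, Pij = csd(data[i], data[j], fs=fs, nperseg=nperseg)
                    if G is None:
                        freqs = f
                        G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
                    G[:, i, j] = Pij
            # First singular value of the spectral matrix at each frequency line
            s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(freqs))])
            return freqs, s1

        def pick_modes(freqs, s1, prominence=5.0):
            """Very simplified automated peak picking on the first singular value (in dB)."""
            peaks, _ = find_peaks(10 * np.log10(s1), prominence=prominence)
            return freqs[peaks]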

  12. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show that the quality scores computed by the proposed method are highly correlated with the subjective assessment.

  13. Improvement of Binary Analysis Components in Automated Malware Analysis Framework

    Science.gov (United States)

    2017-02-21

    This research was conducted to develop components for an automated malware analysis framework. The components analyze binary programs by monitoring their behavior and then generate data for malware detection signatures and for developing countermeasures. (Keiji Takeda, Keio University; award no. FA2386-15-1-4068)

  14. Introducing Player-Driven Video Analysis to Enhance Reflective Soccer Practice

    DEFF Research Database (Denmark)

    Hjort, Anders; Elbæk, Lars; Henriksen, Kristoffer

    2017-01-01

    In the present study, we investigated the introduction of a cloud-based video analysis platform called Player Universe (PU) in a Danish football club. Video analysis is not a new performance-enhancing element in sport, but PU is innovative in the way players and coaches produce footage and how it facilitates reflective learning. Video analysis is executed in the PU platform by involving the players in the analysis process, in the sense that they are encouraged to tag game actions in video-documented football matches. Following this, players can get virtual feedback from their coach. The implementation and evaluation of PU took place in the FC Copenhagen (FCK) School of Excellence. Findings show that PU can improve youth football players' reflection skills through consistent video analyses and tagging, that coaches are important as role models and providers of feedback, and that the use …

  15. Real time video analysis to monitor neonatal medical condition

    Science.gov (United States)

    Shirvaikar, Mukul; Paydarfar, David; Indic, Premananda

    2017-05-01

    One in eight live births in the United States is premature and these infants have complications leading to life threatening events such as apnea (pauses in breathing), bradycardia (slowness of heart) and hypoxia (oxygen desaturation). Infant movement pattern has been hypothesized as an important predictive marker for these life threatening events. Thus estimation of movement along with behavioral states, as a precursor of life threatening events, can be useful for risk stratification of infants as well as for effective management of disease state. However, more important and challenging is the determination of the behavioral state of the infant. This information includes important cues such as sleep position and the status of the eyes, which are important markers for neonatal neurodevelopment state. This paper explores the feasibility of using real time video analysis to monitor the condition of premature infants. The image of the infant can be segmented into regions to localize and focus on specific areas of interest. Analysis of the segmented regions can be performed to identify different parts of the body including the face, arms, legs and torso. This is necessary due to real-time processing speed considerations. Such a monitoring system would be of great benefit as an aide to medical staff in neonatal hospital settings requiring constant surveillance. Any such system would have to satisfy extremely stringent reliability and accuracy requirements, before it can be deployed in a hospital care unit, due to obvious reasons. The effect of lighting conditions and interference will have to be mitigated to achieve such performance.

  16. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form; 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards; 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results; and 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10, Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; 2. ASTM F2815, Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; 3. ASTM E2807, Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  17. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. The Discrete Wavelet Transform (DWT) is applied on each video frame, decomposing it into a number of sub-bands. Maximum-entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility: the computed average PSNR exceeds 45 dB. Finally, the scheme is applied to two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment for both regular videos and medical videos.
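    The QIM step at the heart of the scheme can be illustrated with a short, hedged sketch: a selected coefficient is mapped onto one of two interleaved quantization lattices according to the watermark bit, and detection picks the nearer lattice. The DWT/PCA stages are omitted and the step size delta is an arbitrary example value.

        # Minimal Quantization Index Modulation embed/extract for one coefficient.
        import numpy as np

        def qim_embed(coeff, bit, delta=8.0):
            """Quantize a coefficient onto the lattice selected by the watermark bit."""
            offset = 0.0 if bit == 0 else delta / 2.0
            return delta * np.round((coeff - offset) / delta) + offset

        def qim_extract(coeff, delta=8.0):
            """Recover the bit by finding the nearer of the two lattices."""
            d0 = abs(coeff - qim_embed(coeff, 0, delta))
            d1 = abs(coeff - qim_embed(coeff, 1, delta))
            return 0 if d0 <= d1 else 1

        c = 123.7
        for b in (0, 1):
            marked = qim_embed(c, b)
            print(f"bit={b}: {c} -> {marked}, extracted={qim_extract(marked)}")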

  18. Multi-modal analysis for person type classification in news video

    Science.gov (United States)

    Yang, Jun; Hauptmann, Alexander G.

    2005-01-01

    Classifying the identities of people appearing in broadcast news video into anchor, reporter, or news subject is an important topic in high-level video analysis. Given the visual resemblance of different types of people, this work explores multi-modal features derived from a variety of evidence, such as the speech identity, transcript clues, temporal video structure, and named entities, and uses a statistical learning approach to combine all the features for person type classification. Experiments conducted on ABC World News Tonight video have demonstrated the effectiveness of the approach, and the contributions of different categories of features have been compared.

  19. Video game demand in Japan : a household data analysis

    OpenAIRE

    Harada, Nobuyuki

    2004-01-01

    There are many empirical studies of supply-side data for the video games industry. This paper, on the contrary, highlights the household side, estimating demand equations for video games. Using the "total households" data of the Family Income and Expenditure Survey, which includes one-person households and households engaged in agriculture, forestry and fishery, estimation results show that a household's income factor has a positive effect on its share of expenditure on video games. It is also ve...

  20. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
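    The idea behind the first algorithm (camera motion from the motion vector field) can be sketched, under assumptions, as a least-squares fit of a global affine motion model; extraction of the macroblock vectors from the MPEG-2 bitstream is assumed to have been done by other means, and the synthetic pan field below is only for demonstration.

        # Fit a global affine camera-motion model to a macroblock motion vector field.
        import numpy as np

        def fit_global_motion(positions, vectors):
            """positions: (N, 2) macroblock centres; vectors: (N, 2) motion vectors.
            Returns (A, t) with v ~ A @ p + t (2x2 affine part, 2-vector translation)."""
            N = positions.shape[0]
            X = np.hstack([positions, np.ones((N, 1))])            # design matrix [x, y, 1]
            params, *_ = np.linalg.lstsq(X, vectors, rcond=None)   # shape (3, 2)
            A, t = params[:2].T, params[2]
            return A, t

        # Synthetic pure-pan field: every block moves by (3, -1) pixels
        pos = np.array([[x, y] for x in range(0, 320, 16) for y in range(0, 240, 16)], float)
        vec = np.tile([3.0, -1.0], (len(pos), 1))
        A, t = fit_global_motion(pos, vec)
        print("affine part:\n", np.round(A, 3), "\ntranslation (pan):", np.round(t, 3))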

  1. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  2. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    Science.gov (United States)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in the offshore industry. Even during single surveys by ROVs or towed systems, large numbers of images are compiled. While these underwater techniques are now well engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives, more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane, characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats, an automated image analysis technique was developed which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for the computation of geochemical budgets and the identification of habitats at the seafloor.
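    A generic watershed-based segmentation pipeline in the spirit of the technique described above can be sketched with scikit-image; the thresholding rule, marker spacing, and coverage measure are illustrative assumptions, not the authors' implementation.

        # Separate bright mat-like patches from the darker seafloor by thresholding,
        # split touching patches with a distance-transform watershed, and report
        # areal coverage of one mosaic tile.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def mat_coverage(gray):
            """gray: 2D float image of a video mosaic tile. Returns (coverage %, label image)."""
            mask = gray > threshold_otsu(gray)                      # bright mats vs. sediment
            distance = ndi.distance_transform_edt(mask)
            peaks = peak_local_max(distance, min_distance=5, labels=mask)
            markers = np.zeros_like(gray, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            labels = watershed(-distance, markers, mask=mask)       # split touching patches
            coverage = 100.0 * mask.sum() / mask.size
            return coverage, labels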

  3. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Full Text Available Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of their functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of huge amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are summarized automatically in a PowerPoint presentation. The calculated features are filled into a standardized Excel sheet, ready for statistical analysis.
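    As an illustration of the quantitative evaluation step, the following hedged sketch computes a few time-domain and frequency-domain features from one signal segment with NumPy; the specific features and the synthetic test signal are assumptions, not the exact feature set of the paper.

        # Example time- and frequency-domain features for one recorded signal segment.
        import numpy as np

        def signal_features(x, fs):
            """x: 1D signal segment, fs: sampling rate in Hz. Returns a feature dict."""
            spectrum = np.abs(np.fft.rfft(x - x.mean()))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return {
                "rms": float(np.sqrt(np.mean(x ** 2))),
                "peak_to_peak": float(x.max() - x.min()),
                "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
                "spectral_centroid_hz": float((freqs * spectrum).sum() / spectrum.sum()),
            }

        fs = 1000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * 12 * t) + 0.1 * np.random.randn(len(t))   # synthetic EMG-like trace
        print(signal_features(x, fs))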

  4. Extended automated separation techniques in destructive neutron activation analysis

    International Nuclear Information System (INIS)

    Tjioe, P.S.; Goeij, J.J.M. de; Houtman, J.P.W.

    1977-01-01

    An automated post-irradiation chemical separation scheme for the analysis of 14 trace elements in biological materials is described. The procedure consists of a destruction with sulfuric acid and hydrogen peroxide, a distillation of the volatile elements with hydrobromic acid and chromatography of both distillate and residue over Dowex 2x8 anion exchanger columns. Accuracy, precision and sensitivity are tested with reference materials (BOWEN's kale, NBS bovine liver, IAEA materials, dried animal whole blood, wheat flour, dried potatoes, powdered milk, oyster homogenate) and on a sample of pooled human blood. Blank values due to trace elements in the quartz irradiation vials are also discussed. (T.G.)

  5. Video games and youth violence: a prospective analysis in adolescents.

    Science.gov (United States)

    Ferguson, Christopher J

    2011-04-01

    The potential influence of violent video games on youth violence remains an issue of concern for psychologists, policymakers and the general public. Although several prospective studies of video game violence effects have been conducted, none have employed well validated measures of youth violence, nor considered video game violence effects in context with other influences on youth violence such as family environment, peer delinquency, and depressive symptoms. The current study builds upon previous research in a sample of 302 (52.3% female) mostly Hispanic youth. Results indicated that current levels of depressive symptoms were a strong predictor of serious aggression and violence across most outcome measures. Depressive symptoms also interacted with antisocial traits so that antisocial individuals with depressive symptoms were most inclined toward youth violence. Neither video game violence exposure, nor television violence exposure, were prospective predictors of serious acts of youth aggression or violence. These results are put into the context of criminological data on serious acts of violence among youth.

  6. Analysis of M-JPEG Video Over an ATM Network

    National Research Council Canada - National Science Library

    Kinney, Albert

    2001-01-01

    ... in the development of future naval information systems. This thesis analyzes the impact of compression, delay variance, and channel noise on perceived networked video quality using commercially available off-the-shelf equipment and software...

  7. Ball lightning observation: an objective video-camera analysis report

    OpenAIRE

    Sello, Stefano; Viviani, Paolo; Paganini, Enrico

    2011-01-01

    In this paper we describe a video-camera recording of a (probable) ball lightning event and both the related image and signal analyses for its photometric and dynamical characterization. The results strongly support the BL nature of the recorded luminous ball object and allow the researchers to have an objective and unique video document of a possible BL event for further analyses. Some general evaluations of the obtained results considering the proposed ball lightning models conclude the paper.

  8. Transana Qualitative Video and Audio Analysis Software as a Tool for Teaching Intellectual Assessment Skills to Graduate Psychology Students

    Science.gov (United States)

    Rush, S. Craig

    2014-01-01

    This article draws on the author's experience using qualitative video and audio analysis, most notably through use of the Transana qualitative video and audio analysis software program, as an alternative method for teaching IQ administration skills to students in a graduate psychology program. Qualitative video and audio analysis may be useful for…

  9. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three-dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into the capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that the interaction modality affects users' decisions in object selection in terms of the chosen location in 3D, while user attitudes do not have a significant impact. Furthermore, the ray-casting-based interaction modality using a Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  10. Video laryngoscopy for tracheal intubation: an evidence-based analysis.

    Science.gov (United States)

    2004-01-01

    The objective of this health technology policy assessment was to determine the effectiveness and cost-effectiveness of video-assisted laryngoscopy for tracheal intubation. Video-assisted, rigid laryngoscopes have recently been introduced that allow for the illumination of the airway and the accurate placement of the endotracheal tube. Two such devices are available in Canada: the Bullard® Laryngoscope, which relies on fibre optics for illumination, and the GlideScope®, which uses a video camera and a light source to illuminate the airway. Both are connected to an external monitor so health professionals other than the operator can visualize the insertion of the tube. These devices therefore may be very useful as teaching aids for tracheal intubation. The objective of this review was to examine the effectiveness of the video-assisted rigid laryngoscopes most commonly used in Canada for tracheal intubation. According to the Medical Advisory Secretariat standard search strategy, a literature search for current health technology assessments and peer-reviewed literature was conducted in Medline (full citations, in-process and non-indexed citations) and Embase for citations from January 1994 to January 2004. Key words used in the search were as follows: video-assisted; video; emergency; airway management; tracheal intubation and laryngoscopy. Two video-assisted systems are available for use in Canada. The Bullard® video laryngoscope has a large body of literature associated with it and has been used for the last 10 years, although most of the studies are small and not well conducted. The literature on the GlideScope® is limited. In general, these devices provide better views of the airway but are much more expensive than conventional direct laryngoscopes. As with most medical procedures, video-assisted laryngoscopy requires training and skill maintenance for successful use. There seems to be a discrepancy between the seeming advantages of these devices in the

  11. Analysis of automated highway system risks and uncertainties. Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  12. AUTOMATED DATA ANALYSIS FOR CONSECUTIVE IMAGES FROM DROPLET COMBUSTION EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Christopher Lee Dembia

    2012-09-01

    Full Text Available A simple automated image analysis algorithm has been developed that processes consecutive images from high speed, high resolution digital images of burning fuel droplets. The droplets burn under conditions that promote spherical symmetry. The algorithm performs the tasks of edge detection of the droplet’s boundary using a grayscale intensity threshold, and shape fitting either a circle or ellipse to the droplet’s boundary. The results are compared to manual measurements of droplet diameters done with commercial software. Results show that it is possible to automate data analysis for consecutive droplet burning images even in the presence of a significant amount of noise from soot formation. An adaptive grayscale intensity threshold provides the ability to extract droplet diameters for the wide range of noise encountered. In instances where soot blocks portions of the droplet, the algorithm manages to provide accurate measurements if a circle fit is used instead of an ellipse fit, as an ellipse can be too accommodating to the disturbance.
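
    For readers who want a concrete starting point, the sketch below shows the two core steps named in the abstract - edge detection via a grayscale intensity threshold and fitting a circle to the droplet boundary - using OpenCV in Python. The file name and the fixed threshold of 80 are placeholders; the published algorithm uses an adaptive threshold and is not reproduced here.

      import cv2

      # Load one frame as grayscale (placeholder file name)
      frame = cv2.imread("droplet_frame_0001.png", cv2.IMREAD_GRAYSCALE)

      # Edge detection via a fixed grayscale intensity threshold; the droplet is
      # assumed darker than the backlit background (illustrative assumption)
      _, mask = cv2.threshold(frame, 80, 255, cv2.THRESH_BINARY_INV)

      # Keep the largest connected contour and fit a circle to it
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      boundary = max(contours, key=cv2.contourArea)
      (cx, cy), radius = cv2.minEnclosingCircle(boundary)
      print(f"droplet diameter: {2.0 * radius:.1f} px at ({cx:.1f}, {cy:.1f})")

    An ellipse could be fitted instead with cv2.fitEllipse(boundary), but, as the abstract notes, a circle fit tends to be more robust when soot obscures part of the boundary.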

  13. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  14. StrAuto: automation and parallelization of STRUCTURE analysis.

    Science.gov (United States)

    Chhatre, Vikram E; Emerson, Kevin J

    2017-03-24

    Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational load of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of the optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto, to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available for download from http://strauto.popgen.org .
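
    StrAuto automates the Evanno ΔK step of this pipeline; for readers unfamiliar with the statistic, the minimal NumPy sketch below computes ΔK = mean(|L''(K)|)/sd(L(K)) from hypothetical per-K means and standard deviations of ln P(X|K) over replicate STRUCTURE runs. It illustrates the formula only and is not StrAuto's own code.

      import numpy as np

      def evanno_delta_k(mean_lnP, sd_lnP):
          """Evanno et al. Delta K from per-K mean and SD of ln P(X|K).
          Index 0 corresponds to K = 1; Delta K is defined for interior K only."""
          m = np.asarray(mean_lnP, dtype=float)
          s = np.asarray(sd_lnP, dtype=float)
          second_diff = np.abs(m[2:] - 2.0 * m[1:-1] + m[:-2])   # |L''(K)|
          return second_diff / s[1:-1]

      # toy numbers for K = 1..5, for illustration only
      print(evanno_delta_k([-5200, -4300, -4100, -4050, -4040],
                           [12.0, 15.0, 18.0, 22.0, 30.0]))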

  15. Using Online Interactive Physics-based Video Analysis Exercises to Enhance Learning

    Directory of Open Access Journals (Sweden)

    Priscilla W. Laws

    2017-04-01

    Full Text Available As part of our new digital video age, physics students throughout the world can use smart phones, video cameras, computers and tablets to produce and analyze videos of physical phenomena using analysis software such as Logger Pro, Tracker or Coach. For several years, LivePhoto Physics Group members have created short videos of physical phenomena. They have also developed curricular materials that enable students to make predictions and use video analysis software to verify them. In this paper a new LivePhoto Physics project that involves the creation and testing of a series of Interactive Video Vignettes (IVVs) will be described. IVVs are short web-based assignments that take less than ten minutes to complete. Each vignette is designed to present a video of a phenomenon, ask for a student’s prediction about it, and then conduct on-line video observations or analyses that allow the user to compare findings with his or her initial prediction. The Vignettes are designed for web delivery as ungraded exercises to supplement textbook reading, or to serve as pre-lecture or pre-laboratory activities that span a number of topics normally introduced in introductory physics courses. A sample Vignette on the topic of Newton’s Third Law will be described, and the outcomes of preliminary research on the impact of Vignettes on student motivation, learning and attitudes will be summarized.

  16. Automated reticle inspection data analysis for wafer fabs

    Science.gov (United States)

    Summers, Derek; Chen, Gong; Reese, Bryan; Hutchinson, Trent; Liesching, Marcus; Ying, Hai; Dover, Russell

    2009-04-01

    To minimize potential wafer yield loss due to mask defects, most wafer fabs implement some form of reticle inspection system to monitor photomask quality in high-volume wafer manufacturing environments. Traditionally, experienced operators review reticle defects found by an inspection tool and then manually classify each defect as 'pass, warn, or fail' based on its size and location. However, in the event reticle defects are suspected of causing repeating wafer defects on a completed wafer, potential defects on all associated reticles must be manually searched on a layer-by-layer basis in an effort to identify the reticle responsible for the wafer yield loss. This 'problem reticle' search process is a very tedious and time-consuming task and may cause extended manufacturing line-down situations. Oftentimes, process engineers and other team members need to manually investigate several reticle inspection reports to determine if yield loss can be tied to a specific layer. Because of the very nature of this detailed work, calculation errors may occur, resulting in an incorrect root cause analysis effort. These delays waste valuable resources that could be spent working on other more productive activities. This paper examines an automated software solution for converting KLA-Tencor reticle inspection defect maps into a format compatible with KLA-Tencor's Klarity Defect(R) data analysis database. The objective is to use the graphical charting capabilities of Klarity Defect to reveal a clearer understanding of defect trends for individual reticle layers or entire mask sets. Automated analysis features include reticle defect count trend analysis and potentially stacking reticle defect maps for signature analysis against wafer inspection defect data. Other possible benefits include optimizing reticle inspection sample plans in an effort to support "lean manufacturing" initiatives for wafer fabs.

  17. Subjective Analysis and Objective Characterization of Adaptive Bitrate Videos

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Tavakoli, Samira; Brunnström, Kjell

    2016-01-01

    The HTTP Adaptive Streaming (HAS) technology allows video service providers to improve network utilization and thereby increase the end-users’ Quality of Experience (QoE). This has made HAS a widely used approach for audiovisual delivery. There are several previous studies aiming to identify… the factors influencing subjective QoE of adaptation events. However, adapting the video quality typically lasts on a time scale much longer than what current standardized subjective testing methods are designed for, thus making a full matrix design of the experiment on an event level hard to achieve…. In this study, we investigated the overall subjective QoE of 6-minute-long video sequences containing different sequential adaptation events. This was compared to a data set from our previous work performed to evaluate the individual adaptation events. We could then derive a relationship between the overall…

  18. Automated analysis of prerecorded evoked electromyographic activity from rat muscle.

    Science.gov (United States)

    Basarab-Horwath, I; Dewhurst, D G; Dixon, R; Meehan, A S; Odusanya, S

    1989-03-01

    An automated microprocessor-based data acquisition and analysis system has been developed specifically to quantify electromyographic (EMG) activity induced by the convulsant agent catechol in the anaesthetized rat. The stimulus and EMG response are recorded on magnetic tape. On playback, the stimulus triggers a digital oscilloscope and, via interface circuitry, a BBC B microcomputer. The myoelectric activity is digitized by the oscilloscope before being transferred under computer control via an RS232 link to the microcomputer. This system overcomes the problems of dealing with signals of variable latency and allows quantification of latency, amplitude, area and frequency of occurrence of specific components within the signal. The captured data can be used to generate either single or superimposed high-resolution graphic reproductions of the original waveforms. Although this system has been designed for a specific application, it could easily be modified to allow analysis of any complex waveform.
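
    The original system ran on a BBC B microcomputer; as a rough modern analogue of the quantification step, the sketch below measures latency, peak amplitude and rectified area of one stimulus-aligned sweep. The 3-standard-deviation onset criterion and the 5 ms baseline window are assumptions for illustration, not the published settings.

      import numpy as np

      def quantify_sweep(sweep, fs, baseline_ms=5.0):
          """Latency (ms), peak amplitude and rectified area of one stimulus-aligned
          EMG sweep sampled at fs Hz; sweep[0] coincides with the stimulus."""
          t = np.arange(sweep.size) / fs * 1000.0            # time axis in ms
          baseline = sweep[t < baseline_ms]
          thresh = baseline.mean() + 3.0 * baseline.std()    # assumed onset criterion
          above = np.flatnonzero(np.abs(sweep) > thresh)
          latency = t[above[0]] if above.size else np.nan    # first supra-threshold sample
          peak = np.abs(sweep).max()
          area = np.trapz(np.abs(sweep), t)                  # rectified area
          return latency, peak, area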

  19. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, M.; Rosenvinge, F. S.; Spillum, E.

    2015-01-01

    Background: Antibiotics of the beta-lactam group are able to alter the shape of the bacterial cell wall, e.g. filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth… displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify length of bacteria and bacterial filamentation. A total of 12 beta-lactam antibiotics or beta-lactam-beta-lactamase inhibitor combinations were analyzed… in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results: Three E. coli strains…

  20. Automated rice leaf disease detection using color image analysis

    Science.gov (United States)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
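
    A highly simplified sketch of the two stages described above - histogram intersection against a healthy reference image, then clustering of candidate lesion pixels - might look like the following. The file names, bin counts, rarity threshold and cluster count are all assumptions, and the paper's threshold-based K-means variant is not reproduced.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      test = cv2.imread("leaf_test.png")          # placeholder file names
      healthy = cv2.imread("leaf_healthy.png")

      # 3-D colour histograms (8 bins per channel) and their intersection score
      ranges = [0, 256, 0, 256, 0, 256]
      hist_t = cv2.calcHist([test], [0, 1, 2], None, [8, 8, 8], ranges)
      hist_h = cv2.calcHist([healthy], [0, 1, 2], None, [8, 8, 8], ranges)
      print("intersection:", cv2.compareHist(hist_t, hist_h, cv2.HISTCMP_INTERSECT))

      # Simplified outlier step: pixels whose colour bin is (almost) absent from the
      # healthy reference are kept as candidate lesion pixels
      bins = (test // 32).reshape(-1, 3)
      rarity = hist_h[bins[:, 0], bins[:, 1], bins[:, 2]]
      outlier_px = test.reshape(-1, 3)[rarity < 1.0].astype(float)

      # Group related outlier colours into clusters for further disease analysis
      labels = KMeans(n_clusters=3, n_init=10).fit_predict(outlier_px)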

  1. Automated sensitivity analysis: New tools for modeling complex dynamic systems

    International Nuclear Information System (INIS)

    Pin, F.G.

    1987-01-01

    Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed

  2. Automated High-Dimensional Flow Cytometric Data Analysis

    Science.gov (United States)

    Pyne, Saumyadipta; Hu, Xinli; Wang, Kui; Rossin, Elizabeth; Lin, Tsung-I.; Maier, Lisa; Baecher-Allan, Clare; McLachlan, Geoffrey; Tamayo, Pablo; Hafler, David; de Jager, Philip; Mesirov, Jill

    Flow cytometry is widely used for single cell interrogation of surface and intracellular protein expression by measuring fluorescence intensity of fluorophore-conjugated reagents. We focus on the recently developed procedure of Pyne et al. (2009, Proceedings of the National Academy of Sciences USA 106, 8519-8524) for automated high-dimensional flow cytometric analysis called FLAME (FLow analysis with Automated Multivariate Estimation). It introduced novel finite mixture models of heavy-tailed and asymmetric distributions to identify and model cell populations in a flow cytometric sample. This approach robustly addresses the complexities of flow data without the need for transformation or projection to lower dimensions. It also addresses the critical task of matching cell populations across samples that enables downstream analysis. It thus facilitates application of flow cytometry to new biological and clinical problems. To facilitate pipelining with standard bioinformatic applications such as high-dimensional visualization, subject classification or outcome prediction, FLAME has been incorporated into the GenePattern package of the Broad Institute. Thereby, analysis of flow data can be approached similarly to other genomic platforms. We also consider some new work that proposes a rigorous and robust solution to the registration problem by a multi-level approach that allows us to model and register cell populations simultaneously across a cohort of high-dimensional flow samples. This new approach is called JCM (Joint Clustering and Matching). It enables direct and rigorous comparisons across different time points or phenotypes in a complex biological study as well as for classification of new patient samples in a more clinical setting.
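
    FLAME itself fits mixtures of skew-t distributions; as a rough illustration of the general mixture-model gating idea (not FLAME), the sketch below fits a finite Gaussian mixture to a hypothetical events-by-markers matrix and assigns each cell to a population.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Hypothetical (n_cells x n_markers) fluorescence intensity matrix
      rng = np.random.default_rng(0)
      events = np.vstack([rng.normal(2.0, 0.3, (500, 4)),
                          rng.normal(5.0, 0.5, (300, 4))])

      gmm = GaussianMixture(n_components=2, covariance_type="full",
                            random_state=0).fit(events)
      populations = gmm.predict(events)
      print("population sizes:", np.bincount(populations))

    The Gaussian mixture is a simplification: the skew-t components used by FLAME tolerate the asymmetry and heavy tails of real flow data without prior transformation.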

  3. Widely applicable MATLAB routines for automated analysis of saccadic reaction times.

    Science.gov (United States)

    Leppänen, Jukka M; Forssman, Linda; Kaatiala, Jussi; Yrttiaho, Santeri; Wass, Sam

    2015-06-01

    Saccadic reaction time (SRT) is a widely used dependent variable in eye-tracking studies of human cognition and its disorders. SRTs are also frequently measured in studies with special populations, such as infants and young children, who are limited in their ability to follow verbal instructions and remain in a stable position over time. In this article, we describe a library of MATLAB routines (Mathworks, Natick, MA) that are designed to (1) enable completely automated implementation of SRT analysis for multiple data sets and (2) cope with the unique challenges of analyzing SRTs from eye-tracking data collected from poorly cooperating participants. The library includes preprocessing and SRT analysis routines. The preprocessing routines (i.e., moving median filter and interpolation) are designed to remove technical artifacts and missing samples from raw eye-tracking data. The SRTs are detected by a simple algorithm that identifies the last point of gaze in the area of interest, but, critically, the extracted SRTs are further subjected to a number of postanalysis verification checks to exclude values contaminated by artifacts. Example analyses of data from 5- to 11-month-old infants demonstrated that SRTs extracted with the proposed routines were in high agreement with SRTs obtained manually from video records, robust against potential sources of artifact, and exhibited moderate to high test-retest stability. We propose that the present library has wide utility in standardizing and automating SRT-based cognitive testing in various populations. The MATLAB routines are open source and can be downloaded from http://www.uta.fi/med/icl/methods.html .
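
    The routines themselves are MATLAB; a rough Python analogue of the two preprocessing steps named above (moving median filter and interpolation of missing samples) and of a simple "last sample inside the area of interest" SRT rule is sketched below. The window length and the AOI test are assumptions, and the published post-analysis verification checks are omitted.

      import numpy as np
      import pandas as pd

      def preprocess_gaze(x, window=7):
          """Moving median filter followed by interpolation over missing samples."""
          s = pd.Series(x).rolling(window, center=True, min_periods=1).median()
          return s.interpolate(limit_direction="both").to_numpy()

      def saccadic_rt_ms(in_aoi, fs):
          """SRT taken as the time of the last gaze sample still inside the AOI,
          measured from the start of the stimulus-aligned trial."""
          inside = np.flatnonzero(in_aoi)
          return np.nan if inside.size == 0 else inside[-1] / fs * 1000.0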

  4. Obesity in the new media: a content analysis of obesity videos on YouTube.

    Science.gov (United States)

    Yoo, Jina H; Kim, Junghyun

    2012-01-01

    This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed on YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although there was a tendency that some video categories have started to show other causal claims or solutions. However, due to the prevailing emphasis on personal causes and solutions, numerous YouTube videos had a theme of weight-based teasing, or showed obese persons engaging in stereotypical eating behaviors. We discuss a potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing stigmatization of obese persons.

  5. Automated analysis of invadopodia dynamics in live cells

    Directory of Open Access Journals (Sweden)

    Matthew E. Berginski

    2014-07-01

    Full Text Available Multiple cell types form specialized protein complexes that are used by the cell to actively degrade the surrounding extracellular matrix. These structures are called podosomes or invadopodia and collectively referred to as invadosomes. Due to their potential importance in both healthy physiology as well as in pathological conditions such as cancer, the characterization of these structures has been of increasing interest. Following early descriptions of invadopodia, assays were developed which labelled the matrix underneath metastatic cancer cells allowing for the assessment of invadopodia activity in motile cells. However, characterization of invadopodia using these methods has traditionally been done manually with time-consuming and potentially biased quantification methods, limiting the number of experiments and the quantity of data that can be analysed. We have developed a system to automate the segmentation, tracking and quantification of invadopodia in time-lapse fluorescence image sets at both the single invadopodia level and whole cell level. We rigorously tested the ability of the method to detect changes in invadopodia formation and dynamics through the use of well-characterized small molecule inhibitors, with known effects on invadopodia. Our results demonstrate the ability of this analysis method to quantify changes in invadopodia formation from live cell imaging data in a high throughput, automated manner.

  6. Tobacco and alcohol use behaviors portrayed in music videos: a content analysis.

    Science.gov (United States)

    DuRant, R H; Rome, E S; Rich, M; Allred, E; Emans, S J; Woods, E R

    1997-07-01

    Music videos from five genres of music were analyzed for portrayals of tobacco and alcohol use and for portrayals of such behaviors in conjunction with sexuality. Music videos (n = 518) were recorded during randomly selected days and times from four television networks. Four female and four male observers aged 17 to 24 years were trained to use a standardized content analysis instrument. All videos were observed by rotating two-person, male-female teams who were required to reach agreement on each behavior that was scored. Music genre and network differences in behaviors were analyzed with chi-squared tests. A higher percentage (25.7%) of MTV videos than other network videos portrayed tobacco use. The percentage of videos showing alcohol use was similar on all four networks. In videos that portrayed tobacco and alcohol use, the lead performer was most often the one smoking or drinking and the use of alcohol was associated with a high degree of sexuality on all the videos. These data indicate that even modest levels of viewing may result in substantial exposure to glamorized depictions of alcohol and tobacco use and alcohol use coupled with sexuality.

  7. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    Although racquet sports video - e.g. table tennis, tennis and badminton - is a very important category of sports video, it has received little attention in the past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impacts (ball hits), audience cheers, commentator speech, etc. Meanwhile, an unsupervised algorithm is proposed to group video shots into various clusters. Then, by taking advantage of the temporal relationship between audio and visual signals, we can label the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  8. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades image quality. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound images and video, as well as the theoretical background, algorithmic steps, and the Matlab™ code for the following group of despeckle filters:

  9. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs

    Directory of Open Access Journals (Sweden)

    D.M. Lane

    2015-11-01

    Full Text Available Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness, and which limb(s) they felt represented the source of the lameness. Spearman’s rho, Cramer’s V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters’ SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways.

  10. Automated MRI Volumetric Analysis in Patients with Rasmussen Syndrome.

    Science.gov (United States)

    Wang, Z I; Krishnan, B; Shattuck, D W; Leahy, R M; Moosa, A N V; Wyllie, E; Burgess, R C; Al-Sharif, N B; Joshi, A A; Alexopoulos, A V; Mosher, J C; Udayasankar, U; Jones, S E

    2016-12-01

    Rasmussen syndrome, also known as Rasmussen encephalitis, is typically associated with volume loss of the affected hemisphere of the brain. Our aim was to apply automated quantitative volumetric MR imaging analyses to patients diagnosed with Rasmussen encephalitis, to determine the predictive value of lobar volumetric measures and to assess regional atrophy differences as well as monitor disease progression by using these measures. Nineteen patients (42 scans) with diagnosed Rasmussen encephalitis were studied. We used 2 control groups: one with 42 age- and sex-matched healthy subjects and the other with 42 epileptic patients without Rasmussen encephalitis with the same disease duration as patients with Rasmussen encephalitis. Volumetric analysis was performed on T1-weighted images by using BrainSuite. Ratios of volumes from the affected hemisphere divided by those from the unaffected hemisphere were used as input to a logistic regression classifier, which was trained to discriminate patients from controls. Using the classifier, we compared the predictive accuracy of all the volumetric measures. These ratios were used to further assess regional atrophy differences and correlate with epilepsy duration. Interhemispheric and frontal lobe ratios had the best prediction accuracy for separating patients with Rasmussen encephalitis from healthy controls and patient controls without Rasmussen encephalitis. The insula showed significantly more atrophy compared with all the other cortical regions. Patients with longitudinal scans showed progressive volume loss in the affected hemisphere. Atrophy of the frontal lobe and insula correlated significantly with epilepsy duration. Automated quantitative volumetric analysis provides accurate separation of patients with Rasmussen encephalitis from healthy controls and epileptic patients without Rasmussen encephalitis, and thus may assist the diagnosis of Rasmussen encephalitis. Volumetric analysis could also be included as part of
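
    A minimal sketch of the classification step described above - affected/unaffected hemispheric volume ratios fed to a logistic regression classifier - is given below, with toy data standing in for the real volumetric measures; the numbers are illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Toy affected/unaffected volume ratios (rows: subjects, columns: regions)
      rng = np.random.default_rng(1)
      ratios = np.vstack([rng.normal(0.85, 0.05, (19, 3)),   # patients: affected side smaller
                          rng.normal(1.00, 0.03, (42, 3))])  # controls: roughly symmetric
      labels = np.r_[np.ones(19), np.zeros(42)]

      clf = LogisticRegression(max_iter=1000)
      print("cross-validated accuracy:",
            cross_val_score(clf, ratios, labels, cv=5).mean())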

  11. Development of automated system for real-time LIBS analysis

    Science.gov (United States)

    Mazalan, Elham; Ali, Jalil; Tufail, Kashif; Haider, Zuhaib

    2017-03-01

    Recent developments in Laser Induced Breakdown Spectroscopy (LIBS) instrumentation allow the acquisition of several spectra in a second. The dataset from a typical LIBS experiment can consist of a few thousand spectra. Extracting the useful information from such a dataset is a painstaking and time-consuming process. Most of the currently available software for spectral data analysis is expensive and used for offline data analysis. LabVIEW software compatible with the spectrometer (in this case an Ocean Optics Maya pro spectrometer) can be used for data acquisition and real-time analysis. In the present work, a LabVIEW-based automated system for real-time LIBS analysis integrated with the spectrometer is developed. This system is capable of performing real-time analysis based on as-acquired LIBS spectra. Here, we have demonstrated LIBS data acquisition and real-time calculation of plasma temperature and electron density. Data plots and variations in spectral intensity in response to laser energy were observed on the LabVIEW monitor interface. Routine laboratory samples of brass and calcined bone were used in this experiment. The developed program has shown impressive performance in real-time data acquisition and analysis.
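
    The abstract mentions real-time calculation of plasma temperature; one standard way to obtain it (not necessarily the authors' LabVIEW implementation) is a Boltzmann plot over several emission lines of one species, where ln(Iλ/gA) is fitted linearly against the upper-level energy and the temperature follows from the slope. The sketch below assumes the line intensities and spectroscopic constants are already available.

      import numpy as np

      K_B_EV = 8.617333e-5   # Boltzmann constant, eV/K

      def boltzmann_temperature(intensity, wavelength_nm, g_upper, A_ul, E_upper_eV):
          """Plasma temperature (K) from a Boltzmann plot: ln(I*lambda/(g*A)) vs E_upper."""
          y = np.log(np.asarray(intensity) * np.asarray(wavelength_nm)
                     / (np.asarray(g_upper) * np.asarray(A_ul)))
          slope, _ = np.polyfit(E_upper_eV, y, 1)   # slope = -1/(k_B * T)
          return -1.0 / (slope * K_B_EV)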

  12. Designing Stories for Educational Video Games: Analysis and Evaluation

    Science.gov (United States)

    López-Arcos, J. R.; Padilla-Zea, N.; Paderewski, P.; Gutiérrez, F. L.

    2017-01-01

    The use of video games as an educational tool initially causes a higher degree of motivation in students. However, the inclusion of educational activities throughout the game can cause this initial interest to be lost. A good way to maintain motivation is to use a good story that serves as a guiding thread with which to contextualize the other…

  13. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed at lamppost of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets, building blocks and surrounded by gates and water. The video recordings are

  14. Two Video Analysis Applications Using Foreground/Background Segmentation

    NARCIS (Netherlands)

    Zivkovic, Z.; Petkovic, M.; van Mierlo, R.; van Keulen, Maurice; van der Heijden, Ferdinand; Jonker, Willem; Rijnierse, E.

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with
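
    As an illustration of the foreground/background segmentation step this record refers to (not the authors' own implementation), a minimal OpenCV loop using the MOG2 background model could look like this; the video path, history length and blob-area threshold are placeholders.

      import cv2

      cap = cv2.VideoCapture("input.avi")                       # placeholder path
      bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg_mask = bg.apply(frame)                             # per-pixel foreground mask
          fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
          contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          blobs = [c for c in contours if cv2.contourArea(c) > 200]   # segmented regions
      cap.release()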

  15. Video Analysis in Cross-Cultural Environments and Methodological Issues

    Science.gov (United States)

    Montandon, Christiane

    2015-01-01

    This paper addresses the use of videography combined with group interviews, as a way to better understand the informal learnings of 11-12 year old children in cross-cultural encounters during French-German school exchanges. The complete, consistent video data required the researchers to choose the most significant sequences to highlight the…

  16. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments

    Science.gov (United States)

    Bass, Ellen J.; Baumgart, Leigh A.; Shepley, Kathryn Klein

    2014-01-01

    Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance. PMID:24847184

  17. Automated differentiation of computer models for sensitivity analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1991-01-01

    Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems. (author). 9 refs, 1 tab
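
    GRESS works by instrumenting FORTRAN source code; as a toy illustration of the underlying idea of propagating derivatives alongside values (forward mode only - not GRESS, and not the adjoint technique), a dual-number sketch in Python:

      class Dual:
          """A value together with its first derivative, propagated through arithmetic."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def _wrap(self, o):
              return o if isinstance(o, Dual) else Dual(o)
          def __add__(self, o):
              o = self._wrap(o)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__
          def __mul__(self, o):
              o = self._wrap(o)
              return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def model(k):
          # toy "computer model": response = 3*k^2 + 2*k
          return 3 * k * k + 2 * k

      x = Dual(1.5, 1.0)                 # seed derivative dx/dx = 1
      y = model(x)
      print(y.val, y.der)                # value 9.75 and sensitivity dy/dx = 11.0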

  18. Automated generation of burnup chain for reactor analysis applications

    International Nuclear Information System (INIS)

    Tran, Viet-Phu; Tran, Hoai-Nam; Yamamoto, Akio; Endo, Tomohiro

    2017-01-01

    This paper presents the development of an automated generation of burnup chain for reactor analysis applications. Algorithms are proposed to reevaluate decay modes, branching ratios and effective fission product (FP) cumulative yields of a given list of important FPs taking into account intermediate reactions. A new burnup chain is generated using the updated data sources taken from the JENDL FP decay data file 2011 and Fission yields data file 2011. The new burnup chain is output according to the format for the SRAC code system. Verification has been performed to evaluate the accuracy of the new burnup chain. The results show that the new burnup chain reproduces well the results of a reference one with 193 fission products used in SRAC. Burnup calculations using the new burnup chain have also been performed based on UO2 and MOX fuel pin cells and compared with a reference chain th2cm6fp193bp6T.

  19. Automated uranium analysis by delayed-neutron counting

    International Nuclear Information System (INIS)

    Kunzendorf, H.; Loevborg, L.; Christiansen, E.M.

    1980-10-01

    Automated uranium analysis by fission-induced delayed-neutron counting is described. A short description is given of the instrumentation, including the transfer system, process control, irradiation and counting sites, and computer operations. Characteristic parameters of the facility (sample preparation, background, and standards) are discussed. A sensitivity of 817 ± 22 counts per 10⁻⁶ g U is found using irradiation, delay, and counting times of 20 s, 5 s, and 10 s, respectively. Precision is generally less than 1% for normal geological samples. Critical level and detection limits for 7.5 g samples are 8 and 16 ppb, respectively. The importance of some physical and elemental interferences is outlined. Dead-time corrections of measured count rates are necessary and a polynomial expression is used for count rates up to 10⁵. The presence of rare earth elements is regarded as the most important elemental interference. A typical application is given and other areas of application are described. (author)

  20. Knowledge-based requirements analysis for automating software development

    Science.gov (United States)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  1. Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.

    Science.gov (United States)

    Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J

    2017-09-23

    As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.

  2. Speech Recognition for A Digital Video Library.

    Science.gov (United States)

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Production of the meta-data supporting the Informedia Digital Video Library interface is automated using techniques derived from artificial intelligence research. Speech recognition and natural-language processing, information retrieval, and image analysis are applied to produce an interface that helps users locate information and navigate more…

  3. Attitudes towards schizophrenia on YouTube: A content analysis of Finnish and Greek videos.

    Science.gov (United States)

    Athanasopoulou, Christina; Suni, Sanna; Hätönen, Heli; Apostolakis, Ioannis; Lionis, Christos; Välimäki, Maritta

    2016-01-01

    To investigate attitudes towards schizophrenia and people with schizophrenia presented in YouTube videos. We searched YouTube using the search terms "schizophrenia" and "psychosis" in Finnish and Greek language on April 3rd, 2013. The first 20 videos from each search (N = 80) were retrieved. Deductive content analysis was first applied for coding and data interpretation and it was followed by descriptive statistical analysis. A total of 52 videos were analyzed (65%). The majority of the videos were in the "Music" category (50%, n = 26). Most of the videos (83%, n = 43) tended to present schizophrenia in a negative way, while less than a fifth (17%, n = 9) presented schizophrenia in a positive or neutral way. Specifically, the most common negative attitude towards schizophrenia was dangerousness (29%, n = 15), while the most often identified positive attitude was objective, medically appropriate beliefs (21%, n = 11). All attitudes identified were similarly present in the Finnish and Greek videos, without any statistically significant difference. Negative presentations of schizophrenia are most likely to be accessed when searching YouTube for schizophrenia in Finnish and Greek language. More research is needed to investigate to what extent, if any, YouTube viewers' attitudes are affected by the videos they watch.

  4. A standard analysis method (SAM) for the automated analysis of polychlorinated biphenyls (PCBs) in soils using the chemical analysis automation (CAA) paradigm: validation and performance

    International Nuclear Information System (INIS)

    Rzeszutko, C.; Johnson, C.R.; Monagle, M.; Klatt, L.N.

    1997-10-01

    The Chemical Analysis Automation (CAA) program is developing a standardized modular automation strategy for chemical analysis. In this automation concept, analytical chemistry is performed with modular building blocks that correspond to individual elements of the steps in the analytical process. With a standardized set of behaviors and interactions, these blocks can be assembled in a 'plug and play' manner into a complete analysis system. These building blocks, which are referred to as Standard Laboratory Modules (SLM), interface to a host control system that orchestrates the entire analytical process, from sample preparation through data interpretation. The integrated system is called a Standard Analysis Method (SAME). A SAME for the automated determination of Polychlorinated Biphenyls (PCB) in soils, assembled in a mobile laboratory, is undergoing extensive testing and validation. The SAME consists of the following SLMs: a four channel Soxhlet extractor, a High Volume Concentrator, column clean up, a gas chromatograph, a PCB data interpretation module, a robot, and a human-computer interface. The SAME is configured to meet the requirements specified in U.S. Environmental Protection Agency's (EPA) SW-846 Methods 3541/3620A/8082 for the analysis of PCBs in soils. The PCB SAME will be described along with the developmental test plan. Performance data obtained during developmental testing will also be discussed.

  5. galaxieEST: addressing EST identity through automated phylogenetic analysis.

    Science.gov (United States)

    Nilsson, R Henrik; Rajashekar, Balaji; Larsson, Karl-Henrik; Ursing, Björn M

    2004-07-05

    Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactory annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process. In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. galaxieEST is an open source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches return one or more pertinent, but not full, matches and

  6. 14 CFR 1261.413 - Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults.

    Science.gov (United States)

    2010-01-01

    14 CFR 1261.413, Aeronautics and Space (2010-01-01 edition), § 1261.413: Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults. The…

  7. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    Science.gov (United States)

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.

  8. Safety and Capacity Analysis of Automated and Manual Highway Systems

    OpenAIRE

    Carbaugh, Jason; Godbole, Datta N.; Sengupta, Raja

    1999-01-01

    This paper compares safety of automated and manual highway systems with respect to resulting rear-end collision frequency and severity. The results show that automated driving is safer than the most alert manual drivers, at similar speeds and capacities. We also present a detailed safety-capacity tradeoff study for four different Automated Highway System concepts that differ in their information structure and separation policy.

  9. Recording and automated analysis of naturalistic bioptic driving.

    Science.gov (United States)

    Luo, Gang; Peli, Eli

    2011-05-01

    People with moderate central vision loss are legally permitted to drive with a bioptic telescope in 39 US states and the Netherlands, but the safety of bioptic driving remains highly controversial. There is no scientific evidence about bioptic use and its impact on safety. We propose searching for evidence by recording naturalistic driving activities in patients' cars. In a pilot study we used an analogue video system to record two bioptic drivers' daily driving activities for 10 and 5 days, respectively. In this technical report, we also describe our novel digital system that collects vehicle manoeuvre information and enables recording over more extended periods, and discuss our approach to analyzing the vast amount of data. Our observations of telescope use by the pilot subjects were quite different from their reports in a previous survey. One subject used the telescope only seven times in nearly 6 h of driving. For the other subject, the average interval between telescope use was about 2 min, and mobile (cell) phone use in one trip extended the interval to almost 5 min. We demonstrate that computerized analysis of lengthy recordings based on video, GPS, acceleration, and black box data can be used to select informative segments for efficient off-line review of naturalistic driving behaviours. The inconsistency between self-reports and objective data as well as infrequent telescope use underscores the importance of recording bioptic driving behaviours in naturalistic conditions over extended periods. We argue that the new recording system is important for understanding bioptic use behaviours and bioptic driving safety. © 2011 The College of Optometrists.

  10. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  11. Online Nonparametric Bayesian Activity Mining and Analysis From Surveillance Video.

    Science.gov (United States)

    Bastani, Vahid; Marcenaro, Lucio; Regazzoni, Carlo S

    2016-05-01

    A method for online incremental mining of activity patterns from the surveillance video stream is presented in this paper. The framework consists of a learning block in which a Dirichlet process mixture model is employed for the incremental clustering of trajectories. Stochastic trajectory pattern models are formed using the Gaussian process regression of the corresponding flow functions. Moreover, a sequential Monte Carlo method based on a Rao-Blackwellized particle filter is proposed for tracking and online classification as well as the detection of abnormality during the observation of an object. Experimental results on real surveillance video data are provided to show the performance of the proposed algorithm in different tasks of trajectory clustering, classification, and abnormality detection.
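
    As a rough sketch of the clustering component only, scikit-learn's truncated Dirichlet process mixture can stand in for the paper's incremental model, operating on hypothetical fixed-length trajectory descriptors rather than Gaussian process flow functions; the tracking and abnormality-detection stages (Rao-Blackwellized particle filtering) are not reproduced here.

      import numpy as np
      from sklearn.mixture import BayesianGaussianMixture

      # Hypothetical trajectory descriptors, one row per track
      # (e.g., start point, end point, mean velocity)
      rng = np.random.default_rng(2)
      feats = np.vstack([rng.normal(loc, 0.1, (100, 6)) for loc in (0.2, 0.5, 0.8)])

      dpmm = BayesianGaussianMixture(
          n_components=15,                                     # truncation level
          weight_concentration_prior_type="dirichlet_process",
          random_state=0).fit(feats)

      labels = dpmm.predict(feats)
      print("activity patterns found:", np.unique(labels).size)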

  12. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. This is especially true for aeolian dust deposits with a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Automatic image analysis with the Malvern Morphologi G3-ID, from which granulometric data are obtained here, is a rarely applied new technique for particle size and shape analyses in sedimentary geology. In this study, size and shape data of several hundred thousand (or even a million) individual particles were automatically recorded from the captured high-resolution images of 15 loess and paleosoil samples. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of optical properties of the material. Intensity values are dependent on chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments

  13. Automated Image Analysis of Offshore Infrastructure Marine Biofouling

    Directory of Open Access Journals (Sweden)

    Kate Gormley

    2018-01-01

    Full Text Available In the UK, some of the oldest oil and gas installations have been in the water for over 40 years and have considerable colonisation by marine organisms, which may lead to both industry challenges and/or potential biodiversity benefits (e.g., artificial reefs). The project objective was to test the use of an automated image analysis software (CoralNet) on images of marine biofouling from offshore platforms on the UK continental shelf, with the aim of (i) training the software to identify the main marine biofouling organisms on UK platforms; (ii) testing the software performance on 3 platforms under 3 different analysis criteria (methods A–C); (iii) calculating the percentage cover of marine biofouling organisms and (iv) providing recommendations to industry. Following software training with 857 images, and testing of three platforms, results showed that diversity of the three platforms ranged from low (in the central North Sea) to moderate (in the northern North Sea). The two central North Sea platforms were dominated by the plumose anemone Metridium dianthus; and the northern North Sea platform showed less obvious species domination. Three different analysis criteria were created, where the method of selection of points, number of points assessed and confidence level thresholds (CT) varied: (method A) random selection of 20 points with CT 80%, (method B) stratified random selection of 50 points with CT of 90% and (method C) a grid approach of 100 points with CT of 90%. Performed across the three platforms, the results showed that there were no significant differences across the majority of species and comparison pairs. No significant difference (across all species) was noted between confirmed annotation methods (A, B and C). It was considered that the software performed well for the classification of the main fouling species in the North Sea. Overall, the study showed that the use of automated image analysis software may enable a more efficient and consistent

  14. Real-time video analysis for retail stores

    Science.gov (United States)

    Hassan, Ehtesham; Maurya, Avinash K.

    2015-03-01

    With advances in video processing technologies, we can capture subtle human responses in a retail store environment that play a decisive role in store management. In this paper, we present a novel surveillance-video-based analytic system for retail stores targeting localized and global traffic estimates. Developing an intelligent system for human traffic estimation in real life poses a challenging problem because of the variation and noise involved. We begin with a novel human tracking system based on an intelligent combination of motion-based and image-level object detection. We demonstrate an initial evaluation of this approach on an available standard dataset, yielding promising results. Exact traffic estimation in a retail store requires correct separation of customers from service providers. We present a role-based human classification framework using a Gaussian mixture model for this task. A novel feature descriptor named the graded colour histogram is defined for object representation. Using our role-based human classification and tracking system, we define a novel, computationally efficient framework for two types of analytics: region-specific people counts and dwell-time estimation. The system has been extensively evaluated and tested on four hours of real-life video captured in a retail store.
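
    The region-specific people count and dwell-time analytics described above can be derived from per-frame track positions once tracking has been done. The Python sketch below assumes a hypothetical track format and a rectangular region of interest, and simply counts the distinct people seen inside the region and the mean time they spend there; it illustrates the analytics step only and is not the authors' system.

      def region_analytics(tracks, region, fps=25.0):
          """Region-specific people count and mean dwell time from tracks.

          tracks -- {person_id: [(frame, x, y), ...]} from any tracker (assumed format)
          region -- (xmin, ymin, xmax, ymax) rectangle of interest
          fps    -- frame rate used to convert frame counts to seconds
          """
          xmin, ymin, xmax, ymax = region
          dwell_frames = {}
          for pid, positions in tracks.items():
              inside = sum(1 for _, x, y in positions
                           if xmin <= x <= xmax and ymin <= y <= ymax)
              if inside:
                  dwell_frames[pid] = inside   # assumes roughly consecutive frames inside
          count = len(dwell_frames)
          mean_dwell = sum(dwell_frames.values()) / count / fps if count else 0.0
          return count, mean_dwell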

  15. Development of a software for INAA analysis automation

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Genezini, Frederico A.; Figueiredo, Ana Maria G.; Ticianelli, Regina B.

    2013-01-01

    In this work, a software to automate the post-counting tasks in comparative INAA has been developed that aims to become more flexible than the available options, integrating itself with some of the routines currently in use in the IPEN Activation Analysis Laboratory and allowing the user to choose between a fully-automatic analysis or an Excel-oriented one. The software makes use of the Genie 2000 data importing and analysis routines and stores each 'energy-counts-uncertainty' table as a separate ASCII file that can be used later on if required by the analyst. Moreover, it generates an Excel-compatible CSV (comma separated values) file with only the relevant results from the analyses for each sample or comparator, as well as the results of the concentration calculations and the results obtained with four different statistical tools (unweighted average, weighted average, normalized residuals and Rajeval technique), allowing the analyst to double-check the results. Finally, a 'summary' CSV file is also produced, with the final concentration results obtained for each element in each sample. (author)
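
    Of the statistical checks listed above, the unweighted and inverse-variance weighted averages are straightforward to reproduce; the Python sketch below shows them together with simple standardized residuals for outlier screening. The normalized-residuals and Rajeval techniques used by the software are more involved and are not reproduced here, so this is only an illustration of the cross-checking idea, not the IPEN code.

      def combine_results(values, sigmas):
          """Combine repeated concentration results given as parallel value/1-sigma lists.

          Returns the unweighted mean, the inverse-variance weighted mean with its
          uncertainty, and per-result standardized residuals (value minus weighted
          mean, divided by the result's own uncertainty).
          """
          n = len(values)
          unweighted = sum(values) / n
          weights = [1.0 / s ** 2 for s in sigmas]
          weighted = sum(w * v for w, v in zip(weights, values)) / sum(weights)
          weighted_sigma = (1.0 / sum(weights)) ** 0.5
          residuals = [(v - weighted) / s for v, s in zip(values, sigmas)]
          return unweighted, weighted, weighted_sigma, residuals

      # Example: three replicate results for one element (mg/kg)
      print(combine_results([10.2, 9.8, 10.9], [0.3, 0.4, 0.6]))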

  16. Automated modelling of complex refrigeration cycles through topological structure analysis

    International Nuclear Information System (INIS)

    Belman-Flores, J.M.; Riesco-Avila, J.M.; Gallegos-Munoz, A.; Navarro-Esbri, J.; Aceves, S.M.

    2009-01-01

    We have developed a computational method for analysis of refrigeration cycles. The method is well suited for automated analysis of complex refrigeration systems. The refrigerator is specified through a description of flows representing thermodynamic states at system locations; components that modify the thermodynamic state of a flow; and controls that specify flow characteristics at selected points in the diagram. A system of equations is then established for the refrigerator, based on mass, energy and momentum balances for each of the system components. Controls specify the values of certain system variables, thereby reducing the number of unknowns. It is found that the system of equations for the refrigerator may contain a number of redundant or duplicate equations, and therefore further equations are necessary for a full characterization. The number of additional equations is related to the number of loops in the cycle, and this is calculated by a matrix-based topological method. The methodology is demonstrated through an analysis of a two-stage refrigeration cycle.
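
    For such a flow diagram, the number of independent loops equals the cyclomatic number of the underlying graph, E - N + C (edges minus nodes plus connected components). The Python sketch below counts that quantity directly from a component/flow list; it illustrates the number being computed rather than the paper's matrix-based topological procedure, and the example in the closing comment is hypothetical.

      def independent_loops(components, flows):
          """Cyclomatic number E - N + C of a cycle diagram.

          components -- iterable of component names (graph nodes)
          flows      -- iterable of (from_component, to_component) pairs (graph edges)
          The diagram is treated as an undirected graph (assumption).
          """
          flows = list(flows)
          nodes = set(components)
          parent = {n: n for n in nodes}

          def find(n):                       # union-find to count connected pieces
              while parent[n] != n:
                  parent[n] = parent[parent[n]]
                  n = parent[n]
              return n

          for a, b in flows:
              parent[find(a)] = find(b)
          pieces = len({find(n) for n in nodes})
          return len(flows) - len(nodes) + pieces

      # e.g. a single closed compressor-condenser-valve-evaporator loop returns 1;
      # adding an economizer branch that reconnects to the main loop returns 2.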

  17. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whiteman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-11-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of the plasma traces has been processed with this technique.
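
    Tukey's "twicing" applies a smoother to the data, then smooths the residuals and adds the result back: smoothed = S(y) + S(y - S(y)). The Python sketch below uses a running median as the base smoother S; the choice of smoother and window size are assumptions, since the report does not specify them.

      def moving_median(y, window=5):
          """Simple running-median smoother (odd window) used as the base smoother S."""
          half = window // 2
          padded = [y[0]] * half + list(y) + [y[-1]] * half
          return [sorted(padded[i:i + window])[half] for i in range(len(y))]

      def twice(y, smoother=moving_median):
          """Tukey's 'twicing': S(y) + S(y - S(y))."""
          first = smoother(y)
          residual = [a - b for a, b in zip(y, first)]
          second = smoother(residual)
          return [a + b for a, b in zip(first, second)]

      # Example: despiking a noisy centroid trace
      trace = [10, 10, 11, 30, 11, 12, 12, 13, 12, 13]
      print(twice(trace))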

  18. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whitman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-01-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of sixteen plasma traces has been processed using this technique.

  19. Intelligent Control in Automation Based on Wireless Traffic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both for maintaining the integrity of computer systems and for increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still-new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as for making its use more secure.

  20. Intelligent Control in Automation Based on Wireless Traffic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2007-09-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical both for maintaining the integrity of computer systems and for increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still-new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as for making its use more secure.

  1. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In recent decades, several different motion capture systems have been used. This abstract describes a new low-cost motion capture system....

  2. Interobserver and Intraobserver Variability in pH-Impedance Analysis between 10 Experts and Automated Analysis

    DEFF Research Database (Denmark)

    Loots, Clara M; van Wijk, Michiel P; Blondeau, Kathleen

    2011-01-01

    OBJECTIVE: To determine interobserver and intraobserver variability in pH-impedance interpretation between experts and accuracy of automated analysis (AA). STUDY DESIGN: Ten pediatric 24-hour pH-impedance tracings were analyzed by 10 observers from 7 world groups and with AA. Detection of gastroe...

  3. Space Environment Automated Alerts and Anomaly Analysis Assistant (SEA^5) for NASA

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a comprehensive analysis and dissemination system (Space Environment Automated Alerts  & Anomaly Analysis Assistant: SEA5) that will...

  4. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and opponents of the idea that video games can be a full-fledged art form. Second, it analyzes the properties inherent to video games in order to find the reason why the cultural elite considers video games as i...

  5. Video incident analysis of concussions in boys' high school lacrosse.

    Science.gov (United States)

    Lincoln, Andrew E; Caswell, Shane V; Almquist, Jon L; Dunn, Reginald E; Hinton, Richard Y

    2013-04-01

    Boys' lacrosse has one of the highest rates of concussion among boys' high school sports. A thorough understanding of injury mechanisms and game situations associated with concussions in boys' high school lacrosse is necessary to target injury prevention efforts. To characterize common game-play scenarios and mechanisms of injury associated with concussions in boys' high school lacrosse using game video. Descriptive epidemiological study. In 25 public high schools of a single school system, 518 boys' lacrosse games were videotaped by trained videographers during the 2008 and 2009 seasons. Video of concussion incidents was examined to identify game characteristics and injury mechanisms using a lacrosse-specific coding instrument. A total of 34 concussions were captured on video. All concussions resulted from player-to-player bodily contact. Players were most often injured when contact was unanticipated or players were defenseless (n = 19; 56%), attempting to pick up a loose ball (n = 16; 47%), and/or ball handling (n = 14; 41%). Most frequently, the striking player's head (n = 27; 79%) was involved in the collision, and the struck player's head was the initial point of impact in 20 incidents (59%). In 68% (n = 23) of cases, a subsequent impact with the playing surface occurred immediately after the initial impact. A penalty was called in 26% (n = 9) of collisions. Player-to-player contact was the mechanism for all concussions. Most commonly, injured players were unaware of the pending contact, and the striking player used his head to initiate contact. Further investigation of preventive measures such as education of coaches and officials and enforcement of rules designed to prevent intentional head-to-head contact is warranted to reduce the incidence of concussions in boys' lacrosse.

  6. Building a Reduced Reference Video Quality Metric with Very Low Overhead Using Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Tobias Oelbaum

    2008-10-01

    Full Text Available In this contribution a reduced reference video quality metric for AVC/H.264 is proposed that needs only a very low overhead (not more than two bytes per sequence). This reduced reference metric uses well established algorithms to measure objective features of the video such as 'blur' or 'blocking'. Those measurements are then combined into a single measurement for the overall video quality. The weights of the single features and the combination of those are determined using methods provided by multivariate data analysis. The proposed metric is verified using a data set of AVC/H.264 encoded videos and the corresponding results of a carefully designed and conducted subjective evaluation. Results show that the proposed reduced reference metric not only outperforms standard PSNR but also two well known full reference metrics.
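
    The step of combining the individual feature measurements into one overall quality value is, at its core, a weighted combination whose weights are learned from subjective scores. The Python sketch below fits such weights by ordinary least squares on invented feature/MOS data; the paper derives its weights with multivariate data analysis, so this is only an illustration of the combination step, not the published metric.

      import numpy as np

      # Hypothetical training data: per-sequence feature measurements
      # (e.g. blur, blocking, noise) and the corresponding subjective scores (MOS).
      features = np.array([[0.31, 0.12, 0.05],
                           [0.45, 0.30, 0.07],
                           [0.20, 0.08, 0.03],
                           [0.55, 0.41, 0.10]])
      mos = np.array([3.8, 2.6, 4.3, 2.1])

      # Least-squares fit of a linear combination with an intercept term.
      X = np.hstack([features, np.ones((features.shape[0], 1))])
      weights, *_ = np.linalg.lstsq(X, mos, rcond=None)

      def predict_quality(feature_vector):
          """Combine objective feature measurements into a single quality estimate."""
          return float(np.dot(np.append(feature_vector, 1.0), weights))

      print(predict_quality([0.35, 0.15, 0.05]))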

  7. Analysis of the campaign videos posted by the Third Sector on YouTube

    Directory of Open Access Journals (Sweden)

    C Van-Wyck

    2013-04-01

    Full Text Available Introduction. Web 2.0 social networks have become one of the tools most widely used by third-sector organisations. This research article examines the formal aspects, content and significance of the videos posted by these organisations on YouTube. Methods. The study is based on the quantitative content analysis of 370 videos of this type, with the objective of identifying their main characteristics. Results. The results indicate that these videos are characterised by low levels of creativity, the incorporation of a great amount of very clear information, the predominance of explicit content and the use of very similar formats. Conclusions. Based on the research results, it was concluded that these organisations produce campaign videos with predictable messages that rely on homogeneous structures that can be easily classified into two types: predominantly informative and predominantly persuasive.

  8. GWATCH: a web platform for automated gene association discovery analysis

    Science.gov (United States)

    2014-01-01

    Background As genome-wide sequence analyses for complex human disease determinants are expanding, it is increasingly necessary to develop strategies to promote discovery and validation of potential disease-gene associations. Findings Here we present a dynamic web-based platform – GWATCH – that automates and facilitates four steps in genetic epidemiological discovery: 1) Rapid gene association search and discovery analysis of large genome-wide datasets; 2) Expanded visual display of gene associations for genome-wide variants (SNPs, indels, CNVs), including Manhattan plots, 2D and 3D snapshots of any gene region, and a dynamic genome browser illustrating gene association chromosomal regions; 3) Real-time validation/replication of candidate or putative genes suggested from other sources, limiting Bonferroni genome-wide association study (GWAS) penalties; 4) Open data release and sharing by eliminating privacy constraints (The National Human Genome Research Institute (NHGRI) Institutional Review Board (IRB), informed consent, The Health Insurance Portability and Accountability Act (HIPAA) of 1996 etc.) on unabridged results, which allows for open access comparative and meta-analysis. Conclusions GWATCH is suitable for both GWAS and whole genome sequence association datasets. We illustrate the utility of GWATCH with three large genome-wide association studies for HIV-AIDS resistance genes screened in large multicenter cohorts; however, association datasets from any study can be uploaded and analyzed by GWATCH. PMID:25374661

  9. Automated analysis for detecting beams in laser wakefield simulations

    International Nuclear Information System (INIS)

    Ushizima, Daniela M.; Rubel, Oliver; Prabhat, Mr.; Weber, Gunther H.; Bethel, E. Wes; Aragon, Cecilia R.; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Hamann, Bernd; Messmer, Peter; Hagen, Hans

    2008-01-01

    Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a potential new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates a large dataset that requires time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the data analysis and classification of simulation data. First, we propose a new method to identify locations with high density of particles in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high-density electron regions using a lifetime diagram by organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high-quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.
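
    The first step, locating regions of high particle density as maximum extrema of the particle distribution in the space-time domain, can be approximated by detecting local maxima of a two-dimensional particle histogram. The Python sketch below does exactly that; it is a stand-in for illustration only, the bin count and threshold are assumptions, and the lifetime-diagram and fuzzy-clustering stages are not reproduced.

      import numpy as np

      def density_maxima(x, t, bins=64, threshold=0.5):
          """Local maxima of a 2D particle histogram in the space-time domain.

          x, t      -- particle positions and time steps from the simulation
          threshold -- keep only maxima above this fraction of the global peak
          """
          hist, xedges, tedges = np.histogram2d(x, t, bins=bins)
          maxima = []
          for i in range(1, bins - 1):
              for j in range(1, bins - 1):
                  patch = hist[i - 1:i + 2, j - 1:j + 2]
                  if hist[i, j] == patch.max() and hist[i, j] >= threshold * hist.max():
                      maxima.append((0.5 * (xedges[i] + xedges[i + 1]),
                                     0.5 * (tedges[j] + tedges[j + 1])))
          return maxima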

  10. Automated longitudinal intra-subject analysis (ALISA) for diffusion MRI tractography

    DEFF Research Database (Denmark)

    Aarnink, Saskia H; Vos, Sjoerd B; Leemans, Alexander

    2014-01-01

    the inter-subject and intra-subject automation in this situation are intended for subjects without gross pathology. In this work, we propose such an automated longitudinal intra-subject analysis (dubbed ALISA) approach, and assessed whether ALISA could preserve the same level of reliability as obtained...

  11. 40 CFR 13.19 - Analysis of costs; automation; prevention of overpayments, delinquencies or defaults.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Analysis of costs; automation; prevention of overpayments, delinquencies or defaults. 13.19 Section 13.19 Protection of Environment...; automation; prevention of overpayments, delinquencies or defaults. (a) The Administrator may periodically...

  12. The design of video and remote analysis system for gamma spectrum based on LabVIEW

    International Nuclear Information System (INIS)

    Xu Hongkun; Fang Fang; Chen Wei

    2009-01-01

    To protect the analyst during measurement, as well as to enable experts to perform remote analysis, a solution combining live video with internet access and control is proposed. DirectShow technology and LabVIEW's IDT (Internet Develop Toolkit) module are used; the video and gamma-ray spectrum analysis pages are integrated and published on the Windows system by IIS (Internet Information Server). We realize gamma-ray spectrum analysis and remote operations over the internet. The system has a friendly interface and is easy to put into practice. It also has some reference value for related radioactivity measurements. (authors)

  13. Improving Learning Outcomes in Office Automation Subjects Through Development of Video-Based Media Learning Operating Microsoft Publisher 2010

    Directory of Open Access Journals (Sweden)

    Irma Mastumasari

    2017-07-01

    Full Text Available The purpose of this research is to produce video-based instructional media for operating Microsoft Publisher 2010, validated by experts, for class X Office Administration students at SMKN 1 Malang, through an experimental class and a control class. This study uses a Research and Development (R & D) design with 8 steps, namely: (1) initial research and information gathering, (2) planning, (3) product development, (4) expert validation, (5) product revision, (6) initial trial (small group), (7) product revision, and (8) field trial (large group). Based on validation by material experts, media experts and 12 students, the media was rated as highly valid and usable. Based on a t-test, there is a significant difference between the average student learning outcomes of the experimental class and the control class, so the learning media can be considered effective for use in the learning process.

  14. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi) i = 1,2, etc. in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral

  15. Automated absolute activation analysis with californium-252 sources

    Energy Technology Data Exchange (ETDEWEB)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from 17O. Detection sensitivities of ≤400 ppB for natural uranium and 8 ppB (≤0.5 nCi/g) for 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppM level.
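
    The photopeak-assignment step described above amounts to matching each measured peak energy against tabulated gamma-ray lines within an energy tolerance. The Python sketch below illustrates that matching with a three-nuclide toy table and a 1 keV tolerance; both are assumptions made for illustration, and the Savannah River programs of course use far more complete nuclear data.

      # A few illustrative gamma lines (keV); a real analysis uses full nuclide tables.
      GAMMA_TABLE = {
          "Na-24": [1368.6, 2754.0],
          "Mn-56": [846.8, 1810.7],
          "La-140": [487.0, 1596.2],
      }

      def assign_photopeaks(peak_energies, tolerance=1.0):
          """Assign each measured photopeak energy (keV) to candidate activation products."""
          assignments = {}
          for e in peak_energies:
              assignments[e] = [nuclide for nuclide, lines in GAMMA_TABLE.items()
                                if any(abs(e - line) <= tolerance for line in lines)]
          return assignments

      print(assign_photopeaks([846.5, 1368.9, 511.0]))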

  16. Automated regional behavioral analysis for human brain images.

    Science.gov (United States)

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images. Behavioral and coordinate data were taken from the BrainMap database (http://www.brainmap.org/), which documents over 20 years of published functional brain imaging studies. A brain region of interest (ROI) for behavioral analysis can be defined in functional images, anatomical images or brain atlases, if images are spatially normalized to MNI or Talairach standards. Results of behavioral analysis are presented for each of BrainMap's 51 behavioral sub-domains spanning five behavioral domains (Action, Cognition, Emotion, Interoception, and Perception). For each behavioral sub-domain the fraction of coordinates falling within the ROI was computed and compared with the fraction expected if coordinates for the behavior were not clustered, i.e., uniformly distributed. When the difference between these fractions is large, behavioral association is indicated. A z-score ≥ 3.0 was used to designate statistically significant behavioral association. The left-right symmetry of ~100K activation foci was evaluated by hemisphere, lobe, and by behavioral sub-domain. Results highlighted the classic left-side dominance for language while asymmetry for most sub-domains (~75%) was not statistically significant. Use scenarios were presented for anatomical ROIs from the Harvard-Oxford cortical (HOC) brain atlas, functional ROIs from statistical parametric maps in a TMS-PET study, a task-based fMRI study, and ROIs from the ten "major representative" functional networks in a previously published resting state fMRI study. Statistically significant behavioral findings for these use scenarios were consistent with published behaviors for associated anatomical and functional regions.
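
    The association test sketched above compares the observed fraction of a sub-domain's coordinates falling inside the ROI with the fraction expected under a uniform spatial distribution. A one-sample proportion z-test, shown below in Python, is one simple way to express that comparison; the exact statistic used by the published method may differ in detail, and the example numbers are invented.

      import math

      def behavioral_association_z(n_in_roi, n_total, expected_fraction):
          """z-score for over-representation of a behavioral sub-domain in an ROI.

          n_in_roi          -- sub-domain coordinates falling inside the ROI
          n_total           -- all coordinates reported for that sub-domain
          expected_fraction -- fraction expected if coordinates were uniformly
                               distributed (e.g. ROI volume / brain volume)
          """
          observed = n_in_roi / n_total
          se = math.sqrt(expected_fraction * (1.0 - expected_fraction) / n_total)
          return (observed - expected_fraction) / se

      # Association would be declared when z >= 3.0, as in the abstract above.
      print(behavioral_association_z(n_in_roi=57, n_total=800, expected_fraction=0.04))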

  17. Lesion Segmentation in Automated 3D Breast Ultrasound: Volumetric Analysis.

    Science.gov (United States)

    Agarwal, Richa; Diaz, Oliver; Lladó, Xavier; Gubern-Mérida, Albert; Vilanova, Joan C; Martí, Robert

    2018-03-01

    Mammography is the gold standard screening technique in breast cancer, but it has some limitations for women with dense breasts. In such cases, sonography is usually recommended as an additional imaging technique. A traditional sonogram produces a two-dimensional (2D) visualization of the breast and is highly operator dependent. Automated breast ultrasound (ABUS) has also been proposed to produce a full 3D scan of the breast automatically with reduced operator dependency, facilitating double reading and comparison with past exams. When using ABUS, lesion segmentation and tracking changes over time are challenging tasks, as the three-dimensional (3D) nature of the images makes the analysis difficult and tedious for radiologists. The goal of this work is to develop a semi-automatic framework for breast lesion segmentation in ABUS volumes which is based on the Watershed algorithm. The effect of different de-noising methods on segmentation is studied showing a significant impact ([Formula: see text]) on the performance using a dataset of 28 temporal pairs resulting in a total of 56 ABUS volumes. The volumetric analysis is also used to evaluate the performance of the developed framework. A mean Dice Similarity Coefficient of [Formula: see text] with a mean False Positive ratio [Formula: see text] has been obtained. The Pearson correlation coefficient between the segmented volumes and the corresponding ground truth volumes is [Formula: see text] ([Formula: see text]). Similar analysis, performed on 28 temporal (prior and current) pairs, resulted in a good correlation coefficient [Formula: see text] ([Formula: see text]) for prior and [Formula: see text] ([Formula: see text]) for current cases. The developed framework showed prospects to help radiologists to perform an assessment of ABUS lesion volumes, as well as to quantify volumetric changes during lesions diagnosis and follow-up.
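
    The volumetric evaluation above rests on standard overlap measures between the segmented mask and the ground-truth mask. The Python sketch below computes a Dice Similarity Coefficient and a false-positive ratio for binary 3D masks; taking the false-positive ratio as false-positive voxels divided by the ground-truth volume is an assumption, since the abstract's formulas and numbers are elided.

      import numpy as np

      def dice_and_false_positive(segmentation, ground_truth):
          """Overlap measures between a binary segmentation and its ground truth.

          Both inputs are boolean 3D arrays (voxel masks).
          Dice = 2 * |A intersect B| / (|A| + |B|).
          """
          seg = segmentation.astype(bool)
          gt = ground_truth.astype(bool)
          intersection = np.logical_and(seg, gt).sum()
          dice = 2.0 * intersection / (seg.sum() + gt.sum())
          false_positive = np.logical_and(seg, ~gt).sum() / gt.sum()
          return float(dice), float(false_positive)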

  18. Automated absolute activation analysis with californium-252 sources

    International Nuclear Information System (INIS)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg 252Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a 6 half-life group model of delayed neutron emission; calculations include corrections for delayed neutron interference from 17O. Detection sensitivities of ≤400 ppB for natural uranium and 8 ppB (≤0.5 nCi/g) for 239Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppM level.

  19. Automated GPR Rebar Analysis for Robotic Bridge Deck Evaluation.

    Science.gov (United States)

    Kaur, Parneet; Dana, Kristin J; Romero, Francisco A; Gucunski, Nenad

    2016-10-01

    Ground penetrating radar (GPR) is used to evaluate deterioration of reinforced concrete bridge decks based on measuring signal attenuation from embedded rebar. The existing methods for obtaining deterioration maps from GPR data often require manual interaction and offsite processing. In this paper, a novel algorithm is presented for automated rebar detection and analysis. We test the process with comprehensive measurements obtained using a novel state-of-the-art robotic bridge inspection system equipped with GPR sensors. The algorithm achieves robust performance by integrating machine learning classification using image-based gradient features and robust curve fitting of the rebar hyperbolic signature. The approach avoids edge detection, thresholding, and template matching that require manual tuning and are known to perform poorly in the presence of noise and outliers. The detected hyperbolic signatures of rebars within the bridge deck are used to generate deterioration maps of the bridge deck. The results of the rebar region detector are compared quantitatively with several methods of image-based classification and a significant performance advantage is demonstrated. High rates of accuracy are reported on real data that includes thousands of individual hyperbolic rebar signatures from three real bridge decks.

  20. A completely automated PIXE analysis system and its applications

    International Nuclear Information System (INIS)

    Li, M.; Sheng, K.; Chin, P.; Chen, Z.; Wang, X.; Chin, J.; Rong, T.; Tan, M.; Xu, Y.

    1981-01-01

    Using the 3.5 MeV proton beam from a cyclotron, a completely automated PIXE analysis system to determine the concentration of trace elements has been set up. The experimental apparatus consists of a scattering chamber with a remotely controlled automatic target changer and a Si(Li) X-ray detector. A mini-computer with a multichannel analyser is employed to record the X-ray spectrum, to acquire data and to perform on-line data processing. By comparing the recorded data with the internal standard and a set of reference X-ray spectra, a method for calculating the trace element concentrations and an on-line processing program have been worked out to obtain the final results in a convenient manner. The system has been applied to determine the concentrations of trace elements in lunar rock, in human serum and in nucleic acids. Experimental results show that the ratio of the concentration of zinc to copper in serum may be used as an important indication of the state of human health. (orig.)

  1. Automated image analysis of microstructure changes in metal alloys

    Science.gov (United States)

    Hoque, Mohammed E.; Ford, Ralph M.; Roth, John T.

    2005-02-01

    The ability to identify and quantify changes in the microstructure of metal alloys is valuable in metal cutting and shaping applications. For example, certain metals, after being cryogenically and electrically treated, have shown large increases in their tool life when used in manufacturing cutting and shaping processes. However, the mechanisms of microstructure changes in alloys under various treatments, which cause them to behave differently, are not yet fully understood. The changes are currently evaluated in a semi-quantitative manner by visual inspection of images of the microstructure. This research applies pattern recognition technology to quantitatively measure the changes in microstructure and to validate the initial assertion of increased tool life under certain treatments. Heterogeneous images of aluminum and tungsten carbide of various categories were analyzed using a process including background correction, adaptive thresholding, edge detection and other algorithms for automated analysis of microstructures. The algorithms are robust across a variety of operating conditions. This research not only facilitates better understanding of the effects of electric and cryogenic treatment of these materials, but also their impact on tooling and metal-cutting processes.

  2. Technical and economic viability of automated highway systems : preliminary analysis

    Science.gov (United States)

    1997-01-01

    Technical and economic investigations of automated highway systems (AHS) are addressed. It has generally been accepted that such systems show potential to alleviate urban traffic congestion, so most of the AHS research has been focused instead on tec...

  3. Power consumption analysis of constant bit rate video transmission over 3G networks

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Wang, Le

    2012-01-01

    This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes the description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis...

  4. Concussive convulsions: A YouTube video analysis.

    Science.gov (United States)

    Tényi, Dalma; Gyimesi, Csilla; Horváth, Réka; Kovács, Norbert; Ábrahám, Hajnalka; Darnai, Gergely; Fogarasi, András; Büki, András; Janszky, József

    2016-08-01

    To analyze seizure-like motor phenomena immediately occurring after concussion (concussive convulsions). Twenty-five videos of concussive convulsions were obtained from YouTube as a result of numerous sports-related search terms. The videos were analyzed by four independent observers, documenting observations of the casualty, the head injury, motor symptoms of the concussive convulsions, the postictal period, and the outcome. Immediate responses included the fencing response, bear hug position, and bilateral leg extension. Fencing response was the most common. The side of the hit (p = 0.039) and the head turning (p = 0.0002) was ipsilateral to the extended arm. There was a tendency that if the blow had only a vertical component, the bear hug position appeared more frequently (p = 0.12). The motor symptom that appeared with latency of 6 ± 3 s was clonus, sometimes superimposed with tonic motor phenomena. Clonus was focal, focally evolving bilateral or bilateral, with a duration of 27 ± 19 s (5-72 s). Where lateralization of clonus could be determined, the side of clonus and the side of hit were contralateral (p = 0.039). Concussive convulsions consist of two phases. The short-latency first phase encompasses motor phenomena resembling neonatal reflexes and may be of brainstem origin. The long-latency second phase consists of clonus. We hypothesize that the motor symptoms of the long-latency phase are attributed to cortical structures; however, they are probably not epileptic in origin but rather a result of a transient cortical neuronal disturbance induced by mechanical forces. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.

  5. Steam generator automated eddy current data analysis: A benchmarking study. Final report

    International Nuclear Information System (INIS)

    Brown, S.D.

    1998-12-01

    The eddy current examination of steam generator tubes is a very demanding process. Challenges include: complex signal analysis, massive amount of data to be reviewed quickly with extreme precision and accuracy, shortages of data analysts during peak periods, and the desire to reduce examination costs. One method to address these challenges is by incorporating automation into the data analysis process. Specific advantages, which automated data analysis has the potential to provide, include the ability to analyze data more quickly, consistently and accurately than can be performed manually. Also, automated data analysis can potentially perform the data analysis function with significantly smaller levels of analyst staffing. Despite the clear advantages that an automated data analysis system has the potential to provide, no automated system has been produced and qualified that can perform all of the functions that utility engineers demand. This report investigates the current status of automated data analysis, both at the commercial and developmental level. A summary of the various commercial and developmental data analysis systems is provided which includes the signal processing methodologies used and, where available, the performance data obtained for each system. Also, included in this report is input from seventeen research organizations regarding the actions required and obstacles to be overcome in order to bring automatic data analysis from the laboratory into the field environment. In order to provide assistance with ongoing and future research efforts in the automated data analysis arena, the most promising approaches to signal processing are described in this report. These approaches include: wavelet applications, pattern recognition, template matching, expert systems, artificial neural networks, fuzzy logic, case based reasoning and genetic algorithms. Utility engineers and NDE researchers can use this information to assist in developing automated data

  6. Empirical Analysis and Automated Classification of Security Bug Reports

    Science.gov (United States)

    Tyo, Jacob P.

    2016-01-01

    With the ever expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong only to a small number of types. Addressing these types of vulnerabilities will consequently lead to cost efficient improvement of software security. Since this analysis requires labeling of each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security or non-security related (two-class classification), as well as each security related bug report as a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. An accuracy of 92%, recall of 96%, precision of 92%, probability of false alarm of 4%, F-Score of 81% and G-Score of 90% were the best results achieved during two-class classification. Furthermore, an accuracy of 80%, recall of 80%, precision of 94%, and F-score of 85% were the best results achieved during multiclass classification.
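
    The reported figures are standard two-class metrics derived from a confusion matrix. The Python sketch below computes them from raw counts; G-score is taken here as the harmonic mean of recall and (1 - probability of false alarm), a common choice in this literature, although the thesis may define it differently.

      def classification_metrics(tp, fp, tn, fn):
          """Two-class metrics from confusion-matrix counts."""
          accuracy = (tp + tn) / (tp + fp + tn + fn)
          recall = tp / (tp + fn)
          precision = tp / (tp + fp)
          pf = fp / (fp + tn)                          # probability of false alarm
          f_score = 2 * precision * recall / (precision + recall)
          g_score = 2 * recall * (1 - pf) / (recall + (1 - pf))
          return dict(accuracy=accuracy, recall=recall, precision=precision,
                      pf=pf, f_score=f_score, g_score=g_score)

      # Example with invented counts
      print(classification_metrics(tp=48, fp=4, tn=96, fn=2))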

  7. Engaging Students in a Physics Course through Use of Digital Video Capture and Analysis

    Science.gov (United States)

    Lojewska, Zenobia

    2007-10-01

    Use of digital video motion analysis as a teaching tool in an introductory physics course is presented. The focus of the presentation is the application of digital video technology in a Physics for Movement Science course geared towards Physical Education, Athletic Training and Exercise Science majors. The Dickinson movie set was found to be the most applicable for in-class activities, homework assignments, and projects. Some of the movie clips chosen for analysis are focused on human motion and sports. Additionally, students are starting to capture and analyze their own movie clips.

  8. Researchers and teachers learning together and from each other using video-based multimodal analysis

    DEFF Research Database (Denmark)

    Davidsen, Jacob; Vanderlinde, Ruben

    2014-01-01

    This paper discusses a year-long technology integration project, during which teachers and researchers joined forces to explore children’s collaborative activities through the use of touch-screens. In the research project, discussed in this paper, 16 touch-screens were integrated into teaching...... integrated touch-screens into their teaching and learning. This paper examines the methodological usefulness of video-based multimodal analysis. Through reflection on the research project, we discuss how, by using video-based multimodal analysis, researchers and teachers can study children’s touch-screen...

  9. Online Curves: A Quality Analysis of Scoliosis Videos on YouTube.

    Science.gov (United States)

    Staunton, Peter F; Baker, Joseph F; Green, James; Devitt, Aiden

    2015-12-01

    A cross-sectional study. The aim of this study was to evaluate the quality of online scoliosis information available on the video sharing site YouTube. The Internet is an increasingly utilized resource for accessing information about a variety of health conditions. YouTube is a video sharing platform used to both seek and distribute information. A search for "scoliosis" was carried out using YouTube's search engine and data were collected on the first 50 videos returned. A JAMA score to determine currency, authorship, source and disclosure, and a scoliosis-specific score that measures the amount of information on the diagnosis and treatment options (as devised by Mathur et al in 2005; scored 0-32) were recorded for each video to measure quality objectively. In addition, the number of views, number of comments, and feedback positivity was documented for each. Data analysis was conducted using R 3.1.4/R Studio 0.98 with control for the age of each video in analysis models. The average number of views per video was 71,152 with an average length of 7 minutes 32 seconds. Thirty-six percent of the videos fell under the authorship category of personal experience. The average JAMA score was 1.32/4 and the average scoliosis-specific score was 5.38/32. There was a positive correlation between JAMA score and number of views (P = 0.003). However, in contrast, there was a negative correlation between scoliosis-specific score and number of views (P = 0.01). Online health information has historically been poor and our study shows that in an environment like YouTube that lacks a peer review process, the quality of scoliosis information is low. Further work is needed to determine whether accessing information on YouTube can play a role in patient care other than simple education pertaining to the disease and its management. Level of Evidence: 3.

  10. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  11. Automated three-dimensional analysis of particle measurements using an optical profilometer and image analysis software.

    Science.gov (United States)

    Bullman, V

    2003-07-01

    The automated collection of topographic images from an optical profilometer coupled with existing image analysis software offers the unique ability to quantify three-dimensional particle morphology. Optional software available with most optical profilers permits automated collection of adjacent topographic images of particles dispersed onto a suitable substrate. Particles are recognized in the image as a set of continuous pixels with grey-level values above the grey level assigned to the substrate, whereas particle height or thickness is represented in the numerical differences between these grey levels. These images are loaded into remote image analysis software where macros automate image processing, and then distinguish particles for feature analysis, including standard two-dimensional measurements (e.g. projected area, length, width, aspect ratios) and third-dimensional measurements (e.g. maximum height, mean height). Feature measurements from each calibrated image are automatically added to cumulative databases and exported to a commercial spreadsheet or statistical program for further data processing and presentation. An example is given that demonstrates the superiority of quantitative three-dimensional measurements by optical profilometry and image analysis in comparison with conventional two-dimensional measurements for the characterization of pharmaceutical powders with plate-like particles.
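
    The measurement chain described above, in which pixels above the substrate grey level are grouped into particles and then given 2D and height descriptors, can be sketched in a few lines. The Python code below uses SciPy's connected-component labelling on a height map; the function, its parameters and the particular descriptors are illustrative assumptions, not the cited instrument or image-analysis software's API.

      import numpy as np
      from scipy import ndimage

      def particle_features(height_map, substrate_level, pixel_size=1.0):
          """2D and height features for particles in a topographic image.

          height_map      -- 2D array of surface heights from the profilometer
          substrate_level -- grey level assigned to the substrate; pixels above it
                             are treated as particle material
          pixel_size      -- lateral size of one pixel (for projected area)
          """
          mask = height_map > substrate_level
          labels, n = ndimage.label(mask)
          features = []
          for idx in range(1, n + 1):
              particle = height_map[labels == idx] - substrate_level
              features.append({"projected_area": particle.size * pixel_size ** 2,
                               "max_height": float(particle.max()),
                               "mean_height": float(particle.mean())})
          return features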

  12. Evaluation of full field automated photoelastic analysis based on phase stepping

    Science.gov (United States)

    Haake, S. J.; Wang, Z. F.; Patterson, E. A.

    A full-field automated polariscope designed for photoelastic analysis and based on the method of phase-stepping is described. The system is evaluated through the analysis of five different photoelastic models using both the automated system and using manual analysis employing the Tardy Compensation method. Models were chosen to provide a range of different fringe patterns, orders, and stress gradients and were: a disk in diametral compression, a constrained beam subject to a point load, a tensile plate with a central hole, a turbine blade, and a turbine disk slot. The repeatability of the full-field system was found to compare well with point by point systems. The worst isochromatic error was approximately 0.007 fringes, and the corresponding isoclinic error was 0.75. Results from the manual and automated methods showed good agreement. It is concluded that automated photoelastic analysis based on phase-stepping procedures offers a potentially accurate and reliable tool for stress analysts.

  13. Automated Design and Analysis Tool for CLV/CEV Composite and Metallic Structural Components, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CLV/CEV composite and metallic structures. This developed...

  14. Automated Design and Analysis Tool for CEV Structural and TPS Components, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CEV structures and TPS. This developed process will...

  15. Automated network analysis identifies core pathways in glioblastoma.

    Directory of Open Access Journals (Sweden)

    Ethan Cerami

    2010-02-01

    Full Text Available Glioblastoma multiforme (GBM) is the most common and aggressive type of brain tumor in humans and the first cancer with comprehensive genomic profiles mapped by The Cancer Genome Atlas (TCGA) project. A central challenge in large-scale genome projects, such as the TCGA GBM project, is the ability to distinguish cancer-causing "driver" mutations from passively selected "passenger" mutations. In contrast to a purely frequency based approach to identifying driver mutations in cancer, we propose an automated network-based approach for identifying candidate oncogenic processes and driver genes. The approach is based on the hypothesis that cellular networks contain functional modules, and that tumors target specific modules critical to their growth. Key elements in the approach include combined analysis of sequence mutations and DNA copy number alterations; use of a unified molecular interaction network consisting of both protein-protein interactions and signaling pathways; and identification and statistical assessment of network modules, i.e. cohesive groups of genes of interest with a higher density of interactions within groups than between groups. We confirm and extend the observation that GBM alterations tend to occur within specific functional modules, in spite of considerable patient-to-patient variation, and that two of the largest modules involve signaling via p53, Rb, PI3K and receptor protein kinases. We also identify new candidate drivers in GBM, including AGAP2/CENTG1, a putative oncogene and an activator of the PI3K pathway, and three additional significantly altered modules, including one involved in microtubule organization. To facilitate the application of our network-based approach to additional cancer types, we make the method freely available as part of a software tool called NetBox.

  16. Automation Tools for Finite Element Analysis of Adhesively Bonded Joints

    Science.gov (United States)

    Tahmasebi, Farhad; Brodeur, Stephen J. (Technical Monitor)

    2002-01-01

    This article presents two new automation tools that obtain stresses and strains (shear and peel) in adhesively bonded joints. For a given finite element model of an adhesively bonded joint, in which the adhesive is characterised using springs, these automation tools read the corresponding input and output files, use the spring forces and deformations to obtain the adhesive stresses and strains, sort the stresses and strains in descending order, and generate plot files for 3D visualisation of the stress and strain fields. Grids (nodes) and elements can be numbered in any order that is convenient for the user. Using the automation tools, trade-off studies, which are needed for the design of adhesively bonded joints, can be performed very quickly.

  17. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis

    Science.gov (United States)

    Ponsioen, Sten; Pedergnana, Tiemo; Haller, George

    2018-04-01

    We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.

  18. Automated Scoring and Analysis of Micronucleated Human Lymphocytes.

    Science.gov (United States)

    Callisen, Hannes Heinrich

    Physical and chemical mutagens and carcinogens in our environment produce chromosome aberrations in the circulating peripheral blood lymphocytes. The aberrations, in turn, give rise to micronuclei when the lymphocytes proliferate in culture. In order to improve the micronucleus assay as a method for screening human populations for chromosome damage, I have (1) developed a high-resolution optical low-light-level micrometry expert system (HOLMES) to digitize and process microscope images of micronuclei in human peripheral blood lymphocytes, (2) defined a protocol of image processing techniques to objectively and uniquely identify and score micronuclei, and (3) analysed digital images of lymphocytes in order to study methods for (a) verifying the identification of suspect micronuclei, (b) classifying proliferating and non-proliferating lymphocytes, and (c) understanding the mechanisms of micronuclei formation and micronuclei fate during cell division. For the purpose of scoring micronuclei, HOLMES promises to (a) improve counting statistics, since a greater number of cells can be scored without operator/microscopist fatigue, (b) provide a more objective and consistent criterion for the identification of micronuclei than the human observer, and (c) yield quantitative information on nuclear and micronuclear characteristics useful in better understanding the micronucleus life cycle. My results on computer-aided identification of micronuclei on microscope slides are gratifying. They demonstrate that automation of the micronucleus assay is feasible. Manual verification of HOLMES' results shows correct extraction of micronuclei from the scene for 70% of the digitized images and correct identification of the micronuclei for 90% of the extracted objects. Moreover, quantitative analysis of digitized images of lymphocytes using HOLMES has revealed several exciting results: (a) micronuclear DNA content may be estimated from simple area measurements, (b) micronuclei seem to

  19. Manual versus Automated Narrative Analysis of Agrammatic Production Patterns: The Northwestern Narrative Language Analysis and Computerized Language Analysis

    Science.gov (United States)

    Hsu, Chien-Ju; Thompson, Cynthia K.

    2018-01-01

    Purpose: The purpose of this study is to compare the outcomes of the manually coded Northwestern Narrative Language Analysis (NNLA) system, which was developed for characterizing agrammatic production patterns, and the automated Computerized Language Analysis (CLAN) system, which has recently been adopted to analyze speech samples of individuals…

  20. A content analysis of smoking fetish videos on YouTube: regulatory implications for tobacco control.

    Science.gov (United States)

    Kim, Kyongseok; Paek, Hye-Jin; Lynn, Jordan

    2010-03-01

    This study examined the prevalence, accessibility, and characteristics of eroticized smoking portrayal, also referred to as smoking fetish, on YouTube. The analysis of 200 smoking fetish videos revealed that the smoking fetish videos are prevalent and accessible to adolescents on the website. They featured explicit smoking behavior by sexy, young, and healthy females, with the content corresponding to PG-13 and R movie ratings. We discuss a potential impact of the prosmoking image on youth according to social cognitive theory, and implications for tobacco control.

  1. BioFoV - An open platform for forensic video analysis and biometric data extraction

    DEFF Research Database (Denmark)

    Almeida, Miguel; Correia, Paulo Lobato; Larsen, Peter Kastmand

    2016-01-01

    to tailor-made software, based on state of art knowledge in fields such as soft biometrics, gait recognition, photogrammetry, etc. This paper proposes an open and extensible platform, BioFoV (Biometric Forensic Video tool), for forensic video analysis and biometric data extraction, aiming to host some...... of the developments that researchers come up with for solving specific problems, but that are often not shared with the community. BioFoV includes a simple to use Graphical User Interface (GUI), is implemented with open software that can run in multiple software platforms, and its implementation is publicly available....

  2. The video densitometric analysis of the radiographic density and contrast

    International Nuclear Information System (INIS)

    Yoo, Young Sun; Lee, Sang Rae

    1992-01-01

    Generally, the patient's absorbed dose and the readability of radiograms are affected by the exposure time and kVp, which in turn are related to the radiographic density and contrast. The investigator carried out studies to determine the adequate levels of exposure time and kVp to obtain better readability of radiograms. In these studies, radiograms of a dried human mandible were compared with each other by video densitometry over various combinations of exposure time (5, 6, 8, 12, 15, 19, 24, 30, 38, 48 and 60) and kVp (60, 65, 70, 80 and 90). The obtained results were as follows: 1. As exposure time and kVp were increased, the radiographic density of the radiograms increased. 2. Subject contrast was increased where the aluminum step wedge was thin and reduced in the reverse condition; for the thin aluminum step wedge, subject contrast was higher at lower kilovoltage than at higher kilovoltage. 3. Contrast was increased at the lower kilovoltage with the longer exposure time and at the higher kilovoltage with the shorter exposure time. 4. At short exposure times, better readability of each reading item was obtained with increasing kilovoltage, but in the opposite condition, increasing exposure time worsened the readability of the radiograms. Since the X-ray machines in current dental clinics are fixed in the range of 60-70 kVp and 10 mA, good radiograms can be obtained by varying the exposure time. But according to the conclusions of these studies, better radiograms can be obtained by using filtered high kVp, and the absorbed dose to the patient and the exposure time can then be reduced.
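
    The measurement at the core of such a study, reading out the gray-level "density" of regions of a digitized radiograph, can be sketched as follows. The image and region coordinates are synthetic placeholders; a real video-densitometric measurement would use calibrated digitized film.

        # Sketch: mean gray-level density of rectangular regions in a digitized radiograph.
        # The image here is random noise standing in for a real scan.
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.integers(0, 256, size=(480, 640)).astype(float)  # fake radiograph

        def roi_density(img, top, left, height, width):
            """Mean pixel value inside a rectangular region of interest."""
            return img[top:top + height, left:left + width].mean()

        # Compare two step-wedge regions (coordinates are arbitrary examples).
        print("thin step :", roi_density(image, 100, 100, 50, 50))
        print("thick step:", roi_density(image, 100, 300, 50, 50))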

  3. Evaluating the Evidence Base of Video Analysis: A Special Education Teacher Development Tool

    Science.gov (United States)

    Nagro, Sarah A.; Cornelius, Kyena E.

    2013-01-01

    Special education teacher development is continually studied to determine best practices for improving teacher quality and promoting student learning. Video analysis is commonly included in teacher development targeting both teacher thinking and practice intended to improve learning opportunities for students. Positive research findings support…

  4. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allow one to measure, in a simple yet rigorous way, the speed of pulses…

  5. Estimation of low back moments from video analysis: A validation study

    NARCIS (Netherlands)

    Coenen, P.; Kingma, I.; Boot, C.R.L.; Faber, G.S.; Xu, X.; Bongers, P.M.; Dieën, J.H. van

    2011-01-01

    This study aimed to develop, compare and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since relatively cheap and easily applicable methods to assess low back loads are needed for epidemiological studies and ergonomic practice.

  6. XbD Video 3, The SEEing process of qualitative data analysis

    DEFF Research Database (Denmark)

    2013-01-01

    This is the third video in the Experience-based Designing series. It presents a live classroom demonstration of a nine-step qualitative data analysis process called SEEing. The process is useful for uncovering or discovering deeper layers of 'meaning' and meaning structures in an experience…

  7. The Case for Constructing Video Cases: Promoting Complex, Specific, Learner-Centered Analysis of Discussion

    Science.gov (United States)

    Rosaen, Cheryl; Lundeberg, Mary; Terpstra, Marjorie

    2010-01-01

    The use of reflection and analysis in preparation of elementary and secondary preservice teachers has become a standard practice aimed at helping them develop the capacity to engage in intentional and systematic investigation of their practice. Editing video may be a more powerful tool than writing reflections based on memory to help preservice…

  8. Algorithms for Analysis of Television and Thermal Images in Special Purpose Video Devices and Systems

    OpenAIRE

    Boyun, V.; Sabelnikov, P.; Sabelnikov, Yu

    2014-01-01

    Results of the research project «Development of algorithms and program models for the analysis of television and thermal images» (code VC 200.16.13) are presented. The known methods and algorithms for television and thermal imaging video processing were analyzed, and new ones were proposed that will allow the creation of more effective devices and systems for special purposes.

  9. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    Science.gov (United States)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models with different orders and cluster similar modes together. However, most proposed methods based on this approach suffer from high-dimensional optimization problems in either the estimation or clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences by the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to
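
    A common correlation metric for deciding whether two estimated mode shapes describe the same physical mode is the Modal Assurance Criterion (MAC). The sketch below shows only this basic comparison on invented mode-shape vectors; the algorithm in the record uses more elaborate correlation metrics, bootstrap resampling and fuzzy clustering on top of such comparisons.

        # Minimal sketch: compare mode shapes with the Modal Assurance Criterion (MAC).
        import numpy as np

        def mac(phi_a, phi_b):
            """MAC between two (possibly complex) mode-shape vectors, in [0, 1]."""
            num = abs(np.vdot(phi_a, phi_b)) ** 2
            den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
            return num / den

        # Hypothetical mode shapes from two identified models.
        phi_1 = np.array([1.0, 0.8, 0.3])
        phi_2 = np.array([0.98, 0.82, 0.28])    # nearly the same physical mode
        phi_noise = np.array([0.1, -0.9, 0.4])  # a likely noise mode

        print("MAC(phi_1, phi_2)     =", round(mac(phi_1, phi_2), 3))      # close to 1
        print("MAC(phi_1, phi_noise) =", round(mac(phi_1, phi_noise), 3))  # much lower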

  10. Automated analysis of intima-media thickness: analysis and performance of CARES 3.0.

    Science.gov (United States)

    Saba, Luca; Montisci, Roberto; Famiglietti, Luca; Tallapally, Niranjan; Acharya, U Rajendra; Molinari, Filippo; Sanfilippo, Roberto; Mallarini, Giorgio; Nicolaides, Andrew; Suri, Jasjit S

    2013-07-01

    In recent years, the use of computer-based techniques has been advocated to improve intima-media thickness (IMT) quantification and its reproducibility. The purpose of this study was to test the diagnostic performance of a new IMT automated algorithm, CARES 3.0, which is a patented class of IMT measurement systems called AtheroEdge (AtheroPoint, LLC, Roseville, CA). From 2 different institutions, we analyzed the carotid arteries of 250 patients. The automated CARES 3.0 algorithm was tested versus 2 other automated algorithms, 1 semiautomated algorithm, and a reader reference to assess the IMT measurements. Bland-Altman analysis, regression analysis, and the Student t test were performed. CARES 3.0 showed an IMT measurement bias ± SD of -0.022 ± 0.288 mm compared with the expert reader. The average IMT by CARES 3.0 was 0.852 ± 0.248 mm, and that of the reader was 0.872 ± 0.325 mm. In the Bland-Altman plots, the CARES 3.0 IMT measurements showed accurate values, with about 80% of the images having an IMT measurement bias ranging between -50% and +50%. These values were better than those of the previous CARES releases and the semiautomated algorithm. Regression analysis showed that, among all techniques, the best t value was between CARES 3.0 and the reader. We have developed an improved fully automated technique for carotid IMT measurement on longitudinal ultrasound images. This new version, called CARES 3.0, consists of a new heuristic for lumen-intima and media-adventitia detection, which showed high accuracy and reproducibility for IMT measurement.
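
    The Bland-Altman comparison used in this record reduces to a bias and limits of agreement between paired measurements. A minimal sketch, with invented IMT values rather than the study's data, is shown below.

        # Sketch: Bland-Altman bias and limits of agreement for paired IMT measurements.
        # The values are invented; they are not data from the CARES 3.0 study.
        import numpy as np

        imt_auto   = np.array([0.71, 0.85, 0.92, 1.10, 0.78])  # mm, automated (hypothetical)
        imt_reader = np.array([0.74, 0.83, 0.95, 1.15, 0.80])  # mm, expert reader (hypothetical)

        diff = imt_auto - imt_reader
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)  # 95% limits of agreement

        print(f"bias = {bias:.3f} mm, limits of agreement = +/-{loa:.3f} mm")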

  11. Visual analysis of trash bin processing on garbage trucks in low resolution video

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, and mean shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% at fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
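
    The HOG-plus-SVM detection stage of such a pipeline can be illustrated with OpenCV. Since the custom trash-can detectors from the record are not publicly available, the sketch below substitutes OpenCV's built-in pedestrian detector and a hypothetical video file name; it shows the structure of the detection loop rather than the actual system.

        # Illustration of a HOG + SVM detection stage with OpenCV (not the record's models).
        import cv2

        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        cap = cv2.VideoCapture("truck_rear_camera.mp4")  # hypothetical file name
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
            for (x, y, w, h) in rects:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            # Mean shift tracking and disposal-event analysis would follow here.
        cap.release()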

  12. Parent-Driven Campaign Videos: An Analysis of the Motivation and Affect of Videos Created by Parents of Children With Complex Healthcare Needs.

    Science.gov (United States)

    Carter, Bernie; Bray, Lucy; Keating, Paula; Wilkinson, Catherine

    2017-09-15

    Caring for a child with complex health care needs places additional stress and time demands on parents. Parents often turn to their peers to share their experiences, gain support, and lobby for change; increasingly this is done through social media. The WellChild #notanurse_but campaign is a parent-driven campaign whose stated aim is to "shine a light" on the care that parents, who are not nurses, have to undertake for their child with complex health care needs and to raise decision-makers' awareness of the gaps in service provision and support. This article reports on a study that analyzed the #notanurse_but parent-driven campaign videos. The purpose of the study was to consider the videos in terms of the range, content, context, perspectivity (motivation), and affect (sense of being there) in order to inform the future direction of the campaign. Analysis involved repeated viewing of a subset of 30 purposively selected videos and documenting our analysis on a specifically designed data extraction sheet. Each video was analyzed by a minimum of 2 researchers. All but 2 of the 30 videos were filmed inside the home. A variety of filming techniques were used. Mothers were the main narrators in all but 1 set of videos. The sense of perspectivity was clearly linked to the campaign, with the narration pressing home the reality, complexity, and need for vigilance in caring for a child with complex health care needs. Different clinical tasks and routines undertaken as part of the child's care were depicted. Videos also reported on a sense of feeling different from "normal families"; the affect varied among the researchers, ranging from strong to weaker emotional responses.

  13. Automatic Video-based Analysis of Human Motion

    DEFF Research Database (Denmark)

    Fihl, Preben

    The human motion contains valuable information in many situations and people frequently perform an unconscious analysis of the motion of other people to understand their actions, intentions, and state of mind. An automatic analysis of human motion will facilitate many applications and thus has...... bring the solution of fully automatic analysis and understanding of human motion closer....

  14. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool that is developed to analyse the structural model of automated systems in order to identify redundant information that is hence utilized for Fault detection and Isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  15. Prajna: adding automated reasoning to the visual- analysis process.

    Science.gov (United States)

    Swing, E

    2010-01-01

    Developers who create applications for knowledge representation must contend with challenges in both the abundance of data and the variety of toolkits, architectures, and standards for representing it. Prajna is a flexible Java toolkit designed to overcome these challenges with an extensible architecture that supports both visualization and automated reasoning.

  16. EddyOne automated analysis of PWR/WWER steam generator tubes eddy current data

    International Nuclear Information System (INIS)

    Nadinic, B.; Vanjak, Z.

    2004-01-01

    The INETEC Institute for Nuclear Technology developed a software package called EddyOne, which has an option for automated analysis of bobbin coil eddy current data. During its development and on-site use, many valuable lessons were learned, which are described in this article. Accordingly, the following topics are covered: General requirements for automated analysis of bobbin coil eddy current data; Main approaches to automated analysis; Multi-rule algorithms for data screening; Landmark detection algorithms as a prerequisite for automated analysis (threshold algorithms and algorithms based on neural network principles); Field experience with the EddyOne software; Development directions (use of artificial intelligence with self-learning abilities for indication detection and sizing); Automated analysis software qualification; Conclusions. Special emphasis is given to results obtained on different types of steam generators, condensers and heat exchangers. Such results are then compared with results obtained by other automated software vendors, showing a clear advantage for the INETEC approach. It has to be pointed out that INETEC field experience was also collected on WWER steam generators, which is for now a unique experience. (author)

  17. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  18. Newton’s Cradle Experiment Using Video Tracking Analysis with Multiple Representation Approach

    Science.gov (United States)

    Anissofira, A.; Latief, F. D. E.; Kholida, L.; Sinaga, P.

    2017-09-01

    This paper reports a Physics lesson using video tracking analysis applied to a Newton's Cradle experiment to train students' multiple representation skills. This study involved 30 science high school students from class XI. In this case, Tracker software was used to verify the law of energy conservation, with the help of data outputs such as graphs and tables. Newton's Cradle is commonly used to demonstrate the laws of energy and momentum conservation. It consists of swinging spherical bobs which transfer energy from one to another by means of elastic collisions. From the video analysis, it is found that there is a difference in the velocity of the two bobs at opposite ends. Furthermore, what might cause this can be investigated by observing and analysing the recorded video. This paper discusses students' responses and the teacher's reflection after using the Tracker video analysis software in the Physics lesson. Since Tracker provides multiple means of data representation, we conclude that this method could be a good alternative solution and might even be considered better than performing a hands-on experiment activity, since not every school has suitable laboratory equipment.
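
    A sketch of what Tracker-style analysis provides is shown below: position-time samples of a bob, from which velocity and kinetic energy can be estimated by finite differences. The sample data and bob mass are invented for illustration.

        # Sketch: velocity and kinetic energy from tracked position-time data.
        import numpy as np

        t = np.array([0.00, 0.04, 0.08, 0.12, 0.16])        # s, video frame times
        x = np.array([0.000, 0.012, 0.031, 0.055, 0.082])   # m, tracked bob position

        v = np.gradient(x, t)          # finite-difference velocity
        m = 0.05                       # kg, assumed bob mass
        kinetic_energy = 0.5 * m * v**2

        for ti, vi, ki in zip(t, v, kinetic_energy):
            print(f"t = {ti:.2f} s  v = {vi:.3f} m/s  KE = {ki * 1e3:.2f} mJ")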

  19. Joint modality fusion and temporal context exploitation for semantic video analysis

    Science.gov (United States)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.
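
    The per-modality HMM step can be pictured as one HMM per semantic class, with a new shot assigned to the class whose model scores its feature sequence highest. The sketch below uses the hmmlearn package and random placeholder features with hypothetical class names; the fusion and temporal-context stages of the record are not reproduced.

        # Sketch: per-class HMMs scoring a shot's feature sequence (hmmlearn assumed available).
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(0)

        def train_class_hmm(sequences, n_states=3):
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        # Two hypothetical classes with toy motion-feature sequences (20 frames x 4 features).
        models = {
            "rally": train_class_hmm([rng.normal(1.0, 0.2, size=(20, 4)) for _ in range(5)]),
            "break": train_class_hmm([rng.normal(0.0, 0.2, size=(20, 4)) for _ in range(5)]),
        }

        shot_features = rng.normal(1.0, 0.2, size=(20, 4))  # features of a new shot
        scores = {label: m.score(shot_features) for label, m in models.items()}
        print("assigned class:", max(scores, key=scores.get))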

  20. Automated X-ray image analysis for cargo security: Critical review and future promise.

    Science.gov (United States)

    Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.

  1. Extending and automating a Systems-Theoretic hazard analysis for requirements generation and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, John (Massachusetts Institute of Technology)

    2012-05-01

    Systems Theoretic Process Analysis (STPA) is a powerful new hazard analysis method designed to go beyond traditional safety techniques - such as Fault Tree Analysis (FTA) - that overlook important causes of accidents like flawed requirements, dysfunctional component interactions, and software errors. While proving to be very effective on real systems, no formal structure has been defined for STPA and its application has been ad-hoc with no rigorous procedures or model-based design tools. This report defines a formal mathematical structure underlying STPA and describes a procedure for systematically performing an STPA analysis based on that structure. A method for using the results of the hazard analysis to generate formal safety-critical, model-based system and software requirements is also presented. Techniques to automate both the analysis and the requirements generation are introduced, as well as a method to detect conflicts between the safety and other functional model-based requirements during early development of the system.

  2. SWOT Analysis of Automation for Cash and Accounts Control in Construction

    OpenAIRE

    Mariya Deriy

    2013-01-01

    The article analyzes the possibility of computerizing the control of accounting and information system data on cash and payments in a company's practical activity, provided that the problem of establishing a well-functioning single computer network between the different units of a developing company is solved. The current state of control organization and the possibility of its automation are reviewed. A SWOT analysis of control automation is performed to identify its strengths and weaknesses, obstac...

  3. Real-time teleophthalmology video consultation: an analysis of patient satisfaction in rural Western Australia.

    Science.gov (United States)

    Host, Benjamin Kj; Turner, Angus W; Muir, Josephine

    2018-01-01

    Teleophthalmology, particularly real-time video consultation, holds great potential in Australia and similar countries worldwide, where geography, population and medical workforce distribution make it difficult to provide specialist eye services outside of major cities. Assessment and referrals from rural optometrists are vital to the success of teleophthalmology. While there is good evidence for the efficacy of such services, there is limited evidence for patient satisfaction with video consultation. To evaluate patient satisfaction with teleophthalmology, the current study recruited patients who underwent a video consultation with Lions Outback Vision for a follow-up telephone-based questionnaire assessing satisfaction. Regression analysis was performed to assess which demographic features and which features of the video consultation itself were associated with highest overall satisfaction. One hundred and nine of the 137 eligible patients completed the questionnaire (79.6 per cent; 55 per cent male; mean age 64.61 years). The majority of the participants were either 'Very satisfied' (69.1 per cent) or 'Satisfied' (24.5 per cent) with the service. No one reported being either 'Dissatisfied' or 'Very dissatisfied'. Linear regression did not reveal any demographic or follow-up variables as predictive of greater total satisfaction; however, participants who were older, who felt they could easily explain their medical problems to the doctor in the video consultation, and who believed that telemedicine enabled them to save money and time were more likely to report higher overall satisfaction. Teleophthalmology is a promising new way to overcome barriers to the delivery of eye care services to rural and remote populations. This study demonstrates a high level of overall satisfaction with teleophthalmological video consultation and that patients are accepting of this emerging consultation modality, regardless of age. © 2017 Optometry Australia.

  4. Effectiveness of teaching automated external defibrillators use using a traditional classroom instruction versus self-instruction video in non-critical care nurses.

    Science.gov (United States)

    Saiboon, Ismail M; Qamruddin, Reza M; Jaafar, Johar M; Bakar, Afliza A; Hamzah, Faizal A; Eng, Ho S; Robertson, Colin E

    2016-04-01

    To evaluate the effectiveness and retention of learning automated external defibrillator (AED) usage taught through a traditional classroom instruction (TCI) method versus a novel self-instructed video (SIV) technique in non-critical care nurses (NCCN). A prospective single-blind randomized study was conducted over 7 months (April-October 2014) at the Universiti Kebangsaan Malaysia Medical Center, Kuala Lumpur, Malaysia. Eighty nurses were randomized into either TCI or SIV instructional techniques. We assessed knowledge, skill and confidence level at baseline, immediately and 6 months post-intervention. Knowledge and confidence were assessed via questionnaire; skill was assessed by a calibrated and blinded independent assessor using an objective structured clinical examination (OSCE) method. Pre-test mean scores were 10.87 ± 2.34 in the TCI group and 10.37 ± 1.85 in the SIV group for knowledge (maximum achievable score 20.00); 4.05 ± 2.87 in the TCI and 3.71 ± 2.66 in the SIV (maximum score 11.00) in the OSCE evaluation; and 9.54 ± 3.65 in the TCI and 8.56 ± 3.47 in the SIV (maximum score 25.00) in the individual's personal confidence level. Both methods increased the mean scores significantly immediately post-intervention (0-month). At 6 months, the TCI group scored lower than the SIV group in all aspects: 11.13 ± 2.70 versus 12.95 ± 2.26 (p=0.03) in knowledge, 7.27 ± 1.62 versus 7.68 ± 1.73 (p=0.47) in the OSCE, and 16.40 ± 2.72 versus 18.82 ± 3.40 (p=0.03) in confidence level. In NCCNs, SIV is as good as TCI in providing the knowledge, competency, and confidence in performing AED defibrillation.

  5. [Automated fluorescent analysis of STR profiling and sex determination].

    Science.gov (United States)

    Jiang, B; Liang, S; Guo, J

    2000-08-01

    Denaturing PAGE coupled with the ABI377 fluorescent automated DNA sequencer was used to test the performance and reproducibility of the automated DNA profiling systems at vWA31A, TH01, F13A01, FES, TPOX, CSF1PO and the Amelogenin gene. The allele designation windows at the 7 genetic markers were established and implemented into the genotype reading software. Alleles differing by just 1 bp in length could easily be discriminated. Furthermore, interpretation guidelines were outlined for the 7 genetic systems by investigating the relative peak areas of heterozygote peaks and relative stutter peak areas in various monoplex systems. Our results indicate that if the ratio between two peaks is equal to or higher than 0.404, a heterozygote can be called; otherwise, a homozygote is called.
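
    The peak-ratio rule quoted above lends itself to a tiny sketch: if the smaller peak is at least 0.404 of the larger one, the genotype is called heterozygous, otherwise homozygous. The peak areas below are invented.

        # Sketch of the 0.404 peak-ratio rule for heterozygote calling (areas are invented).
        def call_genotype(peak_area_1, peak_area_2, threshold=0.404):
            ratio = min(peak_area_1, peak_area_2) / max(peak_area_1, peak_area_2)
            return "heterozygote" if ratio >= threshold else "homozygote"

        print(call_genotype(5200, 4700))   # ratio ~0.90 -> heterozygote
        print(call_genotype(5200, 1500))   # ratio ~0.29 -> homozygote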

  6. Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.

    Science.gov (United States)

    Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V

    2017-08-01

    Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664

  7. Video (GIF) Sentiment Analysis using Large-Scale Mid-Level Ontology

    OpenAIRE

    Cai, Zheng; Cao, Donglin; Ji, Rongrong

    2015-01-01

    With faster connection speeds, Internet users are now making social networks a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platforms can be used to predict political elections, evaluate economic indicators and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual content abstraction, but also because the relationship between such abstraction and the final sentiment remains unknown. In this paper, we ...

  8. Dialog detection in narrative video by shot and face analysis

    NARCIS (Netherlands)

    Kroon, B.; Nesvadba, J.; Hanjalic, A.

    2007-01-01

    The proliferation of captured personal and broadcast content in personal consumer archives necessitates comfortable access to stored audiovisual content. Intuitive retrieval and navigation solutions require however a semantic level that cannot be reached by generic multimedia content analysis alone.

  9. Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis.

    Science.gov (United States)

    Wang, Tao; Shao, Kang; Chu, Qinying; Ren, Yanfei; Mu, Yiming; Qu, Lijia; He, Jie; Jin, Changwen; Xia, Bin

    2009-03-16

    Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and free licensed software tools available to assist these tasks, researchers usually have to use multiple software packages for their studies because software packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform, in which these tasks can be completed within one package. Moreover, with open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. Automics provides high throughput automatic modules with most recently proposed algorithms and powerful manual modules for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied to analyses including: feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-Mean) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Automics facilitates high throughput 1D NMR spectral processing and high dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases. Moreover, with its open source architecture, interested
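
    One of the data-reduction steps listed above, PCA on binned 1D NMR spectra, can be sketched in a few lines. The spectra below are simulated, and the snippet uses scikit-learn rather than Automics itself.

        # Sketch: PCA-based reduction of simulated binned 1D NMR spectra (not Automics code).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        n_samples, n_bins = 20, 200
        spectra = rng.normal(size=(n_samples, n_bins))
        spectra[:10, 50:60] += 3.0   # pretend the first 10 samples share a metabolite peak

        scores = PCA(n_components=2).fit_transform(spectra)
        print("PC scores of the first three samples:")
        print(scores[:3].round(2))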

  10. Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis

    Directory of Open Access Journals (Sweden)

    Qu Lijia

    2009-03-01

    Full Text Available Abstract Background Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and free licensed software tools available to assist these tasks, researchers usually have to use multiple software packages for their studies because software packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform, in which these tasks can be completed within one package. Moreover, with open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. Results In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. Automics provides high throughput automatic modules with most recently proposed algorithms and powerful manual modules for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied to analyses including: feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-Mean) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Conclusion Automics facilitates high throughput 1D NMR spectral processing and high dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases

  11. Understanding perceptions of genital herpes disclosure through analysis of an online video contest.

    Science.gov (United States)

    Catallozzi, Marina; Ebel, Sophia C; Chávez, Noé R; Shearer, Lee S; Mindel, Adrian; Rosenthal, Susan L

    2013-12-01

    The aims of this study were to examine pre-existing videos in order to explore the motivation for, possible approaches to, and timing and context of disclosure of genital herpes infection as described by the lay public. A thematic content analysis was performed on 63 videos submitted to an Australian online contest sponsored by the Australian Herpes Management Forum and Novartis Pharmaceuticals designed to promote disclosure of genital herpes. Videos either provided a motivation for disclosure of genital herpes or directed disclosure without an explicit rationale. Motivations included manageability of the disease or consistency with important values. Evaluation of strategies and logistics of disclosure revealed a variety of communication styles including direct and indirect. Disclosure settings included those that were private, semiprivate and public. Disclosure was portrayed in a variety of relationship types, and at different times within those relationships, with many videos demonstrating disclosure in connection with a romantic setting. Individuals with genital herpes are expected to disclose to susceptible partners. This analysis suggests that understanding lay perspectives on herpes disclosure to a partner may help healthcare providers develop counselling messages that decrease anxiety and foster disclosure to prevent transmission.

  12. EVA: laparoscopic instrument tracking based on Endoscopic Video Analysis for psychomotor skills assessment.

    Science.gov (United States)

    Oropesa, Ignacio; Sánchez-González, Patricia; Chmarra, Magdalena K; Lamata, Pablo; Fernández, Alvaro; Sánchez-Margallo, Juan A; Jansen, Frank Willem; Dankelman, Jenny; Sánchez-Margallo, Francisco M; Gómez, Enrique J

    2013-03-01

    The EVA (Endoscopic Video Analysis) tracking system is a new system for extracting motions of laparoscopic instruments based on nonobtrusive video tracking. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. EVA makes use of an algorithm that employs information of the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical center to track the three-dimensional position of the instrument tip. A validation study of EVA comprised a comparison of the measurements achieved with EVA and the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. Construct validation of the EVA has been obtained for seven motion-based metrics. Concurrent validation revealed that there is a strong correlation between the results obtained by EVA and the TrEndo for metrics, such as path length (ρ = 0.97), average speed (ρ = 0.94), or economy of volume (ρ = 0.85), proving the viability of EVA. EVA has been successfully validated in a box trainer setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and image-guided surgery.
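
    Two of the motion-based metrics mentioned above, path length and average speed, follow directly from the tracked 3D tip trajectory. The sketch below uses an invented trajectory and an assumed frame rate; it is not the EVA implementation.

        # Sketch: path length and average speed from a 3D instrument-tip trajectory.
        # Trajectory samples and frame rate are invented.
        import numpy as np

        positions = np.array([   # x, y, z in mm, one row per video frame
            [0.0, 0.0, 0.0],
            [1.0, 0.5, 0.2],
            [2.1, 0.9, 0.5],
            [3.0, 1.6, 0.9],
        ])
        frame_dt = 1 / 25.0  # s, assumed frame interval

        steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
        path_length = steps.sum()                              # mm
        average_speed = path_length / (frame_dt * len(steps))  # mm/s

        print(f"path length   = {path_length:.2f} mm")
        print(f"average speed = {average_speed:.2f} mm/s")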

  13. Automated result analysis in radiographic testing of NPPs' welded joints

    International Nuclear Information System (INIS)

    Skomorokhov, A.O.; Nakhabov, A.V.; Belousov, P.A.

    2009-01-01

    The article presents the results of developing algorithms for automated interpretation of radiographic images from NPP welded joint inspections. The developed algorithms are based on state-of-the-art pattern recognition methods. The paper covers automatic radiographic image segmentation, defect detection and defect parameter evaluation issues. Testing results of the developed algorithms on actual radiographic images of welded joints with significant variation of defect parameters are given [ru

  14. Alert management for home healthcare based on home automation analysis.

    Science.gov (United States)

    Truong, T T; de Lamotte, F; Diguet, J-Ph; Said-Hocine, F

    2010-01-01

    Rising healthcare costs for elderly and disabled people can be controlled by offering people autonomy at home by means of information technology. In this paper, we present an original and sensorless alert management solution which performs multimedia and home automation service discrimination and extracts highly regular home activities as sensors for alert management. The results on simulation data, based on a real context, allow us to evaluate our approach before application to real data.

  15. Automated handling for SAF batch furnace and chemistry analysis operations

    International Nuclear Information System (INIS)

    Bowen, W.W.; Sherrell, D.L.; Wiemers, M.J.

    1981-01-01

    The Secure Automated Fabrication Program is developing a remotely operated breeder reactor fuel pin fabrication line. The equipment will be installed in the Fuels and Materials Examination Facility being constructed at Hanford, Washington. Production is scheduled to start in mid-1986. The application of small pneumatically operated industrial robots for loading and unloading product into and out of batch furnaces and for distribution and handling of chemistry samples is described

  16. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  17. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  18. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube

    Science.gov (United States)

    Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches. PMID:28243314

  19. Are Health Videos from Hospitals, Health Organizations, and Active Users Available to Health Consumers? An Analysis of Diabetes Health Video Ranking in YouTube.

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Traver, Vicente; Borras-Morell, Jose-Enrique; Martinez-Millana, Antonio; Karlsen, Randi

    2017-01-01

    Health consumers are increasingly using the Internet to search for health information. The existence of overloaded, inaccurate, obsolete, or simply incorrect health information available on the Internet is a serious obstacle for finding relevant and good-quality data that actually helps patients. Search engines of multimedia Internet platforms are thought to help users to find relevant information according to their search. But, is the information recovered by those search engines from quality sources? Is the health information uploaded from reliable sources, such as hospitals and health organizations, easily available to patients? The availability of videos is directly related to the ranking position in YouTube search. The higher the ranking of the information is, the more accessible it is. The aim of this study is to analyze the ranking evolution of diabetes health videos on YouTube in order to discover how videos from reliable channels, such as hospitals and health organizations, are evolving in the ranking. The analysis was done by tracking the ranking of 2372 videos on a daily basis during a 30-day period using 20 diabetes-related queries. Our conclusions are that the current YouTube algorithm favors the presence of reliable videos in upper rank positions in diabetes-related searches.

  20. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    Science.gov (United States)

    Clement, Alain; Vigouroux, Bertnand

    2003-04-01

    We present new results in applied color image analysis that highlight the significant influence of soil on the localization and appearance of polyphenols in grapes. These results have been obtained with a new unsupervised classification algorithm founded on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.

  1. Exploring Music Instrument Teaching and Learning Environments: Video Analysis as a Means of Elucidating Process and Learning Outcomes

    Science.gov (United States)

    Daniel, Ryan

    2006-01-01

    This article outlines the methods developed to engage in a detailed investigation of video footage of piano teaching, involving advanced students in both one-to-one and small-group settings. The paper presents the research to date in the field of musical instrument teaching, considers various challenges associated with video footage analysis, and…

  2. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents of children with autism spectrum disorder. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

  3. A typology of affordances: untangling sociomaterial interactions through video analysis

    NARCIS (Netherlands)

    van Osch, W.; Mendelson, O.

    2011-01-01

    In this study we untangle the sociomaterial interactions between developers, users, and artifacts by analyzing what types of affordances occur in the interactions between actors and artifacts in the context of group generativity. To this end, we conducted an in-depth ethnographic and interaction analysis

  4. Surrogate Safety Analysis of Pedestrian-Vehicle Conflict at Intersections Using Unmanned Aerial Vehicle Videos

    Directory of Open Access Journals (Sweden)

    Peng Chen

    2017-01-01

    Full Text Available Conflict analysis using surrogate safety measures (SSMs) has become an efficient approach to investigate safety issues. The state-of-the-art studies largely resort to video images taken from high buildings. However, it suffers from heavy labor work, high cost of maintenance, and even security restrictions. Data collection and processing remains a common challenge to traffic conflict analysis. Unmanned Aerial Systems (UASs) or Unmanned Aerial Vehicles (UAVs), known for easy maneuvering, outstanding flexibility, and low costs, are considered to be a novel aerial sensor. By taking full advantage of the bird's eye view offered by UAVs, this study, as a pioneer work, applied UAV videos for surrogate safety analysis of pedestrian-vehicle conflicts at one urban intersection in Beijing, China. Aerial video sequences for a period of one hour were analyzed. The detection and tracking systems for vehicle and pedestrian trajectory data extraction were developed, respectively. Two SSMs, that is, Postencroachment Time (PET) and Relative Time to Collision (RTTC), were employed to represent how spatially and temporally close the pedestrian-vehicle conflict is to a collision. The results of the analysis showed a high exposure of pedestrians to traffic conflict both inside and outside the crosswalk and relatively risky behavior of right-turn vehicles around the corner. The findings demonstrate that UAV can support intersection safety analysis in an accurate and cost-effective way.
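
    Of the two measures used above, Postencroachment Time is the simpler: it is the gap between the moment the first road user leaves the conflict area and the moment the second one arrives. The timestamps in the sketch are hypothetical values of the kind extracted from UAV trajectories.

        # Sketch of Postencroachment Time (PET); timestamps are hypothetical.
        def post_encroachment_time(t_first_leaves, t_second_arrives):
            """PET in seconds; smaller values indicate a more severe conflict."""
            return t_second_arrives - t_first_leaves

        t_pedestrian_leaves_area = 12.4   # s into the video, pedestrian clears conflict area
        t_vehicle_enters_area = 13.1      # s into the video, vehicle reaches conflict area
        print("PET =", round(post_encroachment_time(t_pedestrian_leaves_area, t_vehicle_enters_area), 2), "s")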

  5. PIZZARO: Forensic analysis and restoration of image and video data

    Czech Academy of Sciences Publication Activity Database

    Kamenický, Jan; Bartoš, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozámský, Adam; Saic, Stanislav; Šroubek, Filip; Šorel, Michal; Zita, Aleš; Zitová, Barbara; Šíma, Z.; Švarc, P.; Hořínek, J.

    2016-01-01

    Roč. 264, č. 1 (2016), s. 153-166 ISSN 0379-0738 R&D Projects: GA MV VG20102013064; GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Image forensic analysis * Image restoration * Image tampering detection * Image source identification Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.989, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/kamenicky-0459504.pdf

  6. Pro-Anorexia and Anti-Pro-Anorexia Videos on YouTube: Sentiment Analysis of User Responses.

    Science.gov (United States)

    Oksanen, Atte; Garcia, David; Sirola, Anu; Näsi, Matti; Kaakinen, Markus; Keipi, Teo; Räsänen, Pekka

    2015-11-12

    Pro-anorexia communities exist online and encourage harmful weight loss and weight control practices, often through emotional content that enforces social ties within these communities. User-generated responses to videos that directly oppose pro-anorexia communities have not yet been researched in depth. The aim was to study emotional reactions to pro-anorexia and anti-pro-anorexia online content on YouTube using sentiment analysis. Using the 50 most popular YouTube pro-anorexia and anti-pro-anorexia user channels as a starting point, we gathered data on users, their videos, and their commentators. A total of 395 anorexia videos and 12,161 comments were analyzed using positive and negative sentiments and ratings submitted by the viewers of the videos. The emotional information was automatically extracted with an automatic sentiment detection tool whose reliability was tested with human coders. Ordinary least squares regression models were used to estimate the strength of sentiments. The models controlled for the number of video views and comments, number of months the video had been on YouTube, duration of the video, uploader's activity as a video commentator, and uploader's physical location by country. The 395 videos had more than 6 million views and comments by almost 8000 users. Anti-pro-anorexia video comments expressed more positive sentiments on a scale of 1 to 5 (adjusted prediction [AP] 2.15, 95% CI 2.11-2.19) than did those of pro-anorexia videos (AP 2.02, 95% CI 1.98-2.06). Anti-pro-anorexia videos also received more likes (AP 181.02, 95% CI 155.19-206.85) than pro-anorexia videos (AP 31.22, 95% CI 31.22-37.81). Negative sentiments and video dislikes were equally distributed in responses to both pro-anorexia and anti-pro-anorexia videos. Despite pro-anorexia content being widespread on YouTube, videos promoting help for anorexia and opposing the pro-anorexia community were more popular, gaining more positive feedback and comments than pro-anorexia videos

  7. A community of curious souls: an analysis of commenting behavior on TED talks videos.

    Science.gov (United States)

    Tsou, Andrew; Thelwall, Mike; Mongeon, Philippe; Sugimoto, Cassidy R

    2014-01-01

    The TED (Technology, Entertainment, Design) Talks website hosts video recordings of various experts, celebrities, academics, and others who discuss their topics of expertise. Funded by advertising and members but provided free online, TED Talks have been viewed over a billion times and are a science communication phenomenon. Although the organization has been derided for its populist slant and emphasis on entertainment value, no previous research has assessed audience reactions in order to determine the degree to which presenter characteristics and platform affect the reception of a video. This article addresses this issue via a content analysis of comments left on both the TED website and the YouTube platform (on which TED Talks videos are also posted). It was found that commenters were more likely to discuss the characteristics of a presenter on YouTube, whereas commenters tended to engage with the talk content on the TED website. In addition, people tended to be more emotional when the speaker was a woman (by leaving comments that were either positive or negative). The results can inform future efforts to popularize science amongst the public, as well as to provide insights for those looking to disseminate information via Internet videos.

  8. A community of curious souls: an analysis of commenting behavior on TED talks videos.

    Directory of Open Access Journals (Sweden)

    Andrew Tsou

    Full Text Available The TED (Technology, Entertainment, Design) Talks website hosts video recordings of various experts, celebrities, academics, and others who discuss their topics of expertise. Funded by advertising and members but provided free online, TED Talks have been viewed over a billion times and are a science communication phenomenon. Although the organization has been derided for its populist slant and emphasis on entertainment value, no previous research has assessed audience reactions in order to determine the degree to which presenter characteristics and platform affect the reception of a video. This article addresses this issue via a content analysis of comments left on both the TED website and the YouTube platform (on which TED Talks videos are also posted). It was found that commenters were more likely to discuss the characteristics of a presenter on YouTube, whereas commenters tended to engage with the talk content on the TED website. In addition, people tended to be more emotional when the speaker was a woman (by leaving comments that were either positive or negative). The results can inform future efforts to popularize science amongst the public, as well as to provide insights for those looking to disseminate information via Internet videos.

  9. Reliability and accuracy of a video analysis protocol to assess core ability.

    Science.gov (United States)

    McDonald, Dawn A; Delgadillo, James Q; Fredericson, Michael; McConnell, Jennifer; Hodgins, Melissa; Besier, Thor F

    2011-03-01

    To develop and test a method to measure core ability in healthy athletes with 2-dimensional video analysis software (SiliconCOACH). Specific objectives were to: (1) develop a standardized exercise battery with progressions of increasing difficulty to evaluate areas of core ability in elite athletes; (2) develop an objective and quantitative grading rubric with the use of video analysis software; (3) assess the test-retest reliability of the exercise battery; (4) assess the interrater and intrarater reliability of the video analysis system; and (5) assess the accuracy of the assessment. Test-retest repeatability and accuracy. Testing was conducted in the Stanford Human Performance Laboratory, Stanford University, Stanford, CA. Nine female gymnasts currently training with the Stanford Varsity Women's Gymnastics Team participated in testing. Participants completed a test battery composed of planks, side planks, and leg bridges of increasing difficulty. Subjects completed two 20-minute testing sessions within a 4- to 10-day period. Two-dimensional sagittal-plane video was captured simultaneously with 3-dimensional motion capture. The main outcome measures were pelvic displacement and time that elapsed until failure occurred, as measured with SiliconCOACH video analysis software. Test-retest and interrater and intrarater reliability of the video analysis measures was assessed. Accuracy as compared with 3-dimensional motion capture also was assessed. Levels reached during the side planks and leg bridges had an excellent test-retest correlation (r(2) = 0.84, r(2) = 0.95). Pelvis displacements measured by examiner 1 and examiner 2 had an excellent correlation (r(2) = 0.86, intraclass correlation coefficient = 0.92). Pelvis displacements measured by examiner 1 during independent grading sessions had an excellent correlation (r(2) = 0.92). Pelvis displacements from the plank and from a set of combined plank and side plank exercises both had an excellent correlation with 3

  10. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Directory of Open Access Journals (Sweden)

    Ryan Decker

    2016-04-01

    Examination of the pitch and yaw histories clearly indicates that in addition to epicyclic motion's nutation and precession oscillations, an even faster wobble amplitude is present during each spin revolution, even though some of the amplitudes of the oscillation are smaller than 0.02 degree. The results are compared to a sequence of shots where little appreciable mass asymmetries were present, and only nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product of inertia measurements of the asymmetric projectiles.

  11. Application of quantum dots as analytical tools in automated chemical analysis: A review

    International Nuclear Information System (INIS)

    Frigerio, Christian; Ribeiro, David S.M.; Rodrigues, S. Sofia M.; Abreu, Vera L.R.G.; Barbosa, João A.C.; Prior, João A.V.; Marques, Karine L.; Santos, João L.M.

    2012-01-01

    Highlights: ► Review on quantum dots application in automated chemical analysis. ► Automation by using flow-based techniques. ► Quantum dots in liquid chromatography and capillary electrophoresis. ► Detection by fluorescence and chemiluminescence. ► Electrochemiluminescence and radical generation. - Abstract: Colloidal semiconductor nanocrystals or quantum dots (QDs) are one of the most relevant developments in the fast-growing world of nanotechnology. Initially proposed as luminescent biological labels, they are finding new important fields of application in analytical chemistry, where their photoluminescent properties have been exploited in environmental monitoring, pharmaceutical and clinical analysis and food quality control. Despite the enormous variety of applications that have been developed, the automation of QD-based analytical methodologies by means of continuous flow analysis and related techniques is hitherto very limited. Such automation would make it possible to exploit particular features of the nanocrystals, such as their versatile surface chemistry and ligand-binding ability, their aptitude to generate reactive species, the possibility of encapsulation in different materials while retaining native luminescence (providing the means to implement renewable chemosensors), and even the use of more drastic, stability-impairing reaction conditions. In this review, we provide insights into the analytical potential of quantum dots, focusing on prospects for their utilisation in automated flow-based and flow-related approaches and the future outlook of QD applications in chemical analysis.

  12. An automated system for whole microscopic image acquisition and analysis.

    Science.gov (United States)

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

    The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochrome camera alongside the default color camera, together with LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to correctly digitize and assemble large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. It has been validated on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article is focused on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
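
    The record mentions sharpness, contrast, and focus metrics without giving their definitions. As an illustration only (not the paper's actual metric), a widely used focus measure for microscopy tiles is the variance of the Laplacian:

```python
import cv2

def focus_measure(image_path):
    """Variance-of-Laplacian focus metric: higher values indicate sharper focus."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Tiles whose focus measure falls below an empirically chosen threshold could be
# re-acquired before being stitched into the whole-slide image.
```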

  13. Algorithms for Analysis of Television and Thermal Images in Special Purpose Video Devices and Systems

    Directory of Open Access Journals (Sweden)

    Boyun, V.

    2014-11-01

    Full Text Available Results of the research project «Development of algorithms and program models for the analysis of television and thermal images» (code VC 200.16.13) are presented. The known methods and algorithms for television and thermal-imaging video processing were analyzed, and new ones were proposed that will make it possible to create more effective special-purpose devices and systems.

  14. An Evaluation on the Usage of Intelligent Video Analysis Software for Marketing Strategies

    Directory of Open Access Journals (Sweden)

    Kadri Gökhan Yılmaz

    2013-12-01

    Full Text Available This study investigates the historical development of the relationship between companies and technology. In particular, it focuses on the adoption of new technology in the retail industry, owing both to the widespread use of technology in this sector and to the sector's technology-guiding role. The use of one of the current new technologies, intelligent video analysis software systems, in the retail industry is evaluated, and measures for such systems are determined.

  15. Quantitative video-based gait pattern analysis for hemiparkinsonian rats.

    Science.gov (United States)

    Lee, Hsiao-Yu; Hsieh, Tsung-Hsun; Liang, Jen-I; Yeh, Ming-Long; Chen, Jia-Jin J

    2012-09-01

    Gait disturbances are common in the rat model of Parkinson's disease (PD) induced by administering 6-hydroxydopamine. However, few studies have simultaneously assessed spatiotemporal gait indices and the kinematic information of PD rats during overground locomotion. This study utilized a simple, accurate, and reproducible method for quantifying the spatiotemporal and kinematic changes of gait patterns in hemiparkinsonian rats. A transparent walkway with a tilted mirror was set up to capture underview footprints and lateral ankle joint images using a high-speed and high-resolution digital camera. The footprint images were semi-automatically processed with a threshold setting to identify the boundaries of the soles and the critical points of each hindlimb for deriving the spatiotemporal and kinematic indices of gait. Following the PD lesion, asymmetrical gait patterns were found, including a significant decrease in step/stride length and increases in the base of support and ankle joint angle. Increased footprint length, toe spread, and intermediary toe spread were also found, indicating a compensatory gait pattern for impaired locomotor function. The temporal indices showed a significant decrease in walking speed with increased durations of the stance/swing phase and double support time, which was more evident in the affected hindlimb. Furthermore, the ankle kinematic data showed that the joint angle decreased at the toe contact stage. We conclude that the proposed gait analysis method can be used to precisely detect locomotor function changes in PD rats, which is useful for objective assessment of novel treatments in this PD animal model.
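
    Once paw contacts have been located in the thresholded footprint images, spatiotemporal indices such as stride length and walking speed follow directly from the contact centroids and their frame times. The sketch below illustrates that final step only, under assumed units and data structures; the segmentation pipeline itself is not reproduced.

```python
import numpy as np

def stride_metrics(contacts, fps=120, pixel_to_cm=0.05):
    """Simple spatiotemporal gait indices for one hindlimb.

    contacts: list of (frame_index, x_px, y_px) for successive paw contacts
    of the same hindlimb, extracted from thresholded footprint images.
    """
    contacts = np.asarray(contacts, dtype=float)
    xy_cm = contacts[:, 1:] * pixel_to_cm
    dt_s = np.diff(contacts[:, 0]) / fps

    # Stride length = distance between successive contacts of the same limb.
    stride_lengths = np.linalg.norm(np.diff(xy_cm, axis=0), axis=1)
    walking_speed = stride_lengths / dt_s          # cm/s, per stride
    return stride_lengths.mean(), walking_speed.mean()
```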

  16. Traitement automatique et apprentissage des langues (Automated Discourse Analysis and Language Teaching).

    Science.gov (United States)

    Garrigues, Mylene

    1992-01-01

    Issues in computerized analysis of language usage are discussed, focusing on the problems encountered as computers, linguistics, and language teaching converge. The tools of automated language and error analysis are outlined and specific problems are illustrated in several types of classroom exercise. (MSE)

  17. Web-based automation of green building rating index and life cycle cost analysis

    Science.gov (United States)

    Shahzaib Khan, Jam; Zakaria, Rozana; Aminuddin, Eeydzah; IzieAdiana Abidin, Nur; Sahamir, Shaza Rina; Ahmad, Rosli; Nafis Abas, Darul

    2018-04-01

    The sudden decline in financial markets and the economic meltdown have slowed adoption of, and lowered investor interest in, green-certified buildings because of their higher initial costs. It is therefore essential to attract investors towards further development of green buildings through automated tools for construction projects. However, there is a historical dearth of work on the automation of green building rating tools, which leaves an essential gap: the development of an automated analog computerized programming tool. This paper presents proposed research aimed at developing an integrated, web-based automated analog computerized program that combines a green building rating assessment tool, green technology, and life cycle cost (LCC) analysis. It also aims to identify the variables of MyCrest and LCC to be integrated, developed into a framework, and then transformed into an automated analog computerized program. A mixed methodology of qualitative and quantitative surveys is planned to carry the MyCrest-LCC integration to an automated level. In this study, a preliminary literature review provides a better understanding of the integration of Green Building Rating Tools (GBRT) with LCC. The outcome of this research paves the way for future researchers to integrate other efficient tools and parameters that contribute towards green buildings and future agendas.

  18. Comparative analysis of automation of production process with industrial robots in Asia/Australia and Europe

    Directory of Open Access Journals (Sweden)

    I. Karabegović

    2017-01-01

    Full Text Available The term "INDUSTRY 4.0", or "fourth industrial revolution", was first introduced at the 2011 Hannover fair. It comes from the high-tech strategy of the German Federal Government, which promotes the progression from automation and computerization to complete smart automation, that is, the introduction of self-automation, self-configuration, self-diagnosis and problem fixing, knowledge, and intelligent decision-making. No automation, smart or otherwise, can be imagined without industrial robots. Alongside the fourth industrial revolution, a "robotic revolution" is taking place in Japan: the research and development of robotic technology with the aim of using robots in all production processes and in everyday life, in the service of people. With these facts in mind, an analysis was conducted of the representation of industrial robots in production processes on the two continents of Europe and Asia/Australia, together with research into whether industry is ready for the introduction of intelligent automation with the goal of establishing future smart factories. The paper presents the state of automation of production processes in Europe and Asia/Australia, with predictions for the future.

  19. Video Analysis of Primary Shoulder Dislocations in Rugby Tackles.

    Science.gov (United States)

    Maki, Nobukazu; Kawasaki, Takayuki; Mochizuki, Tomoyuki; Ota, Chihiro; Yoneda, Takeshi; Urayama, Shingo; Kaneko, Kazuo

    2017-06-01

    Characteristics of rugby tackles that lead to primary anterior shoulder dislocation remain unclear. To clarify the characteristics of tackling that lead to shoulder dislocation and to assess the correlation between the mechanism of injury and morphological damage of the glenoid. Case series; Level of evidence, 4. Eleven elite rugby players who sustained primary anterior shoulder dislocation due to one-on-one tackling between 2001 and 2014 were included. Using an assessment system, the tackler's movement, posture, and shoulder and head position were evaluated in each phase of tackling. Based on 3-dimensional computed tomography, the glenoid of the affected shoulder was classified into 3 types: intact, erosion, and bone defect. Orientation of the glenoid defect and presence of Hill-Sachs lesion were also evaluated. Eleven tackles that led to primary shoulder dislocation were divided into hand, arm, and shoulder tackle types based on the site at which the tackler contacted the ball carrier initially. In hand and arm tackles, the tackler's shoulder joint was forcibly moved to horizontal abduction by the impact of his upper limb, which appeared to result from an inappropriate approach to the ball carrier. In shoulder tackles, the tackler's head was lowered and was in front of the ball carrier at impact. There was no significant correlation between tackle types and the characteristics of bony lesions of the shoulder. Although the precise mechanism of primary anterior shoulder dislocation could not be estimated from this single-view analysis, failure of individual tackling leading to injury is not uniform and can be caused by 2 main factors: failure of approach followed by an extended arm position or inappropriate posture of the tackler at impact, such as a lowered head in front of the opponent. These findings indicate that injury mechanisms should be assessed for each type of tackle, as it is unknown whether external force to the glenoid is different in each mechanism

  20. Video Streaming in Distributed Erasure-coded Storage Systems: Stall Duration Analysis

    OpenAIRE

    Al-Abbasi, Abubakr O.; Aggarwal, Vaneet

    2017-01-01

    The demand for global video has been burgeoning across industries. With the expansion and improvement of video streaming services, cloud-based video is evolving into a necessary feature of any successful business for reaching internal and external audiences. This paper considers video streaming over distributed systems where the video segments are encoded using an erasure code for better reliability thus being the first work to our best knowledge that considers video streaming over erasure-co...

  1. Video Analysis of Primary Shoulder Dislocations in Rugby Tackles

    Science.gov (United States)

    Maki, Nobukazu; Kawasaki, Takayuki; Mochizuki, Tomoyuki; Ota, Chihiro; Yoneda, Takeshi; Urayama, Shingo; Kaneko, Kazuo

    2017-01-01

    Background: Characteristics of rugby tackles that lead to primary anterior shoulder dislocation remain unclear. Purpose: To clarify the characteristics of tackling that lead to shoulder dislocation and to assess the correlation between the mechanism of injury and morphological damage of the glenoid. Study Design: Case series; Level of evidence, 4. Methods: Eleven elite rugby players who sustained primary anterior shoulder dislocation due to one-on-one tackling between 2001 and 2014 were included. Using an assessment system, the tackler’s movement, posture, and shoulder and head position were evaluated in each phase of tackling. Based on 3-dimensional computed tomography, the glenoid of the affected shoulder was classified into 3 types: intact, erosion, and bone defect. Orientation of the glenoid defect and presence of Hill-Sachs lesion were also evaluated. Results: Eleven tackles that led to primary shoulder dislocation were divided into hand, arm, and shoulder tackle types based on the site at which the tackler contacted the ball carrier initially. In hand and arm tackles, the tackler’s shoulder joint was forcibly moved to horizontal abduction by the impact of his upper limb, which appeared to result from an inappropriate approach to the ball carrier. In shoulder tackles, the tackler’s head was lowered and was in front of the ball carrier at impact. There was no significant correlation between tackle types and the characteristics of bony lesions of the shoulder. Conclusion: Although the precise mechanism of primary anterior shoulder dislocation could not be estimated from this single-view analysis, failure of individual tackling leading to injury is not uniform and can be caused by 2 main factors: failure of approach followed by an extended arm position or inappropriate posture of the tackler at impact, such as a lowered head in front of the opponent. These findings indicate that injury mechanisms should be assessed for each type of tackle, as it is unknown

  2. The Narrative Analysis of the Discourse on Homosexual BDSM Pornographic Video Clips of The Manhunt Variety

    Directory of Open Access Journals (Sweden)

    Milica Vasić

    2016-02-01

    Full Text Available In this paper we have analyzed the ideal-type model of the story which represents the basic framework of action in Manhunt category pornographic internet video clips, using narrative analysis methods of Claude Bremond. The results have shown that it is possible to apply the theoretical model to elements of visual and mass culture, with certain modifications and taking into account the wider context of the narrative itself. The narrative analysis indicated the significance of researching categories of pornography on the internet, because it leads to a deep analysis of the distribution of power in relations between the categories of heterosexual and homosexual within a virtual environment.

  3. Video incident analysis of head injuries in high school girls' lacrosse.

    Science.gov (United States)

    Caswell, Shane V; Lincoln, Andrew E; Almquist, Jon L; Dunn, Reginald E; Hinton, Richard Y

    2012-04-01

    Knowledge of injury mechanisms and game situations associated with head injuries in girls' high school lacrosse is necessary to target prevention efforts. To use video analysis and injury data to provide an objective and comprehensive visual record to identify mechanisms of injury, game characteristics, and penalties associated with head injury in girls' high school lacrosse. Descriptive epidemiology study. In the 25 public high schools of 1 school system, 529 varsity and junior varsity girls' lacrosse games were videotaped by trained videographers during the 2008 and 2009 seasons. Video of head injury incidents was examined to identify associated mechanisms and game characteristics using a lacrosse-specific coding instrument. Of the 25 head injuries (21 concussions and 4 contusions) recorded as game-related incidents by athletic trainers during the 2 seasons, 20 head injuries were captured on video, and 14 incidents had sufficient image quality for analysis. All 14 incidents of head injury (11 concussions, 3 contusions) involved varsity-level athletes. Most head injuries resulted from stick-to-head contact (n = 8), followed by body-to-head contact (n = 4). The most frequent player activities were defending a shot (n = 4) and competing for a loose ball (n = 4). Ten of the 14 head injuries occurred inside the 12-m arc and in front of the goal, and no penalty was called in 12 injury incidents. All injuries involved 2 players, and most resulted from unintentional actions. Turf versus grass did not appear to influence number of head injuries. Comprehensive video analysis suggests that play near the goal at the varsity high school level is associated with head injuries. Absence of penalty calls on most of these plays suggests an area for exploration, such as the extent to which current rules are enforced and the effectiveness of existing rules for the prevention of head injury.

  4. Effectiveness of teaching automated external defibrillators use using a traditional classroom instruction versus self-instruction video in non-critical care nurses

    Directory of Open Access Journals (Sweden)

    Ismail M. Saiboon

    2016-04-01

    Full Text Available Objectives: To evaluate the effectiveness and retention of learning automated external defibrillator (AED) usage taught through a traditional classroom instruction (TCI) method versus a novel self-instructed video (SIV) technique in non-critical care nurses (NCCNs). Methods: A prospective single-blind randomized study was conducted over 7 months (April-October 2014) at the Universiti Kebangsaan Malaysia Medical Center, Kuala Lumpur, Malaysia. Eighty nurses were randomized into either the TCI or SIV instructional technique. We assessed knowledge, skill and confidence level at baseline, immediately post-intervention, and 6 months post-intervention. Knowledge and confidence were assessed via questionnaire; skill was assessed by a calibrated and blinded independent assessor using an objective structured clinical examination (OSCE) method. Results: Pre-test mean scores for knowledge were 10.87 ± 2.34 in the TCI group and 10.37 ± 1.85 in the SIV group (maximum achievable score 20.00); 4.05 ± 2.87 in the TCI and 3.71 ± 2.66 in the SIV group (maximum score 11.00) in the OSCE evaluation; and 9.54 ± 3.65 in the TCI and 8.56 ± 3.47 in the SIV group (maximum score 25.00) for the individual's personal confidence level. Both methods increased the mean scores significantly immediately post-intervention (0 months). At 6 months, the TCI group scored lower than the SIV group in all aspects: 11.13 ± 2.70 versus 12.95 ± 2.26 (p=0.03) in knowledge, 7.27 ± 1.62 versus 7.68 ± 1.73 (p=0.47) in the OSCE, and 16.40 ± 2.72 versus 18.82 ± 3.40 (p=0.03) in confidence level. Conclusion: In NCCNs, SIV is as good as TCI in providing the knowledge, competency, and confidence to perform AED defibrillation.

  5. A computer-video aided time motion analysis technique for match analysis.

    Science.gov (United States)

    Ali, A; Farrally, M

    1991-03-01

    The purpose of this study was to identify suitable methods for obtaining objective data on the time spent by players of different positions walking, jogging, cruising, sprinting, and standing still during match play. Computer programs and filming analyses with a simple notation system, based upon symbolic representations of movements, were devised for the analysis of individual players' behaviour. The technique was employed with a small group of university players aged 19-21 years. The subjects were filmed in several matches, and the video recordings were analysed using a microcomputer. The proportions of time spent by the players were 56% walking, 30% jogging, 4% cruising, 3% sprinting and 7% standing still. ANOVA revealed significant differences among players in different positions on the field; for example, the time spent walking, jogging and standing still differed (P less than 0.05) among attackers, defenders and midfielders. A new method has been developed to obtain reliable information about players' movement and performance in the game. The authors believe that further studies involving more teams at different levels of performance should be carried out to substantiate these preliminary findings.

  6. A prospective video-based analysis of injury situations in elite male football: football incident analysis.

    Science.gov (United States)

    Arnason, Arni; Tenga, Albin; Engebretsen, Lars; Bahr, Roald

    2004-09-01

    The mechanisms for football injuries are largely unknown. To describe the characteristics of injury situations in elite male football using a video-based method called football incident analysis. Prospective cohort study. During the 1999 season, videotapes from 52 matches in the Icelandic elite football league were reviewed. Incidents (N = 95) were recorded when the match was interrupted by the referee because of a suspected injury. Team physical therapists recorded injuries prospectively (N = 28 time-loss injuries). Duels caused 84 of the incidents, mostly tackling duels (n = 54). The exposed player's attention appeared to be focused away from the opponent in 93% of the cases. The 3 main mechanisms observed were (1) breakdown attacks, tackling from the side or the front, attention focused on the ball (24%); (2) defensive tackling duels, attention focused on the ball or low ball control (20%); and (3) heading duels, attention focused on the ball in the air (13%). Most incidents and injuries occurred during breakdown attacks and when a player was involved in tackling duels. Player attention appeared to be focused mainly on the ball, not on the opponent challenging him to gain ball possession.

  7. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    Science.gov (United States)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units, physical, and mathematical quantity). Also, the procedure implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state-of-the-art scientific codes.
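
    To illustrate the general idea of recognizing physical quantities and propagating them through code (a generic sketch, not the prototype's actual implementation), dimensions can be carried as maps from base units to exponents and combined whenever two quantities are multiplied or divided:

```python
from collections import Counter

def multiply_units(a, b):
    """Combine two dimension maps, e.g. {'m': 1, 's': -1} for a velocity."""
    combined = Counter(a)
    combined.update(b)
    # Drop units whose exponents cancel out.
    return {unit: exp for unit, exp in combined.items() if exp != 0}

velocity = {"m": 1, "s": -1}
time = {"s": 1}
displacement = multiply_units(velocity, time)   # -> {'m': 1}

# A checker in this spirit would flag an assignment whose inferred dimensions
# disagree with the declared dimensions of the target variable.
```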

  8. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  9. Using video technology to disseminate behavioral procedures: a review of Functional Analysis: a Guide for Understanding Challenging Behavior (DVD).

    Science.gov (United States)

    Carr, James E; Fox, Eric J

    2009-01-01

    Although applied behavior analysis has generated many highly effective behavior-change procedures, the procedures have not always been effectively disseminated. One solution to this problem is the use of video technology, which has been facilitated by the ready availability of video production equipment and software and multiple distribution methods (e.g., DVD, online streaming). We review a recent DVD that was produced to disseminate the successful experimental functional analysis procedure. The review is followed by general recommendations for disseminating behavior-analytic procedures via video technology.

  10. Comparison of manual & automated analysis methods for corneal endothelial cell density measurements by specular microscopy.

    Science.gov (United States)

    Huang, Jianyan; Maram, Jyotsna; Tepelus, Tudor C; Modak, Cristina; Marion, Ken; Sadda, SriniVas R; Chopra, Vikas; Lee, Olivia L

    2017-08-07

    To determine the reliability of corneal endothelial cell density (ECD) obtained by automated specular microscopy versus that of validated manual methods and factors that predict such reliability. Sharp central images from 94 control and 106 glaucomatous eyes were captured with Konan specular microscope NSP-9900. All images were analyzed by trained graders using Konan CellChek Software, employing the fully- and semi-automated methods as well as Center Method. Images with low cell count (input cells number <100) and/or guttata were compared with the Center and Flex-Center Methods. ECDs were compared and absolute error was used to assess variation. The effect on ECD of age, cell count, cell size, and cell size variation was evaluated. No significant difference was observed between the Center and Flex-Center Methods in corneas with guttata (p=0.48) or low ECD (p=0.11). No difference (p=0.32) was observed in ECD of normal controls <40 yrs old between the fully-automated method and manual Center Method. However, in older controls and glaucomatous eyes, ECD was overestimated by the fully-automated method (p=0.034) and semi-automated method (p=0.025) as compared to manual method. Our findings show that automated analysis significantly overestimates ECD in the eyes with high polymegathism and/or large cell size, compared to the manual method. Therefore, we discourage reliance upon the fully-automated method alone to perform specular microscopy analysis, particularly if an accurate ECD value is imperative. Copyright © 2017. Published by Elsevier España, S.L.U.

  11. Content-based management service for medical videos.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun; Cecen, Songul; Ermisoglu, Emre

    2013-01-01

    The development of health information technology has had a dramatic impact on improving the efficiency and quality of medical care. Developing interoperable health information systems for healthcare providers has the potential to improve the quality and equitability of patient-centered healthcare. In this article, we describe an automated content-based medical video analysis and management service that provides convenience and ease in accessing the relevant medical video content without sequential scanning. The system facilitates effective temporal video segmentation and content-based visual information retrieval that enable a more reliable understanding of medical video content. The system is implemented as a Web- and mobile-based service and has the potential to offer a knowledge-sharing platform for the purpose of efficient medical video content access.
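
    The service relies on temporal video segmentation. One common, simple approach to that step, shown here purely as a generic sketch rather than the authors' method, is to detect shot boundaries from frame-to-frame histogram changes:

```python
import cv2

def shot_boundaries(path, threshold=0.4):
    """Return frame indices where the grayscale histogram changes sharply."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Low correlation between consecutive histograms suggests a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 1 - threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```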

  12. [Clinical application of automated digital image analysis for morphology review of peripheral blood leukocyte].

    Science.gov (United States)

    Xing, Ying; Yan, Xiaohua; Pu, Chengwei; Shang, Ke; Dong, Ning; Wang, Run; Wang, Jianzhong

    2016-03-01

    To explore the clinical application of automated digital image analysis in leukocyte morphology examination when the review criteria of the hematology analyzer are triggered. The reference range of leukocyte differentiation by automated digital image analysis was established by analyzing 304 healthy blood samples from Peking University First Hospital. Six hundred and ninety-seven blood samples from Peking University First Hospital were randomly collected from November 2013 to April 2014; complete blood counts were performed on a hematology analyzer, and blood smears were made and stained at the same time. Blood smears were analyzed by the automated digital image analyzer and the results were checked (reclassification) by a staff member with extensive experience in morphology. The same smear was examined manually under a microscope. The results of manual microscopic differentiation were used as the "gold standard", and the diagnostic efficiency of automated digital image analysis for abnormal specimens was calculated, including sensitivity, specificity and accuracy. The difference in abnormal leukocytes detected by the two methods was analyzed in 30 samples of hematological and infectious diseases. Specificity of identifying abnormalities of white blood cells by automated digital image analysis was more than 90%, except for monocytes. Sensitivity for neutrophil toxic abnormalities (including Döhle bodies, toxic granulation and vacuolization) was 100%; sensitivities for blast cells, immature granulocytes and atypical lymphocytes were 91.7%, 60% to 81.5% and 61.5%, respectively. Sensitivity of the leukocyte differential count was 91.8% for neutrophils, 88.5% for lymphocytes, 69.1% for monocytes, 78.9% for eosinophils and 36.3% for basophils. The positive rate of recognizing abnormal cells (blasts, immature granulocytes and atypical lymphocytes) by the manual microscopic method was 46.7%, 53.3% and 10%, respectively. The positive rate of automated digital image analysis was 43.3%, 60% and 10%, respectively. There was no statistic

  13. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  14. Organ donation on Web 2.0: content and audience analysis of organ donation videos on YouTube.

    Science.gov (United States)

    Tian, Yan

    2010-04-01

    This study examines the content of and audience response to organ donation videos on YouTube, a Web 2.0 platform, with framing theory. Positive frames were identified in both video content and audience comments. Analysis revealed a reciprocity relationship between media frames and audience frames. Videos covered content categories such as kidney, liver, organ donation registration process, and youth. Videos were favorably rated. No significant differences were found between videos produced by organizations and individuals in the United States and those produced in other countries. The findings provide insight into how new communication technologies are shaping health communication in ways that differ from traditional media. The implications of Web 2.0, characterized by user-generated content and interactivity, for health communication and health campaign practice are discussed.

  15. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm has been proposed based on an analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) and hierarchical (mean pyramid) search. All motion estimation algorithms were implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal-to-noise ratio in different video sequences shows both better and worse results than the characteristics of known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard one can significantly reduce compression time. This feature makes it possible to recommend it for telecommunication systems for multimedia data storage, transmission and processing.
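
    As a rough illustration of the rood-pattern idea only (a generic block-matching sketch in Python, not the paper's MATLAB implementation of HARPS), the motion vector of a block can be refined by repeatedly evaluating the sum of absolute differences (SAD) at the four rood arms around the current best candidate:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference-frame patch."""
    h, w = block.shape
    return np.abs(ref[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()

def rood_search(block, ref, y0, x0, step=4):
    """Rood-pattern refinement of a motion vector around (y0, x0)."""
    best_y, best_x = y0, x0
    best = sad(block, ref, y0, x0)
    while step >= 1:
        moved = False
        for dy, dx in ((-step, 0), (step, 0), (0, -step), (0, step)):
            y, x = best_y + dy, best_x + dx
            if 0 <= y <= ref.shape[0] - block.shape[0] and 0 <= x <= ref.shape[1] - block.shape[1]:
                cost = sad(block, ref, y, x)
                if cost < best:
                    best, best_y, best_x, moved = cost, y, x, True
        if not moved:
            step //= 2      # shrink the rood when no arm improves the match
    return best_y - y0, best_x - x0
```

    The hierarchical part of the published method would run such a search on a mean pyramid of downsampled frames, using coarse-level vectors to initialise the finer levels.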

  16. The use of video analysis and the Knowledge Quartet in mathematics teacher education programmes

    Science.gov (United States)

    Liston, Miriam

    2015-01-01

    This study investigates the potential of video analysis and a mathematical knowledge for teaching framework, the Knowledge Quartet (KQ), in mathematics teacher education programmes. It reports on the effectiveness of these tools in analysing and supporting secondary level pre-service mathematics teachers' subject matter knowledge and pedagogical content knowledge. This paper describes how a videotaped lesson of one pre-service teacher, teaching a class of mature students, was analysed and makes comparisons between the teacher educators' and the pre-service teacher's observations. Inter-rater reliability was investigated and a Kappa coefficient of .72 indicated substantial agreement between both coders. Findings are presented and implications of the use of video and the KQ for mathematics teacher education are drawn.

  17. Effects of Video Games and Online Chat on Mathematics Performance in High School: An Approach of Multivariate Data Analysis

    OpenAIRE

    Lina Wu; Wenyi Lu; Ye Li

    2016-01-01

    Regarding heavy video game players for boys and super online chat lovers for girls as a symbolic phrase in the current adolescent culture, this project of data analysis verifies the displacement effect on deteriorating mathematics performance. To evaluate correlation or regression coefficients between a factor of playing video games or chatting online and mathematics performance compared with other factors, we use multivariate analysis technique and take gender difference into account. We fin...

  18. Video-tracker trajectory analysis: who meets whom, when and where

    Science.gov (United States)

    Jäger, U.; Willersinn, D.

    2010-04-01

    Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event embodies great practical importance because it paves the way to answer the question "who meets whom, when and where". This, in turn, forms the basis to detect potential situations where, e.g., money, weapons, or drugs are handed over from one person to another in crowded environments such as railway stations, airports, or busy streets and squares. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real-time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
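
    Under the simple-rule description above, an encounter can be flagged whenever two tracked persons remain within a small distance of each other for a minimum number of frames. The sketch below uses assumed data structures (a frame-indexed dictionary of person positions) and thresholds; it is an illustration, not the IOSB system's rule set.

```python
from itertools import combinations

def detect_encounters(tracks, max_dist=1.0, min_frames=25):
    """tracks: {frame: {person_id: (x, y)}} in metres; returns encounter events."""
    running, events = {}, []
    for frame in sorted(tracks):
        positions = tracks[frame]
        for a, b in combinations(sorted(positions), 2):
            (xa, ya), (xb, yb) = positions[a], positions[b]
            close = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= max_dist
            key = (a, b)
            if close:
                running[key] = running.get(key, 0) + 1
                if running[key] == min_frames:
                    # Report who met whom, when and (approximately) where.
                    events.append((a, b, frame, ((xa + xb) / 2, (ya + yb) / 2)))
            else:
                running.pop(key, None)
    return events
```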

  19. Automated analysis of cell migration and nuclear envelope rupture in confined environments.

    Science.gov (United States)

    Elacqua, Joshua J; McGregor, Alexandra L; Lammerding, Jan

    2018-01-01

    Recent in vitro and in vivo studies have highlighted the importance of the cell nucleus in governing migration through confined environments. Microfluidic devices that mimic the narrow interstitial spaces of tissues have emerged as important tools to study cellular dynamics during confined migration, including the consequences of nuclear deformation and nuclear envelope rupture. However, while image acquisition can be automated on motorized microscopes, the analysis of the corresponding time-lapse sequences for nuclear transit through the pores and events such as nuclear envelope rupture currently requires manual analysis. In addition to being highly time-consuming, such manual analysis is susceptible to person-to-person variability. Studies that compare large numbers of cell types and conditions therefore require automated image analysis to achieve sufficiently high throughput. Here, we present an automated image analysis program to register microfluidic constrictions and perform image segmentation to detect individual cell nuclei. The MATLAB program tracks nuclear migration over time and records constriction-transit events, transit times, transit success rates, and nuclear envelope rupture. Such automation reduces the time required to analyze migration experiments from weeks to hours, and removes the variability that arises from different human analysts. Comparison with manual analysis confirmed that both constriction transit and nuclear envelope rupture were detected correctly and reliably, and the automated analysis results closely matched a manual analysis gold standard. Applying the program to specific biological examples, we demonstrate its ability to detect differences in nuclear transit time between cells with different levels of the nuclear envelope proteins lamin A/C, which govern nuclear deformability, and to detect an increase in nuclear envelope rupture duration in cells in which CHMP7, a protein involved in nuclear envelope repair, had been depleted
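
    As a much-simplified sketch of constriction-transit detection (the published MATLAB program is considerably more involved and also registers the constrictions and handles rupture reporters), one can segment the labelled nuclei in each frame, take their centroids, and record when a centroid crosses the known x-position of a constriction. The threshold and image layout below are assumptions.

```python
from scipy import ndimage

def nucleus_centroids(frame, threshold):
    """Segment nuclei by intensity thresholding and return their centroids."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))

def transit_frames(frames, constriction_x, threshold=100):
    """Return frame indices at which a nucleus centroid crosses the pore line."""
    crossed, prev_xs = [], []
    for i, frame in enumerate(frames):
        xs = [c[1] for c in nucleus_centroids(frame, threshold)]
        # A crossing occurred if the leading centroid moved past the constriction.
        if prev_xs and xs and max(prev_xs) < constriction_x <= max(xs):
            crossed.append(i)
        prev_xs = xs
    return crossed
```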

  20. Functional MRI preprocessing in lesioned brains: manual versus automated region of interest analysis

    Directory of Open Access Journals (Sweden)

    Kathleen A Garrison

    2015-09-01

    Full Text Available Functional magnetic resonance imaging has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant’s structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant’s non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise but may provide a more accurate estimate of brain response. In this study, we directly compare commonly used automated and manual approaches to ROI analysis by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. We found a significant difference in task-related effect size and percent activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.

  1. Semi-automated analysis of EEG spikes in the preterm fetal sheep using wavelet analysis

    International Nuclear Information System (INIS)

    Walbran, A.C.; Unsworth, C.P.; Gunn, A.J.; Benett, L.

    2010-01-01

    Full text: Perinatal hypoxia plays a key role in the cause of brain injury in premature infants. Cerebral hypothermia commenced in the latent phase of evolving injury (the first 6-8 h post hypoxic-ischemic insult) is the lead candidate for treatment; however, there is currently no means of identifying which infants can benefit from treatment. Recent studies suggest that epileptiform transients in the latent phase are predictive of neural outcome. To quantify this, an automated means of EEG analysis is required, as EEG monitoring produces vast amounts of data that are time-consuming to analyse manually. We have developed a semi-automated EEG spike detection method that employs a discretized version of the continuous wavelet transform (CWT). EEG data were obtained from a fetal sheep at approximately 0.7 of gestation. Fetal asphyxia was maintained for 25 min and the EEG recorded for 8 h before and after asphyxia. The CWT was calculated, followed by the power of the wavelet transform coefficients. Areas of high power corresponded to spike waves, so thresholding was employed to identify the spikes. The method was found to have good sensitivity and selectivity, demonstrating that it is a simple, robust and potentially effective spike detection algorithm.
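
    A minimal sketch of the described approach — continuous wavelet transform, coefficient power, and thresholding — is given below using PyWavelets. The wavelet, the range of scales, and the threshold rule are assumptions; the study's actual parameters are not reported in this record.

```python
import numpy as np
import pywt

def detect_spikes(eeg, fs=256.0, scales=np.arange(1, 32), k=4.0):
    """Flag samples whose summed CWT power exceeds k times the median power."""
    coeffs, _ = pywt.cwt(eeg, scales, "morl", sampling_period=1.0 / fs)
    power = (np.abs(coeffs) ** 2).sum(axis=0)        # power summed across scales
    threshold = k * np.median(power)
    spike_samples = np.flatnonzero(power > threshold)
    return spike_samples / fs                        # spike times in seconds
```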

  2. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    Science.gov (United States)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in the FDM programs are low due to the high costs of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, anxiety about punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR in the presence of the FDR or possibly replace the role of the FDR in the absence of the FDR. Cockpit video data is composed of image and audio data: image data contains outside views through cockpit windows and activities on the flight instrument panels, whereas audio data contains sounds of the alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm based on data mining and signal processing techniques that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring the useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude such as the bank and pitch angles, identify indicators from a flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm

  3. Open-Ended Interaction in Cooperative Pro-to-typing: A Video-based Analysis

    DEFF Research Database (Denmark)

    Bødker, Susanne; Grønbæk, Kaj; Trigg, Randal

    1991-01-01

    that are tied concretely to some current version of the prototype. On the other hand, the users learn more about the potential for change in their work practice, whether computer-based or otherwise. This paper presents the results of a field study of the cooperative prototyping process. The study is based...... on a fine-grained video-based analysis of a single prototyping session, and focuses on the effects of an open-ended style of interaction between users and designers around a prototype. An analysis of focus shifts, initiative and storytelling during the session is brought to bear on the question of whether...

  4. DEFINITION AND ANALYSIS OF MOTION ACTIVITY AFTER-STROKE PATIENT FROM THE VIDEO STREAM

    Directory of Open Access Journals (Sweden)

    M. Yu. Katayev

    2014-01-01

    Full Text Available This article describes an approach to assessing the motion activity of patients in the post-stroke period, allowing the doctor to obtain new information and to give more informed recommendations on rehabilitation treatment than traditional approaches do. A description is given of the hardware-software complex for determining and analysing the motion activity of post-stroke patients from the video stream. The article describes the complex, its algorithmic components, and the results of its operation on an example of processing actual data. The algorithms and technology significantly accelerate gait analysis and improve the quality of diagnostics for post-stroke patients.

  5. Unsupervised fully automated inline analysis of global left ventricular function in CINE MR imaging.

    Science.gov (United States)

    Theisen, Daniel; Sandner, Torleif A; Bauner, Kerstin; Hayes, Carmel; Rist, Carsten; Reiser, Maximilian F; Wintersperger, Bernd J

    2009-08-01

    To implement and evaluate the accuracy of unsupervised fully automated inline analysis of global ventricular function and myocardial mass (MM). To compare automated with manual segmentation in patients with cardiac disorders. In 50 patients, cine imaging of the left ventricle was performed with an accelerated retrogated steady state free precession sequence (GRAPPA; R = 2) on a 1.5 Tesla whole body scanner (MAGNETOM Avanto, Siemens Healthcare, Germany). A spatial resolution of 1.4 x 1.9 mm was achieved with a slice thickness of 8 mm and a temporal resolution of 42 milliseconds. Ventricular coverage was based on 9 to 12 short axis slices extending from the annulus of the mitral valve to the apex with 2 mm gaps. Fully automated segmentation and contouring was performed instantaneously after image acquisition. In addition to automated processing, cine data sets were also manually segmented using a semi-automated postprocessing software. Results of both methods were compared with regard to end-diastolic volume (EDV), end-systolic volume (ESV), ejection fraction (EF), and MM. A subgroup analysis was performed in patients with normal (> or =55%) and reduced EF (<55%) based on the results of the manual analysis. Thirty-two percent of patients had a reduced left ventricular EF of <55%. Volumetric results of the automated inline analysis for EDV (r = 0.96), ESV (r = 0.95), EF (r = 0.89), and MM (r = 0.96) showed high correlation with the results of manual segmentation (all P < 0.001). Head-to-head comparison did not show significant differences between automated and manual evaluation for EDV (153.6 +/- 52.7 mL vs. 149.1 +/- 48.3 mL; P = 0.05), ESV (61.6 +/- 31.0 mL vs. 64.1 +/- 31.7 mL; P = 0.08), and EF (58.0 +/- 11.6% vs. 58.6 +/- 11.6%; P = 0.5). However, differences were significant for MM (150.0 +/- 61.3 g vs. 142.4 +/- 59.0 g; P < 0.01). The standard error was 15.6 (EDV), 9.7 (ESV), 5.0 (EF), and 17.1 (mass). The mean time for manual analysis was 15 minutes

  6. Automated striatal uptake analysis of 18F-FDOPA PET images applied to Parkinson's disease patients

    International Nuclear Information System (INIS)

    Chang Icheng; Lue Kunhan; Hsieh Hungjen; Liu Shuhsin; Kao, Chinhao K.

    2011-01-01

    6-[18F]Fluoro-L-DOPA (FDOPA) is a radiopharmaceutical valuable for assessing the presynaptic dopaminergic function when used with positron emission tomography (PET). More specifically, the striatal-to-occipital ratio (SOR) of FDOPA uptake images has been extensively used as a quantitative parameter in these PET studies. Our aim was to develop an easy, automated method capable of performing objective analysis of SOR in FDOPA PET images of Parkinson's disease (PD) patients. Brain images from FDOPA PET studies of 21 patients with PD and 6 healthy subjects were included in our automated striatal analyses. Images of each individual were spatially normalized into an FDOPA template. Subsequently, the image slice with the highest level of basal ganglia activity was chosen among the series of normalized images. Also, the immediate preceding and following slices of the chosen image were then selected. Finally, the summation of these three images was used to quantify and calculate the SOR values. The results obtained by automated analysis were compared with manual analysis by a trained and experienced image processing technologist. The SOR values obtained from the automated analysis had a good agreement and high correlation with manual analysis. The differences in caudate, putamen, and striatum were -0.023, -0.029, and -0.025, respectively; correlation coefficients 0.961, 0.957, and 0.972, respectively. We have successfully developed a method for automated striatal uptake analysis of FDOPA PET images. There was no significant difference between the SOR values obtained from this method and using manual analysis. Yet it is an unbiased time-saving and cost-effective program and easy to implement on a personal computer. (author)
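
    Following the description above, the SOR values can be computed from the spatially normalized image volume once striatal and occipital masks are defined in template space. The sketch below is an illustration under those assumptions (mask definitions, axis ordering), not the authors' program.

```python
import numpy as np

def striatal_occipital_ratio(volume, striatum_mask, occipital_mask):
    """SOR from a normalized FDOPA volume (z, y, x) and boolean ROI masks."""
    # Pick the axial slice with the highest mean striatal activity ...
    slice_means = [volume[z][striatum_mask[z]].mean() if striatum_mask[z].any() else -np.inf
                   for z in range(volume.shape[0])]
    z_best = int(np.argmax(slice_means))
    # ... and sum it with its immediately neighbouring slices, as in the abstract.
    zs = slice(max(z_best - 1, 0), min(z_best + 2, volume.shape[0]))
    summed = volume[zs].sum(axis=0)
    roi_striatum = striatum_mask[zs].any(axis=0)
    roi_occipital = occipital_mask[zs].any(axis=0)
    return summed[roi_striatum].mean() / summed[roi_occipital].mean()
```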

  7. Automated analysis of small animal PET studies through deformable registration to an atlas

    International Nuclear Information System (INIS)

    Gutierrez, Daniel F.; Zaidi, Habib

    2012-01-01

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
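
    For reference, the two agreement metrics named in this record can be computed directly from a pair of binary masks; the snippet below is a generic sketch (isotropic voxel spacing assumed), not the registration pipeline itself.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A∩B| / (|A| + |B|) for two boolean arrays."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(mask_a, mask_b, spacing=1.0):
    """Symmetric Hausdorff distance between the voxel sets of two boolean masks,
    scaled by an isotropic voxel spacing (e.g. mm per voxel)."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba) * spacing
```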

  8. Automated counting of bacterial colonies by image analysis.

    Science.gov (United States)

    Chiang, Pei-Ju; Tseng, Min-Jen; He, Zong-Sian; Li, Chia-Hsun

    2015-01-01

    Research on microorganisms often involves culturing as a means to determine the survival and proliferation of bacteria. The number of colonies in a culture is counted to calculate the concentration of bacteria in the original broth; however, manual counting can be time-consuming and imprecise. To save time and prevent inconsistencies, this study proposes a fully automated counting system using image processing methods. To accurately estimate the number of viable bacteria in a known volume of suspension, colonies distributed over the whole surface area of a plate, including the central and rim areas of a Petri dish, are taken into account. The performance of the proposed system is compared with verified manual counts, as well as with two freely available counting software programs. Comparisons show that the proposed system is an effective method with excellent accuracy, with a mean absolute percentage error of 3.37%. A user-friendly graphical user interface is also developed and freely available for download, providing researchers in biomedicine with a more convenient instrument for the enumeration of bacterial colonies. Copyright © 2014 Elsevier B.V. All rights reserved.
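
    A heavily simplified sketch of the underlying image-processing steps (grayscale conversion, global thresholding, small-object removal, connected-component counting) is shown below using scikit-image; the published system additionally handles the Petri-dish rim and touching colonies, which this sketch does not.

```python
from skimage import io, filters, measure, morphology

def count_colonies(image_path, min_area_px=30):
    """Very simplified colony counter: grayscale -> Otsu threshold -> count blobs."""
    gray = io.imread(image_path, as_gray=True)
    # Assumes colonies are brighter than the background; invert the comparison
    # (or the image) if they are darker on your plates.
    binary = gray > filters.threshold_otsu(gray)
    binary = morphology.remove_small_objects(binary, min_size=min_area_px)
    return int(measure.label(binary).max())  # number of connected components
```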

  9. Automating X-ray Fluorescence Analysis for Rapid Astrobiology Surveys.

    Science.gov (United States)

    Thompson, David R; Flannery, David T; Lanka, Ravi; Allwood, Abigail C; Bue, Brian D; Clark, Benton C; Elam, W Timothy; Estlin, Tara A; Hodyss, Robert P; Hurowitz, Joel A; Liu, Yang; Wade, Lawrence A

    2015-11-01

    A new generation of planetary rover instruments, such as PIXL (Planetary Instrument for X-ray Lithochemistry) and SHERLOC (Scanning Habitable Environments with Raman Luminescence for Organics and Chemicals) selected for the Mars 2020 mission rover payload, aim to map mineralogical and elemental composition in situ at microscopic scales. These instruments will produce large spectral cubes with thousands of channels acquired over thousands of spatial locations, a large potential science yield limited mainly by the time required to acquire a measurement after placement. A secondary bottleneck also faces mission planners after downlink; analysts must interpret the complex data products quickly to inform tactical planning for the next command cycle. This study demonstrates operational approaches to overcome these bottlenecks by specialized early-stage science data processing. Onboard, simple real-time systems can perform a basic compositional assessment, recognizing specific features of interest and optimizing sensor integration time to characterize anomalies. On the ground, statistically motivated visualization can make raw uncalibrated data products more interpretable for tactical decision making. Techniques such as manifold dimensionality reduction can help operators comprehend large databases at a glance, identifying trends and anomalies in data. These onboard and ground-side analyses can complement a quantitative interpretation. We evaluate system performance for the case study of PIXL, an X-ray fluorescence spectrometer. Experiments on three representative samples demonstrate improved methods for onboard and ground-side automation and illustrate new astrobiological science capabilities unavailable in previous planetary instruments. Dimensionality reduction-Planetary science-Visualization.
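
    The record mentions manifold dimensionality reduction for ground-side visualization; as a rough stand-in for those methods, a linear PCA projection of each pixel's spectrum onto three components can be rendered as a false-colour image. The cube layout and function name below are illustrative assumptions only.

```python
import numpy as np
from sklearn.decomposition import PCA

def spectra_to_rgb(spectral_cube):
    """Project an (rows, cols, channels) spectral cube onto its first three
    principal components and rescale them to [0, 1] for viewing as RGB."""
    rows, cols, channels = spectral_cube.shape
    flat = spectral_cube.reshape(-1, channels).astype(float)
    scores = PCA(n_components=3).fit_transform(flat)
    scores -= scores.min(axis=0)
    span = scores.max(axis=0)
    scores /= np.where(span == 0, 1.0, span)  # avoid division by zero
    return scores.reshape(rows, cols, 3)
```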

  10. An analysis of equine round pen training videos posted online: Differences between amateur and professional trainers.

    Science.gov (United States)

    Kydd, Erin; Padalino, Barbara; Henshall, Cathrynne; McGreevy, Paul

    2017-01-01

    Natural Horsemanship is popular among many amateur and professional trainers and as such, has been the subject of recent scientific enquiry. One method commonly adopted by Natural Horsemanship (NH) trainers is that of round pen training (RPT). RPT sessions are usually split into a series of bouts; each including two phases: chasing/flight and chasing offset/flight offset. However, NH training styles are heterogeneous. This study investigated online videos of RPT to explore the characteristics of RPT sessions and test for differences in techniques and outcomes between amateurs and professionals (the latter being defined as those with accompanying online materials that promote clinics, merchandise or a service to the public). From more than 300 candidate videos, we selected sample files for individual amateur (n = 24) and professional (n = 21) trainers. Inclusion criteria were: training at liberty in a Round Pen; more than one bout and good quality video. Sessions or portions of sessions were excluded if the trainer attached equipment, such as a lunge line, directly to the horse or the horse was saddled, mounted or ridden. The number of bouts and duration of each chasing and non-chasing phase were recorded, and the duration of each RPT session was calculated. General weighted regression analysis revealed that professionals showed fewer arm movements per bout than amateurs did. Overall, these findings highlight the need for selectivity when using the internet as an educational source and the importance of trainer skill and excellent timing when using negative reinforcement in horse training.

  11. The application of video image processing to quantitative analysis of extremity tremor in humans.

    Science.gov (United States)

    Swider, M

    1998-10-01

    A new method for the detection and quantification of extremity tremor is described, based on video image processing. A single CCD camera recorded the movement of the extremity. A passive marker in the shape of a black annulus was placed on the forearm and the movement of the annulus analysed. The framestore digitised the video signal at a sample rate of 10 Hz. The time period of the movement analysis was delta t = 6.4 s. A total of 32 adults with alcoholism and 22 controls participated in this study. The movement of the extremity was recorded during the usual neurological test (sitting posture, feet together, upper extremities directly in front of subject) for both extremities. In this study, it was assumed that the probability density function f(d) of some variable D is characteristic of the tremor. This function is formed by a finite mixture of bivariate continuous distributions. The results suggest that f(d) characterises patients with alcoholism and distinguishes them from control subjects with only physiological tremor. The results demonstrate the capacity of the measuring system based on video imaging to quantify motor impairments in clinical neurology.

  12. Composite behavior analysis for video surveillance using hierarchical dynamic Bayesian networks

    Science.gov (United States)

    Cheng, Huanhuan; Shan, Yong; Wang, Runsheng

    2011-03-01

    Analyzing composite behaviors involving objects from multiple categories in surveillance videos is a challenging task due to the complicated relationships among humans and objects. This paper presents a novel behavior analysis framework using a hierarchical dynamic Bayesian network (DBN) for video surveillance systems. The model is built for extracting objects' behaviors and their relationships by representing behaviors using spatial-temporal characteristics. The recognition of object behaviors is processed by the DBN at multiple levels: features of objects at low level, objects and their relationships at middle level, and event at high level, where event refers to behaviors of a single type object as well as behaviors consisting of several types of objects such as "a person getting in a car." Furthermore, to reduce the complexity, a simple model selection criterion is addressed, by which the appropriate model is picked out from a pool of candidate models. Experiments are shown to demonstrate that the proposed framework could efficiently recognize and semantically describe composite object and human activities in surveillance videos.

  13. Automated ultrasound edge-tracking software comparable to established semi-automated reference software for carotid intima-media thickness analysis.

    Science.gov (United States)

    Shenouda, Ninette; Proudfoot, Nicole A; Currie, Katharine D; Timmons, Brian W; MacDonald, Maureen J

    2017-04-26

    Many commercial ultrasound systems are now including automated analysis packages for the determination of carotid intima-media thickness (cIMT); however, details regarding their algorithms and methodology are not published. Few studies have compared their accuracy and reliability with previously established automated software, and those that have were in asymptomatic adults. Therefore, this study compared cIMT measures from a fully automated ultrasound edge-tracking software (EchoPAC PC, Version 110.0.2; GE Medical Systems, Horten, Norway) to an established semi-automated reference software (Artery Measurement System (AMS) II, Version 1.141; Gothenburg, Sweden) in 30 healthy preschool children (ages 3-5 years) and 27 adults with coronary artery disease (CAD; ages 48-81 years). For both groups, Bland-Altman plots revealed good agreement with a negligible mean cIMT difference of -0·03 mm. Software differences were statistically, but not clinically, significant for preschool images (P = 0·001) and were not significant for CAD images (P = 0·09). Intra- and interoperator repeatability was high and comparable between software for preschool images (ICC, 0·90-0·96; CV, 1·3-2·5%), but slightly higher with the automated ultrasound than the semi-automated reference software for CAD images (ICC, 0·98-0·99; CV, 1·4-2·0% versus ICC, 0·84-0·89; CV, 5·6-6·8%). These findings suggest that the automated ultrasound software produces valid cIMT values in healthy preschool children and adults with CAD. Automated ultrasound software may be useful for ensuring consistency among multisite research initiatives or large cohort studies involving repeated cIMT measures, particularly in adults with documented CAD. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
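
    The agreement statistics quoted here follow standard Bland-Altman arithmetic; a minimal sketch (paired per-image cIMT values assumed as input) is given below.

```python
import numpy as np

def bland_altman(auto_cimt_mm, manual_cimt_mm):
    """Mean difference (bias) and 95% limits of agreement between two methods,
    given paired measurements of the same images."""
    a = np.asarray(auto_cimt_mm, dtype=float)
    m = np.asarray(manual_cimt_mm, dtype=float)
    diff = a - m
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```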

  14. Motion based parsing for video from observational psychology

    Science.gov (United States)

    Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray

    2006-01-01

    In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.

  15. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Bagnoli, F.

    implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  16. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  17. UAV : Warnings From Multiple Automated Static Analysis Tools At A Glance

    NARCIS (Netherlands)

    Buckers, T.B.; Cao, C.S.; Doesburg, M.S.; Gong, Boning; Wang, Sunwei; Beller, M.M.; Zaidman, A.E.; Pinzger, Martin; Bavota, Gabriele; Marcus, Andrian

    2017-01-01

    Automated Static Analysis Tools (ASATs) are an integral part of today’s software quality assurance practices. At present, a plethora of ASATs exist, each with different strengths. However, there is little guidance for developers on which of these ASATs to choose and combine for a project. As a

  18. Software Tool for Automated Failure Modes and Effects Analysis (FMEA) of Hydraulic Systems

    DEFF Research Database (Denmark)

    Stecki, J. S.; Conrad, Finn; Oh, B.

    2002-01-01

    management techniques and a vast array of computer aided techniques are applied during design and testing stages. The paper present and discusses the research and development of a software tool for automated failure mode and effects analysis - FMEA - of hydraulic systems. The paper explains the underlying...

  19. Development of a novel and automated fluorescent immunoassay for the analysis of beta-lactam antibiotics

    NARCIS (Netherlands)

    Benito-Pena, E.; Moreno-Bondi, M.C.; Orellana, G.; Maquieira, K.; Amerongen, van A.

    2005-01-01

    An automated immunosensor for the rapid and sensitive analysis of penicillin-type β-lactam antibiotics has been developed and optimized. An immunogen was prepared by coupling the common structure of the penicillanic β-lactam antibiotics, i.e., 6-aminopenicillanic acid, to keyhole limpet hemocyanin.

  20. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    NARCIS (Netherlands)

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.

    2012-01-01

    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with

  1. Une Analyse automatique en syntaxe textuelle (An Automated Analysis of Textual Syntax). Publication K-5.

    Science.gov (United States)

    Ladouceur, Jacques

    This study reports the use of automated textual analysis on a French novel. An introductory section chronicles the history of artificial intelligence, focusing on its use with natural languages, and discusses its application to textual syntax. The first chapter examines computational linguistics in greater detail, looking at its relationship to…

  2. Qualitative Video Analysis of Track-Cycling Team Pursuit in World-Class Athletes.

    Science.gov (United States)

    Sigrist, Samuel; Maier, Thomas; Faiss, Raphael

    2017-11-01

    Track-cycling team pursuit (TP) is a highly technical effort involving 4 athletes completing 4 km from a standing start, often in less than 240 s. Transitions between athletes leading the team are obviously of utmost importance. To perform qualitative video analyses of transitions of world-class athletes in TP competitions. Videos captured at 100 Hz were recorded for 77 races (including 96 different athletes) in 5 international track-cycling competitions (eg, UCI World Cups and World Championships) and analyzed for the 12 best teams in the UCI Track Cycling TP Olympic ranking. During TP, 1013 transitions were evaluated individually to extract quantitative (eg, average lead time, transition number, length, duration, height in the curve) and qualitative (quality of transition start, quality of return at the back of the team, distance between third and returning rider score) variables. Determination of correlation coefficients between extracted variables and end time allowed assessment of relationships between variables and relevance of the video analyses. Overall quality of transitions and end time were significantly correlated (r = .35, P = .002). Similarly, transition distance (r = .26, P = .02) and duration (r = .35, P = .002) were positively correlated with end time. Conversely, no relationship was observed between transition number, average lead time, or height reached in the curve and end time. Video analysis of TP races highlights the importance of quality transitions between riders, with preferably swift and short relays rather than longer lead times for faster race times.

  3. Design and Prototype of an Automated Column-Switching HPLC System for Radiometabolite Analysis.

    Science.gov (United States)

    Vasdev, Neil; Collier, Thomas Lee

    2016-08-17

    Column-switching high performance liquid chromatography (HPLC) is extensively used for the critical analysis of radiolabeled ligands and their metabolites in plasma. However, the lack of streamlined apparatus and consequently varying protocols remain as a challenge among positron emission tomography laboratories. We report here the prototype apparatus and implementation of a fully automated and simplified column-switching procedure to allow for the easy and automated determination of radioligands and their metabolites in up to 5 mL of plasma. The system has been used with conventional UV and coincidence radiation detectors, as well as with a single quadrupole mass spectrometer.

  4. Application of fluorescence-based semi-automated AFLP analysis in barley and wheat

    DEFF Research Database (Denmark)

    Schwarz, G.; Herz, M.; Huang, X.Q.

    2000-01-01

    of semi-automated codominant analysis for hemizygous AFLP markers in an F-2 population was too low, proposing the use of dominant allele-typing defaults. Nevertheless, the efficiency of genetic mapping, especially of complex plant genomes, will be accelerated by combining the presented genotyping......Genetic mapping and the selection of closely linked molecular markers for important agronomic traits require efficient, large-scale genotyping methods. A semi-automated multifluorophore technique was applied for genotyping AFLP marker loci in barley and wheat. In comparison to conventional P-33...

  5. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    Science.gov (United States)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images were examined containing 255 diatom particles. Of the 255 diatom particles present, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, thus highlighting that the software has an approximate five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  6. High-speed video analysis of forward and backward spattered blood droplets

    Science.gov (United States)

    Comiskey, Patrick; Yarin, Alexander; Attinger, Daniel

    2017-11-01

    High-speed videos of blood spatter due to a gunshot taken by the Ames Laboratory Midwest Forensics Resource Center are analyzed. The videos used in this analysis were focused on a variety of targets hit by a bullet which caused either forward, backward, or both types of blood spatter. The analysis process utilized particle image velocimetry and particle analysis software to measure drop velocities as well as the distributions of the number of droplets and their respective side view area. This analysis revealed that forward spatter results in drops travelling twice as fast compared to backward spatter, while both types of spatter contain drops of approximately the same size. Moreover, the close-to-cone domain in which drops are issued is larger in forward spatter than in the backward one. The inclination angle of the bullet as it penetrates the target is seen to play a significant role in the directional preference of the spattered blood. Also, the aerodynamic drop-drop interaction, muzzle gases, bullet impact angle, as well as the aerodynamic wake of the bullet are seen to greatly influence the flight of the drops. The aim of this study is to provide a quantitative basis for current and future research on bloodstain pattern analysis. This work was financially supported by the United States National Institute of Justice (award NIJ 2014-DN-BXK036).

  7. Scoring of radiation-induced micronuclei in cytokinesis-blocked human lymphocytes by automated image analysis

    International Nuclear Information System (INIS)

    Verhaegen, F.; Seuntjens, J.; Thierens, H.

    1994-01-01

    The micronucleus assay in human lymphocytes is, at present, frequently used to assess chromosomal damage caused by ionizing radiation or mutagens. Manual scoring of micronuclei (MN) by trained personnel is very time-consuming, tiring work, and the results depend on subjective interpretation of scoring criteria. More objective scoring can be accomplished only if the test can be automated. Furthermore, an automated system allows scoring of large numbers of cells, thereby increasing the statistical significance of the results. This is of special importance for screening programs for low doses of chromosome-damaging agents. In this paper, the first results of our effort to automate the micronucleus assay with an image-analysis system are presented. The method we used is described in detail, and the results are compared to those of other groups. Our system is able to detect 88% of the binucleated lymphocytes on the slides. The procedure consists of a fully automated localization of binucleated cells and counting of the MN within these cells, followed by a simple and fast manual operation in which the false positives are removed. Preliminary measurements for blood samples irradiated with a dose of 1 Gy X-rays indicate that the automated system can find 89% ± 12% of the micronuclei within the binucleated cells compared to a manual screening. 18 refs., 8 figs., 1 tab

  8. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    Science.gov (United States)

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

    To the average driver, the concept of automation in driving suggests that they can become completely 'hands and feet free'. This is a common misconception, however, one that has been shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and the application of quantitative network metrics, this agent-based modelling paper shows how the role of the driver remains an integral part of the driving system, indicating the need for designers to ensure drivers are provided with the tools necessary to remain actively in-the-loop despite being given increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.

  9. High Throughput Light Absorber Discovery, Part 1: An Algorithm for Automated Tauc Analysis.

    Science.gov (United States)

    Suram, Santosh K; Newhouse, Paul F; Gregoire, John M

    2016-11-14

    High-throughput experimentation provides efficient mapping of composition-property relationships, and its implementation for the discovery of optical materials enables advancements in solar energy and other technologies. In a high throughput pipeline, automated data processing algorithms are often required to match experimental throughput, and we present an automated Tauc analysis algorithm for estimating band gap energies from optical spectroscopy data. The algorithm mimics the judgment of an expert scientist, which is demonstrated through its application to a variety of high throughput spectroscopy data, including the identification of indirect or direct band gaps in Fe2O3, Cu2V2O7, and BiVO4. The applicability of the algorithm to estimate a range of band gap energies for various materials is demonstrated by a comparison of direct-allowed band gaps estimated by expert scientists and by automated algorithm for 60 optical spectra.
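
    The published algorithm mimics expert judgment in selecting the linear Tauc segment; the sketch below shows only the basic Tauc construction with a naive fitting window (the steepest region), and the transition exponent and window width are illustrative assumptions rather than the authors' parameters.

```python
import numpy as np

def tauc_band_gap(energy_eV, absorbance, transition="direct"):
    """Estimate a band gap from a Tauc plot, (alpha*h*nu)^r versus h*nu,
    with r = 2 for direct-allowed and r = 1/2 for indirect-allowed transitions.
    Absorbance is used here as a crude proxy for the absorption coefficient."""
    r = 2.0 if transition == "direct" else 0.5
    e = np.asarray(energy_eV, dtype=float)
    y = (np.asarray(absorbance, dtype=float) * e) ** r
    # Fit a line to the region around the maximum slope and extrapolate to zero.
    dy = np.gradient(y, e)
    i = int(np.argmax(dy))
    lo, hi = max(i - 5, 0), min(i + 5, len(e))
    slope, intercept = np.polyfit(e[lo:hi], y[lo:hi], 1)
    return -intercept / slope  # x-intercept = band gap estimate (eV)
```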

  10. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    Science.gov (United States)

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We

  11. PP025. Urinary dipstick proteinuria testing - Does automated strip analysis offer an advantage over visual testing?

    Science.gov (United States)

    De Silva, D A; Halstead, C; Côté, A-M; Sabr, Y; von Dadelszen, P; Magee, L A

    2012-07-01

    The visual urinary test strip is widely accepted for screening for proteinuria in pregnancy, given the convenience of the method and its low cost. However, test strips are known to lack sensitivity and specificity. The 2010 NICE (National Institute for Health and Clinical Excellence) guidelines for management of pregnancy hypertension have recommended the use of an automated test strip reader to confirm proteinuria (http://nice.org.uk/CG107). Superior diagnostic test performance of an automated (vs. visual) method has been proposed based on reduced subjectivity. To compare the diagnostic test properties of automated vs. visual read urine dipstick testing for detection of a random protein:creatinine ratio (PrCr) of ⩾30 mg/mmol. In this prospective cohort study, consecutive inpatients or outpatients (obstetric medicine and high-risk maternity clinics) were evaluated at a tertiary care facility. Random midstream urine samples (obtained as part of normal clinical care) were split into two aliquots. The first underwent point-of-care testing for proteinuria using both visual (Multistix 10SG, Siemens Healthcare Diagnostics, Inc., Tarrytown NY) and automated (Chemstrip 10A, Roche Diagnostics, Laval QC) test strips, the latter read by an analyser (Urisys 1100®, Roche Diagnostics, Laval QC). The second aliquot was sent to the hospital laboratory for analysis of urinary protein using a pyrocatechol violet molybdate dye-binding method, and urinary creatinine using an enzymatic method, both on an automated analyser (Vitros® 5,1 FS or Vitros® 5600, Ortho-Clinical Diagnostics, Rochester NY); random PrCr ratios were calculated in the laboratory. Dilute samples, identified by low urinary creatinine concentration, were excluded from the analysis. Both visual and automated-read urinary dipstick testing showed low sensitivity (56.0% and 53.9%, respectively). Positive likelihood ratios (LR+) and 95% CI were 15.0 [5.9,37.9] and 24.6 [7.6,79.6], respectively. Negative LR (LR-) were 0.46 [0
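
    The test properties reported here (sensitivity, LR+, LR-) follow from a standard 2 x 2 table against the PrCr reference standard; a minimal sketch of that arithmetic is given below (confidence intervals omitted, counts are hypothetical inputs).

```python
def diagnostic_properties(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2 x 2 table,
    e.g. dipstick proteinuria (>= 1+) against a PrCr >= 30 mg/mmol reference."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return sensitivity, specificity, lr_pos, lr_neg
```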

  12. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as, smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable for more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting features points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real-speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and events detection about 90%.

  13. Recent developments in the dissolution and automated analysis of plutonium and uranium for safeguards measurements

    International Nuclear Information System (INIS)

    Jackson, D.D.; Marsh, S.F.; Rein, J.E.; Waterbury, G.R.

    1976-01-01

    The status of a programme to develop assay methods for plutonium and uranium for safeguards purposes is presented. The current effort is directed more towards analyses of scrap-type material with an end goal of precise automated methods that also will be applicable to product materials. A guiding philosophy for the analysis of scrap-type materials, characterized by heterogeneity and difficult dissolution, is relatively fast dissolution treatment to carry out 90% or more solubilization of the uranium and plutonium, analysis of the soluble fraction by precise automated methods, and gamma-counting assay of any residue fraction using simple techniques. A Teflon-container metal-shell apparatus provides acid dissolutions of typical fuel-cycle materials at temperatures to 275 °C and pressures to 340 atm. Gas-solid reactions at elevated temperatures are promising for separating uranium from refractory materials by the formation of volatile uranium compounds. The condensed compounds then are dissolved in acid for subsequent analysis. An automated spectrophotometer has been placed in operation for the determination of uranium and plutonium. The measurement range is 1 to 14 mg of either element with a relative standard deviation of 0.5% over most of the range. The throughput rate is 5 min per sample. A second-generation automated instrument, which will use a precise and specific electroanalytical method as its operational basis, is being developed for the determination of plutonium. (author)

  14. Recent developments in the dissolution and automated analysis of plutonium and uranium for safeguards measurements

    International Nuclear Information System (INIS)

    Jackson, D.D.; Marsh, S.F.; Rein, J.E.; Waterbury, G.R.

    1975-01-01

    The status of a program to develop assay methods for plutonium and uranium for safeguards purposes is presented. The current effort is directed more toward analyses of scrap-type material with an end goal of precise automated methods that also will be applicable to product materials. A guiding philosophy for the analysis of scrap-type materials, characterized by heterogeneity and difficult dissolution, is relatively fast dissolution treatment to effect 90 percent or more solubilization of the uranium and plutonium, analysis of the soluble fraction by precise automated methods, and gamma-counting assay of any residue fraction using simple techniques. A Teflon-container metal-shell apparatus provides acid dissolutions of typical fuel cycle materials at temperatures to 275 °C and pressures to 340 atm. Gas-solid reactions at elevated temperatures separate uranium from refractory materials by the formation of volatile uranium compounds. The condensed compounds then are dissolved in acid for subsequent analysis. An automated spectrophotometer is used for the determination of uranium and plutonium. The measurement range is 1 to 14 mg of either element with a relative standard deviation of 0.5 percent over most of the range. The throughput rate is 5 min per sample. A second-generation automated instrument is being developed for the determination of plutonium. A precise and specific electroanalytical method is used as its operational basis. (auth)

  15. Video analysis of the biomechanics of a bicycle accident resulting in significant facial fractures.

    Science.gov (United States)

    Syed, Shameer H; Willing, Ryan; Jenkyn, Thomas R; Yazdani, Arjang

    2013-11-01

    This study aimed to use video analysis techniques to determine the velocity, impact force, angle of impact, and impulse to fracture involved in a video-recorded bicycle accident resulting in facial fractures. Computed tomographic images of the resulting facial injury are presented for correlation with data and calculations. To our knowledge, such an analysis of an actual recorded trauma has not been reported in the literature. A video recording of the accident was split into frames and analyzed using an image editing program. Measurements of velocity and angle of impact were obtained from this analysis, and the force of impact and impulse were calculated using the inverse dynamic method with connected rigid body segments. These results were then correlated with the actual fracture pattern found on computed tomographic imaging of the subject's face. There was an impact velocity of 6.25 m/s, impact angles of 14 and 6.3 degrees of neck extension and axial rotation, respectively, an impact force of 1910.4 N, and an impulse to fracture of 47.8 Ns. These physical parameters resulted in clinically significant bilateral mid-facial Le Fort II and III pattern fractures. These data confer further understanding of the biomechanics of bicycle-related accidents by correlating an actual clinical outcome with the kinematic and dynamic parameters involved in the accident itself and yielding concrete evidence of the velocity, force, and impulse necessary to cause clinically significant facial trauma. These findings can aid in the design of protective equipment for bicycle riders to help avoid this type of injury.
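
    The study itself used an inverse dynamic model of connected rigid body segments; the point-mass sketch below only illustrates the frame-based arithmetic for approach speed, impulse and average impact force, with all inputs (marker positions, frame rate, effective mass, contact duration) as hypothetical assumptions.

```python
def impact_kinematics(positions_m, fps, effective_mass_kg, contact_frames):
    """positions_m: marker positions (metres, along the direction of travel) in
    successive frames before impact. Returns approach speed, impulse (assuming
    the segment is brought to rest) and average force over the contact time."""
    dt = 1.0 / fps
    speeds = [abs(positions_m[i + 1] - positions_m[i]) / dt
              for i in range(len(positions_m) - 1)]
    v_impact = speeds[-1]                        # speed in the last pre-impact interval
    impulse = effective_mass_kg * v_impact       # N*s, assuming velocity drops to zero
    contact_time = contact_frames * dt
    avg_force = impulse / contact_time           # N
    return v_impact, impulse, avg_force
```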

  16. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    Science.gov (United States)

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time consuming and error prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM) to determine rosetting rate, rosette size distribution as well as parasitaemia with a convenient graphical user interface is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second reliably delivering identical results compared to manual analysis. For the first time rosette size distribution is determined in a precise and quantitative manner employing ARAM in combination with established inhibition tests. Additionally ARAM measures the essential observables parasitaemia, rosetting rate and size as well as location of all detected objects and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria specific, analysis mode of ARAM offers the functionality to detect arbitrary objects

  17. Braking Deceleration Measurement Using the Video Analysis of Motions by Sw Tracker

    Directory of Open Access Journals (Sweden)

    Ondruš Ján

    2015-06-01

    Full Text Available This contribution deals with the issue of car braking, particularly for vehicles of the M1 category. Braking deceleration of a Mazda 3 MPS was measured with the decelerograph XL Meter™ Pro. The main aim of the contribution is to compare the braking deceleration obtained from the decelerograph with that obtained from the new alternative method of video analysis, and to examine both processes. The test took place at the Rosina airfield, an airstrip in a small village near the town of Žilina. The last part of this paper presents the results, evaluation and comparison of the measurements carried out.
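
    The video-analysis side of such a comparison amounts to differentiating the tracked vehicle position twice; a minimal numpy sketch is shown below, assuming the frame rate is known and the positions have already been calibrated to metres (e.g. by the Tracker software).

```python
import numpy as np

def braking_deceleration(x_m, fps):
    """x_m: vehicle positions (metres) tracked frame by frame.
    Returns per-frame velocity and acceleration by numerical differentiation;
    acceleration is negative while the vehicle brakes."""
    t = np.arange(len(x_m)) / fps
    v = np.gradient(np.asarray(x_m, dtype=float), t)   # m/s
    a = np.gradient(v, t)                               # m/s^2
    return v, a
```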

  18. Rocket engine plume diagnostics using video digitization and image processing - Analysis of start-up

    Science.gov (United States)

    Disimile, P. J.; Shoe, B.; Dhawan, A. P.

    1991-01-01

    Video digitization techniques have been developed to analyze the exhaust plume of the Space Shuttle Main Engine. Temporal averaging and a frame-by-frame analysis provide data used to evaluate the capabilities of image processing techniques for use as measurement tools. Capabilities include the determination of the time required for the Mach disk to reach a fully-developed state. Other results show the Mach disk tracks the nozzle for short time intervals, and that dominant frequencies exist for the nozzle and Mach disk movement.

  19. Using high speed smartphone cameras and video analysis techniques to teach mechanical wave physics

    Science.gov (United States)

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-07-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses along a spring and the period of transverse standing waves generated in the same spring. These experiments can be helpful in addressing several relevant concepts about the physics of mechanical waves and in overcoming some of the typical student misconceptions in this same field.

  20. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    Full Text Available We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter. For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness. This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.
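
    The two shape descriptors extracted per vesicle (projected diameter and isoperimetric quotient) can be computed from a segmented binary mask as sketched below with scikit-image; the vesicle segmentation itself, which the paper performs on video-microscopy frames, is assumed to have been done already.

```python
import numpy as np
from skimage import measure

def vesicle_shape_metrics(binary_mask, um_per_px):
    """Projected (area-equivalent) diameter in micrometres and isoperimetric
    quotient Q = 4*pi*A / P^2 (Q = 1 for a circle) for each segmented vesicle."""
    metrics = []
    for region in measure.regionprops(measure.label(binary_mask)):
        area_um2 = region.area * um_per_px ** 2
        perimeter_um = region.perimeter * um_per_px
        diameter_um = 2.0 * np.sqrt(area_um2 / np.pi)
        q = 4.0 * np.pi * area_um2 / perimeter_um ** 2 if perimeter_um > 0 else 0.0
        metrics.append({"diameter_um": diameter_um, "isoperimetric_quotient": q})
    return metrics
```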

  1. NREM Arousal Parasomnias and Their Distinction from Nocturnal Frontal Lobe Epilepsy: A Video EEG Analysis

    Science.gov (United States)

    Derry, Christopher P.; Harvey, A. Simon; Walker, Matthew C.; Duncan, John S.; Berkovic, Samuel F.

    2009-01-01

    Study Objectives. To describe the semiological features of NREM arousal parasomnias in detail and identify features that can be used to reliably distinguish parasomnias from nocturnal frontal lobe epilepsy (NFLE). Design. Systematic semiological evaluation of parasomnias and NFLE seizures recorded on video-EEG monitoring. Patients. 120 events (57 parasomnias, 63 NFLE seizures) from 44 subjects (14 males). Interventions. The presence or absence of 68 elemental clinical features was determined in parasomnias and NFLE seizures. Qualitative analysis of behavior patterns and ictal EEG was undertaken. Statistical analysis was undertaken using established techniques. Results. Elemental clinical features strongly favoring parasomnias included: interactive behavior, failure to wake after event, and indistinct offset. Qualitative analysis supported the classical arousal behaviors, including night terrors, as prototypical behavior patterns of NREM parasomnias, forming a hierarchical continuum rather than distinct entities. Our observations provide an evidence base to assist in the clinical diagnosis of NREM parasomnias, and their distinction from NFLE seizures, on semiological grounds. Citation: Derry CP; Harvey AS; Walker MC; Duncan JS; Berkovic SF. NREM arousal parasomnias and their distinction from nocturnal frontal lobe epilepsy: a video EEG analysis. SLEEP 2009;32(12):1637-1644. PMID:20041600

  2. Evaluation of an automated analysis for pain-related evoked potentials

    Directory of Open Access Journals (Sweden)

    Wulf Michael

    2017-09-01

    Full Text Available This paper presents initial steps towards an automated analysis for pain-related evoked potentials (PREP) to achieve higher objectivity and non-biased examination as well as a reduction in the time expended during clinical daily routines. In manual examination, each epoch of an ensemble of stimulus-locked EEG signals, elicited by electrical stimulation of predominantly intra-epidermal small nerve fibers and recorded over the central electrode (Cz), is inspected for artifacts before the PREP is calculated by averaging the artifact-free epochs. Afterwards, specific peak latencies (such as the P0, N1 and P1 latencies) are identified as certain extrema in the PREP's waveform. The proposed automated analysis uses Pearson's correlation and low-pass differentiation to perform these tasks. To evaluate the automated analysis' accuracy, its results on 232 datasets were compared to the results of the manually performed examination. Results of the automated artifact rejection were comparable to the manual examination. Detection of peak latencies was more heterogeneous, indicating some sensitivity of the detected events to the criteria used during data examination.
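
    A rough sketch of the two steps named in the record (correlation-based artifact rejection followed by smoothing and differentiation to locate peak latencies) is given below; the correlation threshold, filter settings and rejection criterion are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import savgol_filter

def prep_average(epochs, min_corr=0.2):
    """epochs: array (n_epochs, n_samples) of stimulus-locked Cz segments.
    Epochs whose Pearson correlation with the grand average falls below
    min_corr are treated as artifacts and excluded before re-averaging."""
    grand = epochs.mean(axis=0)
    keep = [e for e in epochs if np.corrcoef(e, grand)[0, 1] >= min_corr]
    if not keep:                      # fall back to all epochs if everything is rejected
        keep = list(epochs)
    return np.mean(keep, axis=0)

def peak_latencies(prep, fs):
    """Locate extrema of the smoothed PREP via sign changes of its derivative."""
    smooth = savgol_filter(prep, window_length=31, polyorder=3)  # illustrative settings
    d = np.diff(smooth)
    crossings = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    return crossings / fs  # latencies in seconds
```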

  3. Automated tool for virtual screening and pharmacology-based pathway prediction and analysis

    Directory of Open Access Journals (Sweden)

    Sugandh Kumar

    2017-10-01

    Full Text Available The virtual screening is an effective tool for the lead identification in drug discovery. However, there are limited numbers of crystal structures available as compared to the number of biological sequences which makes (Structure Based Drug Discovery SBDD a difficult choice. The current tool is an attempt to automate the protein structure modelling and automatic virtual screening followed by pharmacology-based prediction and analysis. Starting from sequence(s, this tool automates protein structure modelling, binding site identification, automated docking, ligand preparation, post docking analysis and identification of hits in the biological pathways that can be modulated by a group of ligands. This automation helps in the characterization of ligands selectivity and action of ligands on a complex biological molecular network as well as on individual receptor. The judicial combination of the ligands binding different receptors can be used to inhibit selective biological pathways in a disease. This tool also allows the user to systemically investigate network-dependent effects of a drug or drug candidate.

  4. Automated acquisition and analysis of small angle X-ray scattering data

    International Nuclear Information System (INIS)

    Franke, Daniel; Kikhney, Alexey G.; Svergun, Dmitri I.

    2012-01-01

    Small Angle X-ray Scattering (SAXS) is a powerful tool in the study of biological macromolecules providing information about the shape, conformation, assembly and folding states in solution. Recent advances in robotic fluid handling make it possible to perform automated high throughput experiments including fast screening of solution conditions, measurement of structural responses to ligand binding, changes in temperature or chemical modifications. Here, an approach to full automation of SAXS data acquisition and data analysis is presented, which advances automated experiments to the level of a routine tool suitable for large scale structural studies. The approach links automated sample loading, primary data reduction and further processing, facilitating queuing of multiple samples for subsequent measurement and analysis and providing means of remote experiment control. The system was implemented and comprehensively tested in user operation at the BioSAXS beamlines X33 and P12 of EMBL at the DORIS and PETRA storage rings of DESY, Hamburg, respectively, but is also easily applicable to other SAXS stations due to its modular design.

  5. A content analysis of the portrayal of alcohol in televised music videos in New Zealand: changes over time.

    Science.gov (United States)

    Sloane, Kate; Wilson, Nick; Imlach Gunasekara, Fiona

    2013-01-01

    We aimed to: (i) document the extent and nature of alcohol portrayal in televised music videos in New Zealand in 2010; and (ii) assess trends over time by comparing with a similar 2005 sample. We undertook a content analysis for references to alcohol in 861 music videos shown on a youth-orientated television channel in New Zealand. This was compared with a sample in 2005 (564 music videos on the same channel plus sampling from two other channels). The proportion of alcohol content in the music videos was slightly higher in 2010 than for the same channel in the 2005 sample (19.5% vs. 15.7%) but this difference was not statistically significant. Only in the genre 'Rhythm and Blues' was the increase over time significant (P = 0.015). In both studies, the portrayal of alcohol was significantly more common in music videos where the main artist was international (not from New Zealand). Furthermore, in the music videos with alcohol content, at least a third of the time, alcohol was shown being consumed and the main artist was involved with alcohol. In only 2% (in 2005) and 4% (in 2010) of these videos was the tone explicitly negative towards alcohol. In both these studies, the portrayal of alcohol was relatively common in music videos. Nevertheless, there are various ways that policy makers can denormalise alcohol in youth-orientated media such as music videos or to compensate via other alcohol control measures such as higher alcohol taxes. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  6. Frame-by-Frame Video Analysis of Idiosyncratic Reach-to-Grasp Movements in Humans.

    Science.gov (United States)

    Karl, Jenni M; Kuntz, Jessica R; Lenhart, Layne A; Whishaw, Ian Q

    2018-01-15

    Prehension, the act of reaching to grasp an object, is central to the human experience. We use it to feed ourselves, groom ourselves, and manipulate objects and tools in our environment. Such behaviors are impaired by many sensorimotor disorders, yet our current understanding of their neural control is far from complete. Current technologies for investigating human reach-to-grasp movements often utilize motion tracking systems that can be expensive, require the attachment of markers or sensors to the hands, impede natural movement and sensory feedback, and provide kinematic output that can be difficult to interpret. While generally effective for studying the stereotypical reach-to-grasp movements of healthy sighted adults, many of these technologies face additional limitations when attempting to study the unpredictable and idiosyncratic reach-to-grasp movements of young infants, unsighted adults, and patients with neurological disorders. Thus, we present a novel, inexpensive, and highly reliable yet flexible protocol for quantifying the temporal and kinematic structure of idiosyncratic reach-to-grasp movements in humans. High speed video cameras capture multiple views of the reach-to-grasp movement. Frame-by-frame video analysis is then used to document the timing and magnitude of pre-defined behavioral events such as movement start, collection, maximum height, peak aperture, first contact, and final grasp. The temporal structure of the movement is reconstructed by documenting the relative frame number of each event while the kinematic structure of the hand is quantified using the ruler or measure function in photo editing software to calibrate 2 dimensional linear distances between two body parts or between a body part and the target. Frame-by-frame video analysis can provide a quantitative and comprehensive description of idiosyncratic reach-to-grasp movements and will enable researchers to expand their area of investigation to include a greater range of
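
    The temporal and kinematic reconstruction described here is straightforward arithmetic on frame numbers and calibrated pixel distances; the sketch below illustrates it with hypothetical event names and marker coordinates (these names are not from the published protocol).

```python
import math

def event_times(event_frames, fps):
    """Convert frame numbers of behavioural events (e.g. movement start, peak
    aperture, first contact, final grasp) into times relative to movement start."""
    start = event_frames["start"]
    return {name: (frame - start) / fps for name, frame in event_frames.items()}

def aperture_mm(thumb_px, index_px, mm_per_px):
    """Calibrated 2-D distance between thumb and index fingertip markers."""
    dx = thumb_px[0] - index_px[0]
    dy = thumb_px[1] - index_px[1]
    return math.hypot(dx, dy) * mm_per_px

# Example usage with hypothetical values:
# times = event_times({"start": 120, "peak_aperture": 152, "contact": 171}, fps=120)
# gap = aperture_mm((412, 233), (455, 260), mm_per_px=0.35)
```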

  7. Paediatric palliative care by video consultation at home: a cost minimisation analysis

    Science.gov (United States)

    2014-01-01

    Background In the vast state of Queensland, Australia, access to specialist paediatric services is only available in the capital city of Brisbane, and is limited in regional and remote locations. During home-based palliative care, it is not always desirable or practical to move a patient to attend appointments, and so access to care may be even further limited. To address these problems, at the Royal Children's Hospital (RCH) in Brisbane, a Home Telehealth Program (HTP) has been successfully established to provide palliative care consultations to families throughout Queensland. Methods A cost minimisation analysis was undertaken to compare the actual costs of the HTP consultations with the estimated potential costs associated with face-to-face consultations occurring as either (i) hospital-based consultations in the outpatients department at the RCH, or (ii) home visits from the Paediatric Palliative Care Service. The analysis was undertaken from the perspective of the Children's Health Service. The analysis was based on data from 95 home video consultations which occurred over a two-year period, and included projected costs associated with: clinician time and travel; costs reimbursed to families for travel through the Patients Travel Subsidy (PTS) scheme; hospital outpatient clinic costs; and project co-ordination, equipment and infrastructure costs. The mean costs per consultation were calculated for each approach. Results Air travel (n = 24) significantly affected the results. The mean cost of the HTP intervention was $294 and required no travel. The estimated mean cost per consultation in the hospital outpatient department was $748. The mean cost of home visits per consultation was $1214. Video consultation in the home is the most economical method of providing a consultation. The largest costs avoided to the health service are those associated with clinician time required for travel and the PTS scheme. Conclusion While face-to-face consultations are

  8. GenePublisher: automated analysis of DNA microarray data

    DEFF Research Database (Denmark)

    Knudsen, Steen; Workman, Christopher; Sicheritz-Ponten, T.

    2003-01-01

    GenePublisher, a system for automatic analysis of data from DNA microarray experiments, has been implemented with a web interface at http://www.cbs.dtu.dk/services/GenePublisher. Raw data are uploaded to the server together with a specification of the data. The server performs normalization, statistical analysis and visualization of the data. The results are run against databases of signal transduction pathways, metabolic pathways and promoter sequences in order to extract more information. The results of the entire analysis are summarized in report form and returned to the user.

  9. A qualitative analysis of methotrexate self-injection education videos on YouTube.

    Science.gov (United States)

    Rittberg, Rebekah; Dissanayake, Tharindri; Katz, Steven J

    2016-05-01

    The aim of this study is to identify and evaluate the quality of videos for patients available on YouTube for learning to self-administer subcutaneous methotrexate. Using the search term "Methotrexate injection," two clinical reviewers analyzed the first 60 videos on YouTube. Source and search rank of video, audience interaction, video duration, and time since video was uploaded on YouTube were recorded. Videos were classified as useful, misleading, or a personal patient view. Videos were rated for reliability, comprehensiveness, and global quality scale (GQS). Reasons for misleading videos were documented, and patient videos were documented as being either positive or negative towards methotrexate (MTX) injection. Fifty-one English videos overlapped between the two geographic locations; 10 videos were classified as useful (19.6 %), 14 misleading (27.5 %), and 27 personal patient view (52.9 %). Total views of videos were 161,028: 19.2 % useful, 72.8 % patient, and 8.0 % misleading. Mean GQS: 4.2 (±1.0) useful, 1.6 (±1.1) misleading, and 2.0 (±0.9) for patient videos (p tool available, clinicians need to be familiar with specific resources to help guide and educate their patients to ensure best outcomes.

  10. Automation of Safety Analysis with SysML Models Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project was a small proof-of-concept case study, generating SysML model information as a side effect of safety analysis. A prototype FMEA Assistant was...

  11. Automated cryogenic collection of carbon dioxide for stable isotope analysis and carbon-14 accelerator mass spectrometry dating

    International Nuclear Information System (INIS)

    Brenninkmeijer, C.A.M.

    1988-01-01

    A vacuum-powered high-vacuum glass valve has been used to develop gas sample bottles with automated taps. The automated, cryogenic systems have performed well for CO₂ collection for mass spectrometric analysis of ¹³C and tandem accelerator mass spectrometry of ¹⁴C

  12. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    Science.gov (United States)

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis, which represents the current clinical standard approach, of the PET phantom scans by four experts. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom
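
    The candidate-detection step described above relies on scale-space blob detection, in which bright spherical inserts appear as maxima at a scale related to their diameter. The sketch below illustrates only that step with scikit-image's Laplacian-of-Gaussian detector, not the authors' full model-based pipeline; the input file and sigma range are assumptions.

```python
# Illustrative scale-space candidate detection on a single transaxial PET slice.
import numpy as np
from skimage.feature import blob_log

pet_slice = np.load("pet_slice.npy")            # 2-D array of activity values (hypothetical file)
pet_slice = pet_slice / pet_slice.max()         # blob_log expects roughly [0, 1] intensities

# Spheres of different diameters appear as bright blobs at different scales.
blobs = blob_log(pet_slice, min_sigma=2, max_sigma=20, num_sigma=10, threshold=0.1)
for y, x, sigma in blobs:
    radius_px = sigma * np.sqrt(2)              # approximate blob radius in pixels
    print(f"candidate insert at ({x:.0f}, {y:.0f}), radius ≈ {radius_px:.1f} px")
```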

  13. Video-assisted Thoracoscope versus Video-assisted Mini-thoracotomy for Non-small Cell Lung Cancer: A Meta-analysis

    Directory of Open Access Journals (Sweden)

    Bing WANG

    2017-05-01

    Full Text Available Background and objective The aim of this study is to assess the effect of video-assisted thoracoscopic surgery (VATS) and video-assisted mini-thoracotomy (VAMT) in the treatment of non-small cell lung cancer (NSCLC). Methods We searched PubMed, EMbase, CNKI, VIP and ISI Web of Science to collect randomized controlled trials (RCTs) of VATS versus VAMT for NSCLC. Each database was searched from May 2006 to May 2016. Two reviewers independently assessed the quality of the included studies and extracted relevant data, using RevMan 5.3 meta-analysis software. Results We finally identified 13 RCTs involving 1,605 patients, with 815 patients in the VATS group and 790 patients in the VAMT group. Statistically significant differences were found in harvested lymph nodes (SMD=-0.48, 95%CI: -0.80 to -0.17), operating time (SMD=13.56, 95%CI: 4.96 to 22.16), operative bleeding volume (SMD=-33.68, 95%CI: -45.70 to -21.66), chest tube placement time (SMD=-1.05, 95%CI: -1.48 to -0.62), chest tube drainage flow (SMD=-83.69, 95%CI: -143.33 to -24.05), postoperative pain scores (SMD=-1.68, 95%CI: -1.98 to -1.38) and postoperative hospital stay (SMD=-2.27, 95%CI: -3.23 to -1.31). No statistically significant difference was found in postoperative complications (SMD=0.83, 95%CI: 0.54 to 1.29) or postoperative mortality (SMD=0.95, 95%CI: 0.55 to 1.63) between video-assisted thoracoscopic lobectomy and video-assisted mini-thoracotomy lobectomy in the treatment of NSCLC. Conclusion Compared with video-assisted mini-thoracotomy lobectomy for non-small cell lung cancer, video-assisted thoracoscopic lobectomy showed similar rates of postoperative complications and mortality, but differed in harvested lymph nodes, operating time, blood loss, chest tube drainage flow and postoperative hospital stay. VATS is safe and effective in the treatment of NSCLC.
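
    Pooled standardized mean differences of the kind reported above are typically obtained by inverse-variance weighting of the per-trial effect sizes. The sketch below shows a generic fixed-effect pooling of SMDs; the trial values are hypothetical and this is not a reproduction of the RevMan analysis.

```python
import math

# Hypothetical per-trial standardized mean differences and their standard errors.
trials = [(-0.35, 0.18), (-0.62, 0.22), (-0.41, 0.15)]   # (SMD, SE) per RCT

weights = [1.0 / se**2 for _, se in trials]               # inverse-variance weights
pooled = sum(w * smd for (smd, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"fixed-effect pooled SMD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```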

  14. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    Full Text Available This paper provides an analysis of multiplexed video signal transmission through a Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where the degradation of transmission quality starts. On the basis of this algorithm, simulations of transmission are executed for specific values of the fiber parameters. The analysis of the results shows how the BER and Q-factor change with the length of the fiber, i.e. with the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the software package OptiSystem 7.0, for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.
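
    For a Gaussian-noise channel, the Q-factor and BER reported by such simulations are related by BER ≈ ½·erfc(Q/√2). The sketch below evaluates this standard relation for a few Q values; it is a generic illustration, not output from the OptiSystem model.

```python
import math

def ber_from_q(q: float) -> float:
    """Approximate bit error rate for a given linear Q-factor (Gaussian noise assumption)."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

for q in (3.0, 6.0, 7.0):
    print(f"Q = {q:.1f}  ->  BER ≈ {ber_from_q(q):.2e}")
# Q ≈ 6 corresponds to a BER of about 1e-9, a common acceptance threshold for optical links.
```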

  15. Large-Scale Automated Analysis of Location Patterns in Randomly-Tagged 3T3 Cells

    Science.gov (United States)

    Osuna, Elvira García; Hua, Juchang; Bateman, Nicholas W.; Zhao, Ting; Berget, Peter B.; Murphy, Robert F.

    2010-01-01

    Location proteomics is concerned with the systematic analysis of the subcellular location of proteins. In order to perform high-resolution, high-throughput analysis of all protein location patterns, automated methods are needed. Here we describe the use of such methods on a large collection of images obtained by automated microscopy to perform high-throughput analysis of endogenous proteins randomly-tagged with a fluorescent protein in NIH 3T3 cells. Cluster analysis was performed to identify the statistically significant location patterns in these images. This allowed us to assign a location pattern to each tagged protein without specifying what patterns are possible. To choose the best feature set for this clustering, we have used a novel method that determines which features do not artificially discriminate between control wells on different plates and uses Stepwise Discriminant Analysis (SDA) to determine which features do discriminate as much as possible among the randomly-tagged wells. Combining this feature set with consensus clustering methods resulted in 35 clusters among the first 188 clones we obtained. This approach represents a powerful automated solution to the problem of identifying subcellular locations on a proteome-wide basis for many different cell types. PMID:17285363

  16. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    Directory of Open Access Journals (Sweden)

    Alfonso Baldi

    2010-03-01

    Full Text Available Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible to the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems.

  17. Integrating graph partitioning and matching for trajectory analysis in video surveillance.

    Science.gov (United States)

    Lin, Liang; Lu, Yongyi; Pan, Yan; Chen, Xiaowu

    2012-12-01

    In order to track moving objects over long ranges against occlusion, interruption, and background clutter, this paper proposes a unified approach for global trajectory analysis. Instead of the traditional frame-by-frame tracking, our method recovers target trajectories based on a short sequence of video frames, e.g., 15 frames. We initially calculate a foreground map at each frame obtained from a state-of-the-art background model. An attribute graph is then extracted from the foreground map, where the graph vertices are image primitives represented by the composite features. With this graph representation, we pose trajectory analysis as a joint task of spatial graph partitioning and temporal graph matching. The task can be formulated as maximizing a posterior probability under the Bayesian framework, in which we integrate the spatio-temporal contexts and the appearance models. The probabilistic inference is achieved by a data-driven Markov chain Monte Carlo algorithm. Given a period of observed frames, the algorithm simulates an ergodic and aperiodic Markov chain, and it visits a sequence of solution states in the joint space of spatial graph partitioning and temporal graph matching. In the experiments, our method is tested on several challenging videos from the public datasets of visual surveillance, and it outperforms the state-of-the-art methods.
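
    The per-frame foreground maps that seed the graph representation can be produced with any standard background-subtraction model. As a generic illustration (not the specific background model used by the authors), the sketch below extracts foreground masks from a short clip with OpenCV; the video file name is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("surveillance_clip.avi")        # placeholder input video
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)

foreground_maps = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg_model.apply(frame)                        # 0 = background, 255 = moving foreground
    mask = cv2.medianBlur(mask, 5)                      # suppress isolated noise pixels
    foreground_maps.append(mask)
cap.release()
print(f"extracted {len(foreground_maps)} foreground maps")
```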

  18. A review of machine-vision-based analysis of wireless capsule endoscopy video.

    Science.gov (United States)

    Chen, Yingju; Lee, Jeongkyu

    2012-01-01

    Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on the research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.

  19. A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

    Directory of Open Access Journals (Sweden)

    Yingju Chen

    2012-01-01

    Full Text Available Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on the research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.

  20. Analysis of automated external defibrillator device failures reported to the Food and Drug Administration.

    Science.gov (United States)

    DeLuca, Lawrence A; Simpson, Allan; Beskind, Dan; Grall, Kristi; Stoneking, Lisa; Stolz, Uwe; Spaite, Daniel W; Panchal, Ashish R; Denninghoff, Kurt R

    2012-02-01

    Automated external defibrillators are essential for treatment of cardiac arrest by lay rescuers and must determine when to shock and if they are functioning correctly. We seek to characterize automated external defibrillator failures reported to the Food and Drug Administration (FDA) and whether battery failures are properly detected by automated external defibrillators. FDA adverse event reports are catalogued in the Manufacturer and User Device Experience (MAUDE) database. We developed and internally validated an instrument for analyzing MAUDE data, reviewing all reports in which a fatality occurred. Two trained reviewers independently analyzed each report, and a third resolved discrepancies or passed them to a committee for resolution. One thousand two hundred eighty-four adverse events were reported between June 1993 and October 2008, of which 1,150 were failed defibrillation attempts. Thirty-seven automated external defibrillators never powered on, 252 failed to complete rhythm analysis, and 524 failed to deliver a recommended shock. In 149 cases, the operator disagreed with the device's rhythm analysis. In 54 cases, the defibrillator stated the batteries were low and in 110 other instances powered off unexpectedly. Interrater agreement between reviewers 1 and 2 ranged by question from 69.0% to 98.6% and for most likely cause was 55.9%. Agreement was obtained for 93.7% to 99.6% of questions by the third reviewer. Remaining discrepancies were resolved by the arbitration committee. MAUDE information is often incomplete and frequently no corroborating data are available. Some conditions not detected by automated external defibrillators during self-test cause units to power off unexpectedly, causing defibrillation delays. Backup units frequently provide shocks to patients. Copyright © 2011 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  1. Automated retroillumination photography analysis for objective assessment of Fuchs Corneal Dystrophy severity

    Science.gov (United States)

    Eghrari, Allen O.; Mumtaz, Aisha A.; Garrett, Brian; Rezaei, Mahsa; Akhavan, Mina S.; Riazuddin, S. Amer; Gottsch, John D.

    2016-01-01

    Purpose Retroillumination photography analysis (RPA) is an objective tool for assessment of the number and distribution of guttae in eyes affected with Fuchs Corneal Dystrophy (FCD). Current protocols include manual processing of images; here we assess validity and interrater reliability of automated analysis across various levels of FCD severity. Methods Retroillumination photographs of 97 FCD-affected corneas were acquired, and total counts of guttae had previously been summated manually. For each cornea, a single image was loaded into ImageJ software. We reduced color variability and subtracted background noise. Reflection of light from each gutta was identified as a local area of maximum intensity and counted automatically. Noise tolerance level was titrated for each cornea by examining a small region of each image with automated overlay to ensure appropriate coverage of individual guttae. We tested interrater reliability of automated counts of guttae across a spectrum of clinical and educational experience. Results A set of 97 retroillumination photographs were analyzed. Clinical severity as measured by a modified Krachmer scale ranged from a severity level of 1 to 5 in the set of analyzed corneas. Automated counts by an ophthalmologist correlated strongly with Krachmer grading (R2=0.79) and manual counts (R2=0.88). Intraclass correlation coefficient demonstrated strong correlation, at 0.924 (95% CI, 0.870-0.958) among cases analyzed by three students, and 0.869 (95% CI, 0.797-0.918) among cases for which images were analyzed by an ophthalmologist and two students. Conclusions Automated RPA allows for grading of FCD severity with high resolution across a spectrum of disease severity. PMID:27811565
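
    The counting step described above treats each gutta reflection as a local intensity maximum after background subtraction. A minimal re-implementation of that idea with scikit-image is sketched below; the file name, filter sizes and noise-tolerance threshold are assumptions, not the study's ImageJ settings.

```python
import numpy as np
from skimage import io, filters, morphology
from skimage.feature import peak_local_max

img = io.imread("retroillumination.png", as_gray=True)        # placeholder image file

# Subtract the slowly varying background, then smooth lightly to suppress pixel noise.
background = morphology.opening(img, morphology.disk(15))
clean = filters.gaussian(img - background, sigma=1)

# Each gutta reflection is counted as one local maximum above a noise tolerance.
peaks = peak_local_max(clean, min_distance=3, threshold_abs=0.02)
print(f"automated guttae count: {len(peaks)}")
```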

  2. Fluorescence In Situ Hybridization (FISH Signal Analysis Using Automated Generated Projection Images

    Directory of Open Access Journals (Sweden)

    Xingwei Wang

    2012-01-01

    Full Text Available Fluorescence in situ hybridization (FISH) tests provide promising molecular imaging biomarkers to more accurately and reliably detect and diagnose cancers and genetic disorders. Since current manual FISH signal analysis is inefficient and inconsistent, which limits its clinical utility, developing automated FISH image scanning systems and computer-aided detection (CAD) schemes has been attracting research interest. To acquire high-resolution FISH images in a multi-spectral scanning mode, a huge amount of image data, comprising a stack of multiple three-dimensional (3-D) image slices, is generated from a single specimen. Automated preprocessing of these scanned images to eliminate the non-useful and redundant data is important to make automated FISH tests acceptable in clinical applications. In this study, a dual-detector fluorescence image scanning system was applied to scan four specimen slides with FISH-probed chromosome X. A CAD scheme was developed to detect analyzable interphase cells and map the FISH-probed signals recorded in the multiple imaging slices into 2-D projection images. The CAD scheme was then applied to each projection image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, identify FISH-probed signals using a top-hat transform, and compute the ratios between the normal and abnormal cells. To assess CAD performance, the FISH-probed signals were also independently visually detected by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots in four testing samples. The study demonstrated the feasibility of automated FISH signal analysis by applying a CAD scheme to the automatically generated 2-D projection images.
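
    Agreement between the CAD scheme and the human observer is summarized above with Cohen's kappa. The sketch below computes kappa for a hypothetical per-cell comparison of spot classifications; the example labels are illustrative only.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-cell FISH spot classifications from the CAD scheme and the observer.
cad_labels      = ["2 spots", "2 spots", "1 spot", "2 spots", "1 spot", "2 spots", "2 spots", "1 spot"]
observer_labels = ["2 spots", "2 spots", "1 spot", "1 spot",  "1 spot", "2 spots", "2 spots", "2 spots"]

kappa = cohen_kappa_score(cad_labels, observer_labels)
print(f"Cohen's kappa (CAD vs. observer) = {kappa:.2f}")
```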

  3. Automated analysis of security requirements through risk-based argumentation

    NARCIS (Netherlands)

    Yu, Yijun; Nunes Leal Franqueira, V.; Tun, Thein Tan; Wieringa, Roelf J.; Nuseibeh, Bashar

    2015-01-01

    Computer-based systems are increasingly being exposed to evolving security threats, which often reveal new vulnerabilities. A formal analysis of the evolving threats is difficult due to a number of practical considerations such as incomplete knowledge about the design, limited information about

  4. Automated Speech and Audio Analysis for Semantic Access to Multimedia

    NARCIS (Netherlands)

    Jong, F.M.G. de; Ordelman, R.; Huijbregts, M.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  5. Automated speech and audio analysis for semantic access to multimedia

    NARCIS (Netherlands)

    de Jong, Franciska M.G.; Ordelman, Roeland J.F.; Huijbregts, M.A.H.; Avrithis, Y.; Kompatsiaris, Y.; Staab, S.; O' Connor, N.E.

    2006-01-01

    The deployment and integration of audio processing tools can enhance the semantic annotation of multimedia content, and as a consequence, improve the effectiveness of conceptual access tools. This paper overviews the various ways in which automatic speech and audio analysis can contribute to

  6. ADDIS : an automated way to do network meta-analysis

    NARCIS (Netherlands)

    Zhao, Jing; van Valkenhoef, Gert; de Brock, E.O.; Hillege, Hans

    2012-01-01

    In evidence-based medicine, meta-analysis is an important statistical technique for combining the findings from independent clinical trials which have attempted to answer similar questions about a treatment's clinical effectiveness [1]. Normally, such meta-analyses are pair-wise treatment comparisons,

  7. Alcohol, tobacco and illicit substances in music videos: a content analysis of prevalence and genre.

    Science.gov (United States)

    Gruber, Enid L; Thau, Helaine M; Hill, Douglas L; Fisher, Deborah A; Grube, Joel W

    2005-07-01

    Content analyses examined mention of alcohol, tobacco, and illicit substances in music videos (n = 359) broadcast in 2001, as well as genre and presence of humor. Findings indicated that references to illicit substances were more prevalent than tobacco in music videos. Humor was 2.5 times as likely to appear in videos containing references to substances than those without substances.

  8. The policy analysis of the film and video market in Japan

    OpenAIRE

    菅谷, 実

    2004-01-01

    Introduction; The Economic Structure of Film and Video Market; Video Market in the Broadband Age; The Japanese Video and Film Market; The Government Policy on the Production and Distribution Market; The Present Policies on Film in Japan; Starting Film Commission: Non-Government Organization; Summary and Conclusion

  9. Nurse-surgeon object transfer: video analysis of communication and situation awareness in the operating theatre.

    Science.gov (United States)

    Korkiakangas, Terhi; Weldon, Sharon-Marie; Bezemer, Jeff; Kneebone, Roger

    2014-09-01

    One of the most central collaborative tasks during surgical operations is the passing of objects, including instruments. Little is known about how nurses and surgeons achieve this. The aim of the present study was to explore what factors affect this routine-like task, resulting in fast or slow transfer of objects. A qualitative video study, informed by an observational ethnographic approach, was conducted in a major teaching hospital in the UK. A total of 20 general surgical operations were observed. In total, approximately 68 h of video data have been reviewed. A subsample of 225 min has been analysed in detail using interactional video-analysis developed within the social sciences. Two factors affecting object transfer were observed: (1) relative instrument trolley position and (2) alignment. The scrub nurse's instrument trolley position (close to vs. further back from the surgeon) and alignment (gaze direction) impacts on the communication with the surgeon, and consequently, on the speed of object transfer. When the scrub nurse was standing close to the surgeon, and "converged" to follow the surgeon's movements, the transfer occurred more seamlessly and faster (1.0 s). The smoothness of object transfer can be improved by adjusting the scrub nurse's instrument trolley position, enabling a better monitoring of surgeon's bodily conduct and affording early orientation (awareness) to an upcoming request (changing situation). Object transfer is facilitated by the surgeon's embodied practices, which can elicit the nurse's attention to the request and, as a response, maximise a faster object transfer. A simple intervention to highlight the significance of these factors could improve communication in the operating theatre. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Automated Spectral Analysis, the Virtual Observatory and Computational Grids

    Science.gov (United States)

    Jeffery, C. S.

    The newest generation of telescopes and detectors, and facilities like the Virtual Observatory (VO), are delivering vast volumes of astronomical data and creating increasing demands for their analysis and interpretation. Methods for such analyses rely heavily on computer-generated models of growing sophistication and realism. These pose two problems. First, simulations are carried out at increasingly high spatial and temporal resolution and physical dimension. Second, the dimensionality of parameter-search space continues to grow. Major computational problems include ensuring that the parameter-space volumes to be searched are physically interesting, and matching observational data efficiently without overloading the computational infrastructure. For the analysis of highly-evolved hot stars, we have developed a toolkit for the modelling of stellar atmospheres and stellar spectra. We can automatically fit observed flux distributions and/or high-resolution spectra and solve for a wide range of atmospheric parameters for both single and binary stars. The software represents a prototype for generic toolkits that could facilitate data analysis within, for example, the VO. We introduce a proposal to integrate a range of such toolkits within a heterogeneous network (such as the VO) so as to facilitate data analysis. For example, functions will be required to combine new observations with data from established archives. A goal-seeking algorithm will use this data to guide a sequence of theoretical calculations. These simulations may need to retrieve data from other sources, atomic data, pre-computed model atmospheres and so on. Such applications using widely distributed and heterogeneous resources will require the emerging technologies of computational grids.
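
    The goal-seeking fit described above amounts to minimizing the mismatch between an observed flux distribution and a parameterized model spectrum. The sketch below shows that idea generically with a toy Planck-shaped model fitted by nonlinear least squares; the model, the synthetic data and the starting guess are assumptions, not the toolkit's actual interface.

```python
import numpy as np
from scipy.optimize import least_squares

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def model_flux(wavelength_m, teff):
    """Toy model: Planck-function shape normalized to its peak, standing in for a model-atmosphere grid."""
    x = h * c / (wavelength_m * k * teff)
    flux = (1.0 / wavelength_m**5) / np.expm1(x)
    return flux / flux.max()

def residuals(params, wavelength_m, observed_flux):
    return model_flux(wavelength_m, params[0]) - observed_flux

# Hypothetical observed flux distribution: synthetic data generated at Teff = 25000 K with 2% noise.
wl = np.linspace(300e-9, 900e-9, 50)
rng = np.random.default_rng(0)
obs = model_flux(wl, 25000.0) * (1 + 0.02 * rng.standard_normal(wl.size))

fit = least_squares(residuals, x0=[15000.0], args=(wl, obs))
print(f"best-fit Teff ≈ {fit.x[0]:.0f} K")
```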

  11. Molecular Detection of Bladder Cancer by Fluorescence Microsatellite Analysis and an Automated Genetic Analyzing System

    Directory of Open Access Journals (Sweden)

    Sarel Halachmi

    2007-01-01

    Full Text Available To investigate the ability of an automated fluorescent analyzing system to detect microsatellite alterations in patients with bladder cancer, we investigated 11 patients with pathology-proven bladder transitional cell carcinoma (TCC) for microsatellite alterations in blood, urine, and tumor biopsies. DNA was prepared by standard methods from blood, urine and resected tumor specimens, and was used for microsatellite analysis. After the primers were fluorescently labeled, amplification of the DNA was performed with PCR. The PCR products were placed into the automated genetic analyser (ABI Prism 310, Perkin Elmer, USA) and were subjected to fluorescent scanning with argon ion laser beams. The fluorescent signal intensity measured by the genetic analyzer determined the product size in terms of base pairs. We found loss of heterozygosity (LOH) or microsatellite alterations (a loss or gain of nucleotides, which alter the original normal locus size) in all the patients by using fluorescent microsatellite analysis and an automated analyzing system. In each case the genetic changes found in urine samples were identical to those found in the resected tumor sample. These studies demonstrated the ability to detect bladder tumors non-invasively by fluorescent microsatellite analysis of urine samples. Our study supports the worldwide trend in the search for non-invasive methods to detect bladder cancer. We have overcome major obstacles that prevented the clinical use of an experimental system. With our newly tested system, microsatellite analysis can be done cheaper, faster, easier and with higher scientific accuracy.

  12. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential use of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms. It can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method has been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.
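
    The core idea in point 3 above — build a fixed-background image from several pictures of the same microcosm, then keep only what moves — can be reproduced with a per-pixel median over the image stack. The sketch below is a minimal NumPy/scikit-image version of that idea, not the authors' ImageJ plugin; the file names and threshold are assumptions.

```python
import numpy as np
from skimage import io, measure

# Several photographs of the same microcosm taken at different times (placeholder file names).
frames = np.stack([io.imread(f"microcosm_{i}.png", as_gray=True) for i in range(5)])

# Moving organisms occupy different pixels in each frame, so the per-pixel median
# reconstructs the fixed background (substrate plus motionless or dead individuals).
background = np.median(frames, axis=0)

# Organisms in one chosen frame appear where that frame differs from the background.
diff = np.abs(frames[0] - background)
mask = diff > 0.08                                                   # assumed contrast threshold
labels = measure.label(mask)
regions = [r for r in measure.regionprops(labels) if r.area >= 5]    # discard tiny noise specks
print(f"detected {len(regions)} moving organisms; areas (px): {[r.area for r in regions]}")
```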

  13. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal breast tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented organ. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
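
    K-means segmentation of a grayscale mammogram can be sketched by clustering pixel intensities and keeping the brightest cluster as the candidate abnormal region. The example below is a generic illustration of that step with scikit-learn; the file name and the choice of three clusters are assumptions.

```python
import numpy as np
from skimage import io
from sklearn.cluster import KMeans

img = io.imread("mammogram.png", as_gray=True)            # placeholder image file
pixels = img.reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(img.shape)

# Take the cluster with the highest mean intensity as the candidate dense/abnormal tissue.
bright_cluster = int(np.argmax(kmeans.cluster_centers_))
region = labels == bright_cluster
pixel_area = int(region.sum())
print(f"segmented region covers {pixel_area} pixels "
      f"({100.0 * pixel_area / region.size:.1f}% of the image)")
```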

  14. Automated NMR relaxation dispersion data analysis using NESSY

    Directory of Open Access Journals (Sweden)

    Gooley Paul R

    2011-10-01

    Full Text Available Abstract Background Proteins are dynamic molecules with motions ranging from picoseconds to longer than seconds. Many protein functions, however, appear to occur on the micro to millisecond timescale and therefore there has been intense research into the importance of these motions in catalysis and molecular interactions. Nuclear Magnetic Resonance (NMR) relaxation dispersion experiments are used to measure motion of discrete nuclei within the micro to millisecond timescale. Information about conformational/chemical exchange, populations of exchanging states and chemical shift differences is extracted from these experiments. To ensure these parameters are correctly extracted, accurate and careful analysis of these experiments is necessary. Results The software introduced in this article is designed for the automatic analysis of relaxation dispersion data and the extraction of the parameters mentioned above. It is written in Python for multi-platform use and high performance. Experimental data can be fitted to different models using the Levenberg-Marquardt minimization algorithm and different statistical tests can be used to select the best model. To demonstrate the functionality of this program, synthetic data as well as NMR data were analyzed. Analysis of these data, including the generation of plots and color-coded structures, can be performed with minimal user intervention using standard procedures that are included in the program. Conclusions NESSY is easy-to-use open source software to analyze NMR relaxation data. The robustness and standard procedures are demonstrated in this article.
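
    Model fitting of the kind NESSY automates can be illustrated with a two-site fast-exchange dispersion model (Luz-Meiboom) fitted by Levenberg-Marquardt least squares. The sketch below uses SciPy's curve_fit on synthetic data; the model choice and all parameter values are illustrative, not NESSY's internals.

```python
import numpy as np
from scipy.optimize import curve_fit

def luz_meiboom(nu_cpmg, r20, phi, kex):
    """Two-site fast-exchange relaxation dispersion model (Luz-Meiboom)."""
    return r20 + (phi / kex) * (1.0 - (4.0 * nu_cpmg / kex) * np.tanh(kex / (4.0 * nu_cpmg)))

# Synthetic R2,eff data at a set of CPMG frequencies (illustrative values only).
nu = np.array([50, 100, 200, 300, 400, 600, 800, 1000], dtype=float)      # Hz
rng = np.random.default_rng(1)
r2eff = luz_meiboom(nu, r20=10.0, phi=60000.0, kex=1500.0) + rng.normal(0, 0.15, nu.size)

# curve_fit uses Levenberg-Marquardt for unconstrained problems.
popt, pcov = curve_fit(luz_meiboom, nu, r2eff, p0=[8.0, 30000.0, 1000.0])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("R2_0", "Phi", "k_ex"), popt, perr):
    print(f"{name} = {val:.1f} ± {err:.1f}")
```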

  15. Satellite Imagery Analysis for Automated Global Food Security Forecasting

    Science.gov (United States)

    Moody, D.; Brumby, S. P.; Chartrand, R.; Keisler, R.; Mathis, M.; Beneke, C. M.; Nicholaeff, D.; Skillman, S.; Warren, M. S.; Poehnelt, J.

    2017-12-01

    The recent computing performance revolution has driven improvements in sensor, communication, and storage technology. Multi-decadal remote sensing datasets at the petabyte scale are now available in commercial clouds, with new satellite constellations generating petabytes/year of daily high-resolution global coverage imagery. Cloud computing and storage, combined with recent advances in machine learning, are enabling understanding of the world at a scale and at a level of detail never before feasible. We present results from an ongoing effort to develop satellite imagery analysis tools that aggregate temporal, spatial, and spectral information and that can scale with the high-rate and dimensionality of imagery being collected. We focus on the problem of monitoring food crop productivity across the Middle East and North Africa, and show how an analysis-ready, multi-sensor data platform enables quick prototyping of satellite imagery analysis algorithms, from land use/land cover classification and natural resource mapping, to yearly and monthly vegetative health change trends at the structural field level.
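
    Vegetative-health trends of the kind described are commonly derived from a normalized difference vegetation index (NDVI) computed per pixel and aggregated per field and per acquisition date. The sketch below computes NDVI from red and near-infrared reflectance bands and a field mean; the band arrays and field boundary are placeholders, not the platform's actual data model.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Placeholder reflectance bands for one scene covering a single field.
rng = np.random.default_rng(3)
nir = rng.uniform(0.3, 0.6, size=(100, 100))
red = rng.uniform(0.05, 0.2, size=(100, 100))

field_mask = np.zeros((100, 100), dtype=bool)
field_mask[20:80, 30:90] = True                     # hypothetical field boundary

field_mean = ndvi(nir, red)[field_mask].mean()
print(f"mean NDVI over the field for this acquisition: {field_mean:.2f}")
```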

  16. Automated three-dimensional X-ray analysis using a dual-beam FIB

    International Nuclear Information System (INIS)

    Schaffer, Miroslava; Wagner, Julian; Schaffer, Bernhard; Schmied, Mario; Mulders, Hans

    2007-01-01

    We present a fully automated method for three-dimensional (3D) elemental analysis demonstrated using a ceramic sample of chemistry (Ca)MgTiOₓ. The specimen is serially sectioned by a focused ion beam (FIB) microscope, and energy-dispersive X-ray spectrometry (EDXS) is used for elemental analysis of each cross-section created. A 3D elemental model is reconstructed from the stack of two-dimensional (2D) data. This work concentrates on issues arising from process automation, the large sample volume of approximately 17×17×10 μm³, and the insulating nature of the specimen. A new routine for post-acquisition data correction of different drift effects is demonstrated. Furthermore, it is shown that EDXS data may be erroneous for specimens containing voids, and that back-scattered electron images have to be used to correct for these errors

  17. Quantification of Pulmonary Fibrosis in a Bleomycin Mouse Model Using Automated Histological Image Analysis.

    Directory of Open Access Journals (Sweden)

    Jean-Claude Gilhodes

    Full Text Available Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible histological quantitative analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Measurement of fibrosis is expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that tissue density indexes gave access to a very accurate and reliable quantification of morphological changes induced by BLM, even for the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing the visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between automated analysis and the above standard evaluation methods. This correlation
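
    The tissue-density indexes described above amount to tiling the lung-section image and measuring the fraction of tissue-positive pixels per tile. A minimal sketch of that measurement is given below; the file name, tile size, staining threshold and high-density cutoff are assumptions, not the study's software.

```python
import numpy as np
from skimage import io

img = io.imread("lung_section.png", as_gray=True)      # placeholder digital slide image
tile = 64                                               # assumed micro-tile size in pixels
tissue = img < 0.8                                      # assumed threshold: stained tissue is darker than background

densities = []
for r in range(0, img.shape[0] - tile + 1, tile):
    for c in range(0, img.shape[1] - tile + 1, tile):
        densities.append(tissue[r:r + tile, c:c + tile].mean())
densities = np.array(densities)

mean_density = densities.mean()                         # mean pulmonary tissue density
high_density_freq = (densities > 0.5).mean()            # frequency of high-density tiles (assumed cutoff)
print(f"mean tissue density: {mean_density:.2f}; high-density tile frequency: {high_density_freq:.2f}")
```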

  18. Automated Astrophysical False Positive Analysis of Transiting Planet Signals

    Science.gov (United States)

    Morton, Timothy

    2015-08-01

    Beginning with Kepler, but continuing with K2 and TESS, transiting planet candidates are now found at a much faster rate than follow-up observations can be obtained. Thus, distinguishing true planet candidates from astrophysical false positives has become primarily a statistical exercise. I will describe a new publicly available open-source Python package for analyzing the astrophysical false positive probabilities of transiting exoplanet signals. In addition, I will present results of applying this analysis to both Kepler and K2 planet candidates, resulting in the probabilistic validation of thousands of exoplanets, as well as identifying many likely false positives.

  19. When experiment and energy conservation collide: video analysis of an unrolling mat

    Science.gov (United States)

    Mungan, Carl E.; Lipscombe, Trevor C.

    2018-03-01

    A mat consisting of round bamboo rods connected by strings perpendicular to their axes unrolls without slipping on a horizontal table. Video analysis is used to measure the position of the centre of the remaining roll as a function of time. It is found to accelerate with time due to the ‘rocket effect’ of the roll ejecting rods backward relative to itself. Mechanical energy is not conserved because of the inelastic collisions of the rods with the table. The fitted coefficient of restitution (COR) is 0.59 ± 0.04 which is consistent with known values for wood on wood. In support of this explanation, progressively smaller values of the COR are found when the mat is unrolled on a flat woven rug and on a shock-absorbing pad. The level of analysis is appropriate to an undergraduate course in physical mechanics.
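
    Position-time data digitized from such a video can be checked for acceleration with a simple quadratic fit, where twice the leading coefficient estimates the (roughly constant) acceleration over the clip. The sketch below does this with NumPy on hypothetical tracked positions, not the authors' measured data.

```python
import numpy as np

# Hypothetical frame-by-frame positions of the roll centre (time in s, position in m).
t = np.linspace(0.0, 0.8, 25)
x = 0.05 * t + 0.5 * 1.8 * t**2 + np.random.default_rng(2).normal(0, 0.002, t.size)

coeffs = np.polyfit(t, x, deg=2)        # x(t) ≈ a2*t^2 + a1*t + a0
acceleration = 2.0 * coeffs[0]
print(f"fitted acceleration ≈ {acceleration:.2f} m/s^2")
```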

  20. Automated economic analysis model for hazardous waste minimization

    International Nuclear Information System (INIS)

    Dharmavaram, S.; Mount, J.B.; Donahue, B.A.

    1990-01-01

    The US Army has established a policy of achieving a 50 percent reduction in hazardous waste generation by the end of 1992. To assist the Army in reaching this goal, the Environmental Division of the US Army Construction Engineering Research Laboratory (USACERL) designed the Economic Analysis Model for Hazardous Waste Minimization (EAHWM). The EAHWM was designed to allow the user to evaluate the life cycle costs for various techniques used in hazardous waste minimization and to compare them to the life cycle costs of current operating practices. The program was developed in C language on an IBM-compatible PC and is consistent with other pertinent models for performing economic analyses. The potential hierarchical minimization categories used in EAHWM include source reduction, recovery and/or reuse, and treatment. Although treatment is no longer an acceptable minimization option, its use is widespread and has therefore been addressed in the model. The model allows for economic analysis of minimization of the Army's six most important hazardous waste streams. These include solvents, paint-stripping wastes, metal-plating wastes, industrial waste sludges, used oils, and batteries and battery electrolytes. The EAHWM also includes a general application which can be used to calculate and compare the life cycle costs for minimization alternatives of any waste stream, hazardous or non-hazardous. The EAHWM has been fully tested and implemented in more than 60 Army installations in the United States
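
    A life-cycle-cost comparison of the kind EAHWM performs can be sketched as the present value of capital plus recurring annual costs over the analysis period, computed for each alternative. The example below compares a waste-minimization alternative against current practice; all figures, the period and the discount rate are hypothetical, not values from the model.

```python
def life_cycle_cost(capital, annual_cost, years, discount_rate):
    """Present value of capital plus recurring annual costs over the analysis period."""
    pv_annual = sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))
    return capital + pv_annual

current_practice = life_cycle_cost(capital=0.0,       annual_cost=120_000.0, years=10, discount_rate=0.05)
solvent_recovery = life_cycle_cost(capital=250_000.0, annual_cost=40_000.0,  years=10, discount_rate=0.05)

print(f"current practice LCC:   ${current_practice:,.0f}")
print(f"solvent recovery LCC:   ${solvent_recovery:,.0f}")
print(f"net life-cycle savings: ${current_practice - solvent_recovery:,.0f}")
```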

  1. Automated sequence analysis and editing software for HIV drug resistance testing.

    Science.gov (United States)

    Struck, Daniel; Wallis, Carole L; Denisov, Gennady; Lambert, Christine; Servais, Jean-Yves; Viana, Raquel V; Letsoalo, Esrom; Bronze, Michelle; Aitken, Sue C; Schuurman, Rob; Stevens, Wendy; Schmit, Jean Claude; Rinke de Wit, Tobias; Perez Bercoff, Danielle

    2012-05-01

    Access to antiretroviral treatment in resource-limited settings is inevitably paralleled by the emergence of HIV drug resistance. Monitoring treatment efficacy and HIV drug resistance testing are therefore of increasing importance in resource-limited settings. Yet low-cost technologies and procedures suited to the particular context and constraints of such settings are still lacking. The ART-A (Affordable Resistance Testing for Africa) consortium brought together public and private partners to address this issue. The aim was to develop automated sequence analysis and editing software to support high-throughput automated sequencing. The ART-A Software was designed to automatically process and edit ABI chromatograms or FASTA files from HIV-1 isolates. The ART-A Software performs the basecalling, assigns quality values, aligns query sequences against a reference set, infers a consensus sequence, identifies the HIV type and subtype, translates the nucleotide sequence to amino acids and reports insertions/deletions, premature stop codons, ambiguities and mixed calls. The results can be automatically exported to Excel to identify mutations. Automated analysis was compared to manual analysis using a panel of 1624 PR-RT sequences generated in 3 different laboratories. Discrepancies between manual and automated sequence analysis were 0.69% at the nucleotide level and 0.57% at the amino acid level (668,047 AA analyzed), and discordances at major resistance mutations were recorded in 62 cases (4.83% of differences, 0.04% of all AA) for PR and 171 cases (6.18% of differences, 0.03% of all AA) for RT. The ART-A Software is a time-sparing tool for pre-analyzing HIV and viral quasispecies sequences in high-throughput laboratories and highlighting positions requiring attention. Copyright © 2012 Elsevier B.V. All rights reserved.
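
    One of the reported steps, translating the nucleotide consensus and flagging premature stop codons, can be illustrated with Biopython. The sketch below is a generic example of that check, not the ART-A code; the example sequence is made up.

```python
from Bio.Seq import Seq

# Made-up nucleotide consensus for illustration; a real input would be a PR or RT consensus sequence.
nt_consensus = "ATGGGTAAGTGATTTGGCACC"          # contains an in-frame TGA stop codon

protein = str(Seq(nt_consensus).translate())    # '*' marks stop codons
internal = protein[:-1] if protein.endswith("*") else protein
if "*" in internal:
    position = internal.index("*") + 1
    print(f"premature stop codon at amino-acid position {position}: {protein}")
else:
    print(f"no premature stop codon: {protein}")
```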

  2. Data for automated, high-throughput microscopy analysis of intracellular bacterial colonies using spot detection

    DEFF Research Database (Denmark)

    Ernstsen, Christina L; Login, Frédéric H; Jensen, Helene H

    2017-01-01

    Quantification of intracellular bacterial colonies is useful in strategies directed against bacterial attachment, subsequent cellular invasion and intracellular proliferation. An automated, high-throughput microscopy-method was established to quantify the number and size of intracellular bacterial...... of cell nuclei were automatically quantified using a spot detection-tool. The spot detection-output was exported to Excel, where data analysis was performed. In this article, micrographs and spot detection data are made available to facilitate implementation of the method....

  3. Automated Quantitative Bone Analysis in In Vivo X-ray Micro-Computed Tomography.

    Science.gov (United States)

    Behrooz, Ali; Kask, Peet; Meganck, Jeff; Kempner, Joshua

    2017-09-01

    Measurement and analysis of bone morphometry in 3D micro-computed tomography volumes using automated image processing and analysis improve the accuracy, consistency, reproducibility, and speed of preclinical osteological research studies. Automating segmentation and separation of individual bones in 3D micro-computed tomography volumes of murine models presents significant challenges considering partial volume effects and joints with thin spacing. In this paper, novel hybrid splitting filters are presented to overcome the challenge of automated bone separation. This is achieved by enhancing joint contrast using rotationally invariant second-derivative operators. These filters generate split components that seed marker-controlled watershed segmentation. In addition, these filters can be used to separate metaphysis and epiphysis in long bones, e.g., femur, and remove the metaphyseal growth plate from the detected bone mask in morphometric measurements. Moreover, for slice-by-slice stereological measurements of long bones, particularly curved bones such as the tibia, the accuracy of the analysis can be improved if the planar measurements are guided to follow the longitudinal direction of the bone. In this paper, an approach is presented for characterizing the bone medial axis using morphological thinning and centerline operations. Building upon the medial axis, a novel framework is presented to automatically guide stereological measurements of long bones and enhance measurement accuracy and consistency. These image processing and analysis approaches are combined in an automated streamlined software workflow and applied to a range of in vivo micro-computed tomography studies for validation.
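
    The medial-axis step used to guide the slice-by-slice measurements can be illustrated on a binary bone mask with scikit-image, which combines morphological thinning with a distance transform. The example below is a generic sketch of that operation, not the vendor software; the mask file is a placeholder.

```python
from skimage import io
from skimage.morphology import medial_axis

bone_mask = io.imread("tibia_mask.png") > 0            # placeholder binary bone mask (one 2-D slice)

# medial_axis returns the skeleton plus the distance of each pixel to the nearest boundary.
skeleton, distance = medial_axis(bone_mask, return_distance=True)
centerline_radii = distance[skeleton]

print(f"centerline pixels: {skeleton.sum()}")
print(f"mean local half-thickness along the centerline: {centerline_radii.mean():.1f} px")
```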

  4. The impact of air pollution on the level of micronuclei measured by automated image analysis

    Czech Academy of Sciences Publication Activity Database

    Rössnerová, Andrea; Špátová, Milada; Rossner, P.; Solanský, I.; Šrám, Radim

    2009-01-01

    Roč. 669, 1-2 (2009), s. 42-47 ISSN 0027-5107 R&D Projects: GA AV ČR 1QS500390506; GA MŠk 2B06088; GA MŠk 2B08005 Institutional research plan: CEZ:AV0Z50390512 Keywords : micronuclei * binucleated cells * automated image analysis Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 3.556, year: 2009

  5. An Automated Bayesian Framework for Integrative Gene Expression Analysis and Predictive Medicine

    OpenAIRE

    Parikh, Neena; Zollanvari, Amin; Alterovitz, Gil

    2012-01-01

    Motivation: This work constructs a closed loop Bayesian Network framework for predictive medicine via integrative analysis of publicly available gene expression findings pertaining to various diseases. Results: An automated pipeline was successfully constructed. Integrative models were made based on gene expression data obtained from GEO experiments relating to four different diseases using Bayesian statistical methods. Many of these models demonstrated a high level of accuracy and predictive...

  6. An analysis of lecture video utilization in undergraduate medical education: associations with performance in the courses

    Directory of Open Access Journals (Sweden)

    Chandrasekhar Arcot

    2009-01-01

    Full Text Available Abstract Background Increasing numbers of medical schools are providing videos of lectures to their students. This study sought to analyze utilization of lecture videos by medical students in their basic science courses and to determine if student utilization was associated with performance on exams. Methods Streaming videos of lectures (n = 149) to first year and second year medical students (n = 284) were made available through a password-protected server. Server logs were analyzed over a 10-week period for both classes. For each lecture, the logs recorded time and location from which students accessed the file. A survey was administered at the end of the courses to obtain additional information about student use of the videos. Results There was a wide disparity in the level of use of lecture videos by medical students, with the majority of students accessing the lecture videos sparingly (60% of the students viewed less than 10% of the available videos). The anonymous student survey revealed that students tended to view the videos by themselves from home during weekends and prior to exams. Students who accessed lecture videos more frequently had significantly (p Conclusion We conclude that videos of lectures are used by relatively few medical students and that individual use of videos is associated with the degree to which students are having difficulty with the subject matter.

  7. Weighted cross-correlation based variational optical flow for gastric flow analysis in ultrasonic videos.

    Science.gov (United States)

    Chen, Chaojie; Wang, Yuanyuan; Yu, Jinhua; Zhou, Zhuyu; Shen, Li; Chen, Yaqing

    2013-05-01

    Estimating the fluid motion in ultrasonic videos is a crucial step in the analysis of duodenogastric reflux. Severe image noise and illumination changes in the pyloric region (the region of interest) challenge the accurate estimation of gastric flow. In this paper, the authors propose an illumination-robust optical flow method based on the weighted cross-correlation. Cross-correlation was combined with the variational optical method framework as an illumination-robust local feature identifier. In consideration of accuracy near edges, they constructed visual similarity weights according to the characteristics of ultrasonic images. A processing procedure containing coarse-to-fine step and refinement was designed to get the final results. They tested the proposed method on synthetic and real ultrasonic images and compared it with other three optical flow methods. For quantitative evaluation, two metrics of angular and amplitude error were used. The synthetic results demonstrate that the proposed method performs better on ultrasonic images, with angular error of 4.1° and amplitude error of 3.3%. In qualitative comparison, the proposed method kept the motion field smooth in the homogeneous region while preserving edge information. When they used the results of the proposed method to judge the gastric flow direction, the automatic judgments agreed well with visual observation. The proposed method is a good tool for image velocimetry in ultrasonic images. It provides promising results to estimate the motion of gastric flow in ultrasonic videos.
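
    Dense optical flow between consecutive ultrasound frames can be estimated with any standard method as a baseline for this kind of velocimetry. The sketch below uses OpenCV's Farnebäck algorithm as a generic baseline, not the authors' weighted cross-correlation variational method; the video file name is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("gastric_ultrasound.avi")       # placeholder ultrasound clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Parameters (positional): pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print(f"mean flow magnitude in this frame pair: {magnitude.mean():.2f} px/frame")
    prev_gray = gray
cap.release()
```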

  8. Exposure to violent video games and aggression in German adolescents: a longitudinal analysis.

    Science.gov (United States)

    Möller, Ingrid; Krahé, Barbara

    2009-01-01

    The relationship between exposure to violent electronic games and aggressive cognitions and behavior was examined in a longitudinal study. A total of 295 German adolescents completed the measures of violent video game usage, endorsement of aggressive norms, hostile attribution bias, and physical as well as indirect/relational aggression cross-sectionally, and a subsample of N=143 was measured again 30 months later. Cross-sectional results at T1 showed a direct relationship between violent game usage and aggressive norms, and an indirect link to hostile attribution bias through aggressive norms. In combination, exposure to game violence, normative beliefs, and hostile attribution bias predicted physical and indirect/relational aggression. Longitudinal analyses using path analysis showed that violence exposure at T1 predicted physical (but not indirect/relational) aggression 30 months later, whereas aggression at T1 was unrelated to later video game use. Exposure to violent games at T1 influenced physical (but not indirect/relational) aggression at T2 via an increase of aggressive norms and hostile attribution bias. The findings are discussed in relation to social-cognitive explanations of long-term effects of media violence on aggression. Copyright 2008 Wiley-Liss, Inc.

  9. Mechanisms of injuries in World Cup Snowboard Cross: a systematic video analysis of 19 cases.

    Science.gov (United States)

    Bakken, Arnhild; Bere, Tone; Bahr, Roald; Kristianslund, Eirik; Nordsletten, Lars

    2011-12-01

    Snowboard cross (SBX) became an official Olympic sport in 2006. This discipline includes manoeuvring several obstacles while competing in heats. It is common for the riders to collide, making this sport both exciting and at risk of injuries. Although a recent study from the 2010 Olympic Games has shown that the injury risk was high, little is known about the injury mechanisms. To qualitatively describe the injury situation and mechanism of injuries in World Cup Snowboard Cross. Descriptive video analysis. Nineteen video recordings of SBX injuries reported through the International Ski Federation Injury Surveillance System for four World Cup seasons (2006 to 2010) were obtained. Five experts in the field of sports medicine, snowboard and biomechanics analysed each case to describe the injury mechanism in detail (riding situation and rider behaviour). Injuries occurred at jumping (n=13), bank turning (n=5) or rollers (n=1). The primary cause of the injuries was a technical error at take-off, resulting in an excessively high jump and subsequent flat landing. The rider was then unable to recover, leading to a fall at the time of injury. Injuries at bank turns were characterised by a pattern in which the rider, in a balanced position, lost control due to unintentional contact with another rider. Jumping appeared to be the most challenging obstacle in SBX, where a technical error at take-off was the primary cause of the injuries. The second most common inciting event was unintentional board contact between riders at bank turning.

  10. Automated Fetal Heart Rate Analysis in Labor: Decelerations and Overshoots

    Science.gov (United States)

    Georgieva, A. E.; Payne, S. J.; Moulden, M.; Redman, C. W. G.

    2010-10-01

    Electronic fetal heart rate (FHR) recording is a standard way of monitoring fetal health in labor. Decelerations and accelerations usually indicate fetal distress and normality, respectively. One type of acceleration, however, may differ: the overshoot, which may atypically reflect fetal stress. Here we describe a new method for detecting decelerations, accelerations and overshoots as part of a novel system for computerized FHR analysis (OxSyS). There was poor agreement between clinicians when identifying these FHR features visually, which precluded setting a gold standard of interpretation. We therefore introduced 'modified' Sensitivity (SE°) and 'modified' Positive Predictive Value (PPV°) as appropriate performance measures with which the algorithm was optimized. The relation between overshoots and fetal compromise in labor was studied in 15 cases and 15 controls. Overshoots showed promise as an indicator of fetal compromise. Unlike ordinary accelerations, overshoots cannot be considered reassuring features of fetal health.
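
    The record above does not spell out the OxSyS detection criteria, so the sketch below is only a naive illustration of flagging candidate decelerations: dips of at least 15 bpm below a slowly varying baseline lasting at least 15 s, which are common rule-of-thumb thresholds assumed here, not necessarily those used by OxSyS.

      # Illustrative sketch only -- not the OxSyS algorithm. Flags candidate
      # decelerations as dips >= 15 bpm below a crude baseline lasting >= 15 s
      # (assumed rule-of-thumb thresholds).
      import numpy as np

      def candidate_decelerations(fhr, fs=4.0, drop_bpm=15.0, min_s=15.0, baseline_win_s=600.0):
          """fhr: FHR samples in bpm at fs Hz. Returns (start, end) sample indices."""
          # Crude baseline: centred moving median over a long window.
          win = int(baseline_win_s * fs) | 1          # force an odd window length
          pad = win // 2
          padded = np.pad(fhr, pad, mode="edge")
          baseline = np.array([np.median(padded[i:i + win]) for i in range(len(fhr))])

          below = (baseline - fhr) >= drop_bpm
          events, start = [], None
          for i, flag in enumerate(below):
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  if (i - start) / fs >= min_s:
                      events.append((start, i))
                  start = None
          if start is not None and (len(fhr) - start) / fs >= min_s:
              events.append((start, len(fhr)))
          return events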

  11. Application of quantum dots as analytical tools in automated chemical analysis: a review.

    Science.gov (United States)

    Frigerio, Christian; Ribeiro, David S M; Rodrigues, S Sofia M; Abreu, Vera L R G; Barbosa, João A C; Prior, João A V; Marques, Karine L; Santos, João L M

    2012-07-20

    Colloidal semiconductor nanocrystals or quantum dots (QDs) are one of the most relevant developments in the fast-growing world of nanotechnology. Initially proposed as luminescent biological labels, they are finding important new fields of application in analytical chemistry, where their photoluminescent properties have been exploited in environmental monitoring, pharmaceutical and clinical analysis and food quality control. Despite the enormous variety of applications that have been developed, the automation of QD-based analytical methodologies through tools such as continuous flow analysis and related techniques is hitherto very limited. Such automation would make it possible to exploit particular features of the nanocrystals: their versatile surface chemistry and ligand-binding ability, their aptitude to generate reactive species, and the possibility of encapsulating them in different materials while retaining their native luminescence, which provides the means for implementing renewable chemosensors or even for using more drastic, stability-impairing reaction conditions. In this review, we provide insights into the analytical potential of quantum dots, focusing on prospects for their utilisation in automated flow-based and flow-related approaches and the future outlook of QD applications in chemical analysis. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Trends and applications of integrated automated ultra-trace sample handling and analysis (T9)

    International Nuclear Information System (INIS)

    Kingston, H.M.S.; Ye Han; Stewart, L.; Link, D.

    2002-01-01

    Full text: Automated analysis, sub-ppt detection limits, and the trend toward speciated analysis (rather than just elemental analysis) force the innovation of sophisticated and integrated sample preparation and analysis techniques. Traditionally, the ability to handle samples at ppt and sub-ppt levels has been limited to clean laboratories and special sample handling techniques and equipment. The world of sample handling has passed a threshold where older, 'old fashioned' traditional techniques no longer provide the ability to see the sample, owing to the influence of the analytical blank and the fragile nature of the analyte. When samples require decomposition, extraction, separation and manipulation, newer, more sophisticated sample handling systems are emerging that enable ultra-trace analysis and species manipulation. In addition, new instrumentation has emerged which integrates sample preparation and analysis to enable on-line, near real-time analysis. Examples of these newer sample-handling methods will be discussed and current examples provided as alternatives to traditional sample handling. Two new techniques applying microwave-energy-enhanced ultra-trace sample handling have been developed that permit sample separation and refinement while performing species manipulation during decomposition. A demonstration applying to semiconductor materials will be presented. Next, a new approach to the old problem of sample evaporation without losses will be demonstrated that is capable of retaining all elements and species tested. Both of these methods require microwave energy manipulation in specialized systems and are not accessible through convection, conduction, or other traditional energy applications. A new automated integrated method for handling samples for ultra-trace analysis has been developed. An on-line, near real-time measurement system will be described that enables many new automated sample handling and measurement capabilities. This

  13. Application of Automated Facial Expression Analysis and Qualitative Analysis to Assess Consumer Perception and Acceptability of Beverages and Water

    OpenAIRE

    Crist, Courtney Alissa

    2016-01-01

    Sensory and consumer sciences aim to understand the influences on product acceptability and purchase decisions. The food industry measures product acceptability through hedonic testing but often does not assess implicit or qualitative responses. Incorporating qualitative research and automated facial expression analysis (AFEA) may supplement hedonic acceptability testing to provide product insights. The purpose of this research was to assess the application of AFEA and qualitative analysis ...

  14. Search Analytics: Automated Learning, Analysis, and Search with Open Source

    Science.gov (United States)

    Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.

    2016-12-01

    The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts, often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from extracting and visualizing relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." Without reading a single paper, Search Analytics can automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection user's accuracy of 82 to 84%.
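
    A toy sketch of the extraction step described above is shown below. It is not the project's actual pipeline: it only pulls text out of a PDF with the Apache Tika Python bindings and surfaces sentences that mention a spatial resolution together with an accuracy figure. The file path, regular expressions, and helper name are illustrative assumptions.

      # Toy sketch, not the project's pipeline: extract PDF text with the Apache Tika
      # Python bindings, then surface sentences mentioning both a resolution in metres
      # and an accuracy figure. File path and patterns are hypothetical.
      import re
      from tika import parser  # requires Java; a local Tika server is started on first use

      METRES = re.compile(r"\b\d+(\.\d+)?\s*m\b")
      ACCURACY = re.compile(r"accuracy of\s*\d+", re.IGNORECASE)

      def salient_sentences(pdf_path):
          text = parser.from_file(pdf_path).get("content") or ""
          sentences = re.split(r"(?<=[.!?])\s+", text)
          return [s.strip() for s in sentences
                  if "resolution" in s.lower() and METRES.search(s) and ACCURACY.search(s)]

      for hit in salient_sentences("papers/teasel_hyperspectral.pdf"):  # hypothetical corpus
          print(hit)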

  15. Conventional Versus Automated Implantation of Loose Seeds in Prostate Brachytherapy: Analysis of Dosimetric and Clinical Results

    Energy Technology Data Exchange (ETDEWEB)

    Genebes, Caroline, E-mail: genebes.caroline@claudiusregaud.fr [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Filleron, Thomas; Graff, Pierre [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Jonca, Frédéric [Department of Urology, Clinique Ambroise Paré, Toulouse (France); Huyghe, Eric; Thoulouzan, Matthieu; Soulie, Michel; Malavaud, Bernard [Department of Urology and Andrology, CHU Rangueil, Toulouse (France); Aziza, Richard; Brun, Thomas; Delannes, Martine; Bachaud, Jean-Marc [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France)

    2013-11-15

    Purpose: To review the clinical outcome of I-125 permanent prostate brachytherapy (PPB) for low-risk and intermediate-risk prostate cancer and to compare 2 techniques of loose-seed implantation. Methods and Materials: 574 consecutive patients underwent I-125 PPB for low-risk and intermediate-risk prostate cancer between 2000 and 2008. Two successive techniques were used: conventional implantation from 2000 to 2004 and automated implantation (Nucletron, FIRST system) from 2004 to 2008. Dosimetric and biochemical recurrence-free (bNED) survival results were reported and compared for the 2 techniques. Univariate and multivariate analyses were performed to identify independent predictors of bNED survival. Results: 419 (73%) and 155 (27%) patients with low-risk and intermediate-risk disease, respectively, were treated (median follow-up time, 69.3 months). The 60-month bNED survival rates were 95.2% and 85.7%, respectively, for patients with low-risk and intermediate-risk disease (P=.04). In univariate analysis, patients treated with automated implantation had worse bNED survival rates than did those treated with conventional implantation (P<.0001). By day 30, patients treated with automated implantation showed lower values of dose delivered to 90% of prostate volume (D90) and volume of prostate receiving 100% of prescribed dose (V100). In multivariate analysis, implantation technique, Gleason score, and V100 on day 30 were independent predictors of recurrence-free status. Grade 3 urethritis and urinary incontinence were observed in 2.6% and 1.6% of the cohort, respectively, with no significant differences between the 2 techniques. No grade 3 proctitis was observed. Conclusion: Satisfactory 60-month bNED survival rates (93.1%) and acceptable toxicity (grade 3 urethritis <3%) were achieved by loose-seed implantation. Automated implantation was associated with worse dosimetric and bNED survival outcomes.
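
    For readers unfamiliar with the kind of recurrence-free-survival comparison reported above, the sketch below shows how such a univariate comparison between two implantation groups might look in code. It is not the authors' analysis; it uses the lifelines library, and the data file and column names are hypothetical.

      # Illustrative sketch of a bNED survival comparison between two implantation
      # techniques; not the authors' analysis. File and column names are hypothetical.
      import pandas as pd
      from lifelines import KaplanMeierFitter
      from lifelines.statistics import logrank_test

      df = pd.read_csv("ppb_cohort.csv")  # one row per patient: technique, months_to_event, recurrence
      conv = df[df["technique"] == "conventional"]
      auto = df[df["technique"] == "automated"]

      kmf = KaplanMeierFitter()
      for name, grp in [("conventional", conv), ("automated", auto)]:
          kmf.fit(grp["months_to_event"], event_observed=grp["recurrence"], label=name)
          print(name, "60-month bNED ~", float(kmf.predict(60)))

      # Univariate comparison of the two groups (log-rank test).
      res = logrank_test(conv["months_to_event"], auto["months_to_event"],
                         event_observed_A=conv["recurrence"],
                         event_observed_B=auto["recurrence"])
      print("log-rank p =", res.p_value)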

  16. Conventional Versus Automated Implantation of Loose Seeds in Prostate Brachytherapy: Analysis of Dosimetric and Clinical Results

    International Nuclear Information System (INIS)

    Genebes, Caroline; Filleron, Thomas; Graff, Pierre; Jonca, Frédéric; Huyghe, Eric; Thoulouzan, Matthieu; Soulie, Michel; Malavaud, Bernard; Aziza, Richard; Brun, Thomas; Delannes, Martine; Bachaud, Jean-Marc

    2013-01-01

    Purpose: To review the clinical outcome of I-125 permanent prostate brachytherapy (PPB) for low-risk and intermediate-risk prostate cancer and to compare 2 techniques of loose-seed implantation. Methods and Materials: 574 consecutive patients underwent I-125 PPB for low-risk and intermediate-risk prostate cancer between 2000 and 2008. Two successive techniques were used: conventional implantation from 2000 to 2004 and automated implantation (Nucletron, FIRST system) from 2004 to 2008. Dosimetric and biochemical recurrence-free (bNED) survival results were reported and compared for the 2 techniques. Univariate and multivariate analyses were performed to identify independent predictors of bNED survival. Results: 419 (73%) and 155 (27%) patients with low-risk and intermediate-risk disease, respectively, were treated (median follow-up time, 69.3 months). The 60-month bNED survival rates were 95.2% and 85.7%, respectively, for patients with low-risk and intermediate-risk disease (P=.04). In univariate analysis, patients treated with automated implantation had worse bNED survival rates than did those treated with conventional implantation (P<.0001). By day 30, patients treated with automated implantation showed lower values of dose delivered to 90% of prostate volume (D90) and volume of prostate receiving 100% of prescribed dose (V100). In multivariate analysis, implantation technique, Gleason score, and V100 on day 30 were independent predictors of recurrence-free status. Grade 3 urethritis and urinary incontinence were observed in 2.6% and 1.6% of the cohort, respectively, with no significant differences between the 2 techniques. No grade 3 proctitis was observed. Conclusion: Satisfactory 60-month bNED survival rates (93.1%) and acceptable toxicity (grade 3 urethritis <3%) were achieved by loose-seed implantation. Automated implantation was associated with worse dosimetric and bNED survival outcomes.

  17. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both the algorithms and the technologies of interactive video, so that businesses in IT and data management, scientists and software engineers in video processing and computer vision, coaches and instructors who use video technology in teaching, and finally end-users will greatly benefit from it. The book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for the automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring: filtering of the video stream by extraction of highlights, events, and meaningf...

  18. Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0

    Directory of Open Access Journals (Sweden)

    Kevin A. Huck

    2008-01-01

    Full Text Available The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments present a challenge to managing and processing the information. Simply characterizing the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are currently implemented as manual procedures. In this paper, we discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimension, and which can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We give examples of large-scale analysis results and discuss the future development of the framework, including the encoding and processing of expert performance rules and the increasing use of performance metadata.
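
    To give a feel for the scripted dimension-reduction-plus-clustering analysis described above, the sketch below applies PCA and k-means to a matrix of per-process performance profiles. It uses scikit-learn rather than PerfExplorer's own Python API, and the input file, its layout, and the cluster count are assumptions.

      # Generic sketch of scripted performance-profile analysis (dimension reduction
      # + clustering), not PerfExplorer's API. Data file and layout are hypothetical:
      # rows = MPI ranks, columns = time spent in each instrumented code region.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      profiles = np.loadtxt("trial_profiles.csv", delimiter=",")

      reduced = PCA(n_components=2).fit_transform(profiles)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)

      for k in range(3):
          members = np.where(labels == k)[0]
          print(f"cluster {k}: {len(members)} ranks, e.g. ranks {members[:5].tolist()}")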

  19. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    Science.gov (United States)

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion (Qmean, QSD) and the variability of the spatial center of motion (CSD) of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. CSD values were significantly lower in the recordings with continual FMs than in the recordings with intermittent FMs. These results indicate good test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
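
    The movement variables named above are derived from frame-differencing video analysis. The sketch below is a minimal illustration of that idea, not the published implementation: per-frame quantity of motion from thresholded frame differences, plus the centroid of the moving pixels. The video path and the difference threshold are assumptions.

      # Minimal frame-differencing sketch (not the published implementation) of the
      # two kinds of variables described: quantity of motion and centroid of motion.
      import cv2
      import numpy as np

      cap = cv2.VideoCapture("infant_recording.mp4")  # hypothetical file
      ok, prev = cap.read()
      prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      quantities, centroids = [], []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          moving = cv2.absdiff(gray, prev) > 20          # threshold is an assumption
          quantities.append(moving.mean())               # fraction of pixels in motion
          ys, xs = np.nonzero(moving)
          if len(xs):
              centroids.append((xs.mean(), ys.mean()))   # spatial centre of motion
          prev = gray
      cap.release()

      q = np.array(quantities)
      c = np.array(centroids)
      print("Qmean ~", q.mean(), "QSD ~", q.std(), "CSD (x, y) ~", c.std(axis=0))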

  20. Automated validation of patient safety clinical incident classification: macro analysis.

    Science.gov (United States)

    Gupta, Jaiprakash; Patrick, Jon

    2013-01-01

    Patient safety is the buzz word in healthcare. An Incident Information Management System (IIMS) is electronic software that stores narratives of clinical mishaps at the places where patients are treated. It is estimated that in one state alone over one million electronic text documents are available in IIMS. In this paper we investigate the data density available in the fields entered to notify an incident and the validity of the built-in classification used by clinicians to categorise incidents. The Waikato Environment for Knowledge Analysis (WEKA) software was used to test the classes. Four statistical classifiers, based on the J48, Naïve Bayes (NB), Naïve Bayes Multinomial (NBM) and radial-basis-function Support Vector Machine (SVM_RBF) algorithms, were used to validate the classes. The data pool was 10,000 clinical incidents drawn from 7 hospitals in one state in Australia. In the first part of the study 1000 clinical incidents were selected to determine the type and number of fields worth investigating, and in the second part another 5448 clinical incidents were randomly selected to validate 13 clinical incident types. Results show that 74.6% of the cells were empty and only 23 fields had content over 70% of the time. The percentage of correctly classified instances across the four algorithms ranged from 42% to 49% using the categorical dataset, from 65% to 77% using the free-text dataset, and from 72% to 79% using both datasets. The kappa statistic ranged from 0.36 to 0.4 for categorical data, from 0.61 to 0.74 for free text, and from 0.67 to 0.77 for both datasets. Similar increases in performance across the three experiments were noted for true positive rate, precision, F-measure and area under the receiver operating characteristic (ROC) curve (AUC). The study demonstrates that only 14 of 73 fields in IIMS have data usable for machine learning experiments. Irrespective of the algorithm used, performance was better when both datasets were used. The NBM classifier showed the best performance. We think the
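
    The study above ran its classifiers in WEKA (a Java toolkit). Purely as an analogue of the free-text experiment it describes, the sketch below trains a multinomial Naive Bayes classifier on bag-of-words features of incident narratives with scikit-learn; the data file and column names are illustrative assumptions, not the study's dataset.

      # Minimal analogue of the free-text experiment: multinomial Naive Bayes over
      # bag-of-words features. The study used WEKA; this scikit-learn sketch and its
      # file/column names are assumptions.
      import pandas as pd
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      incidents = pd.read_csv("incidents.csv")   # hypothetical: 'narrative', 'incident_type'
      model = make_pipeline(CountVectorizer(min_df=2), MultinomialNB())

      scores = cross_val_score(model, incidents["narrative"], incidents["incident_type"],
                               cv=10, scoring="accuracy")
      print(f"10-fold accuracy: {scores.mean():.2%} ± {scores.std():.2%}")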