WorldWideScience

Sample records for automated video analysis

  1. Automated analysis and annotation of basketball video

    Science.gov (United States)

    Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.

    1997-01-01

    Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful summary of video content in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of video has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the compressed domain greatly reduces processing time and enhances storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.
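    As a rough illustration of the kind of low-level, compressed-domain cue such a system builds on, the following Python sketch flags shot boundaries (e.g. switches between wide-angle and close-up views) from luminance DC coefficients. Parsing the DC images out of the MPEG stream is assumed to have happened already, and the threshold is illustrative, not the paper's.

      import numpy as np

      def detect_shot_boundaries(dc_frames, threshold=0.4):
          """Flag cuts where the DC-image gray-level histogram changes sharply.

          dc_frames: array of shape (n_frames, h, w) holding the luminance DC
          coefficients of each frame (a coarse thumbnail of the video).
          """
          boundaries = []
          prev_hist = None
          for i, frame in enumerate(dc_frames):
              hist, _ = np.histogram(frame, bins=32, range=(0, 255), density=True)
              if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
                  boundaries.append(i)  # candidate cut, e.g. wide-angle -> close-up
              prev_hist = hist
          return boundaries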

  2. Automated Large-Scale Shoreline Variability Analysis From Video

    Science.gov (United States)

    Pearre, N. S.

    2006-12-01

    Land-based video has been used to quantify changes in nearshore conditions for over twenty years. By combining the ability to track rapid, short-term shoreline change with changes associated with longer-term or seasonal processes, video has proved to be a cost-effective and versatile tool for coastal science. Previous video-based studies of shoreline change have typically examined the position of the shoreline along a small number of cross-shore lines as a proxy for the continuous coast. The goal of this study is twofold: (1) to further develop automated shoreline extraction algorithms for continuous shorelines, and (2) to track the evolution of a nourishment project at Rehoboth Beach, DE that was concluded in June 2005. Seven cameras are situated approximately 30 meters above mean sea level and 70 meters from the shoreline. Time exposure and variance images are captured hourly during daylight and transferred to a local processing computer. After correcting for lens distortion and geo-rectifying to a shore-normal coordinate system, the images are merged to form a composite planform image of 6 km of coast. Automated extraction algorithms establish shoreline and breaker positions throughout a tidal cycle on a daily basis. Short- and long-term variability in the daily shoreline will be characterized using empirical orthogonal function (EOF) analysis. Periodic sediment volume information will be extracted by incorporating the results of monthly ground-based LIDAR surveys and by correlating the hourly shorelines to the corresponding tide level under conditions with minimal wave activity. The Delaware coast in the area downdrift of the nourishment site is intermittently interrupted by short groins. An even/odd analysis of the shoreline response around these groins will be performed. The impact of groins on the sediment volume transport along the coast during periods of accretive and erosive conditions will be discussed. [This work is being supported by DNREC and the
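    For readers unfamiliar with EOF analysis, the decomposition reduces to an SVD of the demeaned shoreline matrix; a minimal numpy sketch follows (array shapes and mode count are assumptions, not details from the abstract).

      import numpy as np

      def shoreline_eofs(shorelines, n_modes=3):
          """EOF decomposition of daily shorelines.

          shorelines: (n_days, n_alongshore) cross-shore positions.
          Returns the leading spatial modes, their daily amplitudes, and
          the fraction of variance explained by each mode.
          """
          anomaly = shorelines - shorelines.mean(axis=0)  # remove mean shoreline
          U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
          variance = s ** 2 / np.sum(s ** 2)
          return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], variance[:n_modes]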

  3. An Automated Video Object Extraction System Based on Spatiotemporal Independent Component Analysis and Multiscale Segmentation

    Directory of Open Access Journals (Sweden)

    Zhang Xiao-Ping

    2006-01-01

    Video content analysis is essential for efficient and intelligent use of the vast multimedia databases on the Internet. In video sequences, object-based extraction techniques are important for content-based video processing in many applications. In this paper, a novel technique is developed to extract objects from video sequences based on spatiotemporal independent component analysis (stICA) and multiscale analysis. The stICA is used to extract the preliminary source images containing moving objects in video sequences. The source image data obtained after stICA analysis are further processed using wavelet-based multiscale image segmentation and region detection techniques to improve the accuracy of the extracted object. An automated video object extraction system is developed based on these new techniques. Preliminary results demonstrate great potential for the new stICA and multiscale-segmentation-based object extraction system in content-based video processing applications.
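    The full stICA formulation is not spelled out in the abstract; as a hedged, spatial-only stand-in, scikit-learn's FastICA can separate candidate source images from a stack of grayscale frames:

      import numpy as np
      from sklearn.decomposition import FastICA

      def spatial_ica_sources(frames, n_sources=4):
          """Separate a grayscale frame stack into spatial source images;
          moving objects tend to split away from the static background.
          frames: (n_frames, h, w)."""
          n, h, w = frames.shape
          X = frames.reshape(n, h * w).astype(float)
          ica = FastICA(n_components=n_sources, random_state=0)
          ica.fit(X)
          # Each row of components_ is one spatial source image.
          return ica.components_.reshape(n_sources, h, w)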

  4. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Institute of Scientific and Technical Information of China (English)

    Ryan DECKER; Joseph DONINI; William GARDNER; Jobin JOHN; Walter KOENIG

    2016-01-01

    This paper describes an approach to identifying epicyclic and tricyclic motion during projectile flight caused by mass asymmetries in spin-stabilized projectiles. Flight video was captured following the launch of several M110A2E1 155 mm artillery projectiles. These videos were then analyzed using the automated flight video analysis method to obtain their initial position and orientation histories. Examination of the pitch and yaw histories clearly indicates that, in addition to the nutation and precession oscillations of epicyclic motion, an even faster wobble oscillation is present during each spin revolution, even though some of the oscillation amplitudes are smaller than 0.02 degree. The results are compared to a sequence of shots in which little appreciable mass asymmetry was present, where only the nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product-of-inertia measurements of the asymmetric projectiles.
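    A simple way to expose the nutation, precession and per-revolution wobble frequencies mentioned above is an amplitude spectrum of the yaw (or pitch) history; a minimal sketch, with the sampling rate as the only assumption:

      import numpy as np

      def motion_spectrum(yaw_deg, fps):
          """One-sided amplitude spectrum of a yaw-angle history.

          Nutation, precession and once-per-revolution wobble appear as
          separate spectral peaks whose amplitudes can be read directly.
          """
          y = np.asarray(yaw_deg, dtype=float)
          y = y - y.mean()
          amps = np.abs(np.fft.rfft(y)) * 2.0 / len(y)
          freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
          return freqs, amps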

  5. Use of automated video analysis for the evaluation of bicycle movement and interaction

    Science.gov (United States)

    Twaddle, Heather; Schendzielorz, Tobias; Fakler, Oliver; Amini, Sasan

    2014-03-01

    With the purpose of developing valid models of microscopic bicycle behavior, a large quantity of video data is collected at three busy urban intersections in Munich, Germany. Due to the volume of data, manual processing is infeasible and an automated or semi-automated analysis method must be implemented. The open-source software "Traffic Intelligence" is used and extended to analyze the collected video data with regard to research questions concerning the tactical behavior of bicyclists. In a first step, the feature detection parameters, the tracking parameters and the object grouping parameters are calibrated, making it possible to accurately track and group the objects at intersections used by large volumes of motor vehicles, bicycles and pedestrians. The resulting parameters for the three intersections are presented. A methodology for the classification of road users as cars, bicycles or pedestrians is presented and evaluated. This is achieved by making hypotheses about which features belong to cars, bicycles or pedestrians, and using grouping parameters specified for that road user group to cluster the features into objects. These objects are then classified based on their dynamic characteristics. A classification structure for the maneuvers of different road users is presented and future applications are discussed.
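    The abstract does not give the classification rule, so the sketch below is a deliberately crude stand-in: classifying a tracked object from its speed statistics alone, with illustrative thresholds rather than the values calibrated in the study or in Traffic Intelligence.

      import numpy as np

      def classify_road_user(speeds_mps):
          """Toy dynamic-characteristics classifier for one grouped object.

          speeds_mps: per-frame speed samples of the tracked object.
          """
          v = float(np.median(speeds_mps))
          if v < 2.0:        # walking pace
              return "pedestrian"
          if v < 6.5:        # typical urban cycling speed
              return "bicycle"
          return "car"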

  6. Automated Video Analysis of Non-verbal Communication in a Medical Setting.

    Science.gov (United States)

    Hart, Yuval; Czerniak, Efrat; Karnieli-Miller, Orit; Mayo, Avraham E; Ziv, Amitai; Biegon, Anat; Citron, Atay; Alon, Uri

    2016-01-01

    Non-verbal communication plays a significant role in establishing good rapport between physicians and patients and may influence aspects of patient health outcomes. It is therefore important to analyze non-verbal communication in medical settings. Current approaches to measuring non-verbal interactions in medicine employ coding by human raters. Such tools are labor intensive and hence limit the scale of possible studies. Here, we present an automated video analysis tool for non-verbal interactions in a medical setting. We test the tool using videos of subjects who interact with an actor portraying a doctor. The actor interviews each subject following one of two scripted scenarios: in the first, the actor shows minimal engagement with the subject; the second includes active listening and attentiveness to the subject. We analyze the cross-correlation in total kinetic energy of the two people in the dyad, and also characterize the frequency spectrum of their motion. We find large differences in interpersonal motion synchrony and entrainment between the two performance scenarios. The active listening scenario shows more synchrony and more symmetric followership than the other scenario. Moreover, the active listening scenario shows more high-frequency motion, termed jitter, which has recently been suggested to be a marker of followership. The present approach may be useful for analyzing physician-patient interactions in terms of synchrony and dominance in a range of medical settings. PMID:27602002
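    A minimal sketch of the kinetic-energy cross-correlation idea, assuming the two people have already been cropped into separate grayscale frame stacks of equal length (the paper's actual motion-extraction pipeline is not shown):

      import numpy as np

      def motion_energy(frames):
          """Kinetic-energy proxy: per-step sum of squared frame differences.
          frames: (n_frames, h, w) grayscale crop containing one person."""
          diffs = np.diff(frames.astype(float), axis=0)
          return (diffs ** 2).sum(axis=(1, 2))

      def lagged_correlation(a, b, max_lag=50):
          """Normalized cross-correlation of two equal-length energy series;
          an asymmetric peak suggests one partner follows the other."""
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          out = []
          for lag in range(-max_lag, max_lag + 1):
              if lag >= 0:
                  x, y = a[lag:], b[:len(b) - lag]
              else:
                  x, y = a[:len(a) + lag], b[-lag:]
              out.append((lag, float(np.mean(x * y))))
          return out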

  7. Automated Video Quality Assessment for Deep-Sea Video

    Science.gov (United States)

    Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.

    2015-12-01

    Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating
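    As a hedged illustration of pre-analysis quality screening, the metrics below are plausible cheap screens for the challenges listed above, not ONC's actual measures:

      import numpy as np

      def quadrant_means(f):
          """Mean brightness of the four image quadrants."""
          h, w = f.shape[0] // 2, f.shape[1] // 2
          return [f[:h, :w].mean(), f[:h, w:].mean(),
                  f[h:, :w].mean(), f[h:, w:].mean()]

      def quality_screen(frame):
          """Cheap per-frame screens: low RMS contrast suggests absorption at
          distance; a high quadrant ratio flags uneven, single-source lighting."""
          f = frame.astype(float) / 255.0
          q = quadrant_means(f)
          return {
              "rms_contrast": float(f.std()),
              "lighting_ratio": float(max(q) / max(min(q), 1e-6)),
          }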

  8. Artificial Video for Video Analysis

    Science.gov (United States)

    Gallis, Michael R.

    2010-01-01

    This paper discusses the use of video analysis software and computer-generated animations for student activities. The use of artificial video affords the opportunity for students to study phenomena for which a real video may not be easy or even possible to procure, using analysis software with which the students are already familiar. We will…

  9. An automated form of video image analysis applied to classification of movement disorders.

    Science.gov (United States)

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance, we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, yielding 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrates the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis. PMID:10661762
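    A sketch of the classification stage with scikit-learn, using synthetic stand-in data in place of the real joint angles and swing distances (the paper's network architecture is not specified in the abstract):

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in for the extracted posture parameters: one row per
      # posture, columns are joint angles and swing distances (real values
      # would come from the fitted skeletal frameworks).
      rng = np.random.default_rng(0)
      X = rng.normal(size=(110, 8))        # 61 patient + 49 normal postures
      y = np.array([1] * 61 + [0] * 49)    # 1 = Parkinsonian, 0 = normal

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())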

  10. Automated high-speed video analysis of the bubble dynamics in subcooled flow boiling

    Energy Technology Data Exchange (ETDEWEB)

    Maurus, Reinhold; Ilchenko, Volodymyr; Sattelmayer, Thomas [Technische Univ. Muenchen, Lehrstuhl fuer Thermodynamik, Garching (Germany)

    2004-04-01

    Subcooled flow boiling is a commonly applied technique for achieving efficient heat transfer. In this study, an experimental investigation in the nucleate boiling regime was performed for water circulating in a closed loop at atmospheric pressure. The test section consists of a rectangular channel with a copper strip heated on one side and very good optical access. High-speed cinematography is used for the optical observation of the bubble behaviour. Automated image processing and analysis algorithms developed by the authors were applied over a wide range of mass flow rates and heat fluxes in order to extract characteristic length and time scales of the bubbly layer during the boiling process. Using this methodology, a huge number of bubble cycles could be analysed. The structure of the developed algorithms for the detection of the bubble diameter, the bubble lifetime, the lifetime after the detachment process and the waiting time between two bubble cycles is described. Subsequently, the results from using these automated procedures are presented. A remarkable novelty is the presentation of all results as distribution functions. This is of physical importance because the commonly applied spatial and temporal averaging leads to a loss of information and, moreover, to an unjustified deterministic view of the boiling process, which in reality exhibits a very wide spread of bubble sizes and characteristic times. The results show that the mass flux dominates the temporal bubble behaviour. An increase of the liquid mass flux reveals a strong decrease of the bubble lifetime and waiting time. In contrast, the variation of the heat flux has a much smaller impact. It is shown in addition that the investigation of the bubble history using automated algorithms delivers novel information with respect to the bubble lift-off probability. (Author)
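    Presenting results as distribution functions rather than averages is straightforward once per-bubble quantities have been extracted; a minimal sketch:

      import numpy as np

      def empirical_distribution(samples, bins=40):
          """Empirical probability density of a per-bubble quantity, e.g.
          lifetime. Reporting the full distribution preserves the wide spread
          that a single spatial/temporal average would hide."""
          density, edges = np.histogram(samples, bins=bins, density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          return centers, density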

  11. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    Directory of Open Access Journals (Sweden)

    Paolo Menesatti

    2009-10-01

    The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard k-nearest neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analyzed by a trained operator to increase the efficiency of the automated procedure. Error estimation for both the automated and the trained-operator procedure was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. Results were discussed assuming that the technological bottleneck is to date severely constraining the exploration of the deep sea.
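    A hedged sketch of the outline-signature step: classical Fourier descriptors computed from a closed contour. Contour extraction and the KNN classifier (e.g. scikit-learn's KNeighborsClassifier) are not shown.

      import numpy as np

      def fourier_descriptors(contour_xy, n_coeffs=16):
          """Invariant outline signature for species recognition.

          contour_xy: (n_points, 2) closed outline from frame subtraction.
          The DC term is skipped for translation invariance, magnitudes give
          rotation invariance, and dividing by the first harmonic gives
          scale invariance (so the first harmonic itself is dropped too).
          """
          z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
          mags = np.abs(np.fft.fft(z))
          if mags[1] > 0:
              mags = mags / mags[1]
          return mags[2:n_coeffs + 2]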

  12. Java Implementation based Heterogeneous Video Sequence Automated Surveillance Monitoring

    Directory of Open Access Journals (Sweden)

    Sankari Muthukarupan

    2013-04-01

    Automated video-based surveillance monitoring is an essential and computationally challenging task for resolving security issues in access-controlled localities. This paper deals with some of the issues encountered when integrating surveillance monitoring into real-life circumstances. We employ video frames extracted from heterogeneous video formats. Each video frame is examined to identify the anomalous events occurring in the time-driven sequence. Background subtraction is first performed, based on an optimal threshold and a reference frame. The remaining frames are subtracted from the reference image, yielding all the foreground image paradigms. The coordinates existing in the subtracted images are found by scanning the images horizontally until the occurrence of the first black pixel. Each obtained coordinate is matched with the existing coordinates in the primary images, and the matched coordinate in the primary image is considered an active region of interest. At the end, the starred images are converted into a temporal video that scrutinizes the moving silhouettes of human behaviour against a static background. The proposed model is implemented in Java. Results and performance analysis are carried out in real-life environments.
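    A minimal numpy sketch of the frame-differencing and horizontal-scan steps described above (the threshold value is illustrative):

      import numpy as np

      def foreground_mask(frame, reference, threshold=30):
          """Binary foreground image: absolute difference from the reference
          frame, binarized with a fixed threshold."""
          diff = np.abs(frame.astype(int) - reference.astype(int))
          return (diff > threshold).astype(np.uint8)

      def first_foreground_pixel(mask):
          """Horizontal scan for the first foreground pixel, mirroring the
          coordinate search described in the abstract."""
          for y, row in enumerate(mask):
              xs = np.flatnonzero(row)
              if xs.size:
                  return int(xs[0]), y
          return None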

  13. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area which provides means for extracting, analyzing and understanding the behavior of a single target or of multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms that analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  14. Gait Analysis by Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    2009-01-01

    and the calcaneus angle during gait. In the introductory phase of the project the task has been to select, purchase and draw up hardware, select and purchase software concerning video streaming and to develop special software concerning automated registration of the position of the foot during gait by Multi Video...

  15. Effects of the pyrethroid insecticide Cypermethrin on the locomotor activity of the wolf spider Pardosa amentata: quantitative analysis employing computer-automated video tracking.

    Science.gov (United States)

    Baatrup, E; Bayley, M

    1993-10-01

    Wildlife in areas surrounding arable land is almost inevitably exposed to pesticide spray. Even at doses far below the lethal level, this presents a threat to vulnerable species. The widely used pyrethroid insecticides, including Cypermethrin, are known for their direct effect on the locomotor apparatus of animals, inducing varying degrees of paresis. Quantitative measurements of the voluntary locomotion of animals express an integrated response to changes in biochemical and physiological processes. In the present study, the effect of Cypermethrin on the voluntary locomotion of the wolf spider Pardosa amentata was quantified in an open field setup, using computer-automated video tracking. Each spider was recorded for 24 hr prior to pesticide exposure. After topical application of 4.6 ng of Cypermethrin, the animal was recorded for a further 48 hr. Finally, after 9 days of recovery, the spider was tracked for 24 hr. Initially, Cypermethrin induced an almost instant paralysis of the hind legs and a lack of coordination in movement seen in the jagged and circular track appearance. This phase culminated in total quiescence, lasting approximately 12 hr in males and 24-48 hr in females. Following paresis, the effects of Cypermethrin were evident in reduced path length, average velocity, and maximum velocity and an increase in the time spent in quiescence. Also, the pyrethroid disrupted the consistent distributions of walking velocity and periods of quiescence seen prior to pesticide application. Our results suggest that normal locomotion had returned 9 days after Cypermethrin application, but that recovery of high velocities was still incomplete.
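    The locomotion endpoints used here (path length, average and maximum velocity, time in quiescence) are easy to derive from tracked coordinates; a minimal sketch with an assumed quiescence speed cutoff:

      import numpy as np

      def locomotion_metrics(xy, fps, quiescence_cutoff=0.5):
          """Summarize one animal's track.

          xy: (n_frames, 2) tracked positions; quiescence_cutoff is a speed
          (units/s) below which the animal is counted as quiescent.
          """
          step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # distance/frame
          velocity = step * fps                               # distance/second
          return {
              "path_length": float(step.sum()),
              "mean_velocity": float(velocity.mean()),
              "max_velocity": float(velocity.max()),
              "quiescent_fraction": float(np.mean(velocity < quiescence_cutoff)),
          }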

  16. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions. The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions

  17. Toy Trucks in Video Analysis

    DEFF Research Database (Denmark)

    Buur, Jacob; Nakamura, Nanami; Larsen, Rainer Rye

    2015-01-01

    discovered that using scale-models like toy trucks has a strongly encouraging effect on developers/designers to collaboratively make sense of field videos. In our analysis of such scale-model sessions, we found some quite fundamental patterns of how participants utilise objects; the participants build shared narratives by moving the objects around, they name them to handle the complexity, they experience what happens in the video through their hands, and they use the video together with objects to create alternative narratives, and thus alternative solutions to the problems they observe. In this paper we claim that when analysing for instance truck drivers' practices, the use of toy trucks to replicate actions in scale helps participants engage experiential knowledge as they use their body to make sense of the on-going action.

  18. Automated Identification and Reconstruction of YouTube Video Access

    Directory of Open Access Journals (Sweden)

    Jonathan Patterson

    2012-06-01

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user's interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.
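    A hedged sketch of one small piece of such a tool: pulling 11-character YouTube video IDs out of carved history or cache text with a regular expression (the prototype's actual parsing of Internet Explorer artefacts is more involved):

      import re

      # The 11-character video-ID alphabet is [A-Za-z0-9_-].
      WATCH_URL = re.compile(
          r"youtube\.com/watch\?(?:[^\s&]*&)*v=([A-Za-z0-9_-]{11})")

      def extract_video_ids(artefact_text):
          """Return the set of video IDs found in recovered browser text."""
          return set(WATCH_URL.findall(artefact_text))

      sample = "Visited: http://www.youtube.com/watch?v=dQw4w9WgXcQ&feature=related"
      print(extract_video_ids(sample))   # {'dQw4w9WgXcQ'}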

  19. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Design of automated video surveillance systems is one of the most demanding tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576 resolution) video streams coming directly from the camera.
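    The clustering-based scheme itself is implemented in VLSI; purely as a software illustration of the general idea (not the paper's algorithm), a per-pixel clustering background model might look like:

      import numpy as np

      def clustered_motion(pixel, centroids, weights, lr=0.05, match=15.0):
          """Per-pixel clustering background model (software sketch only).

          centroids/weights: K running cluster centers for this pixel and
          their support weights. Returns True if the pixel is moving.
          """
          k = int(np.argmin(np.abs(centroids - pixel)))
          if abs(centroids[k] - pixel) < match:
              centroids[k] += lr * (pixel - centroids[k])  # adapt matched cluster
              weights[k] += lr * (1.0 - weights[k])
              return weights[k] < 0.5      # weakly supported cluster -> motion
          j = int(np.argmin(weights))      # recycle the weakest cluster
          centroids[j], weights[j] = pixel, lr
          return True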

  20. Automation of the social interaction test by a video-tracking system: behavioural effects of repeated phencyclidine treatment.

    Science.gov (United States)

    Sams-Dodd, F

    1995-07-01

    The social interaction test is a valuable behavioural model for testing anxiolytic and neuroleptic drugs. The test quantifies the level of social behaviour between pairs of rats and is usually based on manual analysis of behaviour. Advances in computer technology have made it possible to track the movements of pairs of rats in an arena, and the present paper describes the automation of the social interaction test using a commercial video-tracking programme, the EthoVision system. The ability of the automated system to correctly measure the social behaviour of rats is demonstrated by determining a dose-response relationship in the social interaction test for phencyclidine, a psychotomimetic drug that reduces social behaviour between pairs of rats. These data are subsequently analysed by the manual and automated data-acquisition methods and the results are compared. The study shows that the automated data-acquisition method best describes the behavioural effects of phencyclidine in the social interaction test in terms of the locomotor activity of the rats, how much time the rats spend in different sections of the testing arena, and the level of social behaviour. Correlation analysis of the results from the manual and automated data-acquisition methods shows that the social behaviour measured by the automated system corresponds correctly to the social behaviour measured by manual analysis. The present study has shown that the automated data-acquisition method can quantify locomotor activity, how rats use a testing arena, and the level of social behaviour between rats in the social interaction test. The system cannot distinguish between social and aggressive behaviours, and therefore the rats should be tested in an unfamiliar arena to reduce territorial behaviour. Taking this limitation into consideration, the social interaction test can be automated by this computer-based video-tracking system and can be used as a routine test for quantifying the effects of drugs on the
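    A minimal sketch of the core measurement once both animals are tracked: time spent within a contact distance (the distance threshold and units are assumptions, not values from the paper):

      import numpy as np

      def social_contact_seconds(xy_a, xy_b, fps, contact_dist=20.0):
          """Seconds during which two tracked rats are within contact_dist
          of each other (same units as the coordinates, e.g. mm)."""
          dist = np.linalg.norm(np.asarray(xy_a) - np.asarray(xy_b), axis=1)
          return np.count_nonzero(dist < contact_dist) / fps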

  1. Automated identification and reconstruction of YouTube video access

    OpenAIRE

    Jonathan Patterson; Christopher Hargreaves

    2011-01-01

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigati...

  2. Are signalized intersections with cycle tracks safer? A case-control study based on automated surrogate safety analysis using video data.

    Science.gov (United States)

    Zangenehpour, Sohail; Strauss, Jillian; Miranda-Moreno, Luis F; Saunier, Nicolas

    2016-01-01

    Cities in North America have been building bicycle infrastructure, in particular cycle tracks, with the intention of promoting urban cycling and improving cyclist safety. These facilities have been built and expanded, but very little research has been done to investigate the safety impacts of cycle tracks, in particular at intersections, where cyclists interact with turning motor vehicles. Some safety research has looked at injury data, and most has reached the conclusion that cycle tracks have positive effects on cyclist safety. The objective of this work is to investigate the safety effects of cycle tracks at signalized intersections using a case-control study. For this purpose, a video-based method is proposed for analyzing the post-encroachment time as a surrogate measure of the severity of the interactions between cyclists and turning vehicles travelling in the same direction. Using the city of Montreal as the case study, a sample of intersections with and without cycle tracks on the right and left sides of the road were carefully selected, accounting for intersection geometry and traffic volumes. More than 90 h of video were collected from 23 intersections and processed to obtain cyclist and motor-vehicle trajectories and interactions. After cyclist and motor-vehicle interactions were defined, ordered logit models with random effects were developed to evaluate the safety effects of cycle tracks at intersections. Based on the data extracted from the recorded videos, it was found that intersection approaches with cycle tracks on the right are safer than intersection approaches with no cycle track. However, intersections with cycle tracks on the left seem to be significantly safer than those with no cycle tracks. Results also identify that the likelihood of a cyclist being involved in a dangerous interaction increases with increasing turning vehicle flow and decreases as the size of the cyclist group arriving at the intersection increases. The results highlight the
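    A hedged sketch of post-encroachment time from two trajectories, approximating the conflict zone with a coarse grid (the cell size is illustrative, and the Traffic Intelligence implementation differs in detail):

      def min_pet(traj_cyclist, traj_vehicle, cell=0.5):
          """Approximate minimum post-encroachment time over shared cells.

          traj_*: iterables of (t, x, y) samples in seconds/meters. PET is
          the gap between one road user leaving a conflict point and the
          other arriving; a smaller PET means a more severe interaction.
          """
          def visits(traj):
              cells = {}
              for t, x, y in traj:
                  cells.setdefault((int(x // cell), int(y // cell)), []).append(t)
              return cells

          va, vb = visits(traj_cyclist), visits(traj_vehicle)
          gaps = [abs(tb - ta)
                  for c in va.keys() & vb.keys()
                  for ta in va[c] for tb in vb[c]]
          return min(gaps) if gaps else None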

  3. The Comparative Study of Automated Face Replacement Techniques for Video

    Directory of Open Access Journals (Sweden)

    Harmesh Sanghvi

    2014-03-01

    For entertainment purposes, a computerized special effect referred to as "morphing" has attracted huge attention, and face replacement is one of its interesting tasks. Face replacement in video is a useful application in the entertainment and special-effects industries. Though various techniques for face replacement have been developed for single images and are generally applied in animation and morphing, there are few mechanisms that extend these techniques to handle videos automatically. Automatic face replacement in video is not only a fascinating application but also a challenging problem. For face replacement in video, the frame-by-frame manipulation process using software is often time consuming and labor-intensive. Hence, this paper compares several recent automatic face replacement techniques for video in order to understand the various problems to be solved and each technique's shortcomings and benefits over the others.

  4. Automated activation-analysis system

    International Nuclear Information System (INIS)

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  5. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Science.gov (United States)

    Cordelières, Fabrice P; Petit, Valérie; Kumasaka, Mayuko; Debeir, Olivier; Letort, Véronique; Gallagher, Stuart J; Larue, Lionel

    2013-01-01

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate, or an inappropriate acquisition of migratory capacities, can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration, as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is a user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major interest of iTrack4U is standardization and the lack of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.
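    A minimal numpy sketch of the mean-shift step at the heart of such a tracker: the window slides to the local centroid of pixel weights until it converges on the cell (the window radius and the weight image are assumptions, not iTrack4U internals):

      import numpy as np

      def mean_shift(weight, start_xy, radius=15, n_iter=20):
          """Slide a circular window to the local centroid of pixel weights.

          weight: 2-D non-negative array, e.g. an inverted phase-contrast
          image in which the tracked cell appears bright.
          """
          h, w = weight.shape
          yy, xx = np.mgrid[0:h, 0:w]
          x, y = float(start_xy[0]), float(start_xy[1])
          for _ in range(n_iter):
              mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
              total = weight[mask].sum()
              if total == 0:
                  break
              nx = float((xx[mask] * weight[mask]).sum() / total)
              ny = float((yy[mask] * weight[mask]).sum() / total)
              converged = abs(nx - x) < 0.5 and abs(ny - y) < 0.5
              x, y = nx, ny
              if converged:
                  break
          return x, y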

  6. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Directory of Open Access Journals (Sweden)

    Fabrice P Cordelières

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate, or an inappropriate acquisition of migratory capacities, can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration, as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is a user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major interest of iTrack4U is standardization and the lack of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells.

  7. Automated cell tracking and analysis in phase-contrast videos (iTrack4U): development of Java software based on combined mean-shift processes.

    Science.gov (United States)

    Cordelières, Fabrice P; Petit, Valérie; Kumasaka, Mayuko; Debeir, Olivier; Letort, Véronique; Gallagher, Stuart J; Larue, Lionel

    2013-01-01

    Cell migration is a key biological process with a role in both physiological and pathological conditions. Locomotion of cells during embryonic development is essential for their correct positioning in the organism; immune cells have to migrate and circulate in response to injury. Failure of cells to migrate, or an inappropriate acquisition of migratory capacities, can result in severe defects such as altered pigmentation, skull and limb abnormalities during development, and defective wound repair, immunosuppression or tumor dissemination. The ability to accurately analyze and quantify cell migration is important for our understanding of development, homeostasis and disease. In vitro cell tracking experiments, using primary or established cell cultures, are often used to study migration, as cells can quickly and easily be genetically or chemically manipulated. Images of the cells are acquired at regular time intervals over several hours using microscopes equipped with a CCD camera. The locations (x,y,t) of each cell on the recorded sequence of frames then need to be tracked. Manual computer-assisted tracking is the traditional method for analyzing the migratory behavior of cells. However, this processing is extremely tedious and time-consuming. Most existing tracking algorithms require experience in programming languages that are unfamiliar to most biologists. We therefore developed an automated cell tracking program, written in Java, which uses a mean-shift algorithm and ImageJ as a library. iTrack4U is a user-friendly software. Compared to manual tracking, it saves a considerable amount of time in generating and analyzing the variables characterizing cell migration, since they are automatically computed with iTrack4U. Another major interest of iTrack4U is standardization and the lack of inter-experimenter differences. Finally, iTrack4U is adapted for phase-contrast and fluorescent cells. PMID:24312283

  8. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Breakers are part of Electric Power Systems' equipment whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breakers' reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil-circuit and air-break circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, about their failures, testing and repair, as well as advanced software developments of computer technologies and a specific automated information system (AIS). A new AIS with the AISV logo was developed at the "Reliability of power equipment" department of the AzRDSI of Energy. The main features of AISV are: to provide security and database accuracy; to carry out systematic control of breakers' conformity with operating conditions; to estimate individual reliability values and their variation for a given combination of characteristics; and to provide personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for their realization.

  9. VideoHacking: Automated Tracking and Quantification of Locomotor Behavior with Open Source Software and Off-the-Shelf Video Equipment

    Science.gov (United States)

    Conklin, Emily E.; Lee, Kathyann L.; Schlabach, Sadie A.; Woods, Ian G.

    2015-01-01

    Differences in nervous system function can result in differences in behavioral output. Measurements of animal locomotion enable the quantification of these differences. Automated tracking of animal movement is less labor-intensive and bias-prone than direct observation, and allows for simultaneous analysis of multiple animals, high spatial and temporal resolution, and data collection over extended periods of time. Here, we present a new video-tracking system built on Python-based software that is free, open source, and cross-platform, and that can analyze video input from widely available video capture devices such as smartphone cameras and webcams. We validated this software through four tests on a variety of animal species, including larval and adult zebrafish (Danio rerio), Siberian dwarf hamsters (Phodopus sungorus), and wild birds. These tests highlight the capacity of our software for long-term data acquisition, parallel analysis of multiple animals, and application to animal species of different sizes and movement patterns. We applied the software to an analysis of the effects of ethanol on thigmotaxis (wall-hugging) behavior in adult zebrafish, and found that acute ethanol treatment decreased thigmotaxis behaviors without affecting overall amounts of motion. The open source nature of our software enables flexibility, customization, and scalability in behavioral analyses. Moreover, our system presents a free alternative to commercial video-tracking systems and is thus broadly applicable to a wide variety of educational settings and research programs. PMID:26240518
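    Once positions are tracked, the thigmotaxis endpoint reduces to a fraction-of-time-near-the-wall computation; a minimal sketch for a circular arena (the band width is an assumed parameter, not the paper's):

      import numpy as np

      def thigmotaxis_index(xy, center, radius, wall_band=0.2):
          """Fraction of tracked positions lying in the outer wall_band
          fraction of a circular arena (wall-hugging)."""
          r = np.linalg.norm(np.asarray(xy) - np.asarray(center), axis=1)
          return float(np.mean(r > radius * (1.0 - wall_band)))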

  10. Semi-automated query construction for content-based endomicroscopy video retrieval.

    Science.gov (United States)

    Tafreshi, Marzieh Kohandani; Linard, Nicolas; André, Barbara; Ayache, Nicholas; Vercauteren, Tom

    2014-01-01

    Content-based video retrieval (CBVR) has shown promising results in helping physicians interpret medical videos in general and endomicroscopic ones in particular. Defining a relevant query for CBVR can, however, be a complex and time-consuming task for non-expert and even expert users. Indeed, uncut endomicroscopy videos may very well contain images corresponding to a variety of different tissue types. Using such uncut videos as queries may lead to drastic performance degradations for the system. In this study, we propose a semi-automated methodology that allows the physician to create meaningful and relevant queries in a simple and efficient manner. We believe that this will lead to more reproducible and more consistent results. The validation of our method is divided into two approaches. The first is an indirect validation based on per-video classification results with histopathological ground truth. The second is more direct and relies on perceived inter-video visual similarity ground truth. We demonstrate that our proposed method significantly outperforms the approach with uncut videos and approaches the performance of a tedious manual query construction by an expert. Finally, we show that the similarity perceived between videos by experts is significantly correlated with the inter-video similarity distance computed by our retrieval system. PMID:25333105

  11. Trending Videos: Measurement and Analysis

    OpenAIRE

    Barjasteh, Iman; Liu, Ying; Radha, Hayder

    2014-01-01

    Unlike popular videos, which would have already achieved high viewership numbers by the time they are declared popular, YouTube trending videos represent content that targets viewers' attention over a relatively short time, and has the potential of becoming popular. Despite their importance and visibility, YouTube trending videos have not been studied or analyzed thoroughly. In this paper, we present our findings for measuring, analyzing, and comparing key aspects of YouTube trending videos. O...

  12. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  13. A New Analysis of the IMO Video Meteor Database

    Science.gov (United States)

    Molau, Sirko

    2010-08-01

    Starting in 1999, a database of meteor records was created from automated single-station video observations (Molau, 1991) of the IMO network. At the 2006 IMC, a first full analysis of the IMO Video Meteor Database, based on roughly 190,000 meteors, was presented. In the optical domain, it was the first time that a list of meteor showers was obtained automatically, based on fully objective criteria only. For each shower, the activity interval, radiant position and drift, and an activity profile were obtained. A number of hitherto unknown showers were found as well. The corresponding analysis procedure was derived and explained in detail in Molau (2006). However, beside the successful application of the analysis procedure, a number of weak points were also detected. As of 2008, the database had almost doubled, which made it worthwhile to repeat the analysis. However, these weak points had to be addressed first. This paper describes the problems in detail and presents solutions for them. In addition, a new meteor shower list derived from the new full analysis of the IMO Video Meteor Database is given.

  14. Software for automated classification of probe-based confocal laser endomicroscopy videos of colorectal polyps

    Institute of Scientific and Technical Information of China (English)

    Barbara André; Tom Vercauteren; Anna M Buchner; Murli Krishna; Nicholas Ayache; Michael B Wallace

    2012-01-01

    AIM: To support probe-based confocal laser endomicroscopy (pCLE) diagnosis by designing software for the automated classification of colonic polyps. METHODS: Intravenous fluorescein pCLE imaging of colorectal lesions was performed on patients undergoing screening and surveillance colonoscopies, followed by polypectomies. All resected specimens were reviewed by a reference gastrointestinal pathologist blinded to pCLE information. Histopathology was used as the criterion standard for the differentiation between neoplastic and non-neoplastic lesions. The pCLE video sequences, recorded for each polyp, were analyzed offline by 2 expert endoscopists who were blinded to the endoscopic characteristics and histopathology. These pCLE videos, along with their histopathology diagnosis, were used to train the automated classification software, which is a content-based image retrieval technique followed by k-nearest neighbor classification. The performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists was compared with that of the automated pCLE software classification. All evaluations were performed using leave-one-patient-out cross-validation to avoid bias. RESULTS: Colorectal lesions (135) were imaged in 71 patients. Based on histopathology, 93 of these 135 lesions were neoplastic and 42 were non-neoplastic. The study found no statistical significance for the difference between the performance of automated pCLE software classification (accuracy 89.6%, sensitivity 92.5%, specificity 83.3%, using leave-one-patient-out cross-validation) and the performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists (accuracy 89.6%, sensitivity 91.4%, specificity 85.7%). There was very low power (< 6%) to detect the observed differences. The 95% confidence intervals for equivalence testing were: -0.073 to 0.073 for accuracy, -0.068 to 0.089 for sensitivity and -0.18 to 0.13 for specificity. The classification software proposed in this study is
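    A sketch of the evaluation protocol with scikit-learn, using synthetic stand-ins for the retrieval-based video signatures (the paper's actual features come from content-based image retrieval):

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

      # Synthetic stand-ins: one signature per pCLE video, a binary label
      # (1 = neoplastic) and the patient each video came from.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(135, 32))
      y = rng.integers(0, 2, size=135)
      groups = rng.integers(0, 71, size=135)   # patient IDs

      # Leave-one-patient-out: all videos of a patient are held out together,
      # avoiding the bias of splitting one patient across train and test.
      clf = KNeighborsClassifier(n_neighbors=5)
      scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
      print("leave-one-patient-out accuracy:", scores.mean())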

  15. Proposal of a method for automatic video summary

    Directory of Open Access Journals (Sweden)

    Yendrys Blanco Rosabal

    2012-09-01

    Automatic video summarization, within digital image processing, currently a booming field of research, is one of the tools that automatically creates a short version composed of a subset of key frames that should contain as much of the original video's information as possible. This work aims at developing a simple method for automatic video summarization using statistical methods, such as histogram processing. The most significant part of the work is to demonstrate the creation of a video summary.
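    A minimal sketch of histogram-based key-frame selection of the kind described (the bin count and threshold are illustrative):

      import numpy as np

      def key_frames(frames, threshold=0.3):
          """Keep a frame whenever its gray-level histogram differs enough
          from the last kept key frame (L1 distance on normalized bins)."""
          selected = [0]
          ref, _ = np.histogram(frames[0], bins=64, range=(0, 255), density=True)
          for i in range(1, len(frames)):
              hist, _ = np.histogram(frames[i], bins=64, range=(0, 255), density=True)
              if np.abs(hist - ref).sum() > threshold:
                  selected.append(i)
                  ref = hist
          return selected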

  16. Automating Commercial Video Game Development using Computational Intelligence

    Directory of Open Access Journals (Sweden)

    Tse G. Tan

    2011-01-01

    Problem statement: The retail sales of computer and video games have grown enormously during the last few years, not just in the United States (US) but all over the world. This is the reason a lot of game developers and academic researchers have focused on game-related technologies, such as graphics, audio, physics and Artificial Intelligence (AI), with the goal of creating newer and more fun games. In recent years, there has been an increasing interest in game AI for producing intelligent game objects and characters that can carry out their tasks autonomously. Approach: The aim of this study is an attempt to create an autonomous intelligent controller to play the game with no human intervention. Our approach is to use a simple but powerful evolutionary algorithm called Evolution Strategies (ES) to evolve the connection weights and biases of feed-forward Artificial Neural Networks (ANN) and to examine its learning ability through computational experiments in a non-deterministic and dynamic environment, the well-known arcade game called Ms. Pac-man. The resulting algorithm is referred to as an Evolution Strategies Neural Network, or ESNet. Results: The comparison of ESNet with two random systems, Random Direction (RandDir) and Random Neural Network (RandNet), yields promising results. The contribution of this work also focuses on the comparison between ESNet variants with different mutation probabilities. The results show that ESNet with a high mutation probability records higher mean scores than RandDir, RandNet and ESNet with a low mutation probability. Conclusion: Overall, the proposed algorithm has a very good performance with a high probability of automatically generating successful game AI controllers for the video game.
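    A toy sketch of the ESNet idea: a (1+1)-Evolution Strategy mutating the flat weight vector of a small feed-forward network. The game environment and the paper's exact ES variant are not shown; the `fitness` callable would run Ms. Pac-man episodes and return the score.

      import numpy as np

      def forward(weights, x, shapes):
          """Tiny feed-forward net: unpack a flat weight vector and apply
          tanh layers; shapes is a list of (n_in, n_out) per layer."""
          out, i = np.asarray(x, dtype=float), 0
          for n_in, n_out in shapes:
              W = weights[i:i + n_in * n_out].reshape(n_in, n_out)
              i += n_in * n_out
              b = weights[i:i + n_out]
              i += n_out
              out = np.tanh(out @ W + b)
          return out

      def evolve(fitness, shapes, sigma=0.1, generations=200, seed=0):
          """(1+1)-ES over connection weights and biases: mutate, keep the
          child if it scores at least as well as the parent."""
          rng = np.random.default_rng(seed)
          n = sum(a * b + b for a, b in shapes)
          parent = rng.normal(size=n)
          best = fitness(parent)
          for _ in range(generations):
              child = parent + sigma * rng.normal(size=n)
              score = fitness(child)
              if score >= best:
                  parent, best = child, score
          return parent, best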

  17. An Automated Algorithm for Approximation of Temporal Video Data Using Linear Bézier Fitting

    Directory of Open Access Journals (Sweden)

    Murtaza Ali Khan

    2010-05-01

    Full Text Available This paper presents an efficient method for approximation of temporal video data using linear Bezierfitting. For a given sequence of frames, the proposed method estimates the intensity variations of eachpixel in temporal dimension using linear Bezier fitting in Euclidean space. Fitting of each segmentensures upper bound of specified mean squared error. Break and fit criteria is employed to minimize thenumber of segments required to fit the data. The proposed method is well suitable for lossy compressionof temporal video data and automates the fitting process of each pixel. Experimental results show that theproposed method yields good results both in terms of objective and subjective quality measurementparameters without causing any blocking artifacts.

  18. Automated pipelines for spectroscopic analysis

    Science.gov (United States)

    Allende Prieto, C.

    2016-09-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field of view and multi-object spectrographs. As a result, automated data processing is taking on ever increasing relevance, and the concept is being applied to many more areas, from targeting to analysis. In this paper, I provide a quick overview of recent, ongoing, and upcoming spectroscopic surveys, and the strategies adopted in their automated analysis pipelines.

  19. Reload safety analysis automation tools

    International Nuclear Information System (INIS)

    Performing core physics calculations for the sake of reload safety analysis is a very demanding and time-consuming process. This process generally begins with the preparation of libraries for the core physics code using a lattice code. The next step involves creating a very large set of calculations with the core physics code. Lastly, the results of the calculations must be interpreted, correctly applying uncertainties and checking whether applicable limits are satisfied. Such a procedure requires three specialized experts. One must understand the lattice code in order to correctly calculate and interpret its results. The next expert must have a good understanding of the physics code in order to create libraries from the lattice code results and to correctly define all the calculations involved. The third expert must have a deep knowledge of the power plant and the reload safety analysis procedure in order to verify that all the necessary calculations were performed. Such a procedure involves many steps and is very time-consuming. At ÚJV Řež, a.s., we have developed a set of tools which can be used to automate and simplify the whole process of performing reload safety analysis. Our application QUADRIGA automates lattice code calculations for library preparation. It removes user interaction with the lattice code and reduces the user's task to defining fuel pin types, enrichments, assembly maps and operational parameters, all through a user-friendly GUI. The second part of the reload safety analysis calculations is done by CycleKit, a code which is linked with our core physics code ANDREA. Through CycleKit, large sets of calculations with complicated interdependencies can be performed using simple and convenient notation. CycleKit automates the interaction with ANDREA, organizes all the calculations, collects the results, performs limit verification and displays the output in clickable HTML format. Using this set of tools for reload safety analysis simplifies

  20. A High End Building Automation and Online Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    Iyer Adith Nagarajan

    2015-02-01

    Full Text Available This paper deals with the design and implementation of a building automation and security system which facilitates a healthy, flexible, comfortable and secure environment for the residents. The design incorporates a SIRC (Sony Infrared Remote Control) protocol based infrared remote controller for the wireless operation and control of electrical appliances. Alternatively, the appliances are monitored and controlled via a laptop using a GUI (Graphical User Interface) application built in C#. Apart from automation, this paper also focuses on indoor security. Multiple PIR (Pyroelectric Infrared) sensors are placed within the area under surveillance to detect any intruder. A web camera used to record the video footage is mounted on the shaft of a servo motor to enable angular motion. Depending on which sensor has detected the motion, the ARM7 LPC2148 microcontroller provides appropriate PWM pulses to drive the servo motor, thus adjusting the position and orientation of the camera precisely. OpenCV libraries are used to record a video feed of 5 seconds at 30 frames per second (fps). Video frames are embedded with a date and time stamp. The recorded video is compressed, saved to a predefined directory (for backup) and also uploaded to a specific remote location over the internet using Google Drive for instant access. The entire security system is automatic and does not need any human intervention.
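    As an illustration of the recording step described above, a minimal OpenCV sketch that captures a 5-second, 30 fps clip with a date and time overlay; the camera index, codec, and output file name are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch (assumed parameters): record a 5 s, 30 fps clip
# with a date/time stamp burned into each frame.
import datetime
import cv2

def record_clip(path="intrusion.avi", seconds=5, fps=30):
    cap = cv2.VideoCapture(0)                         # default webcam (assumed)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fourcc = cv2.VideoWriter_fourcc(*"XVID")          # compressed codec (assumed)
    out = cv2.VideoWriter(path, fourcc, fps, (width, height))
    for _ in range(seconds * fps):
        ok, frame = cap.read()
        if not ok:
            break
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        cv2.putText(frame, stamp, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (255, 255, 255), 2)          # embed date/time stamp
        out.write(frame)
    cap.release()
    out.release()
```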

  1. Video micro analysis in music therapy research

    DEFF Research Database (Denmark)

    Holck, Ulla; Oldfield, Amelia; Plahl, Christine

    2004-01-01

    Three music therapy researchers from three different countries who have recently completed their PhD theses will each briefly discuss the role of video analysis in their investigations. All three of these research projects have involved music therapy work with children, some of whom were...... and qualitative approaches to data collection. In addition, participants will be encouraged to reflect on what types of knowledge can be gained from video analyses and to explore the general relevance of video analysis in music therapy research....

  2. Glyph-Based Video Visualization for Semen Analysis

    KAUST Repository

    Duffy, Brian

    2015-08-01

    The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.

  3. PIZZARO: Forensic analysis and restoration of image and video data.

    Science.gov (United States)

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, thus fulfilling the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  5. Automated quantitative analysis for pneumoconiosis

    Science.gov (United States)

    Kondo, Hiroshi; Zhao, Bin; Mino, Masako

    1998-09-01

    Automated quantitative analysis for pneumoconiosis is presented. In this paper, Japanese standard radiographs of pneumoconiosis are categorized by measuring the area density and the number density of small rounded opacities. Furthermore, the opacities are classified by size and shape through measurement of the equivalent radius of each opacity. The proposed method includes a bi-level unsharp masking filter with a 1D uniform impulse response in order to eliminate undesired parts of the image, such as blood vessels and ribs in the chest X-ray. Fuzzy contrast enhancement is also introduced in this method for easy and exact detection of small rounded opacities. Many simulation examples show that the proposed method is more reliable than the former method.
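    A minimal sketch of the kind of 1D unsharp masking the abstract names, applied row-wise to attenuate smooth, elongated structures; the kernel length and gain are illustrative assumptions, and the paper's bi-level variant is not reproduced here.

```python
# Sketch of 1D unsharp masking with a uniform impulse response
# (kernel length and gain are assumed values for illustration).
import numpy as np
from scipy.ndimage import uniform_filter1d

def unsharp_rows(image, length=31, gain=1.5):
    """Sharpen each row: original plus an amplified high-pass residual."""
    img = image.astype(float)
    blurred = uniform_filter1d(img, size=length, axis=1)  # 1D uniform response
    return img + gain * (img - blurred)
```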

  6. AUTOMATED VIDEO IMAGE MORPHOMETRY OF THE CORNEAL ENDOTHELIUM

    NARCIS (Netherlands)

    SIERTSEMA, JV; LANDESZ, M; VANDENBROM, H; VANRIJ, G

    1993-01-01

    The central corneal endothelium of 13 eyes in 13 subjects was visualized with a non-contact specular microscope. This report describes the computer-assisted morphometric analysis of enhanced digitized images, using a direct input by means of a frame grabber. The output consisted of mean cell area, c

  7. Automation for System Safety Analysis

    Science.gov (United States)

    Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul

    2009-01-01

    This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  8. Parts-based detection of AK-47s for forensic video analysis

    OpenAIRE

    Jones, Justin

    2010-01-01

    Approved for public release; distribution is unlimited. Law enforcement, military personnel, and forensic analysts are increasingly reliant on imaging systems to perform in a hostile environment and require a robust method to efficiently locate objects of interest in videos and still images. Current approaches require a full-time operator to monitor a surveillance video or to sift a hard drive for suspicious content. In this thesis, we demonstrate the effectiveness of automated analysis tools...

  9. Video Game Control Dimensionality Analysis

    OpenAIRE

    Mustaquim, Moyen; Nyström, Tobias

    2014-01-01

    In this paper we have studied video game control dimensionality and its effects on the traditional way of interpreting difficulty and familiarity in games. The paper presents findings from a study of the control dimensionality of Xbox 360 console games. Multivariate statistical operations were performed on data collected from 83 different Xbox 360 games. It was found that the player's perceived level of familiarity and difficulty can be influenced by the game control ...

  10. AN HMM BASED ANALYSIS FRAMEWORK FOR SEMANTIC VIDEO EVENTS

    Institute of Scientific and Technical Information of China (English)

    You Junyong; Liu Guizhong; Zhang Yaxin

    2007-01-01

    Semantic video analysis plays an important role in the field of machine intelligence and pattern recognition. In this paper, based on the Hidden Markov Model (HMM), a semantic recognition framework for compressed videos is proposed to analyze video events according to six low-level features. After a detailed analysis of video events, the pattern of global motion and five features of the foreground (the principal parts of videos) are employed as the observations of the Hidden Markov Model to classify events in videos. The application of the proposed framework to several video event detection tasks demonstrates its promise for semantic video analysis.
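    To make the modeling step concrete, the sketch below trains one Gaussian HMM per event class on sequences of low-level feature vectors and classifies a new sequence by maximum likelihood; hmmlearn, the state count, and the six-dimensional feature layout are stand-in assumptions, not the paper's exact models.

```python
# Sketch: per-event Gaussian HMMs over low-level feature sequences
# (library, state count and feature layout are assumptions).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_event_models(sequences_by_event, n_states=4):
    """sequences_by_event: {event: list of (T_i, 6) feature arrays}."""
    models = {}
    for event, seqs in sequences_by_event.items():
        X = np.vstack(seqs)                  # stack sequences for training
        lengths = [len(s) for s in seqs]     # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[event] = m
    return models

def classify_event(models, seq):
    # Maximum-likelihood decision over the trained per-event models
    return max(models, key=lambda e: models[e].score(seq))
```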

  11. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning.

    Science.gov (United States)

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J

    2015-09-22

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics. PMID:26354123
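    The supervised-learning step might look like the following sketch, which fits a classifier to per-frame pose features; the random-forest choice and feature names are illustrative assumptions rather than the paper's published classifier.

```python
# Sketch: supervised classification of per-frame social behaviors
# (classifier choice and features are assumptions for illustration).
from sklearn.ensemble import RandomForestClassifier

def fit_behavior_classifier(X, y):
    """X: (n_frames, n_features) pose features, e.g. inter-mouse
    distance, relative orientation, speeds; y: per-frame labels
    such as 'attack', 'mounting', 'close investigation'."""
    clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
    clf.fit(X, y)
    return clf
```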

  12. Automated segmentation and tracking of non-rigid objects in time-lapse microscopy videos of polymorphonuclear neutrophils.

    Science.gov (United States)

    Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-02-01

    Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of several steps, starting from single-cell tracking based on a nearest-neighbor approach, followed by detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets, indicating a high accuracy for connecting the detected cells between different time points. PMID:25465844
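    The single-cell tracking stage based on a nearest-neighbor approach can be sketched as greedy matching of detected centroids between consecutive frames; the gating distance is an assumed parameter.

```python
# Sketch: greedy nearest-neighbor linking of cell centroids between
# two consecutive frames (gating distance is an assumed value).
import numpy as np
from scipy.spatial.distance import cdist

def link_frames(prev_pts, curr_pts, max_dist=25.0):
    """Return (i_prev, j_curr) matches within the distance gate."""
    if len(prev_pts) == 0 or len(curr_pts) == 0:
        return []
    d = cdist(prev_pts, curr_pts)             # pairwise distances
    matches = []
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] > max_dist:
            break
        matches.append((i, j))
        d[i, :] = np.inf                      # each detection used once
        d[:, j] = np.inf
    return matches
```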

  13. AN AUTOMATED ALGORITHM FOR APPROXIMATION OF TEMPORAL VIDEO DATA USING LINEAR BEZIER FITTING

    Directory of Open Access Journals (Sweden)

    Murtaza Ali Khan

    2010-05-01

    Full Text Available This paper presents an efficient method for approximation of temporal video data using linear Bezier fitting. For a given sequence of frames, the proposed method estimates the intensity variations of each pixel in the temporal dimension using linear Bezier fitting in Euclidean space. Fitting of each segment ensures an upper bound of specified mean squared error. A break-and-fit criterion is employed to minimize the number of segments required to fit the data. The proposed method is well suited for lossy compression of temporal video data and automates the fitting process of each pixel. Experimental results show that the proposed method yields good results in terms of both objective and subjective quality measurement parameters without causing any blocking artifacts.
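    A minimal sketch of the break-and-fit idea: fit a straight line (a linear Bezier segment is a straight line between two control points) to a pixel's temporal intensities, and split the segment whenever the fit exceeds the mean-squared-error bound. The midpoint split and least-squares fit are simplifying assumptions about the paper's exact procedure.

```python
# Sketch: break-and-fit linear approximation of one pixel's temporal
# intensity curve under an MSE bound (split strategy is assumed).
import numpy as np

def fit_segments(y, mse_max=4.0):
    """Return sorted (start, end) index pairs covering the trace y."""
    def segment_mse(a, b):
        t = np.arange(a, b + 1)
        coef = np.polyfit(t, y[a:b + 1], 1)       # least-squares line
        return np.mean((y[a:b + 1] - np.polyval(coef, t)) ** 2)

    segments, stack = [], [(0, len(y) - 1)]
    while stack:
        a, b = stack.pop()
        if b - a <= 1 or segment_mse(a, b) <= mse_max:
            segments.append((a, b))               # segment meets the bound
        else:
            mid = (a + b) // 2                    # break and refit halves
            stack.extend([(a, mid), (mid, b)])
    return sorted(segments)
```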

  14. Automated Pipelines for Spectroscopic Analysis

    CERN Document Server

    Prieto, Carlos Allende

    2016-01-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field-of-view and multi-object spectrographs. As a result, automated data processing is taking on ever-increasing relevance, and the concept is being applied to many more areas, from targeting to analysis. In this paper, I provide a quick overview...

  15. Video Game Characters. Theory and Analysis

    Directory of Open Access Journals (Sweden)

    Felix Schröter

    2014-06-01

    Full Text Available This essay develops a method for the analysis of video game characters based on a theoretical understanding of their medium-specific representation and the mental processes involved in their intersubjective construction by video game players. We propose to distinguish, first, between narration, simulation, and communication as three modes of representation particularly salient for contemporary video games and the characters they represent, second, between narrative, ludic, and social experience as three ways in which players perceive video game characters and their representations, and, third, between three dimensions of video game characters as ‘intersubjective constructs’, which usually are to be analyzed not only as fictional beings with certain diegetic properties but also as game pieces with certain ludic properties and, in those cases in which they function as avatars in the social space of a multiplayer game, as representations of other players. Having established these basic distinctions, we proceed to analyze their realization and interrelation by reference to the character of Martin Walker from the third-person shooter Spec Ops: The Line (Yager Development 2012), the highly customizable player-controlled characters from the role-playing game The Elder Scrolls V: Skyrim (Bethesda 2011), and the complex multidimensional characters in the massively multiplayer online role-playing game Star Wars: The Old Republic (BioWare 2011-2014).

  16. Automated UAV-based video exploitation using service oriented architecture framework

    Science.gov (United States)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  17. Feasibility Analysis of Crane Automation

    Institute of Scientific and Technical Information of China (English)

    DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing

    2006-01-01

    This paper summarizes the modeling methods, open-loop control and closed-loop control techniques of various forms of cranes, worldwide, and discusses their feasibilities and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applied modeling methods and feasible control techniques and demonstrate the feasibilities of crane automation.

  18. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems, which, combined with automation, underpin the emerging concept of the "smart grid". The book pairs theoretical concepts with real-world applications and MATLAB exercises.

  19. Automated analysis of 3D echocardiography

    NARCIS (Netherlands)

    Stralen, Marijn van

    2009-01-01

    In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming and is associated with inter-observer and inter-institutional variability. Methods for reconstruction o

  20. An Ethnografic Approach to Video Analysis

    DEFF Research Database (Denmark)

    Holck, Ulla

    2007-01-01

    , followed by a discussion of their significance for the therapeutic interaction. Literature: Holck, U, Oldfield, A. and Plahl, C. (2005) Video Micro Analysis in Music Therapy Research, a Research Workshop. In: Aldridge, D., Fachner, J. & Erkkilä, J. (Eds) Many Faces of Music Therapy - Proceedings of the 6th......: Methods, Techniques and Applications in Music Therapy for Music Therapy Clinicians, Educators, Researchers and Students. London: Jessica Kingsley....

  1. Automated Technology for Verificiation and Analysis

    DEFF Research Database (Denmark)

    This volume contains the papers presented at the 7th International Symposium on Automated Technology for Verification and Analysis held during October 13-16 in Macao SAR, China. The primary objective of the ATVA conferences remains the same: to exchange and promote the latest advances of state-of-the-art...... research on theoretical and practical aspects of automated analysis, verification, and synthesis. Among 74 research papers and 10 tool papers submitted to ATVA 2009, the Program Committee accepted 23 as regular papers and 3 as tool papers. In all, 33 experts from 17 countries worked hard to make sure...

  2. Automated Non-invasive Video-Microscopy of Oyster Spat Heart Rate during Acute Temperature Change: Impact of Acclimation Temperature

    Science.gov (United States)

    Domnik, Nicolle J.; Polymeropoulos, Elias T.; Elliott, Nicholas G.; Frappell, Peter B.; Fisher, John T.

    2016-01-01

    We developed an automated, non-invasive method to detect real-time cardiac contraction in post-larval (1.1–1.7 mm length), juvenile oysters (i.e., oyster spat) via a fiber-optic trans-illumination system. The system is housed within a temperature-controlled chamber and video microscopy imaging of the heart was coupled with video edge-detection to measure cardiac contraction, inter-beat interval, and heart rate (HR). We used the method to address the hypothesis that cool acclimation (10°C vs. 22°C—Ta10 or Ta22, respectively; each n = 8) would preserve cardiac phenotype (assessed via HR variability, HRV analysis and maintained cardiac activity) during acute temperature changes. The temperature ramp (TR) protocol comprised 2°C steps (10 min/experimental temperature, Texp) from 22°C to 10°C to 22°C. HR was related to Texp in both acclimation groups. Spat became asystolic at low temperatures, particularly Ta22 spat (Ta22: 8/8 vs. Ta10: 3/8 asystolic at Texp = 10°C). The rate of HR decrease during cooling was less in Ta10 vs. Ta22 spat when asystole was included in analysis (P = 0.026). Time-domain HRV was inversely related to temperature and elevated in Ta10 vs. Ta22 spat (P < 0.001), whereas a lack of defined peaks in spectral density precluded frequency-domain analysis. Application of the method during an acute cooling challenge revealed that cool temperature acclimation preserved active cardiac contraction in oyster spat and increased time-domain HRV responses, whereas warm acclimation enhanced asystole. These physiologic changes highlight the need for studies of mechanisms, and have translational potential for oyster aquaculture practices.
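    A sketch of how heart rate and a time-domain HRV measure could be derived from the edge-detection trace: detect contraction peaks, form inter-beat intervals, and compute SDNN (the standard deviation of the intervals). The peak-detection settings are illustrative assumptions, not the authors' parameters.

```python
# Sketch: HR and time-domain HRV (SDNN) from a contraction trace
# (peak-detection settings are assumed values).
import numpy as np
from scipy.signal import find_peaks

def hr_and_sdnn(edge_trace, fps):
    """edge_trace: 1-D cardiac edge-position signal sampled at fps."""
    peaks, _ = find_peaks(edge_trace, distance=int(0.3 * fps))
    ibi = np.diff(peaks) / fps          # inter-beat intervals in seconds
    hr = 60.0 / np.mean(ibi)            # beats per minute
    sdnn = np.std(ibi)                  # time-domain HRV, in seconds
    return hr, sdnn
```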

  3. Automation of the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    1985-01-01

    A study is reported of the feasibility of using a multi-jointed general-purpose robot for the automated analysis of moisture, volatile matter, ash and total post-combustion sulfur in coal and coke. The results obtained with an automated system are compared with those of conventional manual methods. The design of the robot hand and the safety measures provided are now both fully satisfactory, and the analytic values obtained exhibit little scatter. It is concluded that the use of this robot system results in a better working environment and in considerable labour saving. Applications to other tasks are under development.

  4. High-Speed Video Analysis of Damped Harmonic Motion

    Science.gov (United States)

    Poonyawatpornkul, J.; Wattanakasiwich, P.

    2013-01-01

    In this paper, we acquire and analyse high-speed videos of a spring-mass system oscillating in glycerin at different temperatures. Three cases of damped harmonic oscillation are investigated and analysed by using high-speed video at a rate of 120 frames s^-1 and Tracker Video Analysis (Tracker) software. We present empirical data for…

  5. Video analysis in youth volleyball team

    OpenAIRE

    Parisi, Fabio; Raiola, Gaetano

    2014-01-01

    The aim of the study was to use video analysis in training to improve the performance of young athletes. Participants will be divided into two teams that will play the same “Under-21 championship”, but with a different average age (Team A: average age 14.41±1.66; Team B: average age 18.94±1.59). Twelve matches (over 4 months) of both teams will be videotaped. Statistical data for each team will be extrapolated from the recordings and compared in order to correlate skills data. Only te...

  6. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  7. Automated assessment of Pavlovian conditioned freezing and shock reactivity in mice using the VideoFreeze system

    Directory of Open Access Journals (Sweden)

    Stephan G Anagnostaras

    2010-09-01

    Full Text Available The Pavlovian conditioned freezing paradigm has become a prominent mouse and rat model of learning and memory, as well as of pathological fear. Due to its efficiency, reproducibility, and well-defined neurobiology, the paradigm has become widely adopted in large-scale genetic and pharmacological screens. However, one major shortcoming of the use of freezing behavior has been that it has required the use of tedious hand scoring, or a variety of proprietary automated methods that are often poorly validated or difficult to obtain and implement. Here we report an extensive validation of the Video Freeze system in mice, a turn-key all-inclusive system for fear conditioning in small animals. Using digital video and near-infrared lighting, the system achieved outstanding performance in scoring both freezing and movement. Given the large-scale adoption of the conditioned freezing paradigm, we encourage similar validation of other automated systems for scoring freezing, or other behaviors.
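    The usual principle behind automated freezing scores is a per-frame motion index that is thresholded and filtered for sustained immobility; the sketch below is a generic version of that idea with assumed threshold and bout-length values, not the VideoFreeze algorithm itself.

```python
# Sketch: fraction of time frozen from a grayscale frame stack
# (motion threshold and minimum bout length are assumed values).
import numpy as np

def freezing_fraction(frames, motion_thresh=50.0, fps=30, min_bout_s=1.0):
    """frames: (T, H, W) grayscale stack."""
    diffs = np.abs(np.diff(frames.astype(np.int32), axis=0))
    motion = diffs.sum(axis=(1, 2))        # one motion index per frame pair
    frozen = motion < motion_thresh
    min_len = int(min_bout_s * fps)
    total = run = 0
    for f in np.append(frozen, False):     # sentinel flushes the last run
        if f:
            run += 1
        else:
            if run >= min_len:             # only sustained immobility counts
                total += run
            run = 0
    return total / len(frozen)
```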

  8. Text Readability within Video Retrieval Applications: A Study On CCTV Analysis

    Directory of Open Access Journals (Sweden)

    Neil Newbold

    2010-04-01

    Full Text Available The indexing and retrieval of video footage requires appropriate annotation of the video for search queries to be able to provide useful results. This paper discusses an approach to automating video annotation based on an expanded consideration of readability that covers both text factors and cognitive factors. The eventual aim is the selection of ontological elements that support wider ranges of user queries through limited sets of annotations derived automatically from the analysis of expert annotations of prior content. We describe how considerations of readability influence the approach taken to the ontology extraction components of the system in development, and the automatic population of a CCTV ontology from analysis of expert transcripts of video footage. The semantic content of the expert transcripts is considered through theories of readability analysis and terminology extraction to provide knowledge-based video retrieval. Using readability studies to improve the text, we suggest that the semantic content can be made more accessible, which improves the terminology extraction process that highlights the key concepts. This information can be used to determine relationships in the text, as a proxy for relationships between video objects, with strong potential for interlinkage.

  9. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  10. Video Analysis and Repackaging for Distance Education

    CERN Document Server

    Ram, A Ranjith

    2012-01-01

    This book presents various video processing methodologies that are useful for distance education. The motivation is to devise new multimedia technologies that are suitable for better representation of instructional videos by exploiting the temporal redundancies present in the original video. This solves many of the issues related to the memory and bandwidth limitation of lecture videos. The various methods described in the book focus on a key-frame based approach which is used to time-shrink, repackage and retarget instructional videos. All the methods need a preprocessing step of shot detection.

  11. Proximate analysis by automated thermogravimetry

    Energy Technology Data Exchange (ETDEWEB)

    Elder, J.P.

    1983-05-01

    A study has been made of the use of the Perkin-Elmer thermogravimetric instrument TGS-2, under the control of the System 4 microprocessor for the automatic proximate analysis of solid fossil fuels and related matter. The programs developed are simple to operate, and do not require detailed temperature calibration of the instrumental system. They have been tested with coals of varying rank, biomass samples and Devonian oil shales all of which were of special importance to the State of Kentucky. Precise, accurate data conforming to ASTM specifications were obtained. The simplicity of the technique suggests that it may complement the classical ASTM method and could be used when this latter procedure cannot be employed. However, its adoption as a standardized method must await the development of statistical data resulting from interlaboratory testing on a variety of fossil fuels. (9 refs.)

  12. Flux-P: Automating Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Birgitta E. Ebert

    2012-11-01

    Full Text Available Quantitative knowledge of intracellular fluxes in metabolic networks is invaluable for inferring metabolic system behavior and the design principles of biological systems. However, intracellular reaction rates often cannot be calculated directly but have to be estimated; for instance, via 13C-based metabolic flux analysis, a model-based interpretation of stable carbon isotope patterns in intermediates of metabolism. Existing software such as FiatFlux, OpenFLUX or 13CFLUX supports experts in this complex analysis, but requires several steps that have to be carried out manually, hence restricting the use of this software for data interpretation to a rather small number of experiments. In this paper, we present Flux-P as an approach to automate and standardize 13C-based metabolic flux analysis, using the Bio-jETI workflow framework. Exemplarily based on the FiatFlux software, it demonstrates how services can be created that carry out the different analysis steps autonomously and how these can subsequently be assembled into software workflows that perform automated, high-throughput intracellular flux analysis of high quality and reproducibility. Besides significant acceleration and standardization of the data analysis, the agile workflow-based realization supports flexible changes of the analysis workflows on the user level, making it easy to perform custom analyses.

  13. Descriptive analysis of YouTube music therapy videos.

    Science.gov (United States)

    Gooding, Lori F; Gregory, Dianne

    2011-01-01

    The purpose of this study was to conduct a descriptive analysis of music therapy-related videos on YouTube. Preliminary searches using the keywords music therapy, music therapy session, and "music therapy session" resulted in listings of 5000, 767, and 59 videos respectively. The narrowed down listing of 59 videos was divided between two investigators and reviewed in order to determine their relationship to actual music therapy practice. A total of 32 videos were determined to be depictions of music therapy sessions. These videos were analyzed using a 16-item investigator-created rubric that examined both video specific information and therapy specific information. Results of the analysis indicated that audio and visual quality was adequate, while narrative descriptions and identification information were ineffective in the majority of the videos. The top 5 videos (based on the highest number of viewings in the sample) were selected for further analysis in order to investigate demonstration of the Professional Level of Practice Competencies set forth in the American Music Therapy Association (AMTA) Professional Competencies (AMTA, 2008). Four of the five videos met basic competency criteria, with the quality of the fifth video precluding evaluation of content. Of particular interest is the fact that none of the videos included credentialing information. Results of this study suggest the need to consider ways to ensure accurate dissemination of music therapy-related information in the YouTube environment, ethical standards when posting music therapy session videos, and the possibility of creating AMTA standards for posting music therapy related video.

  14. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...... for MPEG-2 and H.264/AVC....

  15. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-01-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting…

  16. A Survey on Video-based Vehicle Behavior Analysis Algorithms

    Directory of Open Access Journals (Sweden)

    Jian Wu

    2012-06-01

    Full Text Available Analysis of vehicle behavior mainly involves analyzing and identifying vehicles' motion patterns and describing them in natural language. Analyzing and describing vehicle behavior in a complex scene is a considerable challenge. This paper first reviews the development history of intelligent transportation systems and vehicle behavior analysis, then conducts an in-depth analysis of the current state of vehicle behavior analysis in terms of video processing, video analysis and video understanding, summarizes the achieved results and the key technical problems, and finally discusses prospects for the future development of vehicle behavior analysis.

  17. Videos

    OpenAIRE

    Cheng, Xi En

    2014-01-01

    These are videos in which the results are overlaid on the original from-camera videos. Please turn to the section 3.2 and the legend of Fig. 7b for the explanation of the inset(s) of "MovieS1" and "MovieS2".

  18. Automating Risk Analysis of Software Design Models

    Directory of Open Access Journals (Sweden)

    Maxime Frydman

    2014-01-01

    Full Text Available The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling, two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and suggest mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  19. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit an inherent long-range dependency, that is, fractal, property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. The multifractal spectra of the frame-size video traces showed that a higher compression ratio produces broader and less regular MF spectra, indicating a more strongly multifractal nature and the existence of additive components in video traces. Considering individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frame types on the whole MF spectrum. Since compressed video occupies a major part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a derived MF spectrum of the observed signal it is possible to recognize and extract parts of the signal which are characterized by particular values of multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.
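    One standard check for the long-range dependence the abstract reports is the aggregated-variance estimate of the Hurst parameter, sketched below on a frame-size trace; the block sizes are illustrative, and this is a simpler diagnostic than a full multifractal spectrum.

```python
# Sketch: aggregated-variance Hurst estimate for a frame-size trace.
# For long-range dependent series, Var(X^(m)) ~ m**(2H - 2).
import numpy as np

def hurst_aggregated_variance(trace, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    log_m, log_v = [], []
    for m in block_sizes:
        n = len(trace) // m
        blocks = np.asarray(trace[:n * m], float).reshape(n, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(np.var(blocks)))
    slope = np.polyfit(log_m, log_v, 1)[0]   # slope = 2H - 2
    return 1.0 + slope / 2.0
```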

  20. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    Science.gov (United States)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  1. Two video analysis applications using foreground/background segmentation

    OpenAIRE

    Zivkovic, Z.; Petkovic, M; Mierlo, van, B.C.; Keulen, van, H.; Heijden, van der, RW Rob; Jonker, W.; Rijnierse, E.

    2003-01-01

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with the domain knowledge are often enough to extract the needed high-level semantics from the video material. In this paper we present two automatic systems for video analysis and indexing. In both sy...
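    For a concrete instance of foreground/background segmentation, OpenCV's MOG2 background subtractor (an adaptive Gaussian-mixture model) can produce a per-frame foreground mask; the input file name is an assumed placeholder, and the papers' own segmentation methods may differ.

```python
# Sketch: per-frame foreground masks via Gaussian-mixture background
# subtraction (input path is an assumed placeholder).
import cv2

cap = cv2.VideoCapture("input.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)     # 255 = foreground, 127 = shadow
cap.release()
```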

  2. Automated Radiochemical Separation, Analysis, and Sensing

    International Nuclear Information System (INIS)

    Chapter 14 for the 2nd edition of the Handbook of Radioactivity Analysis. The techniques and examples described in this chapter demonstrate that modern fluidic techniques and instrumentation can be used to develop automated radiochemical separation workstations. In many applications, these can be mechanically simple and key parameters can be controlled from software. If desired, many of the fluidic components and solution can be located remotely from the radioactive samples and other hot sample processing zones. There are many issues to address in developing automated radiochemical separation that perform reliably time after time in unattended operation. These are associated primarily with the separation and analytical chemistry aspects of the process. The relevant issues include the selectivity of the separation, decontamination factors, matrix effects, and recoveries from the separation column. In addition, flow rate effects, column lifetimes, carryover from one sample to another, and sample throughput must be considered. Nevertheless, successful approaches for addressing these issues have been developed. Radiochemical analysis is required not only for processing nuclear waste samples in the laboratory, but also for at-site or in situ applications. Monitors for nuclear waste processing operations represent an at-site application where continuous unattended monitoring is required to assure effective process radiochemical separations that produce waste streams that qualify for conversion to stable waste forms. Radionuclide sensors for water monitoring and long term stewardship represent an application where at-site or in situ measurements will be most effective. Automated radiochemical analyzers and sensors have been developed that demonstrate that radiochemical analysis beyond the analytical laboratory is both possible and practical

  3. NEW TECHNIQUES USED IN AUTOMATED TEXT ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. Istrate

    2010-12-01

    Full Text Available Automated analysis of natural language texts is one of the most important knowledge discovery tasks for any organization. According to Gartner Group, almost 90% of the knowledge available in an organization today is dispersed throughout piles of documents buried within unstructured text. Analyzing huge volumes of textual information is often involved in making informed and correct business decisions. Traditional analysis methods based on statistics fail to help in processing unstructured texts, and society is in search of new technologies for text analysis. There exist a variety of approaches to the analysis of natural language texts, but most of them do not provide results that could be successfully applied in practice. This article concentrates on recent ideas and practical implementations in this area.

  4. APSAS; an Automated Particle Size Analysis System

    Science.gov (United States)

    Poppe, Lawrence J.; Eliason, A.H.; Fredericks, J.J.

    1985-01-01

    The Automated Particle Size Analysis System integrates a settling tube and an electroresistance multichannel particle-size analyzer (Coulter Counter) with a Pro-Comp/gg microcomputer and a Hewlett Packard 2100 MX (HP 2100 MX) minicomputer. This system and its associated software digitize the raw sediment grain-size data, combine the coarse- and fine-fraction data into complete grain-size distributions, perform method-of-moments and inclusive graphic statistics, verbally classify the sediment, generate histogram and cumulative frequency plots, and transfer the results into a data-retrieval system. This system saves time and labor and affords greater reliability, resolution, and reproducibility than conventional methods do.
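    The method-of-moments step reduces to weighted moments of the grain-size distribution in phi units, as in this sketch; the input layout (phi class midpoints plus frequency weights) is an assumption about the system's data.

```python
# Sketch: method-of-moments grain-size statistics in phi units
# (input layout is assumed: class midpoints and frequency weights).
import numpy as np

def moment_statistics(phi_mid, weights):
    phi_mid = np.asarray(phi_mid, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    mean = np.sum(w * phi_mid)
    sd = np.sqrt(np.sum(w * (phi_mid - mean) ** 2))    # sorting
    skew = np.sum(w * (phi_mid - mean) ** 3) / sd ** 3
    kurt = np.sum(w * (phi_mid - mean) ** 4) / sd ** 4
    return mean, sd, skew, kurt
```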

  5. High-Definition Video Streams Analysis, Modeling, and Prediction

    Directory of Open Access Journals (Sweden)

    Abdel-Karim Al-Tamimi

    2012-01-01

    Full Text Available High-definition video streams' unique statistical characteristics and their high bandwidth requirements are considered to be a challenge in both network scheduling and resource allocation fields. In this paper, we introduce an innovative way to model and predict high-definition (HD video traces encoded with H.264/AVC encoding standard. Our results are based on our compilation of over 50 HD video traces. We show that our model, simplified seasonal ARIMA (SAM, provides an accurate representation for HD videos, and it provides significant improvements in prediction accuracy. Such accuracy is vital to provide better dynamic resource allocation for video traffic. In addition, we provide a statistical analysis of HD videos, including both factor and cluster analysis to support a better understanding of video stream workload characteristics and their impact on network traffic. We discuss our methodology to collect and encode our collection of HD video traces. Our video collection, results, and tools are available for the research community.
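    As a hedged illustration of seasonal ARIMA modeling of a frame-size trace, the sketch below uses statsmodels as a stand-in; the (p,d,q)(P,D,Q,s) orders and the GOP-length season are assumptions for the example, not the published SAM orders.

```python
# Sketch: seasonal ARIMA fit and one-season forecast for a frame-size
# trace (model orders and season length are assumed values).
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_forecast(frame_sizes, season=16):
    model = SARIMAX(frame_sizes,
                    order=(1, 0, 1),                   # (p, d, q)
                    seasonal_order=(1, 0, 1, season))  # (P, D, Q, s)
    fitted = model.fit(disp=False)
    return fitted.forecast(steps=season)               # next season's sizes
```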

  6. Video analysis applied to volleyball didactics to improve sport skills

    OpenAIRE

    Raiola, Gaetano; Parisi, Fabio; Giugno, Ylenia; Di Tore, Pio Alfredo

    2013-01-01

    The feedback method is increasingly used in learning new skills and improving performance. Recent research, however, showed that the more objective and quantitative the feedback is, the greater its effect on performance. Video analysis, which is the analysis of sports performance by watching video, is used primarily to quantify athletes' performance through notational analysis. It may be useful to combine the quantitative and qualitative analysis of the single ges...

  7. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...... for MPEG-2 and H.264/AVC....

  8. Automated Analysis of Security in Networking Systems

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    2004-01-01

    It has for a long time been a challenge to build secure networking systems. One way to counter this problem is to provide developers of software applications for networking systems with easy-to-use tools that can check security properties before the applications ever reach the market. These tools...... will both help raise the general level of awareness of the problems and prevent the most basic flaws from occurring. This thesis contributes to the development of such tools. Networking systems typically try to attain secure communication by applying standard cryptographic techniques. In this thesis...... such networking systems are modelled in the process calculus LySa. On top of this programming language based formalism an analysis is developed, which relies on techniques from data and control flow analysis. These are techniques that can be fully automated, which makes them an ideal basis for tools targeted at non...

  9. Automated Analysis, Classification, and Display of Waveforms

    Science.gov (United States)

    Kwan, Chiman; Xu, Roger; Mayhew, David; Zhang, Frank; Zide, Alan; Bonggren, Jeff

    2004-01-01

    A computer program partly automates the analysis, classification, and display of waveforms represented by digital samples. In the original application for which the program was developed, the raw waveform data to be analyzed by the program are acquired from space-shuttle auxiliary power units (APUs) at a sampling rate of 100 Hz. The program could also be modified for application to other waveforms -- for example, electrocardiograms. The program begins by performing principal-component analysis (PCA) of 50 normal-mode APU waveforms. Each waveform is segmented. A covariance matrix is formed by use of the segmented waveforms. Three eigenvectors corresponding to three principal components are calculated. To generate features, each waveform is then projected onto the eigenvectors. These features are displayed on a three-dimensional diagram, facilitating the visualization of the trend of APU operations.
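    The feature-generation step maps directly onto a standard PCA projection, as in the sketch below with scikit-learn as a stand-in; array shapes are assumptions.

```python
# Sketch: project segmented waveforms onto three principal components
# for the 3-D feature display (scikit-learn used as a stand-in).
import numpy as np
from sklearn.decomposition import PCA

def waveform_features(waveforms):
    """waveforms: (n_waveforms, n_samples) segmented normal-mode data."""
    pca = PCA(n_components=3)
    features = pca.fit_transform(np.asarray(waveforms, float))
    return pca, features                 # features: (n_waveforms, 3)
```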

  10. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as prior work on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of both the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  11. Correlation structure analysis for distributed video compression over wireless video sensor networks

    Science.gov (United States)

    He, Zhihai; Chen, Xi

    2006-01-01

    From the information-theoretic perspective, as stated by the Wyner-Ziv theorem, the distributed source encoder does not need any knowledge about its side information in achieving the R-D performance limit. However, from the system design and performance analysis perspective, correlation modeling plays an important role in analysis, control, and optimization of the R-D behavior of Wyner-Ziv video coding. In this work, we observe that videos captured from a wireless video sensor network (WVSN) are uniquely correlated under the multi-view geometry. We propose to utilize this computer vision principle, as well as other existing information, which is already available or can be easily obtained from the encoder, to estimate the source correlation structure. The source correlation determines the R-D behavior of the Wyner-Ziv encoder, and provides useful information for rate control and performance optimization of the Wyner-Ziv encoder.

  12. Management issues in automated audit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, K.A.; Hochberg, J.G.; Wilhelmy, S.K.; McClary, J.F.; Christoph, G.G.

    1994-03-01

    This paper discusses management issues associated with the design and implementation of an automated audit analysis system that we use to detect security events. It gives the viewpoint of a team directly responsible for developing and managing such a system. We use Los Alamos National Laboratory's Network Anomaly Detection and Intrusion Reporter (NADIR) as a case in point. We examine issues encountered at Los Alamos, detail our solutions to them, and where appropriate suggest general solutions. After providing an introduction to NADIR, we explore four general management issues: cost-benefit questions, privacy considerations, legal issues, and system integrity. Our experiences are of general interest both to security professionals and to anyone who may wish to implement a similar system. While NADIR investigates security events, the methods used and the management issues are potentially applicable to a broad range of complex systems. These include those used to audit credit card transactions, medical care payments, and procurement systems.

  13. ASteCA - Automated Stellar Cluster Analysis

    CERN Document Server

    Perren, Gabriel I; Piatti, Andrés E

    2014-01-01

    We present ASteCA (Automated Stellar Cluster Analysis), a suite of tools designed to fully automate the standard tests applied on stellar clusters to determine their basic parameters. The set of functions included in the code make use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing through a statistical estimator its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with its uncertainties.

  14. An approach to automated chromosome analysis

    International Nuclear Information System (INIS)

    The methods of approach developed with a view to automatic processing of the different stages of chromosome analysis are described in this study, divided into three parts. Part 1 relates the study of automated selection of metaphase spreads, which operates a decision process in order to reject all the non-pertinent images and keep the good ones. This approach has been achieved by writing a simulation program that made it possible to establish the proper selection algorithms in order to design a kit of electronic logical units. Part 2 deals with the automatic processing of the morphological study of the chromosome complements in a metaphase: the metaphase photographs are processed by an optical-to-digital converter which extracts the image information and writes it out as a digital data set on a magnetic tape. For one metaphase image this data set includes some 200 000 grey values, encoded according to a 16, 32 or 64 grey-level scale, and is processed by a pattern recognition program isolating the chromosomes and investigating their characteristic features (arm tips, centromere areas), in order to get measurements equivalent to the lengths of the four arms. Part 3 studies a program of automated karyotyping by optimized pairing of human chromosomes. The data are derived from direct digitizing of the arm lengths by means of a BENSON digital reader. The program supplies: 1/ a list of the pairs, 2/ a graphic representation of the pairs so constituted according to their respective lengths and centromeric indexes, and 3/ another BENSON graphic drawing according to the author's own representation of the chromosomes, i.e. crosses with orthogonal arms, each branch being the accurate measurement of the corresponding chromosome arm. This conventionalized karyotype indicates on the last line the really abnormal or non-standard images unpaired by the program, which are of special interest for the biologist. (author)
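
    The optimized pairing of Part 3 is not specified in detail here, so the following Python sketch is an assumption-laden reading of it: chromosomes are described by total length and centromeric index, and the closest remaining pair is matched greedily, with leftovers flagged for the biologist.

```python
# Hedged sketch of pairing chromosomes by total length and centromeric
# index; the paper's exact optimization is not public here, so a greedy
# closest-pair matching is used as one plausible stand-in.
import numpy as np

def features(arms):
    """arms: (n, 2) array of (short, long) arm lengths per chromosome."""
    arms = np.asarray(arms, dtype=float)
    total = arms.sum(axis=1)
    centromeric_index = arms[:, 0] / total      # short arm / total length
    return np.column_stack([total, centromeric_index])

def pair_chromosomes(arms):
    f = features(arms)
    f = (f - f.mean(axis=0)) / f.std(axis=0)    # put both features on one scale
    d = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # no self-pairing
    pairs, remaining = [], set(range(len(f)))
    while len(remaining) > 1:
        idx = sorted(remaining)
        sub = d[np.ix_(idx, idx)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        pairs.append((idx[i], idx[j]))          # globally closest pair left
        remaining -= {idx[i], idx[j]}
    return pairs, sorted(remaining)             # leftovers: abnormal/unpaired
```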

  15. ASteCA: Automated Stellar Cluster Analysis

    Science.gov (United States)

    Perren, G. I.; Vázquez, R. A.; Piatti, A. E.

    2015-04-01

    We present the Automated Stellar Cluster Analysis package (ASteCA), a suite of tools designed to fully automate the standard tests applied on stellar clusters to determine their basic parameters. The set of functions included in the code make use of positional and photometric data to obtain precise and objective values for a given cluster's center coordinates, radius, luminosity function and integrated color magnitude, as well as characterizing through a statistical estimator its probability of being a true physical cluster rather than a random overdensity of field stars. ASteCA incorporates a Bayesian field star decontamination algorithm capable of assigning membership probabilities using photometric data alone. An isochrone fitting process based on the generation of synthetic clusters from theoretical isochrones and selection of the best fit through a genetic algorithm is also present, which allows ASteCA to provide accurate estimates for a cluster's metallicity, age, extinction and distance values along with its uncertainties. To validate the code we applied it on a large set of over 400 synthetic MASSCLEAN clusters with varying degrees of field star contamination as well as a smaller set of 20 observed Milky Way open clusters (Berkeley 7, Bochum 11, Czernik 26, Czernik 30, Haffner 11, Haffner 19, NGC 133, NGC 2236, NGC 2264, NGC 2324, NGC 2421, NGC 2627, NGC 6231, NGC 6383, NGC 6705, Ruprecht 1, Tombaugh 1, Trumpler 1, Trumpler 5 and Trumpler 14) studied in the literature. The results show that ASteCA is able to recover cluster parameters with an acceptable precision even for those clusters affected by substantial field star contamination. ASteCA is written in Python and is made available as an open source code which can be downloaded ready to be used from its official site.

  16. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more com

  17. Automation literature: A brief review and analysis

    Science.gov (United States)

    Smith, D.; Dieterly, D. L.

    1980-01-01

    This review establishes current thought and research positions that may allow for an improved capability to understand the impact of introducing automation to an existing system. The orientation was toward the type of studies which may provide some general insight into automation; specifically, the impact of automation on human performance and the resulting system performance. While an extensive number of articles were reviewed, only those that addressed the issue of automation and human performance were selected to be discussed. The literature is organized along two dimensions: time, Pre-1970, Post-1970; and type of approach, Engineering or Behavioral Science. The conclusions reached are not definitive, but do provide the initial stepping stones in an attempt to begin to bridge the concept of automation in a systematic progression.

  18. Segmentation and Tracking of Multiple Moving Objects for Intelligent Video Analysis

    Science.gov (United States)

    Xu, L.-Q.; Landabaso, J. L.; Lei, B.

    In recent years, there has been considerable interest in visual surveillance of a wide range of indoor and outdoor sites by various parties. This is manifested by the widespread and unabated deployment of CCTV cameras in public and private areas. In particular, the increasing connectivity of broadband wired and wireless IP networks, and the emergence of IP-CCTV systems with smart sensors, enabling centralised or distributed remote monitoring, have further fuelled this trend. It is not uncommon nowadays to see a bank of displays in an organisation showing the activities of dozens of surveillance sites simultaneously. However, the limitations and deficiencies, together with the costs associated with human operators in monitoring the overwhelming video sources, have created urgent demands for automated video analysis solutions. Indeed, the ability of a system to automatically analyse and interpret visual scenes is of increasing importance to decision making, offering enormous business opportunities in the sector of information and communications technologies.
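
    As a hedged illustration of the segmentation step that such automated video analysis pipelines typically build on (not the authors' specific method), the following sketch uses OpenCV's stock MOG2 background subtractor followed by blob extraction:

```python
# Illustrative sketch of a generic moving-object segmentation step:
# background subtraction followed by blob extraction. This is not the
# authors' specific method; it uses OpenCV's stock MOG2 model.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("camera.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Suppress shadows (labelled 127 by MOG2) and noise before blob analysis.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 400:      # ignore tiny blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```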

  19. Logo recognition in videos: an automated brand analysis system

    OpenAIRE

    Duruş, Murat

    2008-01-01

    Every year companies spend a sizeable budget on marketing, a large portion of which is spent on advertisement of their product brands on TV broadcasts. These physical advertising artifacts are usually emblazoned with the companies' name, logo, and their trademark brand. Given these astronomical numbers, companies are extremely keen to verify that their brand has the level of visibility they expect for such expenditure. In other words advertisers, in particular, like to verify that their contr...

  20. Video Analysis in Multi-Intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Key, Everett Kiusan [Univ. of Washington, Seattle, WA (United States); Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra Lu [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warren, Will [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-27

    This project was performed by a recent high school graduate at Los Alamos National Laboratory (LANL). The goal of the Multi-intelligence (MINT) project is to determine the state of a facility from multiple data streams. The data streams are indirect observations. The researcher uses DARHT (Dual-Axis Radiographic Hydrodynamic Test Facility) as a proof of concept. In summary, videos from the DARHT facility contain a rich amount of information. The distribution of car activity can inform us about the state of the facility. Counting large vehicles shows promise as another feature for identifying the state of operations. Signal processing techniques are limited by the low resolution and compression of the videos. We are working on integrating these features with features obtained from other data streams to contribute to the MINT project. Future work can pursue other observations, such as when the gate is functioning or non-functioning.

  1. An intelligent crowdsourcing system for forensic analysis of surveillance video

    Science.gov (United States)

    Tahboub, Khalid; Gadgil, Neeraj; Ribera, Javier; Delgado, Blanca; Delp, Edward J.

    2015-03-01

    Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows and scene illumination changes. In recent years, crowdsourcing has become an effective tool that utilizes human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for forensic analysis of surveillance video, including video recorded as part of search and rescue missions and large-scale investigation tasks. We describe a method to enhance crowdsourcing by incorporating human detection, re-identification and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish the crowd members based on their ability, experience and performance record. Our proposed system operates in an autonomous fashion and produces a final output of the crowdsourcing analysis consisting of a set of video segments detailing the events of interest as one storyline.

  2. The impact of online video lecture recordings and automated feedback on student performance

    NARCIS (Netherlands)

    Wieling, M. B.; Hofman, W. H. A.

    2010-01-01

    To what extent a blended learning configuration of face-to-face lectures, online on-demand video recordings of the face-to-face lectures and the offering of online quizzes with appropriate feedback has an additional positive impact on the performance of these students compared to the traditional fac

  3. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure: first, small image fragments called patches are classified; second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification.

  4. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  5. Integration of video and radiation analysis data

    International Nuclear Information System (INIS)

    For the past several years, the integration of containment and surveillance (C/S) with nondestructive assay (NDA) sensors for monitoring the movement of nuclear material has focused on the hardware and communications protocols in the transmission network. Little progress has been made in methods to utilize the combined C/S and NDA data for safeguards and to reduce the inspector time spent in nuclear facilities. One of the fundamental problems in the integration of the combined data is that the two methods operate in different dimensions: the C/S video data is spatial in nature, whereas the NDA sensors provide radiation levels as a function of time. The authors have introduced a new method to integrate spatial (digital video) with temporal (radiation monitoring) information. This technology is based on pattern recognition by neural networks, provides significant capability to analyze complex data, and has the ability to learn and adapt to changing situations. This technique has the potential of significantly reducing the frequency of inspection visits to key facilities without a loss of safeguards effectiveness
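
    As a loose illustration of combining the two dimensions, the sketch below performs early fusion of per-event video features and radiation time-series features in a single neural-network classifier. All data and names are synthetic placeholders; the authors' actual network is not described here.

```python
# Minimal sketch of fusing spatial (video) and temporal (radiation) features
# in one neural-network classifier, assuming both feature vectors have been
# extracted upstream. Data names and shapes are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_events = 200
video_feat = rng.normal(size=(n_events, 16))     # e.g. motion-blob descriptors
radiation_feat = rng.normal(size=(n_events, 32)) # e.g. binned count-rate history
labels = rng.integers(0, 2, size=n_events)       # authorized vs. anomalous move

# Early fusion: concatenate the two modalities into one input vector.
X = np.hstack([video_feat, radiation_feat])
X = StandardScaler().fit_transform(X)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```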

  6. Automated System for interpreting Non-verbal Communication in Video Conferencing

    Directory of Open Access Journals (Sweden)

    Dr.Chandragupta Warnekar

    2010-01-01

    Gesture is a form of non-verbal, action-based communication made with a part of the body and used instead of, or in combination with, verbal communication. People frequently use gestures for more effective interpersonal communication, of which nearly 55% comes from facial expressions alone. Facial gestures often reveal when people are trying to conceal emotions such as fear, contempt, disgust, surprise, or even unspoken political tensions. Video conferencing captures such facial signals, which can be directly processed by a suitable image processing system. Facial gestures are most pronounced in the eye and lip regions of the human face, and hence these form the regions of interest (ROI) while processing the video signal. Some of these concepts are used to develop a system which can identify specific human gestures and use their interpretation towards business decision support.
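
    A minimal sketch of extracting the eye and lip ROIs with OpenCV's stock Haar cascades is shown below; the gesture interpretation itself is out of scope, and the lip ROI here is a crude geometric assumption rather than a detector.

```python
# Sketch of extracting eye and lip regions of interest (ROIs) from a
# video-conference frame using OpenCV's bundled Haar cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("conference_frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face)      # eye ROIs within the face
    # A crude lip ROI: the lower third of the face box (an assumption, not a
    # detector); dedicated mouth cascades or landmarks would be more robust.
    lip_roi = face[int(0.66 * h):, :]
```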

  7. Automated image analysis techniques for cardiovascular magnetic resonance imaging

    NARCIS (Netherlands)

    Geest, Robertus Jacobus van der

    2011-01-01

    The introductory chapter provides an overview of various aspects related to quantitative analysis of cardiovascular MR (CMR) imaging studies. Subsequently, the thesis describes several automated methods for quantitative assessment of left ventricular function from CMR imaging studies. Several novel

  8. Automated generation of an efficient MPEG-4 Reconfigurable Video Coding decoder implementation

    OpenAIRE

    Gu, Ruirui; Piat, Jonathan; Raulet, Mickael; Janneck, Jorn W.; Bhattacharyya, Shuvra S.

    2010-01-01

    This paper proposes an automatic design flow from user-friendly design to efficient implementation of video processing systems. This design flow starts with the use of coarse-grain dataflow representations based on the CAL language, which is a complete language for dataflow programming of embedded systems. Our approach integrates previously developed techniques for detecting synchronous dataflow (SDF) regions within larger CAL networks, and exploiting the static stru...

  9. Automating the construction of scene classifiers for content-based video retrieval

    OpenAIRE

    Israël, Menno; Broek, van den, M.A.F.H.; Putten, van, J.P.M.; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification...

  10. Quantitative assessment of human motion using video motion analysis

    Science.gov (United States)

    Probe, John D.

    1993-01-01

    In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.

  11. Automated Steel Cleanliness Analysis Tool (ASCAT)

    International Nuclear Information System (INIS)

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCATTM) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment/steel cleanliness; slab, billet

  12. Automated Steel Cleanliness Analysis Tool (ASCAT)

    Energy Technology Data Exchange (ETDEWEB)

    Gary Casuccio (RJ Lee Group); Michael Potter (RJ Lee Group); Fred Schwerer (RJ Lee Group); Dr. Richard J. Fruehan (Carnegie Mellon University); Dr. Scott Story (US Steel)

    2005-12-30

    The objective of this study was to develop the Automated Steel Cleanliness Analysis Tool (ASCATTM) to permit steelmakers to evaluate the quality of the steel through the analysis of individual inclusions. By characterizing individual inclusions, determinations can be made as to the cleanliness of the steel. Understanding the complicating effects of inclusions in the steelmaking process and on the resulting properties of steel allows the steel producer to increase throughput, better control the process, reduce remelts, and improve the quality of the product. The ASCAT (Figure 1) is a steel-smart inclusion analysis tool developed around a customized next-generation computer controlled scanning electron microscopy (NG-CCSEM) hardware platform that permits acquisition of inclusion size and composition data at a rate never before possible in SEM-based instruments. With built-in customized "intelligent" software, the inclusion data is automatically sorted into clusters representing different inclusion types to define the characteristics of a particular heat (Figure 2). The ASCAT represents an innovative new tool for the collection of statistically meaningful data on inclusions, and provides a means of understanding the complicated effects of inclusions in the steel making process and on the resulting properties of steel. Research conducted by RJLG with AISI (American Iron and Steel Institute) and SMA (Steel Manufactures of America) members indicates that the ASCAT has application in high-grade bar, sheet, plate, tin products, pipes, SBQ, tire cord, welding rod, and specialty steels and alloys where control of inclusions, whether natural or engineered, are crucial to their specification for a given end-use. Example applications include castability of calcium treated steel; interstitial free (IF) degasser grade slag conditioning practice; tundish clogging and erosion minimization; degasser circulation and optimization; quality assessment

  13. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

    While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  14. Video Analysis of the Flight of a Model Aircraft

    Science.gov (United States)

    Tarantino, Giovanni; Fazio, Claudio

    2011-01-01

    A video-analysis software tool has been employed in order to measure the steady-state values of the kinematics variables describing the longitudinal behaviour of a radio-controlled model aircraft during take-off, climbing and gliding. These experimental results have been compared with the theoretical steady-state configurations predicted by the…

  15. A video-polygraphic analysis of the cataplectic attack

    DEFF Research Database (Denmark)

    Rubboli, G; d'Orsi, G; Zaniboni, A;

    2000-01-01

    OBJECTIVES AND METHODS: To perform a video-polygraphic analysis of 11 cataplectic attacks in a 39-year-old narcoleptic patient, correlating clinical manifestations with polygraphic findings. Polygraphic recordings monitored EEG, EMG activity from several cranial, trunk, upper and lower limb muscles ... of REM sleep and neural structures subserving postural control.

  16. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Background: In this paper, we present and validate a way to automatically measure the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration, but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results: The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison to manual placement of the leading edge shows complete equivalence of automated vs. manual leading-edge definition for cell migration measurement. Conclusion: Our method is indistinguishable from careful manual determinations of cell front lines, with the advantages of full automation, objectivity, and speed.
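
    The exact pipeline is not reproduced in this abstract, so the sketch below is an assumption-laden stand-in for its two stages: local entropy as the texture preprocessing, and a per-column front-line extraction as the migration analysis (scikit-image is assumed):

```python
# Texture-based migration-front measurement in the spirit of the abstract:
# texture preprocessing, then front-line extraction. Cell-covered areas are
# texture-rich; the cell-free gap is smooth.
import numpy as np
from skimage import io, filters, morphology
from skimage.filters.rank import entropy
from skimage.util import img_as_ubyte

img = img_as_ubyte(io.imread("wound_assay_t0.png", as_gray=True))

tex = entropy(img, morphology.disk(7))           # local texture measure
cells = tex > filters.threshold_otsu(tex)        # texture-rich = cell-covered
cells = morphology.remove_small_objects(cells, min_size=500)

# For a horizontal wound, the migration front per column is the first
# cell-covered row; distance traversed is the change in this front over time.
front = np.argmax(cells, axis=0)
print("mean front position (px):", front.mean())
```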

  17. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
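
    A minimal sketch of the pairwise-preference analysis in the spirit described, using a Bradley-Terry-style design matrix with statsmodels' logistic regression (the paper's exact model specification is not reproduced here):

```python
# Pairwise preferences analyzed with logistic regression on a
# Bradley-Terry-style design matrix; trial data here is a toy example.
import numpy as np
import statsmodels.api as sm

# Each trial: observer saw enhancement levels (a, b) and chose a (1) or b (0).
trials = [(0, 1, 0), (0, 2, 0), (1, 2, 1), (1, 3, 0), (2, 3, 1), (0, 3, 0)]
n_levels = 4

X, y = [], []
for a, b, chose_a in trials:
    row = np.zeros(n_levels)
    row[a], row[b] = 1.0, -1.0     # +1 for the first item, -1 for the second
    X.append(row)
    y.append(chose_a)

# Drop one column to pin level 0 at scale value 0, as in standard
# Bradley-Terry fitting; remaining coefficients are perceptual-scale values.
X = np.array(X)[:, 1:]
fit = sm.Logit(np.array(y), X).fit(disp=0)
print(fit.params)    # scale values for levels 1..3, relative to level 0
print(fit.pvalues)   # significance of differences from level 0
```

    The fitted coefficients play the role of Thurstone-like scale values relative to the pinned reference level, and the Wald p-values provide the significance tests that classical Thurstone scaling lacks.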

  18. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.

  19. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
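
    GRESS itself is a FORTRAN compiler extension, but the underlying "computer calculus" idea, propagating derivatives through the model's code alongside its values, can be illustrated with a minimal forward-mode sketch in Python (an illustration of the concept, not GRESS):

```python
# Forward-mode "computer calculus": each quantity carries its value and its
# derivative w.r.t. a chosen input parameter, so sensitivities emerge from
# ordinary evaluation of the model code.
from dataclasses import dataclass
import math

@dataclass
class Dual:
    val: float   # function value
    der: float   # derivative w.r.t. the chosen input parameter

    def __add__(self, o): return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.der * o.val + self.val * o.der)

def exp(x: Dual) -> Dual:
    e = math.exp(x.val)
    return Dual(e, e * x.der)   # chain rule

# Model response R(p) = p * exp(p) + p; seed der=1 to differentiate w.r.t. p.
p = Dual(2.0, 1.0)
R = p * exp(p) + p
print(R.val)   # R(2)
print(R.der)   # dR/dp at p=2, computed without hand-derived equations
```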

  20. Irrelevant frame removal for scene analysis using video hyperclique pattern and spectrum analysis

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2016-02-01

    Videos often include frames that are irrelevant to the recorded scenes. These are mainly due to imperfect shooting, abrupt camera movements, or unintended switching of scenes. The irrelevant frames should be removed before semantic analysis of the video scene is performed for video retrieval. An unsupervised approach for automatic removal of irrelevant frames is proposed in this paper. A novel log-spectral representation of color video frames based on Fibonacci lattice-quantization has been developed for better description of the global structures of video contents and to measure the similarity of video frames. Hyperclique pattern analysis, used to detect redundant data in textual analysis, is extended to extract relevant frame clusters in color videos. A new strategy using the k-nearest neighbor algorithm is developed for generating a video frame support measure and an h-confidence measure on this hyperclique pattern based analysis method. Evaluation of the proposed irrelevant video frame removal algorithm reveals promising results for datasets with irrelevant frames.
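
    The Fibonacci lattice-quantization and hyperclique machinery are not reproduced here; the sketch below illustrates the same end goal with much simpler ingredients, scoring each frame by its mean similarity to its k nearest neighbours and flagging low-support frames as candidates for removal:

```python
# Simplified stand-in for irrelevant-frame removal: HSV histogram cosine
# similarity plus a k-NN support score (the k-NN idea is borrowed from the
# abstract; the representation is not the paper's).
import cv2
import numpy as np

def hsv_hist(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()    # L2-normalized by default

cap = cv2.VideoCapture("scene.avi")
hists = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hists.append(hsv_hist(frame))
hists = np.array(hists)

# Support of a frame: mean cosine similarity to its k nearest neighbours;
# frames with unusually low support are flagged as irrelevant candidates.
k = 10
sim = hists @ hists.T
support = np.sort(sim, axis=1)[:, -k - 1:-1].mean(axis=1)  # excludes self
irrelevant = np.where(support < support.mean() - 2 * support.std())[0]
print("candidate irrelevant frames:", irrelevant)
```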

  1. Flexible Human Behavior Analysis Framework for Video Surveillance Applications

    Directory of Open Access Journals (Sweden)

    Weilun Lao

    2010-01-01

    We study a flexible framework for semantic analysis of human motion from surveillance video. Successful trajectory estimation and human-body modeling facilitate the semantic analysis of human activities in video sequences. Although human motion is widely investigated, we have extended such research in three aspects. By adding a second camera, not only is more reliable behavior analysis possible, but it also enables mapping the ongoing scene events onto a 3D setting to facilitate further semantic analysis. The second contribution is the introduction of a 3D reconstruction scheme for scene understanding. Thirdly, we perform a fast scheme to detect different body parts and generate a fitting skeleton model, without using the explicit assumption of upright body posture. The extension of multiple-view fusion improves the event-based semantic analysis by 15%–30%. Our proposed framework proves its effectiveness as it achieves near real-time performance (13–15 frames/second and 6–8 frames/second for monocular and two-view video sequences, respectively).

  2. Analysis of Trinity Power Metrics for Automated Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Michalenko, Ashley Christine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    This is a presentation from Los Alamos National Laboratory (LANL) about the analysis of Trinity power metrics for automated monitoring. The following topics are covered: current monitoring efforts, motivation for analysis, tools used, the methodology, work performed during the summer, and future work planned.

  3. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.

  4. Image analysis and platform development for automated phenotyping in cytomics

    NARCIS (Netherlands)

    Yan, Kuan

    2013-01-01

    This thesis is dedicated to the empirical study of image analysis in HT/HC screening studies. Often an HT/HC screening produces extensive amounts of data that cannot be manually analyzed. Thus, an automated image analysis solution is a prerequisite for an objective understanding of the raw image data. Compared to general a

  5. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs

  6. Exploring the Behavior of Highly Effective CIOs Using Video Analysis

    OpenAIRE

    Gupta, Komal; Wilderom, Celeste; Hillegersberg, van, Jos

    2009-01-01

    Although several studies have recently addressed the required skills of effective CIOs, little is known of the actual behavior of successful CIOs. In this study, we explore the behavior of highly effective CIOs by video-recording CIOs at work. The two CIOs videotaped were nominated as CIO of the year. We analyze the data in an innovative and systematic way by developing and using a behavioral leadership coding scheme. The analysis indicates that highly effective CIOs are good listeners. They als...

  7. Links between Characteristics of Collaborative Peer Video Analysis Events and Literacy Teachers' Outcomes

    Science.gov (United States)

    Arya, Poonam; Christ, Tanya; Chiu, Ming

    2015-01-01

    This study examined how characteristics of Collaborative Peer Video Analysis (CPVA) events are related to teachers' pedagogical outcomes. Data included 39 transcribed literacy video events, in which 14 in-service teachers engaged in discussions of their video clips. Emergent coding and Statistical Discourse Analysis were used to analyze the data.…

  8. General practitioner residency consultations: video feedback analysis

    Directory of Open Access Journals (Sweden)

    Afonso M. Cavaco

    2011-12-01

    Objectives: The purpose of this study was to analyse longitudinally two decades of Portuguese general practitioner (GP) residents' consultation features, such as consultation length (estimating its major determinants), as well as to compare with GP residents from other Western practices. Methods: This pilot study followed a retrospective and descriptive design, comprising the analysis of videotaped consultations with real patients from GP residents (southern Portugal) between 1990 and 2008. The main variables studied were consultation length and purpose, participant demographics and residency site characteristics. Results: Of 516 residents, 68.0% were female, mainly between 26-35 years old (50.6%). The proportion of female patients equalled that of doctors, with the most frequent age group being 46-65 years old (41.3%). The consultation took on average 22 minutes and 22 seconds, with no significant differences by year and residency location. Main consultation purposes were previous scheduling (31.6%) and acute symptoms (30.0%). Duration was consistently longer than that of practising GPs from other countries, keeping in mind the supervised practice. Significant and positive predictors of consultation length were the number of attendants and the patient's attendance frequency at the residency site. Conclusions: Southern Portugal GP residency program consultations were lengthier than similar practice in Europe and other Western countries. Length correlated more with patient-related variables than with professionals', confirming the longitudinal homogeneity of the residency consultation format over the last two decades.

  9. An overview of the contaminant analysis automation program

    International Nuclear Information System (INIS)

    The Department of Energy (DOE) has significant amounts of radioactive and hazardous wastes stored, buried, and still being generated at many sites within the United States. These wastes must be characterized to determine the elemental, isotopic, and compound content before remediation can begin. In this paper, the authors project that sampling requirements will necessitate generating more than 10 million samples by 1995, which will far exceed the capabilities of our current manual chemical analysis laboratories. The Contaminant Analysis Automation effort (CAA), with Los Alamos National Laboratory (LANL) as the coordinating laboratory, is designing and fabricating robotic systems that will standardize and automate both the hardware and the software of the most common environmental chemical methods. This will be accomplished by designing and producing several unique analysis systems called Standard Analysis Methods (SAM). Each SAM will automate a specific chemical method, including sample preparation, the analytical analysis, and the data interpretation, by using a building block known as the Standard Laboratory Module (SLM). This concept allows the chemist to assemble an automated environmental method using standardized SLMs easily and without the worry of hardware compatibility or the necessity of generating complicated control programs

  10. Content-Based Hierarchical Analysis of News Video Using Audio and Visual Information

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A schema for content-based analysis of broadcast news video is presented. First, we separate commercials from news using audiovisual features. Then, we automatically organize news programs into a content hierarchy at various levels of abstraction via effective integration of video, audio, and text data available from the news programs. Based on these news video structure and content analysis technologies, a TV news video library is generated, from which users can retrieve specific news stories according to their demands.

  11. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Thompson, Adam B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bowman, Stephen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peterson, Joshua L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly’s life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
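
    The flavor of scripting that ORIGAMI Automator replaces can be suggested with a short sketch that stamps out one input file per assembly from a tabulated operating history. The template text and field names below are hypothetical placeholders, not the real ORIGAMI input syntax.

```python
# Hedged sketch of per-assembly input generation from a bookkeeping table.
# TEMPLATE and the CSV column names are invented placeholders; consult the
# SCALE/ORIGAMI documentation for the actual input format.
import csv
from pathlib import Path

TEMPLATE = """\
=origami
title: assembly {assembly_id}
enrichment: {enrichment}
heavy_metal_mass: {hm_mass_t}
power_history: {power_history}
end
"""

outdir = Path("origami_inputs")
outdir.mkdir(exist_ok=True)

with open("assembly_histories.csv") as f:   # one row per assembly (assumed)
    for row in csv.DictReader(f):
        (outdir / f"{row['assembly_id']}.inp").write_text(TEMPLATE.format(**row))
```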

  12. An Automated Data Analysis Tool for Livestock Market Data

    Science.gov (United States)

    Williams, Galen S.; Raper, Kellie Curry

    2011-01-01

    This article describes an automated data analysis tool that allows Oklahoma Cooperative Extension Service educators to disseminate results in a timely manner. Primary data collected at Oklahoma Quality Beef Network (OQBN) certified calf auctions across the state results in a large amount of data per sale site. Sale summaries for an individual sale…

  13. Automated image analysis in the study of collagenous colitis

    DEFF Research Database (Denmark)

    Fiehn, Anne-Marie Kanstrup; Kristensson, Martin; Engel, Ulla;

    2016-01-01

    PURPOSE: The aim of this study was to develop an automated image analysis software to measure the thickness of the subepithelial collagenous band in colon biopsies with collagenous colitis (CC) and incomplete CC (CCi). The software measures the thickness of the collagenous band on microscopic...

  14. Automated SEM-EDS GSR Analysis for Turkish Ammunitions

    International Nuclear Information System (INIS)

    In this work, Automated Scanning Electron Microscopy with Energy Dispersive X-ray Spectrometry (SEM-EDS) was used to characterize Turkish ammunition of 7.65 and 9 mm cartridges. All samples were analyzed in a JEOL JSM-5600LV SEM equipped with a BSE detector and a Link ISIS 300 EDS system. A working distance of 20 mm, an accelerating voltage of 20 keV and gunshot residue software were used in all analyses. The automated search resulted in a high number of analyzed particles containing the elements unique to gunshot residue (GSR) (Pb, Ba, Sb). The data obtained on the definition of characteristic GSR particles were concordant with other studies on this topic

  15. Automated procedure for performing computer security risk analysis

    International Nuclear Information System (INIS)

    Computers, the invisible backbone of nuclear safeguards, monitor and control plant operations and support many materials accounting systems. Our automated procedure to assess computer security effectiveness differs from traditional risk analysis methods. The system is modeled as an interactive questionnaire, fully automated on a portable microcomputer. A set of modular event trees links the questionnaire to the risk assessment. Qualitative scores are obtained for target vulnerability, and qualitative impact measures are evaluated for a spectrum of threat-target pairs. These are then combined by a linguistic algebra to provide an accurate and meaningful risk measure. 12 references, 7 figures

  16. Volumetric measurements of pulmonary nodules: variability in automated analysis tools

    Science.gov (United States)

    Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot

    2007-03-01

    Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications on management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive as well as ANOVA and t-test analysis. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (lung imaging database consortium) study.

  17. Analysis of brook trout spatial behavior during passage attempts in corrugated culverts using near-infrared illumination video imagery

    Science.gov (United States)

    Bergeron, Normand E.; Constantin, Pierre-Marc; Goerig, Elsa; Castro-Santos, Theodore R.

    2016-01-01

    We used video recording and near-infrared illumination to document the spatial behavior of brook trout of various sizes attempting to pass corrugated culverts under different hydraulic conditions. Semi-automated image analysis was used to digitize fish position at high temporal resolution inside the culvert, which allowed calculation of various spatial behavior metrics, including instantaneous ground and swimming speed, path complexity, distance from side walls, velocity preference ratio (mean velocity at the fish's lateral position / mean cross-sectional velocity), as well as the number and duration of stops in forward progression. The presentation summarizes the main results and discusses how they could be used to improve fish passage performance in culverts.

  18. On Automating and Standardising Corpus Callosum Analysis in Brain MRI

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Skoglund, Karl

    2005-01-01

    Corpus callosum analysis is influenced by many factors. The effort in controlling these has previously been incomplete and scattered. This paper sketches a complete pipeline for automated corpus callosum analysis from magnetic resonance images, with focus on measurement standardisation. The presented pipeline deals with i) estimation of the mid-sagittal plane, ii) localisation and registration of the corpus callosum, iii) parameterisation and representation of its contour, and iv) means of standardising the traditional reference area measurements.

  19. Fully automated apparatus for the proximate analysis of coals

    Energy Technology Data Exchange (ETDEWEB)

    Fukumoto, K.; Ishibashi, Y.; Ishii, T.; Maeda, K.; Ogawa, A.; Gotoh, K.

    1985-01-01

    The authors report the development of fully-automated equipment for the proximate analysis of coals, a development undertaken with the twin aims of labour-saving and developing robot applications technology. This system comprises a balance, electric furnaces, a sulfur analyzer, etc., arranged concentrically around a multi-jointed robot which automatically performs all the necessary operations, such as sampling and weighing the materials for analysis, and inserting and removing them from the furnaces. 2 references.

  20. Computer automated movement detection for the analysis of behavior

    OpenAIRE

    Ramazani, Roseanna B.; Krishnan, Harish R.; Bergeson, Susan E.; Atkinson, Nigel S.

    2007-01-01

    Currently, measuring ethanol behaviors in flies depends on expensive image analysis software or time intensive experimenter observation. We have designed an automated system for the collection and analysis of locomotor behavior data, using the IEEE 1394 acquisition program dvgrab, the image toolkit ImageMagick and the programming language Perl. In the proposed method, flies are placed in a clear container and a computer-controlled camera takes pictures at regular intervals. Digital subtractio...
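
    The abstract's pipeline used dvgrab, ImageMagick and Perl; the sketch below is a Python/OpenCV stand-in for its core step, digital subtraction of successive images to produce a per-frame locomotor activity score.

```python
# Digital subtraction of successive frames as a locomotor activity measure
# (Python/OpenCV stand-in for the Perl/ImageMagick pipeline described above).
import cv2
import numpy as np

cap = cv2.VideoCapture("fly_chamber.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

activity = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                   # digital subtraction
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    activity.append(int(np.count_nonzero(moving)))   # changed-pixel count
    prev = gray

print("mean activity per frame:", np.mean(activity))
```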

  1. Automated haematology analysis to diagnose malaria

    Directory of Open Access Journals (Sweden)

    Grobusch Martin P

    2010-11-01

    For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for an adequate treatment and to minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and clinical context where the

  2. Inverse Multifractal Analysis of Different Frame Types of Multiview 3D Video

    Directory of Open Access Journals (Sweden)

    A. Zeković

    2014-11-01

    In this paper, the results of multifractal characterization of multiview 3D video are presented. Analyses are performed for different views of multiview video and for different frame types of video. Multifractal analysis is performed by the histogram method. Due to the advantages of the selected method for determining the spectrum, the inverse multifractal analysis of multiview 3D video was also possible. A discussion of the results obtained by the inverse multifractal analysis of multiview 3D video is presented, taking into account the frame type and whether the original frames belong to the left or right view of multiview 3D video. In the analysis, publicly available multiview 3D video traces were used.
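
    For reference, the histogram method used above computes the multifractal spectrum in its common textbook form (not copied from the paper): coarse Hölder exponents are estimated per box, and the spectrum is read off the scaling of their histogram.

```latex
% Histogram method for the multifractal spectrum (standard definitions):
% mu_i is the normalized measure in box i of size eps (e.g., bytes per frame),
% and N_eps(alpha) counts boxes whose exponent falls near alpha.
\begin{align}
  \alpha_i &= \frac{\log \mu_i(\varepsilon)}{\log \varepsilon}
  && \text{coarse H\"older exponent of box } i, \\
  f(\alpha) &= -\lim_{\varepsilon \to 0}
               \frac{\log N_\varepsilon(\alpha)}{\log \varepsilon}
  && \text{multifractal spectrum from the histogram of } \alpha.
\end{align}
```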

  3. CRITICAL ASSESSMENT OF AUTOMATED FLOW CYTOMETRY DATA ANALYSIS TECHNIQUES

    Science.gov (United States)

    Aghaeepour, Nima; Finak, Greg; Hoos, Holger; Mosmann, Tim R.; Gottardo, Raphael; Brinkman, Ryan; Scheuermann, Richard H.

    2013-01-01

    Traditional methods for flow cytometry (FCM) data processing rely on subjective manual gating. Recently, several groups have developed computational methods for identifying cell populations in multidimensional FCM data. The Flow Cytometry: Critical Assessment of Population Identification Methods (FlowCAP) challenges were established to compare the performance of these methods on two tasks – mammalian cell population identification to determine if automated algorithms can reproduce expert manual gating, and sample classification to determine if analysis pipelines can identify characteristics that correlate with external variables (e.g., clinical outcome). This analysis presents the results of the first of these challenges. Several methods performed well compared to manual gating or external variables using statistical performance measures, suggesting that automated methods have reached a sufficient level of maturity and accuracy for reliable use in FCM data analysis. PMID:23396282

  4. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.
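
    As a hypothetical sketch of the kind of pre-execution static check such a workflow analysis engine might run, the snippet below verifies that every connection in a workflow DAG feeds an actor the input type it declares. The actor and port names are invented for illustration; this is not the Kepler API.

    ```python
    # Hypothetical static check on a curation workflow: verify that every
    # connection feeds an actor the input type it declares. Names invented.
    from dataclasses import dataclass

    @dataclass
    class Actor:
        name: str
        input_type: str | None   # None for source actors
        output_type: str

    def check_workflow(actors: dict[str, Actor], edges: list[tuple[str, str]]):
        """Return a list of design problems found in the workflow DAG."""
        problems = []
        for src, dst in edges:
            produced = actors[src].output_type
            expected = actors[dst].input_type
            if expected is not None and produced != expected:
                problems.append(f"{src} -> {dst}: produces '{produced}' "
                                f"but '{dst}' expects '{expected}'")
        return problems

    actors = {
        "ReadCSV":   Actor("ReadCSV", None, "table"),
        "CleanDate": Actor("CleanDate", "table", "table"),
        "PlotMap":   Actor("PlotMap", "geojson", "image"),  # type mismatch
    }
    print(check_workflow(actors, [("ReadCSV", "CleanDate"),
                                  ("CleanDate", "PlotMap")]))
    ```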

  5. Web Video Mining: Metadata Predictive Analysis using Classification Techniques

    Directory of Open Access Journals (Sweden)

    Siddu P. Algur

    2016-02-01

    Full Text Available Nowadays, data engineering is an emerging approach to discovering knowledge from web audiovisual data such as YouTube videos, Yahoo Screen, and Facebook videos. Different categories of web video are shared on such social websites and used by billions of users all over the world. Uploaded web videos carry various kinds of metadata as attribute information, and these metadata attributes conceptually define the contents and features/characteristics of the videos. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos, such as the length of the videos, the number of comments, rating information, and view counts, using data mining algorithms such as the J48 decision tree and naive Bayes algorithms as part of web video mining. The results of the J48 decision tree and naive Bayes classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.
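
    A rough scikit-learn analogue of this setup may help make it concrete: Weka's J48 (an implementation of C4.5) is approximated here by a decision tree classifier, compared with Gaussian naive Bayes, on invented metadata columns and a synthetic binned view-count target.

    ```python
    # Rough scikit-learn analogue: predict a binned view-count class from
    # other video metadata. J48 (C4.5) is approximated by a decision tree;
    # the columns and data are invented for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(2)
    n = 1000
    length_s = rng.integers(30, 3600, n)           # video length in seconds
    comments = rng.poisson(20, n)
    rating   = rng.uniform(1.0, 5.0, n)
    views_cls = (0.01 * length_s + comments + 10 * rating
                 + rng.normal(0, 10, n) > 80).astype(int)  # low/high views

    X = np.column_stack([length_s, comments, rating])
    X_tr, X_te, y_tr, y_te = train_test_split(X, views_cls, random_state=0)

    for model in (DecisionTreeClassifier(random_state=0), GaussianNB()):
        model.fit(X_tr, y_tr)
        print(type(model).__name__,
              "accuracy:", accuracy_score(y_te, model.predict(X_te)))
    ```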

  6. Flexible surveillance system architecture for prototyping video content analysis algorithms

    Science.gov (United States)

    Wijnhoven, R. G. J.; Jaspers, E. G. T.; de With, P. H. N.

    2006-01-01

    Many proposed video content analysis algorithms for surveillance applications are computationally intensive, which limits their integration into a complete system running on a single processing unit (e.g. a PC). To build flexible prototyping systems at low cost, a distributed system with scalable processing power is therefore required. This paper discusses requirements for surveillance systems, considering two example applications. From these requirements, specifications for a prototyping architecture are derived. An implementation of the proposed architecture is presented, enabling the mapping of multiple software modules onto a number of processing units (PCs). The architecture enables fast prototyping of new algorithms for complex surveillance applications without considering resource constraints.

  7. MATSAP: An automated analysis of stretch-attend posture in rodent behavioral experiments.

    Science.gov (United States)

    Holly, Kevin S; Orndorff, Casey O; Murray, Teresa A

    2016-01-01

    Stretch-attend posture (SAP) occurs during risk assessment and is prevalent in common rodent behavioral tests. Measuring this behavior can enhance behavioral tests. For example, stretch-attend posture is a more sensitive measure of the effects of anxiolytics than traditional spatiotemporal indices. However, quantifying stretch-attend posture using human observers is time consuming, somewhat subjective, and prone to errors. We have developed MATLAB-based software, MATSAP, which is a quick, consistent, and open source program that provides objective automated analysis of stretch-attend posture in rodent behavioral experiments. Unlike human observers, MATSAP is not susceptible to fatigue or subjectivity. We assessed MATSAP performance with videos of male Swiss mice moving in an open field box and in an elevated plus maze. MATSAP reliably detected stretch-attend posture on par with human observers. This freely-available program can be broadly used by biologists and psychologists to accelerate neurological, pharmacological, and behavioral studies. PMID:27503239
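
    MATSAP itself is MATLAB software; the OpenCV sketch below only mirrors one plausible core step of automated SAP detection: segmenting the dark animal on a light arena and flagging frames where the body is elongated while the centroid barely moves. The thresholds are illustrative assumptions, not MATSAP's parameters.

    ```python
    # Hedged sketch of one plausible SAP-detection step: segment the animal,
    # then flag frames with a stretched body and a near-stationary centroid.
    # Thresholds are illustrative, not MATSAP's.
    import cv2
    import numpy as np

    def frame_posture(gray_frame):
        """Return (aspect_ratio, centroid) of the largest dark blob."""
        _, mask = cv2.threshold(gray_frame, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        body = max(contours, key=cv2.contourArea)
        (cx, cy), (w, h), _ = cv2.minAreaRect(body)
        if min(w, h) == 0:
            return None
        return max(w, h) / min(w, h), np.array([cx, cy])

    def is_sap(prev, curr, stretch_thresh=2.5, move_thresh=2.0):
        """Stretched body + near-stationary centroid -> candidate SAP frame."""
        if prev is None or curr is None:
            return False
        ratio, centroid = curr
        _, prev_centroid = prev
        return (ratio > stretch_thresh
                and np.linalg.norm(centroid - prev_centroid) < move_thresh)
    ```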

  8. Development and validation of a video analysis software for marine benthic applications

    Science.gov (United States)

    Romero-Ramirez, A.; Grémare, A.; Bernard, G.; Pascal, L.; Maire, O.; Duchêne, J. C.

    2016-10-01

    Our aim in the EU-funded JERICO project was to develop a flexible and scalable imaging platform that could be used in the widest possible set of ecological situations. Depending on research objectives, both image acquisition and analysis procedures may indeed differ. Up to now, attempts at automating image analysis procedures have consisted of developing pieces of software specifically designed for a given objective. This led to the conception of a new software package: AVIExplore. Its general architecture and its three constitutive modules, AVIExplore - Mobile, AVIExplore - Fixed and AVIExplore - ScriptEdit, are presented. AVIExplore provides a unique environment for video analysis. Its main features include: (1) image selection tools allowing for the division of videos into homogeneous sections, (2) automatic extraction of targeted information, (3) solutions for long-term time series as well as large spatial scale image acquisition, (4) real-time acquisition and, in some cases, real-time analysis, and (5) a large range of customized image-analysis possibilities through a script editor. The flexibility of AVIExplore is illustrated and validated by three case studies: (1) coral identification and mapping, (2) identification and quantification of different types of behaviour in a mud shrimp, and (3) quantification of filtering activity in a passive suspension feeder. The accuracy of the software, measured against visual assessment, is 90.2%, 82.7% and 98.3% for the three case studies, respectively. Some of the advantages and current limitations of the software, as well as some of its foreseen developments, are then briefly discussed.

  9. Automating with ROBOCOM. An expert system for complex engineering analysis

    International Nuclear Information System (INIS)

    Nuclear engineering analysis is automated with the help of preprocessors and postprocessors. All the analysis and processing steps are recorded in a form that is reportable and replayable. These recordings serve both as documentation and as robots, for they are capable of performing the analyses they document. Since the processors and robots in ROBOCOM interface with users in a way that is independent of the analysis program being used, it is now possible to unify input modeling for programs with similar functionality. ROBOCOM will eventually evolve into an encyclopedia of how every nuclear engineering analysis is performed.

  10. The automation of analysis of technological process effectiveness

    Directory of Open Access Journals (Sweden)

    B. Krupińska

    2007-10-01

    Full Text Available Purpose: Improvement of technological processes through technological efficiency analysis can create the basis for their optimization. Informatization and computerization of an ever wider scope of activity is one of the most important current development trends of an enterprise. Design/methodology/approach: Appointing indicators makes it possible to evaluate process efficiency, which can constitute the basis for optimizing particular operations. The model of technological efficiency analysis is based on particular efficiency indicators that characterize an operation, taking into account the following criteria: operation – material, operation – machine, operation – human, operation – technological parameters. Findings: Comprehensive assessment of technological processes, from the point of view of quality and correctness of the chosen technology, makes up the basis of technological efficiency analysis. The results prove that the chosen model of technological efficiency analysis makes it possible to improve the process continuously through technological analysis; the application of computer assistance makes it possible to automate the efficiency analysis and, finally, the controlled improvement of technological processes. Practical implications: Given the complexity of technological efficiency analysis, an AEPT computer analysis was created, which yields: operation efficiency indicators, with indicators having minimal acceptable values distinguished; efficiency values of the applied samples; and the value of overall technological process efficiency. Originality/value: The created computer analysis of technological process efficiency (AEPT) makes it possible to automate the process of analysis and optimization.

  11. Multispectral tissue analysis and classification towards enabling automated robotic surgery

    Science.gov (United States)

    Triana, Brian; Cha, Jaepyeong; Shademan, Azad; Krieger, Axel; Kang, Jin U.; Kim, Peter C. W.

    2014-02-01

    Accurate optical characterization of different tissue types is an important tool for potentially guiding surgeons and enabling automated robotic surgery. Multispectral imaging and analysis have been used in the literature to detect spectral variations in tissue reflectance that may not be visible to the naked eye. Using this technique, hidden structures can be visualized and analyzed for effective tissue classification. Here, we investigated the feasibility of automated tissue classification using multispectral tissue analysis. Broadband reflectance spectra (200-1050 nm) were collected from nine different ex vivo porcine tissue types using an optical fiber-probe based spectrometer system. We created a mathematical model to train on and distinguish different tissue types based upon analysis of the observed spectra using total principal component regression (TPCR). Compared to other reported methods, our technique is computationally inexpensive and suitable for real-time implementation. Each of the 92 spectra was cross-referenced against the nine tissue types. Preliminary results show a mean detection rate of 91.3%, with detection rates of 100% and 70.0% (inner and outer kidney), 100% and 100% (inner and outer liver), 100% (outer stomach), and 90.9%, 100%, 70.0%, 85.7% (four different inner stomach areas, respectively). We conclude that automated tissue differentiation using our multispectral tissue analysis method is feasible in multiple ex vivo tissue specimens. Although measurements were performed using ex vivo tissues, these results suggest that real-time, in vivo tissue identification during surgery may be possible.
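
    A hedged sketch of principal-component-regression classification of spectra may illustrate the general approach: project reflectance spectra onto principal components, regress one-hot tissue labels on the scores, and classify by the largest predicted score. This follows the generic PCR idea, not the authors' exact "total PCR" implementation; the spectra below are synthetic.

    ```python
    # PCR-style classification of reflectance spectra (generic sketch).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def fit_pcr_classifier(spectra, labels, n_components=10):
        """spectra: (n_samples, n_wavelengths); labels: integer classes."""
        pca = PCA(n_components=n_components).fit(spectra)
        scores = pca.transform(spectra)
        onehot = np.eye(labels.max() + 1)[labels]
        reg = LinearRegression().fit(scores, onehot)
        return pca, reg

    def predict_tissue(pca, reg, spectra):
        return reg.predict(pca.transform(spectra)).argmax(axis=1)

    # Synthetic demo: 3 "tissue types" with different broad spectral shapes.
    rng = np.random.default_rng(3)
    wl = np.linspace(200, 1050, 200)
    means = [np.exp(-((wl - c) / 150.0) ** 2) for c in (400, 650, 900)]
    X = np.vstack([m + rng.normal(0, 0.05, (30, wl.size)) for m in means])
    y = np.repeat([0, 1, 2], 30)
    pca, reg = fit_pcr_classifier(X, y, n_components=5)
    print((predict_tissue(pca, reg, X) == y).mean())
    ```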

  12. Tank Farm Operations Surveillance Automation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    MARQUEZ, D.L.

    2000-12-21

    The Nuclear Operations Project Services identified the need to improve manual tank farm surveillance data collection, review, distribution and storage practices often referred to as Operator Rounds. This document provides the analysis in terms of feasibility to improve the manual data collection methods by using handheld computer units, barcode technology, a database for storage and acquisitions, associated software, and operational procedures to increase the efficiency of Operator Rounds associated with surveillance activities.

  13. Tank Farm Operations Surveillance Automation Analysis

    International Nuclear Information System (INIS)

    The Nuclear Operations Project Services identified the need to improve manual tank farm surveillance data collection, review, distribution and storage practices often referred to as Operator Rounds. This document provides the analysis in terms of feasibility to improve the manual data collection methods by using handheld computer units, barcode technology, a database for storage and acquisitions, associated software, and operational procedures to increase the efficiency of Operator Rounds associated with surveillance activities

  14. Micro photometer's automation for quantitative spectrograph analysis

    International Nuclear Information System (INIS)

    A microphotometer is used to increase the sharpness of dark spectral lines. By analyzing these lines, the content of a sample and its concentration can be determined; this analysis is known as quantitative spectrographic analysis. It is carried out in three steps, as follows. 1. Emulsion calibration. This consists of gauging a photographic emulsion to determine the intensity variations in terms of the incident radiation. For this procedure, a least-squares fit to the data obtained is applied to obtain a graph, making it possible to determine the density of a dark spectral line as a function of the incident light intensity shown by the microphotometer. 2. Working curves. The values of known concentrations of an element are plotted against incident light intensity. Since the sample contains several elements, it is necessary to find a working curve for each of them. 3. Analytical results. The calibration curve and working curves are compared and the concentration of the studied element is determined. The automatic data acquisition, calculation and reporting of results is done by means of a computer (PC) and a computer program. The signal-conditioning circuits have the function of delivering TTL (transistor-transistor logic) levels to make communication between the microphotometer and the computer possible. Data calculation is done using a computer program.
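
    A minimal numpy sketch of the calibration logic in steps 1-3 may clarify the computation: a least-squares fit of density versus log intensity (emulsion calibration), a working curve mapping log concentration to the same scale, and inversion for an unknown sample. All data values below are invented for illustration.

    ```python
    # Minimal sketch of the three-step calibration logic; data are invented.
    import numpy as np

    # Step 1: emulsion calibration, density vs. log exposure (least squares).
    log_I = np.log10([10, 20, 50, 100, 200])       # incident intensities
    density = np.array([0.22, 0.41, 0.78, 1.02, 1.30])
    cal = np.polyfit(log_I, density, 1)            # density ~ a*log10(I) + b

    def density_to_log_intensity(d):
        a, b = cal
        return (d - b) / a

    # Step 2: working curve for one element, log concentration vs. intensity.
    log_conc = np.log10([0.1, 0.5, 1.0, 5.0, 10.0])  # known concentrations
    work = np.polyfit(density_to_log_intensity(
            np.array([0.30, 0.55, 0.72, 1.05, 1.21])), log_conc, 1)

    # Step 3: analytical result for an unknown line density.
    unknown_density = 0.85
    conc = 10 ** np.polyval(work, density_to_log_intensity(unknown_density))
    print(f"estimated concentration: {conc:.2f} ppm")
    ```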

  15. Extended -Regular Sequence for Automated Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jin Hee-Jeong

    2006-01-01

    Full Text Available Microarray study enables us to obtain hundreds of thousands of expressions of genes or genotypes at once, and it is an indispensable technology for genome research. The first step is the analysis of scanned microarray images. This is the most important procedure for obtaining biologically reliable data. Currently most microarray image processing systems require burdensome manual block/spot indexing work. Since the amount of experimental data is increasing very quickly, automated microarray image analysis software becomes important. In this paper, we propose two automated methods for analyzing microarray images. First, we propose the extended -regular sequence to index blocks and spots, which enables a novel automatic gridding procedure. Second, we provide a methodology, hierarchical metagrid alignment, to allow reliable and efficient batch processing for a set of microarray images. Experimental results show that the proposed methods are more reliable and convenient than the commercial tools.

  16. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open the possibility of analysing large samples of stars, which requires that the observables are calculated in a consistent way. Here we present a set of automated asteroseismic analysis tools. The main engine of this set of tools is an algorithm for modelling the autocovariance spectra of the stellar acoustic spectra, allowing us to measure not only the frequency of maximum power and the large frequency separation, but also stellar parameters such as mass, radius, luminosity, effective temperature, surface gravity and age based on grid modeling. All the tools take into account the window function of the observations, which means that they work equally well for space-based photometry observations from e.g. the NASA Kepler satellite and ground-based velocity observations.

  17. Automated analysis of Xe-133 pulmonary ventilation (AAPV) in children

    Science.gov (United States)

    Cao, Xinhua; Treves, S. Ted

    2011-03-01

    In this study, an automated analysis of pulmonary ventilation (AAPV) was developed to visualize the ventilation in pediatric lungs using dynamic Xe-133 scintigraphy. AAPV is a software algorithm that converts a dynamic series of Xe-133 images into four functional images: equilibrium, washout halftime, residual, and clearance rate, by analyzing pixel-based activity. Compared to conventional methods of calculating global or regional ventilation parameters, AAPV provides a visual representation of pulmonary ventilation functions.
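
    A hedged sketch of the pixel-based idea behind such functional images: fit a monoexponential decay to each pixel's washout time-activity curve and convert the fitted rate into a washout half-time image. The array shapes and data below are illustrative assumptions, not AAPV's actual implementation.

    ```python
    # Per-pixel monoexponential washout fit -> half-time image (sketch).
    import numpy as np

    def washout_halftime_map(frames, t):
        """frames: (n_frames, H, W) counts during washout; t: times (s).
        Log-linear least squares per pixel: log A(t) ~ log A0 - k*t."""
        n, h, w = frames.shape
        y = np.log(np.maximum(frames.reshape(n, -1), 1e-6))   # (n, H*W)
        design = np.column_stack([np.ones_like(t), -t])       # [logA0, k]
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        k = coef[1].reshape(h, w)                             # washout rate
        return np.where(k > 0, np.log(2) / k, np.inf)         # half-time (s)

    # Synthetic demo: faster washout in the left half of the "lung".
    t = np.arange(0, 120, 5.0)
    k_true = np.full((32, 32), 0.01)
    k_true[:, :16] = 0.03
    frames = 1000 * np.exp(-k_true[None] * t[:, None, None])
    halftime = washout_halftime_map(frames, t)
    ```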

  18. RFI detection by automated feature extraction and statistical analysis

    OpenAIRE

    Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan

    2006-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high-speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm...

  19. A Method of Automated Nonparametric Content Analysis for Social Science

    OpenAIRE

    Hopkins, Daniel J.; King, Gary

    2010-01-01

    The increasing availability of digitized text presents enormous opportunities for social scientists. Yet hand coding many blogs, speeches, government records, newspapers, or other sources of unstructured text is infeasible. Although computer scientists have methods for automated content analysis, most are optimized to classify individual documents, whereas social scientists instead want generalizations about the population of documents, such as the proportion in a given category. Unfortunately...
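
    A hedged sketch of the core idea of direct proportion estimation: the observed distribution of feature profiles P(S) in an unlabeled corpus is a mixture over document categories, P(S) = Σ_d P(S|D=d) P(D=d). With P(S|D) estimated from a small labeled set, the category proportions P(D) can be recovered by constrained least squares. This is a simplified illustration, not the published method's exact estimator.

    ```python
    # Direct estimation of category proportions from a profile mixture.
    import numpy as np
    from scipy.optimize import lsq_linear

    def estimate_proportions(p_s_given_d, p_s):
        """p_s_given_d: (n_profiles, n_categories); p_s: (n_profiles,)."""
        res = lsq_linear(p_s_given_d, p_s, bounds=(0.0, 1.0))
        p_d = res.x
        return p_d / p_d.sum()      # renormalize to a proper distribution

    # Toy example: 4 feature profiles, 2 categories, true proportions (.3, .7).
    p_s_given_d = np.array([[0.5, 0.1],
                            [0.3, 0.2],
                            [0.1, 0.3],
                            [0.1, 0.4]])
    true_p_d = np.array([0.3, 0.7])
    p_s = p_s_given_d @ true_p_d
    print(estimate_proportions(p_s_given_d, p_s))   # ~ [0.3, 0.7]
    ```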

  20. Optimizing FPGA Design For Real Time Video Content Analysis

    OpenAIRE

    Ma, Xiaoyin

    2016-01-01

    The rapid growth of camera and storage capabilities over the past decade has resulted in an exponential growth in the size of video repositories such as YouTube. In 2015, 400 hours of video were uploaded to YouTube every minute. At the same time, massive amounts of images and videos are generated by monitoring cameras for elderly and patient assistance, satellites for earth science research, and telescopes for space exploration. Human annotation and manual manipulation of such videos are infeasible...

  1. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only...

  2. Video Games and Youth Violence: A Prospective Analysis in Adolescents

    Science.gov (United States)

    Ferguson, Christopher J.

    2011-01-01

    The potential influence of violent video games on youth violence remains an issue of concern for psychologists, policymakers and the general public. Although several prospective studies of video game violence effects have been conducted, none have employed well validated measures of youth violence, nor considered video game violence effects in…

  3. Tech Tips: Using Video Management/ Analysis Technology in Qualitative Research

    Directory of Open Access Journals (Sweden)

    J.A. Spiers

    2004-03-01

    Full Text Available This article presents tips on how to use video in qualitative research. The author states that, though there are many complex and powerful computer programs for working with video, the work done in qualitative research does not require those programs. For this work, simple editing software is sufficient. Also presented is an easy and efficient method of transcribing video clips.

  4. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Full Text Available Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories currently have modern equipment which provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 hematology slides were analyzed. Automated analysis was performed on last-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood in a universe of 22 parameters. The microscopy was performed simultaneously by two experts in microscopy. Results: The data showed that only 42.70% of results were concordant, compared with 57.30% discordant. The main findings among the discordant results were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of an individual and cannot be explained because they have not been investigated, which may compromise the final diagnosis. Conclusion: It was observed that it is of fundamental importance that qualitative microscopic analysis be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  5. Video Sensor-Based Complex Scene Analysis with Granger Causality

    Directory of Open Access Journals (Sweden)

    Shuang Wu

    2013-10-01

    Full Text Available In this report, we propose a novel framework to explore the activity interactions and temporal dependencies between activities in complex video surveillance scenes. Under our framework, a low-level codebook is generated by adaptive quantization with respect to an activeness criterion. The Hierarchical Dirichlet Process (HDP) model is then applied to automatically cluster low-level features into atomic activities. Afterwards, the dynamic behaviors of the activities are represented as a multivariate point process. The pair-wise relationships between activities are explicitly captured by non-parametric Granger causality analysis, from which the activity interactions and temporal dependencies are discovered. Then, each video clip is labeled by one of the activity interactions. Results on real-world traffic datasets show that the proposed method achieves high-quality classification performance. Compared with traditional K-means clustering, a maximum improvement of 19.19% is achieved by using the proposed causal grouping method.
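
    A minimal example of a pairwise Granger-causality test on two binned activity-count series, using statsmodels, may convey the basic "past of A helps predict B" idea. The paper's non-parametric Granger analysis on point processes is more involved; this linear test and the synthetic series are only illustrative.

    ```python
    # Pairwise linear Granger-causality test on binned activity counts.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(4)
    n = 500
    a = rng.poisson(5, n).astype(float)      # activity A counts per time bin
    b = np.empty(n)                          # activity B follows A by ~2 bins
    b[:2] = rng.poisson(5, 2)
    for i in range(2, n):
        b[i] = 0.6 * a[i - 2] + rng.poisson(2)

    # Test whether A Granger-causes B (column order: [effect, cause]).
    results = grangercausalitytests(np.column_stack([b, a]), maxlag=4)
    ```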

  6. Automation of Large-scale Computer Cluster Monitoring Information Analysis

    Science.gov (United States)

    Magradze, Erekle; Nadal, Jordi; Quadt, Arnulf; Kawamura, Gen; Musheghyan, Haykuhi

    2015-12-01

    High-throughput computing platforms consist of a complex infrastructure and provide a number of services prone to failures. To mitigate the impact of failures on the quality of the provided services, constant monitoring and timely reaction are required, which is impossible without automation of the system administration processes. This paper introduces a way of automating the analysis of monitoring information to provide long- and short-term predictions of the service response time (SRT) for mass storage and batch systems, and to identify the status of a service at a given time. The approach for the SRT predictions is based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). An evaluation of the approaches is performed on real monitoring data from the WLCG Tier-2 center GoeGrid. Ten-fold cross-validation results demonstrate the high efficiency of both approaches in comparison to known methods.

  7. Using historical wafermap data for automated yield analysis

    International Nuclear Information System (INIS)

    To be productive and profitable in a modern semiconductor fabrication environment, large amounts of manufacturing data must be collected, analyzed, and maintained. This includes data collected from in- and off-line wafer inspection systems and from the process equipment itself. This data is increasingly being used to design new processes, control and maintain tools, and to provide the information needed for rapid yield learning and prediction. Because of increasing device complexity, the amount of data being generated is outstripping the yield engineer's ability to effectively monitor and correct unexpected trends and excursions. The 1997 SIA National Technology Roadmap for Semiconductors highlights a need to address these issues through "automated data reduction algorithms to source defects from multiple data sources and to reduce defect sourcing time." SEMATECH and the Oak Ridge National Laboratory have been developing new strategies and technologies for providing the yield engineer with higher levels of assisted data reduction for the purpose of automated yield analysis. In this article, we will discuss the current state of the art and trends in yield management automation. copyright 1999 American Vacuum Society

  8. Statistical models of video structure for content analysis and characterization.

    Science.gov (United States)

    Vasconcelos, N; Lippman, A

    2000-01-01

    Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.
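
    The baseline the paper extends can be illustrated with a short sketch: declare a shot boundary wherever the frame-to-frame histogram difference exceeds a threshold adapted from a local window. The paper's actual contribution, a Bayesian formulation with a shot-duration prior, is not reproduced here; the window size and sensitivity below are assumptions.

    ```python
    # Baseline shot segmentation: adaptive threshold on histogram differences.
    import numpy as np

    def shot_boundaries(frames, window=15, k=3.0):
        """frames: iterable of grayscale frames (2-D uint8 arrays)."""
        hists = [np.histogram(f, bins=64, range=(0, 256))[0] / f.size
                 for f in frames]
        d = np.array([np.abs(hists[i] - hists[i - 1]).sum()
                      for i in range(1, len(hists))])
        cuts = []
        for i, di in enumerate(d):
            local = d[max(0, i - window):i]          # recent differences
            if len(local) > 2 and di > local.mean() + k * local.std():
                cuts.append(i + 1)                   # boundary before frame i+1
        return cuts
    ```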

  9. Cost Analysis of an Automated and Manual Cataloging and Book Processing System.

    Science.gov (United States)

    Druschel, Joselyn

    1981-01-01

    Cost analysis of an automated network system and a manual system of cataloging and book processing indicates a 20 percent savings using automation. Per unit costs based on the average monthly automation rate are used for comparison. Higher manual system costs are attributed to staff costs. (RAA)

  10. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, 4. and providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standards. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  11. Multimodal Semantic Analysis and Annotation for Basketball Video

    Science.gov (United States)

    Liu, Song; Xu, Min; Yi, Haoran; Chia, Liang-Tien; Rajan, Deepu

    2006-12-01

    This paper presents a new multiple-modality method for extracting semantic information from basketball video. Visual, motion, and audio information are first extracted from the video to generate low-level video segmentation and classification. Domain knowledge is then exploited for detecting interesting events in the basketball video. For video, both visual and motion prediction information are utilized in the shot and scene boundary detection algorithm, followed by scene classification. For audio, keysounds (sets of specific audio sounds related to semantic events) are identified using a classification method based on hidden Markov models (HMMs). Subsequently, by analyzing the multimodal information, the positions of potential semantic events, such as "foul" and "shot at the basket," are located with additional domain knowledge. Finally, a video annotation is generated according to MPEG-7 multimedia description schemes (MDSs). Experimental results demonstrate the effectiveness of the proposed method.

  12. Using Video Analysis to Investigate Conservation Impulse and Mechanical Energy Laws

    OpenAIRE

    Aleksandrova, Aleksandrija; Nancheva, Nadezhda

    2008-01-01

    Video analysis provides an educational, motivating, and cost-effective alternative to traditional course- related activities in physics education. Our paper presents results from video analysis of experiments “Collision of balls” and “Motion of a ball rolled on inclined plane” as examples to illustrate the laws of conservation of impulse and mechanical energy.

  13. Video Analysis and Modeling Tool for Physics Education: A workshop for Redesigning Pedagogy

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    This workshop aims to demonstrate how the Tracker Video Analysis and Modeling Tool engages, enables and empowers teachers to be learners, so that we can be leaders in our teaching practice. Through this workshop, the kinematics of a falling ball and of projectile motion are explored using video analysis and, later, video modeling. We hope to lead and inspire other teachers by facilitating their experiences with this ICT-enabled video modeling pedagogy (Brown, 2008) and free tool for facilitating student-centered active learning, thus motivating students to be more self-directed.
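
    The falling-ball activity ultimately reduces to fitting tracked (t, y) positions with a quadratic whose curvature gives g; a tiny numpy sketch shows the computation. The tracked data here are synthetic stand-ins for Tracker output.

    ```python
    # Estimate g from tracked (t, y) positions of a falling ball (sketch).
    import numpy as np

    t = np.linspace(0.0, 0.6, 16)          # s, video frame times
    y = 1.5 - 0.5 * 9.81 * t**2 \
        + np.random.default_rng(5).normal(0, 0.003, t.size)  # tracked heights

    a2, a1, a0 = np.polyfit(t, y, 2)       # y ~ a2*t^2 + a1*t + a0
    g_est = -2.0 * a2
    print(f"estimated g = {g_est:.2f} m/s^2")   # ~ 9.8
    ```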

  14. AMDA: an R package for the automated microarray data analysis

    Directory of Open Access Journals (Sweden)

    Foti Maria

    2006-07-01

    Full Text Available Abstract Background Microarrays are routinely used to assess mRNA transcript levels on a genome-wide scale. Large amounts of microarray data are now available in several databases, and new experiments are constantly being performed. In spite of this fact, few and limited tools exist for quickly and easily analyzing the results. Microarray analysis can be challenging for researchers without the necessary training, and it can be time-consuming for service providers with many users. Results To address these problems we have developed automated microarray data analysis (AMDA) software, which provides scientists with an easy and integrated system for the analysis of Affymetrix microarray experiments. AMDA is free and is available as an R package. It is based on the Bioconductor project, which provides a number of powerful bioinformatics and microarray analysis tools. This automated pipeline integrates different functions available in the R and Bioconductor projects with newly developed functions. AMDA covers all of the steps, performing a full data analysis, including image analysis, quality control, normalization, selection of differentially expressed genes, clustering, correspondence analysis and functional evaluation. Finally, a LaTeX document is dynamically generated depending on the performed analysis steps. The generated report contains comments and analysis results as well as references to several files for a deeper investigation. Conclusion AMDA is freely available as an R package under the GPL license. The package as well as an example analysis report can be downloaded in the Services/Bioinformatics section of the Genopolis website: http://www.genopolis.it/

  15. Video game demand in Japan : a household data analysis : revised

    OpenAIRE

    Harada, Nobuyuki

    2005-01-01

    Various economic studies of the video game industry have focused on intra-industry details. This paper complements the approach by highlighting broader budget allocation by households. Using the “total households” data of the Family Income and Expenditure Survey, this paper estimates the demand model for video games. Estimation results show the effects of household income, demographic factors, and prices of goods on the expenditure share of video games. These results indicate the importance o...

  16. Performance Analysis of Digital Video Watermarking using Discrete Cosine Transform

    OpenAIRE

    Ashish M. Kothari; Dwivedi, Ved V.

    2011-01-01

    In this paper, we suggest a transform-domain method for digital video watermarking, embedding invisible watermarks in the video. It is used for copyright protection as well as proof of ownership. We first extract the frames from the video and then use frequency-domain characteristics of the frames for watermarking. Specifically, we use the characteristics of the Discrete Cosine Transform for watermarking and calculate different performance parameters.
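
    A hedged sketch of block-DCT watermark embedding in one extracted frame: add a scaled watermark bit into a mid-frequency coefficient of each 8x8 block. The coefficient choice and embedding strength below are illustrative assumptions, not the parameters used in the paper.

    ```python
    # Block-DCT watermark embedding in a single frame (sketch).
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_watermark(frame, bits, alpha=4.0, coef=(3, 2)):
        """frame: 2-D float array (H, W) with H, W multiples of 8."""
        out = frame.astype(float).copy()
        h, w = frame.shape
        idx = 0
        for r in range(0, h, 8):
            for c in range(0, w, 8):
                block = dctn(out[r:r+8, c:c+8], norm="ortho")
                bit = 1.0 if bits[idx % len(bits)] else -1.0
                block[coef] += alpha * bit       # mid-frequency coefficient
                out[r:r+8, c:c+8] = idctn(block, norm="ortho")
                idx += 1
        return np.clip(out, 0, 255)

    rng = np.random.default_rng(6)
    frame = rng.integers(0, 256, (64, 64)).astype(float)
    marked = embed_watermark(frame, bits=rng.integers(0, 2, 128))
    ```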

  17. Analysis of YouTube™ videos related to bowel preparation for colonoscopy

    Institute of Scientific and Technical Information of China (English)

    Corey Hannah Basch; Grace Clarke Hillyer; Rachel Reeves; Charles E. Basch

    2014-01-01

    AIM: To examine YouTube™ videos about the bowel preparation procedure to better understand the quality of this information on the Internet. METHODS: YouTube™ videos related to colonoscopy preparation were identified during the winter of 2014; only those with ≥ 5000 views were selected for analysis (n = 280). Creator of the video, length, date posted, whether the video was based upon personal experience, and theme were recorded. Bivariate analysis was conducted to examine differences between consumer-created and healthcare professional-created videos. RESULTS: Most videos were based on personal experience. Half were created by consumers and 34% were ≥ 4.5 min long. Healthcare professional videos were viewed more often (> 19400 views: 59.4% vs 40.8%, P = 0.037, for healthcare professional and consumer videos, respectively) and more often focused on the purgative type and completing the preparation. Consumer videos received more comments (> 10 comments: 62.2% vs 42.7%, P = 0.001) and more often emphasized the palatability of the purgative, disgust, and hunger during the procedure. The content of colonoscopy bowel preparation YouTube™ videos is influenced by who creates the video and may affect views on colon cancer screening. CONCLUSION: The impact of perspectives on the quality of health-related information found on the Internet requires further examination.

  18. The experiments and analysis of several selective video encryption methods

    Science.gov (United States)

    Zhang, Yue; Yang, Cheng; Wang, Lei

    2013-07-01

    This paper presents four methods for selective video encryption based on MPEG-2 video compression, targeting the slices, the I-frames, the motion vectors, and the DCT coefficients, respectively. We use AES in simulation experiments for the four methods on the VS2010 platform, and compare the visual effect and the per-frame processing speed after the video is encrypted. The encryption depth can be arbitrarily selected; we design the encryption depth using the double-limit counting method, so that accuracy can be increased.

  19. Analysis and simulation of a torque assist automated manual transmission

    Science.gov (United States)

    Galvagno, E.; Velardocchia, M.; Vigliani, A.

    2011-08-01

    The paper presents the kinematic and dynamic analysis of a power-shift automated manual transmission (AMT) characterised by a wet clutch, called the assist clutch (ACL), replacing the fifth-gear synchroniser. This torque assist mechanism becomes a torque transfer path during gearshifts, in order to overcome a typical dynamic problem of AMTs, that is, the interruption of driving force. The mean power contributions during gearshifts are computed for different engine and ACL interventions, allowing useful conclusions to be drawn for developing the control algorithms. The simulation results prove the advantages of the analysed transmission in terms of gearshift quality and ride comfort.

  20. An analysis of technology usage for streaming digital video in support of a preclinical curriculum.

    OpenAIRE

    Dev, P.; Rindfleisch, T. C.; Kush, S. J.; Stringer, J. R.

    2000-01-01

    Usage of streaming digital video of lectures in preclinical courses was measured by analysis of the data in the log file maintained on the web server. We observed that students use the video when it is available. They do not use it to replace classroom attendance but rather for review before examinations or when a class has been missed. Usage of video has not increased significantly for any course within the 18 month duration of this project.

  1. An automated confirmatory system for analysis of mammograms.

    Science.gov (United States)

    Peng, W; Mayorga, R V; Hussein, E M A

    2016-03-01

    This paper presents an integrated system for the automatic analysis of mammograms to assist radiologists in confirming their diagnosis in mammography screening. The proposed automated confirmatory system (ACS) can process a digitized mammogram online, and generates a high-quality filtered segmentation of an image for biological interpretation and a texture-feature based diagnosis. We use a series of image pre-processing and segmentation techniques, including 2D median filtering, a seeded region growing (SRG) algorithm, and image contrast enhancement, to remove noise, delete radiopaque artifacts and eliminate the projection of the pectoral muscle from a digitized mammogram. We also develop an entire-image texture-feature based classification method, combining a rough-set approach to extract five fundamental texture features from images with an artificial neural network technique to classify a mammogram as: normal; indicating the presence of a benign lump; or representing a malignant tumor. Here, 222 random images from the Mammographic Image Analysis Society (MIAS) database are used for offline ACS training. Once the system is tuned and trained, it is ready for automated use in the analysis and diagnosis of new mammograms. To test the trained system, a separate set of 100 random images from the MIAS database and another set of 100 random images from the independent BancoWeb database are selected. The proposed ACS is shown to be successful in confirming diagnosis of mammograms from the two independent databases. PMID:26742491

  2. Toward an Analysis of Video Games for Mathematics Education

    Science.gov (United States)

    Offenholley, Kathleen

    2011-01-01

    Video games have tremendous potential in mathematics education, yet there is a push to simply add mathematics to a video game without regard to whether the game structure suits the mathematics, and without regard to the level of mathematical thought being learned in the game. Are students practicing facts, or are they problem-solving? This paper…

  3. Automated quantitative analysis of ventilation-perfusion lung scintigrams

    International Nuclear Information System (INIS)

    An automated computer analysis of ventilation (Kr-81m) and perfusion (Tc-99m) lung images has been devised that produces a graphical image of the distribution of ventilation and perfusion, and of ventilation-perfusion ratios. The analysis has overcome the following problems: the identification of the midline between two lungs and the lung boundaries, the exclusion of extrapulmonary radioactivity, the superimposition of lung images of different sizes, and the format for presentation of the data. Therefore, lung images of different sizes and shapes may be compared with each other. The analysis has been used to develop normal ranges from 55 volunteers. Comparison of younger and older age groups of men and women show small but significant differences in the distribution of ventilation and perfusion, but no differences in ventilation-perfusion ratios

  4. Analysis of automated highway system risks and uncertainties. Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    Sicherman, A.

    1994-10-01

    This volume describes a risk analysis performed to help identify important Automated Highway System (AHS) deployment uncertainties and quantify their effect on costs and benefits for a range of AHS deployment scenarios. The analysis identified a suite of key factors affecting vehicle and roadway costs, capacities and market penetrations for alternative AHS deployment scenarios. A systematic protocol was utilized for obtaining expert judgments of key factor uncertainties in the form of subjective probability percentile assessments. Based on these assessments, probability distributions on vehicle and roadway costs, capacity and market penetration were developed for the different scenarios. The cost/benefit risk methodology and analysis provide insights by showing how uncertainties in key factors translate into uncertainties in summary cost/benefit indices.

  5. Using statistical analysis and artificial intelligence tools for automatic assessment of video sequences

    Science.gov (United States)

    Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz

    2014-01-01

    This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural network based machine-learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work done on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
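
    A hedged mini-version of the approach: learn a regression from several objective quality-metric scores to a subjective score (MOS). A small MLP stands in for both the neural and the non-linear regression variants; the metric values and MOS below are synthetic assumptions.

    ```python
    # Map objective quality metrics to a subjective score (MOS) by regression.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 400
    psnr = rng.uniform(20, 45, n)              # example HVS-inspired inputs
    ssim = rng.uniform(0.5, 1.0, n)
    blockiness = rng.uniform(0, 1, n)
    mos = 1 + 4 / (1 + np.exp(-(0.15 * psnr + 3 * ssim
                                - 2 * blockiness - 6))) \
          + rng.normal(0, 0.1, n)              # synthetic subjective scores

    X = np.column_stack([psnr, ssim, blockiness])
    X_tr, X_te, y_tr, y_te = train_test_split(X, mos, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out videos:", model.score(X_te, y_te))
    ```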

  6. A Framework for Soccer Video Processing and Analysis Based on Enhanced Algorithm for Dominant Color Extraction

    Directory of Open Access Journals (Sweden)

    Youness TABII

    2009-10-01

    Full Text Available Video content retrieval and semantics research attract a large number of researchers in the video processing and analysis domain. Researchers try to propose structures or frameworks to extract the content of the video, integrating many algorithms that use low- and high-level features. To improve efficiency, the system has to consider user behavior as well as develop a low-complexity framework. In this paper we present a framework for automatic soccer video summaries and highlights extraction using audio/video features and an enhanced generic algorithm for dominant color extraction. Our framework consists of the stages shown in Figure 1. Experimental results demonstrate the effectiveness and efficiency of the proposed framework.
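
    A hedged sketch of generic dominant-color extraction for soccer frames: take the peak of the hue histogram (normally the pitch green) and build a mask of pixels within a tolerance of that peak. The paper's enhanced algorithm refines this basic scheme; the tolerance below is an illustrative assumption.

    ```python
    # Generic dominant-color mask via the peak of the hue histogram (sketch).
    import cv2
    import numpy as np

    def dominant_color_mask(bgr_frame, hue_tol=10):
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        hue = hsv[:, :, 0]
        hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
        peak = int(hist.argmax())                  # dominant hue (pitch color)
        lo, hi = (peak - hue_tol) % 180, (peak + hue_tol) % 180
        if lo < hi:
            mask = (hue >= lo) & (hue <= hi)
        else:                                      # hue wraps around 180
            mask = (hue >= lo) | (hue <= hi)
        return mask.astype(np.uint8) * 255, peak

    # The ratio of dominant-colored pixels in a frame is a common cue for
    # classifying long / medium / close-up shots in soccer video.
    ```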

  7. Trends in biomedical informatics: automated topic analysis of JAMIA articles.

    Science.gov (United States)

    Han, Dong; Wang, Shuang; Jiang, Chao; Jiang, Xiaoqian; Kim, Hyeon-Eui; Sun, Jimeng; Ohno-Machado, Lucila

    2015-11-01

    Biomedical Informatics is a growing interdisciplinary field in which research topics and citation trends have been evolving rapidly in recent years. To analyze these data in a fast, reproducible manner, automation of certain processes is needed. JAMIA is a "generalist" journal for biomedical informatics. Its articles reflect the wide range of topics in informatics. In this study, we retrieved Medical Subject Headings (MeSH) terms and citations of JAMIA articles published between 2009 and 2014. We use tensors (i.e., multidimensional arrays) to represent the interaction among topics, time and citations, and applied tensor decomposition to automate the analysis. The trends represented by tensors were then carefully interpreted and the results were compared with previous findings based on manual topic analysis. A list of most cited JAMIA articles, their topics, and publication trends over recent years is presented. The analyses confirmed previous studies and showed that, from 2012 to 2014, the number of articles related to MeSH terms Methods, Organization & Administration, and Algorithms increased significantly both in number of publications and citations. Citation trends varied widely by topic, with Natural Language Processing having a large number of citations in particular years, and Medical Record Systems, Computerized remaining a very popular topic in all years.

  8. Quantifying biodiversity using digital cameras and automated image analysis.

    Science.gov (United States)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions...

  9. Robust and sensitive video motion detection for sleep analysis.

    Science.gov (United States)

    Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard

    2014-05-01

    In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by a factor of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score with only a slight temporal misalignment of the starting times. PLM detection during sleep is thus feasible and can give an indication of the PLMI score.

  10. Design of video quality metrics with multi-way data analysis a data driven approach

    CERN Document Server

    Keimel, Christian

    2016-01-01

    This book proposes a data-driven methodology using multi-way data analysis for the design of video-quality metrics. It also enables video-quality metrics to be created using arbitrary features. This data-driven design approach not only requires no detailed knowledge of the human visual system, but also allows a proper consideration of the temporal nature of video using a three-way prediction model, corresponding to the three-way structure of video. Using two simple example metrics, the author demonstrates not only that this purely data-driven approach outperforms state-of-the-art video-quality metrics, which are often optimized for specific properties of the human visual system, but also that multi-way data analysis methods outperform the combination of two-way data analysis methods and temporal pooling.

  11. Description of texts of auxiliary programs for processing video information. Part 2: SUODH program of automated separation of quasihomogeneous formations

    Science.gov (United States)

    Borisenko, V. I.; Chesalin, L. S.

    1980-01-01

    The algorithm, block diagram, complete text, and instructions are given for the use of a computer program to separate formations whose spectral characteristics are constant on the average. The initial material for operating the computer program presented is video information in a standard color-superposition format.

  12. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
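
    A hedged sketch of the first algorithm's idea: summarize a P-frame's macroblock motion-vector field into a global camera-motion estimate, with the median vector indicating pan/tilt and the sign of the mean radial component hinting at zoom (one possible convention). Extracting the vectors from a real MPEG-2 stream is not shown.

    ```python
    # Global camera-motion estimate from a macroblock motion-vector field.
    import numpy as np

    def camera_motion(mv_field):
        """mv_field: (rows, cols, 2) array of macroblock motion vectors."""
        rows, cols, _ = mv_field.shape
        pan, tilt = np.median(mv_field.reshape(-1, 2), axis=0)

        # Radial projection of residual vectors onto directions from the
        # center: consistently positive -> zoom out, negative -> zoom in.
        yy, xx = np.mgrid[0:rows, 0:cols]
        center = np.array([(cols - 1) / 2, (rows - 1) / 2])
        dirs = np.dstack([xx - center[0], yy - center[1]]).astype(float)
        norms = np.linalg.norm(dirs, axis=2, keepdims=True)
        dirs = np.divide(dirs, norms, out=np.zeros_like(dirs),
                         where=norms > 0)
        residual = mv_field - np.array([pan, tilt])
        zoom = float((residual * dirs).sum(axis=2).mean())
        return {"pan": float(pan), "tilt": float(tilt), "zoom": zoom}
    ```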

  13. Video game demand in Japan : a household data analysis

    OpenAIRE

    Harada, Nobuyuki

    2004-01-01

    There are many empirical studies of supply-side data for the video games industry. This paper, on the contrary, highlights the household side, estimating demand equations for video games. Using the “total households” data of the Family Income and Expenditure Survey, which includes one-person households and households engaged in agriculture, forestry and fishery, estimation results show that a household’s income factor has a positive effect on its share of expenditure on video games. It is also ve...

  14. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, Marlene; Rosenvinge, Flemming Schønning; Spillum, Erik;

    2015-01-01

    Background Antibiotics of the β-lactam group are able to alter the shape of the bacterial cell wall, e.g. causing filamentation or spheroplast formation. Early determination of antimicrobial susceptibility may be complicated by filamentation of bacteria, as this can be falsely interpreted as growth in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system oCelloScope. Results Three E. coli strains displaying different resistance profiles and differences in filamentation kinetics were used to study a novel image analysis algorithm to quantify the length of bacteria and bacterial filamentation. A total of 12 β-lactam antibiotics or β-lactam–β-lactamase inhibitor combinations were analyzed for their ability to induce filamentation.

  15. Automated drawing of network plots in network meta-analysis.

    Science.gov (United States)

    Rücker, Gerta; Schwarzer, Guido

    2016-03-01

    In systematic reviews based on network meta-analysis, the network structure should be visualized. Network plots often have been drawn by hand using generic graphical software. A typical way of drawing networks, also implemented in statistical software for network meta-analysis, is a circular representation, often with many crossing lines. We use methods from graph theory in order to generate network plots in an automated way. We give a number of requirements for graph drawing and present an algorithm that fits prespecified ideal distances between the nodes representing the treatments. The method was implemented in the function netgraph of the R package netmeta and applied to a number of networks from the literature. We show that graph representations with a small number of crossing lines are often preferable to circular representations. PMID:26060934
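
    The idea described in the abstract, placing treatment nodes so that pairwise Euclidean distances fit prespecified ideal distances, can be sketched as stress minimization. This is a generic reimplementation of the concept in Python, not the actual code of netmeta's netgraph function.

    ```python
    # Node layout by fitting prespecified ideal distances (stress minimization).
    import numpy as np
    from scipy.optimize import minimize

    def layout(d_ideal, seed=0):
        """d_ideal: (n, n) symmetric matrix of ideal node distances."""
        n = d_ideal.shape[0]
        x0 = np.random.default_rng(seed).normal(size=2 * n)

        def stress(flat):
            xy = flat.reshape(n, 2)
            diff = xy[:, None, :] - xy[None, :, :]
            dist = np.linalg.norm(diff, axis=2)
            iu = np.triu_indices(n, 1)
            return ((dist[iu] - d_ideal[iu]) ** 2).sum()

        return minimize(stress, x0, method="L-BFGS-B").x.reshape(n, 2)

    # Four treatments, all pairs compared: equal ideal distances lead to a
    # spread-out arrangement with few crossing lines, not a fixed circle.
    coords = layout(np.ones((4, 4)) - np.eye(4))
    ```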

  16. Widely applicable MATLAB routines for automated analysis of saccadic reaction times.

    Science.gov (United States)

    Leppänen, Jukka M; Forssman, Linda; Kaatiala, Jussi; Yrttiaho, Santeri; Wass, Sam

    2015-06-01

    Saccadic reaction time (SRT) is a widely used dependent variable in eye-tracking studies of human cognition and its disorders. SRTs are also frequently measured in studies with special populations, such as infants and young children, who are limited in their ability to follow verbal instructions and remain in a stable position over time. In this article, we describe a library of MATLAB routines (Mathworks, Natick, MA) that are designed to (1) enable completely automated implementation of SRT analysis for multiple data sets and (2) cope with the unique challenges of analyzing SRTs from eye-tracking data collected from poorly cooperating participants. The library includes preprocessing and SRT analysis routines. The preprocessing routines (i.e., moving median filter and interpolation) are designed to remove technical artifacts and missing samples from raw eye-tracking data. The SRTs are detected by a simple algorithm that identifies the last point of gaze in the area of interest, but, critically, the extracted SRTs are further subjected to a number of postanalysis verification checks to exclude values contaminated by artifacts. Example analyses of data from 5- to 11-month-old infants demonstrated that SRTs extracted with the proposed routines were in high agreement with SRTs obtained manually from video records, robust against potential sources of artifact, and exhibited moderate to high test-retest stability. We propose that the present library has wide utility in standardizing and automating SRT-based cognitive testing in various populations. The MATLAB routines are open source and can be downloaded from http://www.uta.fi/med/icl/methods.html .
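
    The published routines are MATLAB; the Python sketch below only mirrors the preprocessing described in the abstract (moving-median filtering and interpolation of missing samples) and the detection rule of taking the last gaze sample inside the area of interest as the saccade launch point. All parameters are invented for illustration.

    ```python
    # Gaze preprocessing and simple AOI-based SRT extraction (sketch).
    import numpy as np
    from scipy.signal import medfilt

    def preprocess(gaze_x, kernel=5):
        """Interpolate missing samples, then median-filter out spikes."""
        x = np.asarray(gaze_x, dtype=float)
        bad = np.isnan(x)
        idx = np.arange(x.size)
        if bad.any() and (~bad).sum() > 1:
            x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])  # fill gaps
        return medfilt(x, kernel_size=kernel)                 # remove spikes

    def srt_from_aoi(x, t, aoi=(0.0, 0.3), stim_onset=0.0):
        """Last sample inside the AOI after stimulus onset -> SRT estimate."""
        inside = (x >= aoi[0]) & (x <= aoi[1]) & (t >= stim_onset)
        if not inside.any():
            return None
        last = np.max(np.where(inside)[0])
        return t[last] - stim_onset

    # srt = srt_from_aoi(preprocess(raw_gaze_x), sample_times)
    ```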

  17. AutoGate: automating analysis of flow cytometry data.

    Science.gov (United States)

    Meehan, Stephen; Walther, Guenther; Moore, Wayne; Orlova, Darya; Meehan, Connor; Parks, David; Ghosn, Eliver; Philips, Megan; Mitsunaga, Erin; Waters, Jeffrey; Kantor, Aaron; Okamura, Ross; Owumi, Solomon; Yang, Yang; Herzenberg, Leonard A; Herzenberg, Leonore A

    2014-05-01

    Nowadays, one can hardly imagine biology and medicine without flow cytometry to measure CD4 T cell counts in HIV, follow bone marrow transplant patients, characterize leukemias, etc. Similarly, without flow cytometry, there would be a bleak future for stem cell deployment, HIV drug development and full characterization of the cells and cell interactions in the immune system. But while flow instruments have improved markedly, the development of automated tools for processing and analyzing flow data has lagged sorely behind. To address this deficit, we have developed automated flow analysis software technology, provisionally named AutoComp and AutoGate. AutoComp acquires sample and reagent labels from users or flow data files, and uses this information to complete the flow data compensation task. AutoGate replaces the manual subsetting capabilities provided by current analysis packages with newly defined statistical algorithms that automatically and accurately detect, display and delineate subsets in well-labeled and well-recognized formats (histograms, contour and dot plots). Users guide analyses by successively specifying axes (flow parameters) for data subset displays and selecting statistically defined subsets to be used for the next analysis round. Ultimately, this process generates analysis "trees" that can be applied to automatically guide analyses for similar samples. The first AutoComp/AutoGate version is currently in the hands of a small group of users at Stanford, Emory and NIH. When this "early adopter" phase is complete, the authors expect to distribute the software free of charge to .edu, .org and .gov users.

  18. Fully automated diabetic retinopathy screening using morphological component analysis.

    Science.gov (United States)

    Imani, Elaheh; Pourreza, Hamid-Reza; Banaee, Touka

    2015-07-01

    Diabetic retinopathy is the major cause of blindness in the world. It has been shown that early diagnosis can play a major role in prevention of visual loss and blindness. This diagnosis can be made through regular screening and timely treatment. Moreover, automation of this process can significantly reduce the workload of ophthalmologists and alleviate inter- and intra-observer variability. This paper provides a fully automated diabetic retinopathy screening system with the ability of retinal image quality assessment. The novelty of the proposed method lies in the use of the Morphological Component Analysis (MCA) algorithm to discriminate between normal and pathological retinal structures. To this end, first a pre-screening algorithm is used to assess the quality of retinal images. If the quality of the image is not satisfactory, it is examined by an ophthalmologist and must be recaptured if necessary. Otherwise, the image is processed for diabetic retinopathy detection. In this stage, normal and pathological structures of the retinal image are separated by the MCA algorithm. Finally, the normal and abnormal retinal images are distinguished by statistical features of the retinal lesions. Our proposed system achieved 92.01% sensitivity and 95.45% specificity on the Messidor dataset, which is a remarkable result in comparison with previous work. PMID:25863517

  19. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    Science.gov (United States)

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  20. Software fault tree analysis of an automated control system device written in Ada

    OpenAIRE

    Winter, Mathias William.

    1995-01-01

    Software Fault Tree Analysis (SFTA) is a technique used to analyze software for faults that could lead to hazardous conditions in systems which contain software components. Previous thesis works have developed three Ada-based, semi-automated software analysis tools: the Automated Code Translation Tool (ACm), an Ada statement template generator; the Fault Tree Editor (Fm), a graphical fault tree editor; and the Fault Isolator (Fl), an automated software fault tree isolator. These previous works d...

  1. Acquisition and Analysis of Dynamic Responses of a Historic Pedestrian Bridge using Video Image Processing

    Science.gov (United States)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; O'Donnell, Deirdre; Wright, Robert; Pakrashi, Vikram

    2015-07-01

    Video based tracking is capable of analysing bridge vibrations that are characterised by large amplitudes and low frequencies. This paper presents the use of video images and associated image processing techniques to obtain the dynamic response of a pedestrian suspension bridge in Cork, Ireland. This historic structure is one of the four suspension bridges in Ireland and is notable for its dynamic nature. A video camera was mounted on the river bank and the dynamic responses of the bridge were measured from the video images. The dynamic response was assessed without the need for a reflector on the bridge and in the presence of various forms of luminous complexity in the video image scenes. Vertical deformations of the bridge were measured in this regard. The video image tracking for the measurement of dynamic responses of the bridge was based on correlating patches in time-lagged scenes in video images and utilising a zero mean normalised cross correlation (ZNCC) metric. The bridge was excited by designed pedestrian movement and by individual cyclists traversing the bridge. The time series data of dynamic displacement responses of the bridge were analysed to obtain the frequency domain response. Frequencies obtained from video analysis were checked against accelerometer data from the bridge obtained while carrying out the same set of experiments used for video image based recognition.
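
    As a rough illustration of the matching metric, the numpy sketch below scores candidate patches in a later frame against a template patch with ZNCC and returns the best displacement; the exhaustive search window and all names are illustrative, not taken from the paper's implementation.

        import numpy as np

        def zncc(a, b):
            """Zero mean normalised cross correlation of two equal-size patches."""
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def track_patch(frame, template, top_left, search=10):
            """Best (score, dy, dx) of `template` within +/- `search` pixels."""
            y0, x0 = top_left
            h, w = template.shape
            best = (-1.0, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = frame[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
                    if cand.shape == template.shape:
                        s = zncc(template, cand)
                        if s > best[0]:
                            best = (s, dy, dx)
            return best

    Tracking the vertical component dy over successive frames yields the displacement time series whose spectrum gives the bridge frequencies.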

  2. Simulation and Analysis of Digital Video Watermarking Using MPEG-2

    Directory of Open Access Journals (Sweden)

    Dr. Anil Kumar Sharma,

    2011-07-01

    Full Text Available Quantization Index Modulation (QIM) is an important method for embedding a digital watermark signal carrying information. This technique achieves very efficient tradeoffs among the watermark embedding rate, the amount of embedding-induced distortion to the host signal, and the robustness to intentional or unintentional attacks. Most video watermarking schemes have been proposed for uncompressed video. This paper introduces a compressed-domain video watermarking procedure to reduce computation. In a video frame the luminance component is an important factor in which little change can be made, as changes there can disturb the original data. The MPEG-2 video compression technique is based on a macroblock structure, motion compensation and conditional replenishment of macroblocks. To achieve high compression, motion compensation is employed with P-frames; the Discrete Cosine Transform (DCT) coefficients always exist in the video stream, which supports high robustness. In this work QIM embedding is applied to the DC component of the chroma DCT of P-frames. The robustness of the proposed method has been studied through simulation.
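
    A minimal scalar QIM embed/extract pair in numpy is sketched below. The paper applies QIM to the DC chroma DCT coefficients of P-frames; here the host values are an abstract coefficient array, and the step size DELTA is an assumed parameter (larger steps increase robustness at the cost of distortion).

        import numpy as np

        DELTA = 8.0  # assumed quantization step

        def qim_embed(coeffs, bits):
            """One bit per coefficient: choose between two quantizers
            offset from each other by DELTA / 2."""
            offs = np.where(np.asarray(bits) == 0, 0.0, DELTA / 2.0)
            return np.round((coeffs - offs) / DELTA) * DELTA + offs

        def qim_extract(coeffs):
            """Recover bits as the index of the nearer quantizer."""
            d0 = np.abs(coeffs - np.round(coeffs / DELTA) * DELTA)
            q1 = np.round((coeffs - DELTA / 2) / DELTA) * DELTA + DELTA / 2
            return (np.abs(coeffs - q1) < d0).astype(int)

        host = np.array([13.2, -7.9, 40.1, 5.5])  # stand-in DC coefficients
        assert list(qim_extract(qim_embed(host, [1, 0, 1, 1]))) == [1, 0, 1, 1]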

  3. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users' object-selection decisions in terms of the chosen 3D location, while user attitudes have no significant impact. Furthermore, the ray-casting-based interaction modality using the Wiimote can outperform the volume-based interaction modality using mouse and keyboard in object positioning accuracy.

  4. A computerized system for video analysis of the aortic valve.

    Science.gov (United States)

    Vesely, I; Menkis, A; Campbell, G

    1990-10-01

    A novel technique was developed to study the dynamic behavior of the porcine aortic valve in an isolated heart preparation. Under the control of a personal computer, a video frame grabber board continuously acquired and digitized images of the aortic valve, and an analog-to-digital (A/D) converter read four channels of physiological data (flow rate, aortic and ventricular pressure, and aortic root diameter). The valve was illuminated with a strobe light synchronized to fire at the field acquisition rate of the CCD video camera. Using the overlay bits in the video board, the measured parameters were superimposed over the live video as graphical tracings, and the resultant composite images were recorded on-line to video tape. The overlaying of the valve images with the graphical tracings of acquired data enabled the data tracings to be precisely synchronized with the video images of the aortic valve. This technique enabled us to observe the relationship between aortic root expansion and valve function.

  5. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades image quality. Noise reduction is therefore essential for improving visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound images and video as well as the theoretical background, algorithmic steps, and the MATLAB™ code for the following group of despeckle filters:

  6. Malaria: the value of the automated depolarization analysis.

    Science.gov (United States)

    Josephine, F P; Nissapatorn, V

    2005-01-01

    This retrospective and descriptive study was carried out in the University of Malaya Medical Center (UMMC) from January to September, 2004. This study aimed to evaluate the diagnostic utility of the Cell-Dyn 4000 hematology analyzer's depolarization analysis and to determine the sensitivity and specificity of this technique in the context of malaria diagnosis. A total of 889 cases presenting with pyrexia of unknown origin or clinically suspected of malaria were examined. Sixteen of these blood samples were found to be positive: 12 for P. vivax, 3 for P. malariae, and 1 for P. falciparum by peripheral blood smear, the standard technique for parasite detection and species identification. Demographic characteristics showed that the majority of patients were in the age range of 20-57, with a mean (± SD) of 35.9 ± 11.4 years, and were male foreign workers. These 16 positive blood samples were also processed by the Cell-Dyn 4000 analyzer in the normal complete blood count (CBC) operational mode. Malaria parasites produce hemozoin, which depolarizes light, and this allows the automated detection of malaria during routine complete blood count analysis with the Abbott Cell-Dyn 4000 instrument. The white blood cell (WBC) differential plots of all malaria-positive samples showed abnormal depolarization events in the NEU-EOS and EOS I plots. This was not seen in the negative samples. In 12 patients with P. vivax infection, a cluster pattern in the NEU-EOS and EOS I plots was observed, color-coded green or black. In 3 patients with P. malariae infection, a few random depolarization events in the NEU-EOS and EOS I plots were seen, color-coded green, black or blue. In the patient with P. falciparum infection, the sample was color-coded green with a few random purple depolarizing events in the NEU-EOS and EOS I plots. This study confirms that automated depolarization analysis is a highly sensitive and specific method to diagnose whether or not a patient

  7. Automated quantitative gait analysis in animal models of movement disorders

    Directory of Open Access Journals (Sweden)

    Vandeputte Caroline

    2010-08-01

    Full Text Available Abstract Background Accurate and reproducible behavioral tests in animal models are of major importance in the development and evaluation of new therapies for central nervous system disease. In this study we investigated for the first time gait parameters of rat models for Parkinson's disease (PD), Huntington's disease (HD) and stroke using the CatWalk method, a novel automated gait analysis test. Static and dynamic gait parameters were measured in all animal models, and these data were compared to readouts of established behavioral tests, such as the cylinder test in the PD and stroke rats and the rotarod test for the HD group. Results Hemiparkinsonian rats were generated by unilateral injection of the neurotoxin 6-hydroxydopamine in the striatum or in the medial forebrain bundle. For Huntington's disease, a transgenic rat model expressing a truncated huntingtin fragment with multiple CAG repeats was used. Thirdly, a stroke model was generated by a photothrombotically induced infarct in the right sensorimotor cortex. We found that multiple gait parameters were significantly altered in all three disease models compared to their respective controls. Behavioural deficits could be efficiently measured using the cylinder test in the PD and stroke animals, and in the case of the PD model, the deficits in gait essentially confirmed results obtained by the cylinder test. However, in the HD model and the stroke model the CatWalk analysis proved more sensitive than the rotarod test and also added new and more detailed information on specific gait parameters. Conclusion The automated quantitative gait analysis test may be a useful tool to study both motor impairment and recovery associated with various neurological motor disorders.

  8. Automated generation of burnup chain for reactor analysis applications

    International Nuclear Information System (INIS)

    This paper presents the development of an automated generation of a new burnup chain for reactor analysis applications. The JENDL FP Decay Data File 2011 and Fission Yields Data File 2011 were used as the data sources. The nuclides in the new chain are determined by restrictions of the half-life and cumulative yield of fission products or from a given list. Then, decay modes, branching ratios and fission yields are recalculated taking into account intermediate reactions. The new burnup chain is output according to the format for the SRAC code system. Verification was performed to evaluate the accuracy of the new burnup chain. The results show that the new burnup chain reproduces well the results of a reference one with 193 fission products used in SRAC. Further development and applications are being planned with the burnup chain code. (author)

  9. Disaster Video Gallery Project

    OpenAIRE

    Fesseha, ZeleAlem

    2012-01-01

    Accompanying files: Project Report.docx; Project Report.pdf; Project Presentation.pptx; Project Presentation.pdf; Sample_YouTube_Videos_Raw.txt; Sample_YouTube_Videos_Readable.txt. The goal of this project was to collect YouTube videos for carefully selected events. The videos were manually collected and verified to be relevant to the specific events. The collection, together with the short description included with each video, can later be used to automate the process of collecting videos pertaining to pa...

  10. galaxieEST: addressing EST identity through automated phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Larsson Karl-Henrik

    2004-07-01

    Full Text Available Abstract Background Research involving expressed sequence tags (ESTs is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactory annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process. In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. Results galaxieEST is an open source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. Conclusions By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches

  11. 14 CFR 1261.413 - Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults. 1261.413 Section 1261.413 Aeronautics and Space NATIONAL...) § 1261.413 Analysis of costs; automation; prevention of overpayments, delinquencies, or defaults....

  12. Structuring Lecture Videos by Automatic Projection Screen Localization and Analysis.

    Science.gov (United States)

    Li, Kai; Wang, Jue; Wang, Haoqian; Dai, Qionghai

    2015-06-01

    We present a fully automatic system for extracting the semantic structure of a typical academic presentation video, which captures the whole presentation stage with abundant camera motions such as panning, tilting, and zooming. Our system automatically detects and tracks both the projection screen and the presenter whenever they are visible in the video. By analyzing the image content of the tracked screen region, our system is able to detect slide progressions and extract a high-quality, non-occluded, geometrically-compensated image for each slide, resulting in a list of representative images that reconstruct the main presentation structure. Afterwards, our system recognizes text content and extracts keywords from the slides, which can be used for keyword-based video retrieval and browsing. Experimental results show that our system is able to generate more stable and accurate screen localization results than commonly-used object tracking methods. Our system also extracts more accurate presentation structures than general video summarization methods, for this specific type of video. PMID:26357345

  13. Video Frames Reconstruction Based on Time-Frequency Analysis and Hermite Projection Method

    Directory of Open Access Journals (Sweden)

    Krylov Andrey

    2010-01-01

    Full Text Available A method for temporal analysis and reconstruction of video sequences based on time-frequency analysis and the Hermite projection method is proposed. The S-method-based time-frequency distribution is used to characterize stationarity within the sequence. Namely, a sequence of DCT coefficients along the time axis is used to create a frequency-modulated signal. The reconstruction of nonstationary sequences is done using the Hermite expansion coefficients. Here, a small number of Hermite coefficients can be used, which may provide significant savings for some video-based applications. The results are illustrated with video examples.
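
    The core operation, projecting a 1-D signal onto a truncated orthonormal Hermite basis, can be sketched in a few lines of Python; the grid, signal and number of terms below are illustrative, and a direct discrete inner product stands in for whatever quadrature the paper uses.

        import numpy as np
        from numpy.polynomial.hermite import hermval
        from math import factorial, pi, sqrt

        def hermite_function(n, x):
            """Orthonormal psi_n(x) = c_n * H_n(x) * exp(-x^2 / 2)."""
            coef = np.zeros(n + 1)
            coef[n] = 1.0
            c = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi))
            return c * hermval(x, coef) * np.exp(-x * x / 2.0)

        def hermite_reconstruct(signal, x, n_terms):
            """Truncated Hermite expansion: compute n_terms coefficients by a
            discrete inner product, then rebuild the signal from them."""
            dx = x[1] - x[0]
            recon = np.zeros_like(signal, dtype=float)
            for n in range(n_terms):
                psi = hermite_function(n, x)
                recon += (signal * psi).sum() * dx * psi
            return recon

        x = np.linspace(-5, 5, 256)
        sig = np.exp(-x ** 2 / 2) * np.cos(3 * x)
        approx = hermite_reconstruct(sig, x, n_terms=12)  # 12 numbers vs 256 samples

    Storing only the handful of expansion coefficients instead of every sample is where the savings mentioned in the abstract come from.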

  14. Automated seismic event location by waveform coherence analysis

    OpenAIRE

    Grigoli, Francesco

    2014-01-01

    Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterised by low signal-to-noise ratio, such methods are required to be robust to noise and sufficiently accurate. Most of the standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves and on the minimization of the re...

  15. Automated SEM Modal Analysis Applied to the Diogenites

    Science.gov (United States)

    Bowman, L. E.; Spilde, M. N.; Papike, James J.

    1996-01-01

    Analysis of volume proportions of minerals, or modal analysis, is routinely accomplished by point counting on an optical microscope, but the process, particularly on brecciated samples such as the diogenite meteorites, is tedious and prone to error by misidentification of very small fragments, which may make up a significant volume of the sample. Precise volume percentage data can be gathered on a scanning electron microscope (SEM) utilizing digital imaging and an energy dispersive spectrometer (EDS). This form of automated phase analysis reduces error, and at the same time provides more information than could be gathered using simple point counting alone, such as particle morphology statistics and chemical analyses. We have previously studied major, minor, and trace-element chemistry of orthopyroxene from a suite of diogenites. This abstract describes the method applied to determine the modes on this same suite of meteorites and the results of that research. The modal abundances thus determined add additional information on the petrogenesis of the diogenites. In addition, low-abundance phases such as spinels were located for further analysis by this method.

  16. Block Based Video Watermarking Scheme Using Wavelet Transform and Principle Component Analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2012-01-01

    Full Text Available In this paper, a comprehensive approach for digital video watermarking is introduced, where a binary watermark image is embedded into the video frames. Each video frame is decomposed into sub-images using a 2-level discrete wavelet transform, then the Principal Component Analysis (PCA) transformation is applied to each block in the two bands LL and HH. The watermark is embedded into the maximum coefficient of the PCA block of the two bands. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility, with no noticeable difference between the watermarked video frames and the original frames. The computed PSNR achieves a high score of 44.097 dB. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment.
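
    A condensed Python sketch of the embedding path is given below: a 2-level DWT of a frame, block-wise PCA on the LL band, and a perturbation of the largest PCA coefficient per block. It simplifies the scheme (one band only, additive embedding) and assumes the PyWavelets package; the block size and strength alpha are made-up values.

        import numpy as np
        import pywt

        def embed_block(block, bit, alpha=2.0):
            """PCA of one block; nudge its largest-magnitude score by the bit."""
            mean = block.mean(axis=0)
            _, vecs = np.linalg.eigh(np.cov(block - mean, rowvar=False))
            scores = (block - mean) @ vecs
            i, j = np.unravel_index(np.abs(scores).argmax(), scores.shape)
            scores[i, j] += alpha if bit else -alpha
            return scores @ vecs.T + mean

        def embed_frame(frame, bits, bs=8):
            LL2, det2, det1 = pywt.wavedec2(frame.astype(float), "haar", level=2)
            k = 0
            for r in range(0, LL2.shape[0] - bs + 1, bs):
                for c in range(0, LL2.shape[1] - bs + 1, bs):
                    if k < len(bits):
                        LL2[r:r+bs, c:c+bs] = embed_block(LL2[r:r+bs, c:c+bs], bits[k])
                        k += 1
            return pywt.waverec2([LL2, det2, det1], "haar")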

  17. Video Object Tracking and Analysis for Computer Assisted Surgery

    Directory of Open Access Journals (Sweden)

    Nobert Thomas Pallath

    2012-03-01

    Full Text Available The pedicle screw insertion technique has revolutionised the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, there is a risk of prolonged exposure to X-ray radiation. Systems that have lower radiation risk are generally quite expensive. The position and orientation of the drill is clinically very important in pedicle screw fixation. In this paper, the position and orientation of the marker on the drill are determined using pattern recognition based methods, with geometric features obtained from the input video sequence taken from a CCD camera. A search is then performed on the video frames after preprocessing to obtain the exact position and orientation of the drill. Animated graphics showing the instantaneous position and orientation of the drill are then overlaid on the processed video for real-time drill control and navigation.

  18. Granulometric profiling of aeolian dust deposits by automated image analysis

    Science.gov (United States)

    Varga, György; Újvári, Gábor; Kovács, János; Jakab, Gergely; Kiss, Klaudia; Szalai, Zoltán

    2016-04-01

    Determination of granulometric parameters is of growing interest in the Earth sciences. Particle size data of sedimentary deposits provide insights into the physicochemical environment of transport, accumulation and post-depositional alterations of sedimentary particles, and are important proxies applied in paleoclimatic reconstructions. It is especially true for aeolian dust deposits with a fairly narrow grain size range as a consequence of the extremely selective nature of wind sediment transport. Therefore, various aspects of aeolian sedimentation (wind strength, distance to source(s), possible secondary source regions and modes of sedimentation and transport) can be reconstructed only from precise grain size data. As terrestrial wind-blown deposits are among the most important archives of past environmental changes, proper explanation of the proxy data is a mandatory issue. Automated imaging provides a unique technique to gather direct information on granulometric characteristics of sedimentary particles. Granulometric data obtained from automatic image analysis of Malvern Morphologi G3-ID is a rarely applied new technique for particle size and shape analyses in sedimentary geology. Size and shape data of several hundred thousand (or even million) individual particles were automatically recorded in this study from 15 loess and paleosoil samples from the captured high-resolution images. Several size (e.g. circle-equivalent diameter, major axis, length, width, area) and shape parameters (e.g. elongation, circularity, convexity) were calculated by the instrument software. At the same time, the mean light intensity after transmission through each particle is automatically collected by the system as a proxy of optical properties of the material. Intensity values are dependent on chemical composition and/or thickness of the particles. The results of the automated imaging were compared to particle size data determined by three different laser diffraction instruments

  19. Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

    OpenAIRE

    Acar, Esra; Hopfgartner, Frank; Albayrak, Sahin

    2015-01-01

    When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher level representations based on these low-level features. We propose in this work to use deep learning methods, in particular convolutional neural networks (CNNs), in order to...

  20. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

    As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. Firstly, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc., while an unsupervised algorithm is proposed to group video shots into various clusters. Secondly, by taking advantage of the temporal relationship between audio and visual signals, we label the scene clusters with semantic labels including rally scenes and break scenes. Thirdly, a refinement procedure is developed to reduce false rally scenes by further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  1. Subjective Analysis and Objective Characterization of Adaptive Bitrate Videos

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Tavakoli, Samira; Brunnström, Kjell;

    2016-01-01

    the factors influencing subjective QoE of adaptation events. However, adapting the video quality typically lasts on a time scale much longer than what current standardized subjective testing methods are designed for, thus making a full matrix design of the experiment on an event level hard to achieve... mean opinion score (MOS) and the MOS from shorter sequences. The aforementioned empirical dataset has proven to be very challenging in terms of video quality assessment test design, so deriving a conclusive outcome about the influence of different parameters has been difficult. The second...

  2. Preliminary studies on an automated 3D fish tracking method based on a single video camera

    Institute of Scientific and Technical Information of China (English)

    徐盼麟; 韩军; 童剑锋

    2012-01-01

    coordinate to world coordinate, the automated tracking algorithm of fish movement, and the automated output of fish behavior 2D and 3D data. Tests found that when the distance between the camera and the aquarium is 1.5 m, distortion calibration gives an acceptable pixel error of about 0.1 pixels. Because the camera tilted slightly during the experiment, the shape of the aquarium in the images changed; based on Free-Form Deformation processing, this image deformation is rectified during the coordinate transform. We then implemented the Interacting Multiple Model Joint Probabilistic Data Association (IMMJPDA) algorithm to automatically track fish in 3D and output fish behavior data. The results of a tracking experiment with six Hemigrammus rhodostomus show that the IMMJPDA algorithm can deal with the key issues in a fish tracking system, enabling the method to extract individual fish from video images, construct their tracks, output 3D positions and speeds, and finally generate a complete 3D movement track drawing for fish behavior analysis. In a dense clutter situation, JPDA requires a fairly large amount of computation to evaluate the joint probabilities, so we combined the Nearest Neighbor algorithm with the JPDA algorithm to reduce the computational burden.
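
    The nearest-neighbour step mentioned at the end can be sketched as a global assignment between predicted fish positions and new detections; the scipy Hungarian solver below is a stand-in for the authors' implementation, and the gate radius is an assumed value.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate(predicted, detected, gate=30.0):
            """Match predicted (N, 3) positions to detected (M, 3) positions;
            return (track, detection) pairs whose distance is within the gate."""
            cost = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]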

  3. A Semi-Automated Functional Test Data Analysis Tool

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Peng; Haves, Philip; Kim, Moosung

    2005-05-01

    The growing interest in commissioning is creating a demand that will increasingly be met by mechanical contractors and less experienced commissioning agents. They will need tools to help them perform commissioning effectively and efficiently. The widespread availability of standardized procedures, accessible in the field, will allow commissioning to be specified with greater certainty as to what will be delivered, enhancing the acceptance and credibility of commissioning. In response, a functional test data analysis tool is being developed to analyze the data collected during functional tests for air-handling units. The functional test data analysis tool is designed to analyze test data, assess performance of the unit under test and identify the likely causes of the failure. The tool has a convenient user interface to facilitate manual entry of measurements made during a test. A graphical display shows the measured performance versus the expected performance, highlighting significant differences that indicate the unit is not able to pass the test. The tool is described as semiautomated because the measured data need to be entered manually, instead of being passed from the building control system automatically. However, the data analysis and visualization are fully automated. The tool is designed to be used by commissioning providers conducting functional tests as part of either new building commissioning or retro-commissioning, as well as building owners and operators interested in conducting routine tests periodically to check the performance of their HVAC systems.

  4. Intelligent Control in Automation Based on Wireless Traffic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2007-08-01

    Wireless technology is a central component of many factory automation infrastructures in both the commercial and government sectors, providing connectivity among various components in industrial realms (distributed sensors, machines, mobile process controllers). However, wireless technologies pose more threats to computer security than wired environments. The advantageous features of Bluetooth technology resulted in Bluetooth unit shipments climbing to five million per week at the end of 2005 [1, 2]. This is why the real-time interpretation and understanding of Bluetooth traffic behavior is critical in both maintaining the integrity of computer systems and increasing the efficient use of this technology in control-type applications. Although neuro-fuzzy approaches have been applied to wireless 802.11 behavior analysis in the past, the significantly different Bluetooth protocol framework has not been extensively explored using this technology. This paper presents a new neuro-fuzzy traffic analysis algorithm for this still-new territory of Bluetooth traffic. Further enhancements of this algorithm are presented along with a comparison against the traditional, numerical approach. Through test examples, interesting Bluetooth traffic behavior characteristics were captured, and the comparative elegance of this computationally inexpensive approach was demonstrated. This analysis can be used to provide directions for future development and use of this prevailing technology in various control-type applications, as well as making the use of it more secure.

  6. VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.

    Science.gov (United States)

    Ekman, Paul; And Others

    The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…

  7. Is interactional dissynchrony a clue to deception? Insights from automated analysis of nonverbal visual cues.

    Science.gov (United States)

    Yu, Xiang; Zhang, Shaoting; Yan, Zhennan; Yang, Fei; Huang, Junzhou; Dunbar, Norah E; Jensen, Matthew L; Burgoon, Judee K; Metaxas, Dimitris N

    2015-03-01

    Detecting deception in interpersonal dialog is challenging since deceivers take advantage of the give-and-take of interaction to adapt to any sign of skepticism in an interlocutor's verbal and nonverbal feedback. Human detection accuracy is poor, often with no better than chance performance. In this investigation, we consider whether automated methods can produce better results and whether emphasizing the possible disruption in interactional synchrony can signal whether an interactant is truthful or deceptive. We propose a data-driven and unobtrusive framework using visual cues that consists of face tracking, head movement detection, facial expression recognition, and interactional synchrony estimation. Analyses were conducted on 242 video samples from an experiment in which deceivers and truth-tellers interacted with professional interviewers either face-to-face or through computer mediation. Results revealed that the framework is able to automatically track head movements and expressions of both interlocutors to extract normalized meaningful synchrony features and to learn classification models for deception recognition. Further experiments show that these features reliably capture interactional synchrony and efficiently discriminate deception from truth. PMID:24988600

  8. Video Analysis in Cross-Cultural Environments and Methodological Issues

    Science.gov (United States)

    Montandon, Christiane

    2015-01-01

    This paper addresses the use of videography combined with group interviews, as a way to better understand the informal learnings of 11-12 year old children in cross-cultural encounters during French-German school exchanges. The complete, consistent video data required the researchers to choose the most significant sequences to highlight the…

  9. Two video analysis applications using foreground/background segmentation

    NARCIS (Netherlands)

    Zivkovic, Z.; Petkovic, M.; Mierlo, van R.; Keulen, van M.; Heijden, van der F.; Jonker, W.; Rijnierse, E.

    2003-01-01

    Probably the most frequently solved problem when videos are analyzed is segmenting a foreground object from its background in an image. After some regions in an image are detected as the foreground objects, some features are extracted that describe the segmented regions. These features together with
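
    The foreground/background segmentation step can be illustrated with OpenCV's MOG2 subtractor, an adaptive Gaussian-mixture background model associated with the first author's published algorithm; the file name, kernel size and area threshold below are assumptions for the example.

        import cv2

        cap = cv2.VideoCapture("input.mp4")  # illustrative file name
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            # Bounding boxes of sufficiently large foreground regions are the
            # segmented objects from which descriptive features are extracted.
            regions = [cv2.boundingRect(c) for c in contours
                       if cv2.contourArea(c) > 500]
        cap.release()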

  10. Violent Video Games as Exemplary Teachers: A Conceptual Analysis

    Science.gov (United States)

    Gentile, Douglas A.; Gentile, J. Ronald

    2008-01-01

    This article presents conceptual and empirical analyses of several of the "best practices" of learning and instruction, and demonstrates how violent video games use them effectively to motivate learners to persevere in acquiring and mastering a number of skills, to navigate through complex problems and changing environments, and to experiment with…

  11. Exploring the Behavior of Highly Effective CIOs Using Video Analysis

    NARCIS (Netherlands)

    Gupta, Komal; Wilderom, Celeste; Hillegersberg, van Jos

    2009-01-01

    Although recently several studies have addressed the required skills of effective CIOs, little is known of the actual behavior of successful CIOs. In this study, we explore the behavior of highly effective CIOs by video-recording CIOs at work. The two CIOs videotaped were nominated as CIO of the year.

  12. The potential of accelerating early detection of autism through content analysis of YouTube videos.

    Directory of Open Access Journals (Sweden)

    Vincent A Fusaro

    Full Text Available Autism is on the rise, with 1 in 88 children receiving a diagnosis in the United States, yet the process for diagnosis remains cumbersome and time consuming. Research has shown that home videos of children can help increase the accuracy of diagnosis. However, the use of videos in the diagnostic process is uncommon. In the present study, we assessed the feasibility of applying a gold-standard diagnostic instrument to brief and unstructured home videos and tested whether video analysis can enable more rapid detection of the core features of autism outside of clinical environments. We collected 100 public videos from YouTube of children ages 1-15 with either a self-reported diagnosis of an ASD (N = 45) or not (N = 55). Four non-clinical raters independently scored all videos using one of the most widely adopted tools for behavioral diagnosis of autism, the Autism Diagnostic Observation Schedule-Generic (ADOS). The classification accuracy was 96.8%, with 94.1% sensitivity and 100% specificity; the inter-rater correlation for the behavioral domains on the ADOS was 0.88, and the diagnoses matched a trained clinician in all but 3 of 22 randomly selected video cases. Despite the diversity of videos and non-clinical raters, our results indicate that it is possible to achieve high classification accuracy, sensitivity, and specificity as well as clinically acceptable inter-rater reliability with non-clinical personnel. Our results also demonstrate the potential for video-based detection of autism in short, unstructured home videos and further suggest that at least a percentage of the effort associated with detection and monitoring of autism may be mobilized and moved outside of traditional clinical environments.

  13. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.;

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As the main result, this technique will provide the design engineer with decision tables for fault handling...

  14. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Bagnoli, F.;

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedial actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design... implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As the main result, this technique will provide the design engineer with decision tables for fault handling...

  15. Music video shot segmentation using independent component analysis and keyframe extraction based on image complexity

    Science.gov (United States)

    Li, Wei; Chen, Ting; Zhang, Wenjun; Shi, Yunyu; Li, Jun

    2012-04-01

    In recent years, Music video data is increasing at an astonishing speed. Shot segmentation and keyframe extraction constitute a fundamental unit in organizing, indexing, retrieving video content. In this paper a unified framework is proposed to detect the shot boundaries and extract the keyframe of a shot. Music video is first segmented to shots by illumination-invariant chromaticity histogram in independent component (IC) analysis feature space .Then we presents a new metric, image complexity, to extract keyframe in a shot which is computed by ICs. Experimental results show the framework is effective and has a good performance.

  16. Automated analysis for detecting beams in laser wakefield simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ushizima, Daniela M.; Rubel, Oliver; Prabhat, Mr.; Weber, Gunther H.; Bethel, E. Wes; Aragon, Cecilia R.; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Hamann, Bernd; Messmer, Peter; Hagen, Hans

    2008-07-03

    Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a potential new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates a large dataset that requires time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the data analysis and classification of simulation data. First, we propose a new method to identify locations with high density of particles in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high density electron regions using a lifetime diagram, by organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.
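
    The fuzzy-clustering step over per-time-step features can be sketched with the scikit-fuzzy package, assuming it is available; the features (e.g. peak particle density, mean momentum), cluster count and selection rule below are all illustrative simplifications of the paper's pipeline.

        import numpy as np
        import skfuzzy as fuzz

        def flag_beam_candidates(features, n_clusters=3, m=2.0):
            """features: (n_timesteps, n_features) array. Returns the soft
            membership matrix and indices of candidate-beam time steps."""
            data = features.T  # skfuzzy expects features x samples
            cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
                data, c=n_clusters, m=m, error=1e-5, maxiter=200, seed=0)
            labels = np.argmax(u, axis=0)
            # Crude stand-in for the paper's rule: the cluster whose centre has
            # the highest first feature (say, peak density) holds the candidates.
            beam_cluster = int(np.argmax(cntr[:, 0]))
            return u, np.flatnonzero(labels == beam_cluster)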

  17. 40 CFR 13.19 - Analysis of costs; automation; prevention of overpayments, delinquencies or defaults.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Analysis of costs; automation; prevention of overpayments, delinquencies or defaults. 13.19 Section 13.19 Protection of Environment...; automation; prevention of overpayments, delinquencies or defaults. (a) The Administrator may...

  18. Hybridization of DCT and SVD in the Implementation and Performance Analysis of Video Watermarking

    Directory of Open Access Journals (Sweden)

    Ved Vyas Dwivedi

    2012-06-01

    Full Text Available In this paper, we document the implementation and performance analysis of a digital video watermarking scheme that combines two of the most powerful transform-domain processing techniques for video with fundamentals of linear algebra. We take into account the fundamentals of the Discrete Cosine Transform and Singular Value Decomposition in the development of the proposed algorithm. We first apply the Singular Value Decomposition and then use the singular values for the insertion of the message into the video. Finally, we use two visual quality metrics for the analysis. We also applied various attacks to the video and found the proposed scheme to be robust.
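
    A compact numpy/scipy sketch in the spirit of the hybrid scheme is shown below: DCT of a luminance frame, SVD of the coefficient matrix, and the watermark added to the singular values. The embedding rule, strength alpha, and the non-blind extraction are assumptions for illustration, not the paper's exact algorithm.

        import numpy as np
        from scipy.fft import dctn, idctn

        def embed(frame, wm, alpha=0.05):
            """frame: 2-D luminance array; wm: 1-D watermark with one value
            per singular value, i.e. length min(frame.shape)."""
            C = dctn(frame.astype(float), norm="ortho")
            U, S, Vt = np.linalg.svd(C, full_matrices=False)
            C_marked = U @ np.diag(S + alpha * wm) @ Vt
            return idctn(C_marked, norm="ortho")

        def extract(marked, original, alpha=0.05):
            """Non-blind recovery: compare singular values with the original."""
            Sm = np.linalg.svd(dctn(marked.astype(float), norm="ortho"),
                               compute_uv=False)
            So = np.linalg.svd(dctn(original.astype(float), norm="ortho"),
                               compute_uv=False)
            return (Sm - So) / alpha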

  19. VIDEO OBJECT SEGMENTATION BY 2-D MESH-BASED MOTION ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Video object extraction is a key technology in content-based video coding. A novel video object extraction algorithm based on two-dimensional (2-D) mesh-based motion analysis is proposed in this paper. Firstly, a 2-D mesh fitting the original frame image is obtained via a feature detection algorithm. Then, higher order statistics motion analysis is applied to the 2-D mesh representation to get an initial motion detection mask. After post-processing, the final segmentation mask is quickly obtained, and hence the video object is effectively extracted. Experimental results show that the proposed algorithm combines the merits of mesh-based and pixel-based segmentation algorithms, and thereby achieves satisfactory subjective and objective performance while dramatically increasing segmentation speed.

  20. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In recent decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  1. Automated SIMS Isotopic Analysis Of Small Dust Particles

    Science.gov (United States)

    Nittler, L.; Alexander, C.; Gyngard, F.; Morgand, A.; Zinner, E. K.

    2009-12-01

    The isotopic compositions of sub-μm to μm sized dust grains are of increasing interest in cosmochemistry, nuclear forensics and terrestrial aerosol research. Because of its high sensitivity and spatial resolution, Secondary Ion Mass Spectrometry (SIMS) is the tool of choice for measuring isotopes in such small samples. Indeed, SIMS has enabled an entirely new sub-field of astronomy: presolar grains in meteorites. In recent years, the development of the Cameca NanoSIMS ion probe has extended the reach of isotopic measurements to particles as small as 100 nm in diameter, a regime where isotopic precision is strongly limited by the total number of atoms in the sample. Many applications require obtaining isotopic data on large numbers of particles, necessitating the development of automated techniques. One such method is isotopic imaging, wherein images of multiple isotopes are acquired, each containing multiple dispersed particles, and image processing is used to determine isotopic ratios for individual particles. This method is powerful, but relatively inefficient for raster-based imaging on the NanoSIMS. Modern computerized control of instrumentation has allowed for another approach, analogous to commercial automated SEM-EDS particle analysis systems, in which images are used solely to locate particles followed by fully automated grain-by-grain analysis. The first such system was developed on the Carnegie Institution’s Cameca ims-6f, and was used to generate large databases of presolar grains. We have recently developed a similar system for the NanoSIMS, whose high sensitivity allows for smaller grains to be analyzed with less sample consumption than is possible with the 6f system. The 6f and NanoSIMS systems are functionally identical: an image of dispersed grains is obtained with sufficient statistical precision for an algorithm to identify the positions of individual particles, the primary ion beam is deflected to each particle in turn and rastered in a small

  2. How violent video games communicate violence: A literature review and content analysis of moral disengagement factors

    OpenAIRE

    T. Hartmann; Krakowiak, M.; Tsay-Vogel, M.

    2014-01-01

    Mechanisms of moral disengagement in violent video game play have recently received considerable attention among communication scholars. To date, however, no study has analyzed the prevalence of moral disengagement factors in violent video games. To fill this research gap, the present approach includes both a systematic literature review and a content analysis of moral disengagement cues embedded in the narratives and actual game play of 17 top-ranked first-person shooters (PC). Findings sugg...

  3. Online Nonparametric Bayesian Activity Mining and Analysis From Surveillance Video.

    Science.gov (United States)

    Bastani, Vahid; Marcenaro, Lucio; Regazzoni, Carlo S

    2016-05-01

    A method for online incremental mining of activity patterns from a surveillance video stream is presented in this paper. The framework consists of a learning block in which a Dirichlet process mixture model is employed for the incremental clustering of trajectories. Stochastic trajectory pattern models are formed using Gaussian process regression of the corresponding flow functions. Moreover, a sequential Monte Carlo method based on a Rao-Blackwellized particle filter is proposed for tracking and online classification, as well as the detection of abnormality during the observation of an object. Experimental results on real surveillance video data are provided to show the performance of the proposed algorithm in the different tasks of trajectory clustering, classification, and abnormality detection. PMID:26978823
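
    The clustering block can be approximated in Python with sklearn's truncated variational Dirichlet-process mixture, standing in for the paper's own incremental learner; the fixed-length trajectory features and truncation level are illustrative choices.

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        def trajectory_features(traj, n_points=8):
            """Resample a (T, 2) trajectory to n_points and flatten it."""
            t = np.linspace(0, 1, len(traj))
            ts = np.linspace(0, 1, n_points)
            return np.concatenate([np.interp(ts, t, traj[:, 0]),
                                   np.interp(ts, t, traj[:, 1])])

        def cluster_trajectories(trajs, max_components=10):
            X = np.stack([trajectory_features(t) for t in trajs])
            dpgmm = BayesianGaussianMixture(
                n_components=max_components,  # truncation level of the DP
                weight_concentration_prior_type="dirichlet_process",
                covariance_type="diag", random_state=0).fit(X)
            return dpgmm.predict(X), dpgmm

    Low values of dpgmm.score_samples on a new trajectory can then serve as a simple abnormality signal, in the same spirit as the paper's detector.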

  4. Measurements and analysis of a major adult video portal

    OpenAIRE

    Tyson, Gareth; El Khatib, Yehia; Sastry, Nishanth; Uhlig, Steve

    2016-01-01

    Today the Internet is a large multimedia delivery infrastructure, with websites such as YouTube appearing at the top of most measurement studies. However, most traffic studies have ignored an important domain: adult multimedia distribution. Whereas, traditionally, such services were provided primarily via bespoke websites, recently these have converged towards what is known as "Porn 2.0". These services allow users to upload, view, rate and comment on videos for free (much like YouTube). Desp...

  6. Application of automated image analysis to coal petrography

    Science.gov (United States)

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    The coal petrologist seeks to determine the petrographic characteristics of organic and inorganic coal constituents and their lateral and vertical variations within a single coal bed or different coal beds of a particular coal field. Definitive descriptions of coal characteristics and coal facies provide the basis for interpretation of depositional environments, diagenetic changes, and burial history and determination of the degree of coalification or metamorphism. Numerous coal core or columnar samples must be studied in detail in order to adequately describe and define coal microlithotypes, lithotypes, and lithologic facies and their variations. The large amount of petrographic information required can be obtained rapidly and quantitatively by use of an automated image-analysis system (AIAS). An AIAS can be used to generate quantitative megascopic and microscopic modal analyses for the lithologic units of an entire columnar section of a coal bed. In our scheme for megascopic analysis, distinctive bands 2 mm or more thick are first demarcated by visual inspection. These bands consist of either nearly pure microlithotypes or lithotypes such as vitrite/vitrain or fusite/fusain, or assemblages of microlithotypes. Megascopic analysis with the aid of the AIAS is next performed to determine volume percentages of vitrite, inertite, minerals, and microlithotype mixtures in bands 0.5 to 2 mm thick. The microlithotype mixtures are analyzed microscopically by use of the AIAS to determine their modal composition in terms of maceral and optically observable mineral components. Megascopic and microscopic data are combined to describe the coal unit quantitatively in terms of (V) for vitrite, (E) for liptite, (I) for inertite or fusite, (M) for mineral components other than iron sulfide, (S) for iron sulfide, and (VEIM) for the composition of the mixed phases (Xi) i = 1,2, etc. in terms of the maceral groups vitrinite V, exinite E, inertinite I, and optically observable mineral
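
    The core modal-analysis computation (volume percentages from a classified band) is simple to illustrate. A minimal sketch, assuming each pixel has already been classified by phase; the labels and stand-in band below are hypothetical:

```python
# Sketch: given a classified image in which each pixel is labeled by phase
# (e.g. 0=vitrite, 1=inertite, 2=minerals, 3=mixed), report the modal
# (area, hence volume) percentages for one band.
import numpy as np

def modal_percentages(classified, n_phases):
    counts = np.bincount(classified.ravel(), minlength=n_phases)
    return 100.0 * counts / classified.size

band = np.random.randint(0, 4, size=(200, 300))   # stand-in classified band
for phase, pct in enumerate(modal_percentages(band, 4)):
    print(f"phase {phase}: {pct:.1f} vol%")
```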

  7. Socio-phenomenology and conversation analysis: interpreting video lifeworld healthcare interactions.

    Science.gov (United States)

    Bickerton, Jane; Procter, Sue; Johnson, Barbara; Medina, Angel

    2011-10-01

    This article uses a socio-phenomenological methodology to develop knowledge and understanding of the healthcare consultation based on the concept of the lifeworld. It concentrates its attention on social action rather than strategic action and a systems approach. This article argues that patient-centred care is more effective when it is informed by a lifeworld conception of human mutual shared interaction. Videos offer an opportunity for a wide audience to experience the many kinds of conversations and dynamics that take place in consultations. The visual sociology used in this article provides a method to organize videos of emotional, knowledge and action conversations as well as dynamic, typical consultation situations. These interactions are experienced through the video materials themselves, unlike conversation analysis, where video materials are first transcribed and then analysed. Both approaches have the potential to support intersubjective learning, but this article argues that a video lifeworld schema is more accessible to health professionals and the general public. The typical interaction situations are constructed through the analysis of video materials of consultations in a London walk-in centre. Further studies are planned to extend and replicate results in other healthcare services. This method of analysis focuses on the ways in which the everyday lifeworld informs face-to-face person-centred health care and supports social action as a significant factor underpinning strategic action and a systems approach to consultation practice.

  8. Automated absolute activation analysis with californium-252 sources

    Energy Technology Data Exchange (ETDEWEB)

    MacMurdo, K.W.; Bowman, W.W.

    1978-09-01

    A 100-mg ²⁵²Cf neutron activation analysis facility is used routinely at the Savannah River Laboratory for multielement analysis of many solid and liquid samples. An absolute analysis technique converts counting data directly to elemental concentration without the use of classical comparative standards and flux monitors. With the totally automated pneumatic sample transfer system, cyclic irradiation-decay-count regimes can be pre-selected for up to 40 samples, and samples can be analyzed with the facility unattended. An automatic data control system starts and stops a high-resolution gamma-ray spectrometer and/or a delayed-neutron detector; the system also stores data and controls output modes. Gamma-ray data are reduced by three main programs in the IBM 360/195 computer: the 4096-channel spectrum and pertinent experimental timing, counting, and sample data are stored on magnetic tape; the spectrum is then reduced to a list of significant photopeak energies, integrated areas, and their associated statistical errors; and the third program assigns gamma-ray photopeaks to the appropriate neutron activation product(s) by comparing photopeak energies to tabulated gamma-ray energies. Photopeak areas are then converted to elemental concentration by using experimental timing and sample data, calculated elemental neutron capture rates, absolute detector efficiencies, and absolute spectroscopic decay data. Calculational procedures have been developed so that fissile material can be analyzed by cyclic neutron activation and delayed-neutron counting procedures. These calculations are based on a six half-life group model of delayed-neutron emission; calculations include corrections for delayed-neutron interference from ¹⁷O. Detection sensitivities of ≤400 ppb for natural uranium and 8 ppb (≤0.5 nCi/g) for ²³⁹Pu were demonstrated with 15-g samples at a throughput of up to 140 per day. Over 40 elements can be detected at the sub-ppm level.
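
    The area-to-concentration conversion follows the standard activation equation. A minimal sketch with illustrative symbols (saturation, decay, and counting factors); these are textbook NAA relations, not the Savannah River system's actual calibration code:

```python
# Hedged sketch of converting a photopeak area to concentration (g/g) with
# the standard activation equation. All parameter values are illustrative.
import math

N_A = 6.022e23  # Avogadro's number

def grams_per_gram(peak_area, eff, gamma_yield, sigma_cm2, flux,
                   lam, t_irr, t_decay, t_count, molar_mass, sample_g):
    S = 1 - math.exp(-lam * t_irr)              # saturation during irradiation
    D = math.exp(-lam * t_decay)                # decay before counting
    F = (1 - math.exp(-lam * t_count)) / lam    # integral of decay over count
    decays_counted = peak_area / (eff * gamma_yield)
    n_target = decays_counted / (sigma_cm2 * flux * S * D * F)
    return (n_target * molar_mass / N_A) / sample_g

# Illustrative numbers only (not a real calibration):
print(grams_per_gram(peak_area=5.2e4, eff=0.02, gamma_yield=0.85,
                     sigma_cm2=2.7e-24, flux=1e9, lam=math.log(2) / 900,
                     t_irr=600, t_decay=120, t_count=600,
                     molar_mass=238.0, sample_g=15.0))
```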

  9. MORPHY, a program for an automated "atoms in molecules" analysis

    Science.gov (United States)

    Popelier, Paul L. A.

    1996-02-01

    The operating manual for a structured FORTRAN 77 program called MORPHY is presented. This code performs an automated topological analysis of a molecular electron density and its Laplacian. The program is written in a stylistically homogeneous, transparent and modular manner. The input is compact but flexible and allows for multiple jobs in one deck. The output is detailed and has an attractive layout. Critical points in the charge density and its Laplacian can be located in a robust and economical way and are displayed via an external on-line visualisation package. The gradient vector field of the charge density can be traced with great accuracy, and planar contour, relief and one-dimensional line plots of many scalar properties can be generated. Non-bonded radii are calculated, and analytical expressions for interatomic surfaces are computed (with error estimates) and plotted. MORPHY is interfaced with the AIMPAC suite of programs. The capabilities of the program are illustrated with two test runs and five selected figures.
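
    Locating critical points of a scalar field, as MORPHY does for the electron density, amounts to solving ∇ρ = 0. A generic sketch using Newton iteration on a numerical gradient and Hessian, with a Gaussian stand-in for the density (not MORPHY's actual algorithm):

```python
# Newton search for a critical point of rho(x), i.e. a zero of its gradient.
import numpy as np

def grad_hess(f, x, h=1e-5):
    """Numerical gradient and Hessian of f at x (central differences)."""
    n = len(x)
    g, H = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + e + ej) - f(x + e - ej)
                       - f(x - e + ej) + f(x - e - ej)) / (4 * h * h)
    return g, H

def find_critical_point(f, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad_hess(f, x)
        if np.linalg.norm(g) < tol:
            break
        x -= np.linalg.solve(H, g)      # Newton step toward grad rho = 0
    return x

rho = lambda x: np.exp(-np.dot(x, x))   # stand-in "density"
print(find_critical_point(rho, [0.3, -0.2, 0.1]))   # converges to ~[0, 0, 0]
```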

  10. RPCA-KFE: Key Frame Extraction for Video Using Robust Principal Component Analysis.

    Science.gov (United States)

    Dang, Chinh; Radha, Hayder

    2015-11-01

    Key frame extraction algorithms consider the problem of selecting a subset of the most informative frames from a video to summarize its content. Several applications, such as video summarization, search, indexing, and prints from video, can benefit from extracted key frames of the video under consideration. Most approaches in this class of algorithms work directly with the input video data set, without considering the underlying low-rank structure of the data set. Other algorithms exploit the low-rank component only, ignoring the other key information in the video. In this paper, a novel key frame extraction framework based on robust principal component analysis (RPCA) is proposed. Furthermore, we target the challenging application of extracting key frames from unstructured consumer videos. The proposed framework is motivated by the observation that RPCA decomposes the input data into: 1) a low-rank component that reveals the systematic information across the elements of the data set and 2) a set of sparse components, each of which contains distinct information about an element in the same data set. The two information types are combined into a single l1-norm-based non-convex optimization problem to extract the desired number of key frames. Moreover, we develop a novel iterative algorithm to solve this optimization problem. The proposed RPCA-based framework does not require shot detection, segmentation, or semantic understanding of the underlying video. Finally, experiments are performed on a variety of consumer and other types of videos. A comparison of the results obtained by our method with the ground truth and with related state-of-the-art algorithms clearly illustrates the viability of the proposed RPCA-based framework. PMID:26087486
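
    The low-rank-plus-sparse decomposition at the heart of the method can be sketched with the standard principal component pursuit iteration; note the paper itself solves a different l1-based non-convex problem with its own algorithm. Data, parameters, and the frame-ranking rule below are illustrative:

```python
# Principal component pursuit (RPCA) via a simple ADMM loop:
# D = L (low rank) + S (sparse); then rank frames by sparse energy.
import numpy as np

def shrink(X, tau):                       # soft-thresholding operator
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def svt(X, tau):                          # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, n_iter=100):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))        # standard PCP weight
    mu = 0.25 * m * n / np.abs(D).sum()   # common step-size heuristic
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        Y += mu * (D - L - S)
    return L, S

D = np.random.rand(1024, 60)              # stand-in: 60 vectorized frames
L, S = rpca(D)
key = np.argsort(-np.linalg.norm(S, axis=0))[:5]
print("candidate key frames:", key)
```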

  11. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming and subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate statistical studies of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study by applying it to videos of eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and controls, highlighting their potential for automatic and objective quantification of symptom severity.
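
    Turning temporal AU profiles into single- and combined-AU frequencies is straightforward to sketch; the AU names, threshold, and profiles below are hypothetical:

```python
# Sketch: per-frame AU intensity profiles -> single/combined AU frequencies.
import numpy as np

profiles = {"AU12": np.random.rand(300),   # stand-in temporal profiles
            "AU6":  np.random.rand(300)}   # (300 video frames)
active = {au: p > 0.5 for au, p in profiles.items()}  # activation threshold

freq_single = {au: a.mean() for au, a in active.items()}
freq_combined = (active["AU12"] & active["AU6"]).mean()  # e.g. both active
print(freq_single, freq_combined)
```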

  12. Building a Reduced Reference Video Quality Metric with Very Low Overhead Using Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Tobias Oelbaum

    2008-10-01

    In this contribution a reduced reference video quality metric for AVC/H.264 is proposed that needs only a very low overhead (not more than two bytes per sequence). This reduced reference metric uses well-established algorithms to measure objective features of the video such as 'blur' or 'blocking'. Those measurements are then combined into a single measure of overall video quality. The weights of the single features and their combination are determined using methods provided by multivariate data analysis. The proposed metric is verified using a data set of AVC/H.264 encoded videos and the corresponding results of a carefully designed and conducted subjective evaluation. Results show that the proposed reduced reference metric not only outperforms standard PSNR but also two well-known full reference metrics.
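
    A minimal sketch of the feature-combination step, using ordinary least squares as a stand-in for the multivariate data analysis (e.g., PLS-style fitting) actually used; features and scores are synthetic:

```python
# Fit linear weights mapping objective features (blur, blocking) to
# subjective quality scores, then predict quality for new sequences.
import numpy as np

rng = np.random.default_rng(1)
features = rng.rand(40, 2)                 # columns: blur, blocking
mos = 5 - 2 * features[:, 0] - features[:, 1] + 0.1 * rng.standard_normal(40)

X = np.column_stack([np.ones(len(mos)), features])   # add intercept
w, *_ = np.linalg.lstsq(X, mos, rcond=None)          # least-squares weights
rmse = np.sqrt(((X @ w - mos) ** 2).mean())
print("weights:", w, "fit RMSE:", round(rmse, 3))
```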

  13. The Video Genome

    CERN Document Server

    Bronstein, Alexander M; Kimmel, Ron

    2010-01-01

    Fast evolution of Internet technologies has led to an explosive growth of video data available in the public domain and created unprecedented challenges in the analysis, organization, management, and control of such content. The problems encountered in video analysis, such as identifying a video in a large database (e.g., detecting pirated content on YouTube), putting together video fragments, and finding similarities and common ancestry between different versions of a video, have analogous counterpart problems in genetic research and in the analysis of DNA and protein sequences. In this paper, we exploit the analogy between genetic sequences and videos and propose an approach to video analysis motivated by genomic research. Representing video information as video DNA sequences and applying bioinformatic algorithms makes it possible to search, match, and compare videos in large-scale databases. We show an application for content-based metadata mapping between versions of annotated video.
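
    A toy version of the idea: quantize per-shot descriptors into symbols and compare two videos with a classical local sequence alignment (Smith-Waterman), as in bioinformatics. The symbol strings are invented; this is not the authors' actual video-DNA representation:

```python
# Smith-Waterman local alignment over symbolized shot descriptors.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best                      # score of the best-matching fragment

v1 = "AABCCDDEEFF"                   # "video DNA" of one version
v2 = "XXBCCDDEEYY"                   # an edited version sharing a fragment
print(smith_waterman(v1, v2))
```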

  14. Interobserver and Intraobserver Variability in pH-Impedance Analysis between 10 Experts and Automated Analysis

    DEFF Research Database (Denmark)

    Loots, Clara M; van Wijk, Michiel P; Blondeau, Kathleen;

    2011-01-01

    OBJECTIVE: To determine interobserver and intraobserver variability in pH-impedance interpretation between experts and the accuracy of automated analysis (AA). STUDY DESIGN: Ten pediatric 24-hour pH-impedance tracings were analyzed by 10 observers from 7 world groups and with AA, and detection of gastroesophageal reflux (GER) episodes was compared. CONCLUSION: Interobserver agreement in combined pH-multichannel intraluminal impedance analysis in experts is moderate; only 42% of GER episodes were detected by the majority of observers. Detection of total GER numbers is more consistent. Considering these poor outcomes, AA seems favorable compared...

  15. Software tools for the analysis of video meteors emission spectra

    Science.gov (United States)

    Madiedo, J. M.; Toscano, F. M.; Trigo-Rodriguez, J. M.

    2011-10-01

    One of the goals of the SPanish Meteor Network (SPMN) is the study of the chemical composition of meteoroids by analyzing the emission spectra resulting from the ablation of these particles of interplanetary matter in the atmosphere. With this aim, some of the CCD video devices we employ to observe the night sky are endowed with holographic diffraction gratings, and a continuous monitoring of meteor activity is performed. We have recently developed new software to analyze these spectra. A description of this computer program is given, and some of the results obtained so far are presented here.

  16. Automated Design and Analysis Tool for CEV Structural and TPS Components Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CEV structures and TPS. This developed process will...

  17. Automated Design and Analysis Tool for CLV/CEV Composite and Metallic Structural Components Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CLV/CEV composite and metallic structures. This...

  18. Ground-target detection system for digital video database

    Science.gov (United States)

    Liang, Yiqing; Huang, Jeffrey R.; Wolf, Wayne H.; Liu, Bede

    1998-07-01

    As more and more visual information is available on video, indexing and retrieval of digital video data are becoming important. A digital video database embedded with visual information processing using image analysis and image understanding techniques such as automated target detection, classification, and identification can provide query results of higher quality. In this paper we address a robust digital video database system within which a target detection module is implemented and applied to the keyframe images extracted by our digital library system. The tasks and application scenarios under consideration involve indexing video with information about the detection and verification of artificial objects that exist in video scenes. Based on the scenario that the video sequences are acquired by an onboard camera mounted on a Predator unmanned aircraft, we demonstrate how an incoming video stream is structured into different levels -- video program level, scene level, shot level, and object level -- based on the analysis of video contents using global imagery information. We then consider that the keyframe representation is most appropriate for video processing and holds the property that it can be used as the input for our detection module. As a result, video processing becomes feasible in terms of decreased computational resources spent and increased confidence in the (detection) decisions reached. The architecture we propose can respond to the query of whether artificial structures and suspected combat vehicles are detected. The architecture for ground detection takes advantage of the image understanding paradigm and involves different methods to locate and identify artificial objects rather than natural background such as trees, grass, and clouds. Edge detection, morphological transformation, and line and parallel-line detection using the Hough transform, applied to key frame images at the video shot level, are introduced in our detection module. This function can
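
    A minimal sketch of the edge-plus-Hough line detection step described above, assuming OpenCV; the synthetic frame and thresholds are illustrative:

```python
# Detect straight line segments in a key frame: Canny edges, a small
# morphological dilation, then a probabilistic Hough transform.
import cv2
import numpy as np

frame = np.zeros((200, 200), np.uint8)            # synthetic key frame
cv2.line(frame, (20, 30), (180, 70), 255, 2)      # an "artificial" edge
edges = cv2.Canny(frame, 50, 150)                 # edge detection
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # morphological step
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=5)
print(0 if lines is None else len(lines), "line segment(s) detected")
```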

  19. Automated Production Flow Line Failure Rate Mathematical Analysis with Probability Theory

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-12-01

    Automated lines have been widely used in industry, especially for mass production and product customization. The productivity of an automated line is a crucial indicator of the output and performance of production. Failures or breakdowns of stations or mechanisms commonly occur in automated lines under real conditions due to technological and technical problems, and they strongly affect productivity. The failure rates of automated lines are usually not expressed or analysed in mathematical form. This paper presents a mathematical analysis, using probability theory, of failure conditions in an automated line. The resulting mathematical expressions for failure rates make it possible to model and forecast the productivity output accurately.
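
    As one concrete illustration of how failure behaviour enters such a model, here is a minimal sketch based on steady-state availability under exponential failure and repair assumptions (a textbook simplification, not the paper's derivation):

```python
# Productivity of a line with random failures: ideal rate times availability,
# where availability = MTBF / (MTBF + MTTR). All values are illustrative.
def productivity(cycle_time_s, failures_per_hour, mttr_min):
    mtbf_min = 60.0 / failures_per_hour          # mean time between failures
    availability = mtbf_min / (mtbf_min + mttr_min)
    return 3600.0 / cycle_time_s * availability  # parts per hour

print(productivity(cycle_time_s=12, failures_per_hour=0.5, mttr_min=10))
```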

  20. Rate distortion analysis for spatially scalable video coding.

    Science.gov (United States)

    Zhang, Rong; Comer, Mary L

    2010-11-01

    In this paper, we derive rate distortion lower bounds for spatially scalable video coding techniques. The methods we evaluate are subband and pyramid motion compensation, where temporal redundancies in the same spatial layer as well as interlayer spatial redundancies are exploited in the enhancement-layer encoding. The rate distortion bounds are derived from rate distortion theory for stationary Gaussian signals, with mean square error as the distortion criterion. Assuming that the base layer is encoded by a non-scalable video coder, we derive the rate distortion functions for the enhancement layer, which depend on the power spectral density of the input signal, the motion prediction error probability density function, and the base-layer encoding performance. We show that pyramid and subband methods are expected to outperform independent encoding of the enhancement layer using motion-compensated prediction, in terms of rate distortion efficiency, when the base layer is encoded at relatively high quality or when less accurate displacement estimation occurs in the enhancement layer. PMID:20519155
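
    For reference, the classical rate-distortion pair for a memoryless Gaussian source (variance σ², mean-square-error distortion D) that such derivations build on; the paper's bounds generalize this to stationary Gaussian signals through their power spectra (reverse water-filling):

```latex
% Gaussian source, MSE distortion:
R(D) = \max\!\left(0,\ \tfrac{1}{2}\log_2\frac{\sigma^2}{D}\right),
\qquad
D(R) = \sigma^2\, 2^{-2R}.
```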

  1. Automated analysis of eclipsing binary lightcurves. I. EBAS -- a new Eclipsing Binary Automated Solver with EBOP

    CERN Document Server

    Tamuz, O; North, P; Mazeh, Tsevi; North, Pierre; Tamuz, Omer

    2006-01-01

    We present a new algorithm -- the Eclipsing Binary Automated Solver (EBAS) -- to analyse lightcurves of eclipsing binaries. The algorithm is designed to analyse large numbers of lightcurves and is therefore based on the relatively fast EBOP code. To facilitate the search for the best solution, EBAS uses two parameter transformations: instead of the radii of the two stellar components, EBAS uses the sum of the radii and their ratio, while the inclination is transformed into the impact parameter. To replace human visual assessment, we introduce a new 'alarm' goodness-of-fit statistic that takes into account correlation between neighbouring residuals. We perform extensive tests and simulations showing that our algorithm converges well, finds a good set of parameters and provides reasonable error estimates.
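
    The two transformations can be written down directly. A minimal sketch with fractional radii in units of the semi-major axis; the impact-parameter definition below (circular-orbit case) is an assumption, not necessarily EBAS's exact convention:

```python
# Map (r1, r2, inclination) to better-behaved fitting parameters.
import math

def transform(r1, r2, inclination_deg):
    """Fractional radii (units of semi-major axis) -> transformed params."""
    r_sum = r1 + r2                       # sum of radii
    k = r2 / r1                           # radius ratio
    b = math.cos(math.radians(inclination_deg)) / r_sum  # impact parameter
    return r_sum, k, b                    # eclipses occur when b < 1

print(transform(0.12, 0.08, 88.0))
```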

  2. A semi-automated computer tool for the analysis of retinal vessel diameter dynamics.

    Science.gov (United States)

    Euvrard, Guillaume; Genevois, Olivier; Rivals, Isabelle; Massin, Pascale; Collet, Amélie; Sahel, José-Alain; Paques, Michel

    2013-06-01

    Retinal vessels are directly accessible to clinical observation. This has numerous potential interests for medical investigations. Using the Retinal Vessel Analyzer, a dedicated eye fundus camera enabling dynamic, video-rate recording of micrometric changes of the diameter of retinal vessels, we developed a semi-automated computer tool that extracts the heart beat rate and pulse amplitude values from the records. The extracted data enabled us to show that there is a decreasing relationship between heart beat rate and pulse amplitude of arteries and veins. Such an approach will facilitate the modeling of hemodynamic interactions in small vessels. PMID:23566397
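
    A minimal sketch of extracting a heart beat rate and pulse amplitude from a vessel-diameter time series via its dominant spectral peak; the signal, frame rate, and cardiac band are synthetic assumptions:

```python
# Dominant-frequency estimate of heart rate from a diameter record.
import numpy as np

fs = 25.0                                     # video frame rate, Hz
t = np.arange(0, 30, 1 / fs)
diameter = 120 + 2.5 * np.sin(2 * np.pi * 1.2 * t)  # synthetic 72-bpm pulse

d = diameter - diameter.mean()
spectrum = np.abs(np.fft.rfft(d))
freqs = np.fft.rfftfreq(len(d), 1 / fs)
band = (freqs > 0.5) & (freqs < 3.0)          # plausible cardiac band
f0 = freqs[band][np.argmax(spectrum[band])]
amplitude = 2 * spectrum[band].max() / len(d)  # sine amplitude recovery
print(f"{60 * f0:.0f} bpm, pulse amplitude ~{amplitude:.2f} (units of input)")
```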

  3. Video monitoring analysis of the dynamics at fissure eruptions

    Science.gov (United States)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    Lava fountains often occur at basaltic eruptions. The fountains mainly develop at erupting fissures, underlain by a magma-filled dike transporting the magma horizontally and vertically. Understanding of the dynamics of the deep dike and of fracture mechanisms is mainly based on geophysical data as well as observations from seismic or geodetic networks. At the surface, however, new methods are needed to allow detailed interpretation of eruption velocities, interactions between vents and complexities in the magma paths. With video cameras we collected imaging data from different erupting fissures. We find that lava fountaining is often correlated at distinct vents. From the frames of the videos we calculated the heights and velocities of fountains as a function of time. Lava fountains often show a pulsating regime that may change over time. Comparing the fountain height as a function of time at different vents by a time-dependent cross-correlation, we find a time lag between the pulses at adjacent vents. From this we derive an apparent velocity of temporal separation between vents, associated with the fountaining activity, based on the calculated time lag and the vent distances. Although the correlation pattern can change episodically and sporadically, both the frequency of the fountains and the rest time between single fountains remain remarkably similar for adjacent lava fountains, implying a controlling process in the magma feeder system itself. We present and compare our method for the Kamoamoa eruption 2011 (Hawaii) and the Holuhraun eruption 2014/2015 (Iceland). Both sites show a significant time shift between the single pulses of adjacent vents. We compare the velocities determined by this time shift to the magma flow velocity in the dike as determined by independent models. We therefore conjecture that the time shift of venting activity may allow estimating the dynamics of magma and fluid migration at depth, as well as identifying the
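
    The lag-to-velocity step can be sketched with a plain cross-correlation; the fountain-height series, sampling rate, and vent distance below are synthetic stand-ins:

```python
# Estimate the time lag between two vents' fountain-height series and
# convert it to an apparent migration velocity.
import numpy as np

fs = 1.0                                   # samples per second
t = np.arange(0, 600, 1 / fs)
pulse = np.sin(2 * np.pi * 0.02 * t) ** 2  # pulsating fountain height
h1 = pulse + 0.1 * np.random.randn(t.size)
h2 = np.roll(pulse, 40) + 0.1 * np.random.randn(t.size)  # 40 s delayed vent

xc = np.correlate(h1 - h1.mean(), h2 - h2.mean(), mode="full")
lag_s = (np.argmax(xc) - (t.size - 1)) / fs
vent_distance_m = 250.0                    # assumed vent spacing
print(f"lag {lag_s:+.0f} s -> {vent_distance_m / abs(lag_s):.1f} m/s")
```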

  4. High speed video analysis study of elastic and inelastic collisions

    Science.gov (United States)

    Baker, Andrew; Beckey, Jacob; Aravind, Vasudeva; Clarion Team

    We study inelastic and elastic collisions with high-frame-rate video capture to examine the process of deformation and other energy transformations during collision. Snapshots are acquired before and after the collision, and the dynamics of the collision are analyzed using the Tracker software. By observing the rapid changes (over a few milliseconds) and slower changes (over a few seconds) in momentum and kinetic energy during the collision, we study the loss of momentum and kinetic energy over time. Using this data, it may be possible to design experiments that reduce the measurement error involved, helping students build better and more robust models to understand the physical world. We thank the Clarion University undergraduate student grant for financial support of this project.
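
    A minimal sketch of the quantities recovered from Tracker position data: finite-difference velocities, momentum, and kinetic energy around a collision; the position data below are synthetic:

```python
# Momentum and kinetic energy from tracked positions via finite differences.
import numpy as np

dt, m = 1 / 240, 0.2                        # 240 fps video, 0.2 kg cart
t = np.arange(0, 0.5, dt)
# Synthetic track: speed drops from 1.0 to 0.4 m/s at the collision (t=0.25)
x = np.where(t < 0.25, 1.0 * t, 0.25 + 0.4 * (t - 0.25))

v = np.gradient(x, dt)                      # finite-difference velocity
p, ke = m * v, 0.5 * m * v ** 2
print(f"KE before: {ke[10]:.4f} J, after: {ke[-10]:.4f} J")
```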

  5. Prototype Software for Automated Structural Analysis of Systems

    DEFF Research Database (Denmark)

    Jørgensen, A.; Izadi-Zamanabadi, Roozbeh; Kristensen, M.

    2004-01-01

    In this paper we present a prototype software tool developed to analyse the structural model of automated systems in order to identify redundant information that can then be utilized for fault detection and isolation (FDI) purposes. The dedicated algorithms in this software tool use a tri...

  6. Use of simulated patients and reflective video analysis to assess occupational therapy students' preparedness for fieldwork.

    Science.gov (United States)

    Giles, Amanda K; Carson, Nancy E; Breland, Hazel L; Coker-Bolt, Patty; Bowman, Peter J

    2014-01-01

    Educators must determine whether occupational therapy students are adequately prepared for Level II fieldwork once they have successfully completed the didactic portion of their coursework. Although studies have shown that students regard the use of video cameras and simulated patient encounters as useful tools for assessing professional and clinical behaviors, little has been published in the occupational therapy literature regarding the practical application of simulated patients or reflective video analysis. We describe a model for a final Comprehensive Practical Exam that uses both simulated patients and reflective video analysis to assess student preparedness for Level II fieldwork, and we report on student perceptions of these instructional modalities. We provide recommendations for designing, implementing, and evaluating simulated patient experiences in light of existing educational theory. PMID:25397940

  7. Researchers and teachers learning together and from each other using video-based multimodal analysis

    DEFF Research Database (Denmark)

    Davidsen, Jacob; Vanderlinde, Ruben

    2014-01-01

    This paper discusses a year-long technology integration project during which teachers and researchers joined forces to explore children's collaborative activities through the use of touch-screens. In the research project discussed in this paper, 16 touch-screens were integrated into teaching and learning activities in two separate classrooms; the learning and collaborative processes were captured on video, yielding over 150 hours of footage. Using digital research technologies and a longitudinal design, the authors of the research project studied how teachers and children gradually integrated touch-screens into their teaching and learning. This paper examines the methodological usefulness of video-based multimodal analysis. Through reflection on the research project, we discuss how, by using video-based multimodal analysis, researchers and teachers can study children's touch...

  8. Development of students' conceptual thinking by means of video analysis and interactive simulations at technical universities

    Science.gov (United States)

    Hockicko, Peter; Krišťák, Ľuboš; Němec, Miroslav

    2015-03-01

    Video analysis, using the program Tracker (Open Source Physics), in the educational process introduces a new creative method of teaching physics and makes natural sciences more interesting for students. This way of exploring the laws of nature can amaze students because this illustrative and interactive educational software inspires them to think creatively, improves their performance and helps them in studying physics. This paper deals with increasing the key competencies in engineering by analysing real-life situation videos - physical problems - by means of video analysis and the modelling tools using the program Tracker and simulations of physical phenomena from The Physics Education Technology (PhET™) Project (VAS method of problem tasks). The statistical testing using the t-test confirmed the significance of the differences in the knowledge of the experimental and control groups, which were the result of interactive method application.

  9. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  10. Kinetics analysis and automated online screening of aminocarbonylation of aryl halides in flow

    OpenAIRE

    Moore, Jason S.; Smith, Christopher D; Jensen, Klavs F.

    2016-01-01

    Temperature, pressure, gas stoichiometry, and residence time were varied to control the yield and product distribution of the palladium-catalyzed aminocarbonylation of aromatic bromides in both a silicon microreactor and a packed-bed tubular reactor. Automation of the system set points and product sampling enabled facile and repeatable reaction analysis with minimal operator supervision. It was observed that the reaction was divided into two temperature regimes. An automated system was used t...

  11. Automated red blood cell analysis compared with routine red blood cell morphology by smear review

    OpenAIRE

    Dr. Poonam Radadiya; Dr. Nandita Mehta; Dr. Hansa Goswami; Dr. R. N. Gonsai

    2015-01-01

    The RBC histogram is an integral part of automated haematology analysis and is now routinely available on all automated cell counters. This histogram and other associated complete blood count (CBC) parameters have been found abnormal in various haematological conditions and may provide major clues in the diagnosis and management of significant red cell disorders. Performing manual blood smears is important to ensure the quality of blood count results an...

  12. Video Traffic Flow Analysis in Distributed System during Interactive Session

    Directory of Open Access Journals (Sweden)

    Soumen Kanrar

    2016-01-01

    Cost-effective, smooth multimedia streaming to remote customers through a distributed "video on demand" architecture has been a challenging research issue for over a decade. A hierarchical system design is used in distributed networks to satisfy more requesting users. The distributed hierarchical network system contains all the local and remote multimedia storage servers and is used to provide continuous availability of the data stream to requesting customers. In this work, we propose a novel data-stream-handling methodology that reduces connection failures and delivers a smooth multimedia stream to remote customers. The proposed session-based single-user bandwidth requirement model captures the bandwidth required for interactive session operations such as pause, slow motion, rewind, frame skipping, and fast playback over a constant number of frames. The proposed session-based optimum storage finding algorithm reduces the search hop count towards the remote storage data server. Modeling and simulation results show the improved behavior of the distributed system architecture. This work presents a novel bandwidth requirement model for interactive sessions and gives the trade-off in communication and storage costs for different system resource configurations.

  13. Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management

    Science.gov (United States)

    Bahr, Thomas

    2016-04-01

    a common video format. • Plotting the time series of water surface area in square kilometers. The automated spatio-temporal analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study: • Integration within any ArcGIS environment whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool. A Python script file retrieves the parameters from the user interface and runs the precompiled IDL code. That IDL code is used to interface between the Python script and the relevant ENVITasks. • Publishing the spatio-temporal analysis tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution to publish and deploy advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be capsuled in a single ENVITask. • Integration in an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform. The results of this case study verify the drastic decrease of the amount of surface water in the AOI, indicative of the major drought that is pervasive throughout California. Accordingly, the time series analysis was correlated successfully with the daily reservoir elevations of the Don Pedro reservoir (station DNP, operated by CDEC).

  14. Quantization of polyphenolic compounds in histological sections of grape berries by automated color image analysis

    Science.gov (United States)

    Clement, Alain; Vigouroux, Bertnand

    2003-04-01

    We present new results in applied color image analysis that demonstrate the significant influence of soil on the localization and appearance of polyphenols in grapes. These results have been obtained with a new unsupervised classification algorithm based on hierarchical analysis of color histograms. The process is automated thanks to a software platform we developed specifically for color image analysis and its applications.
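
    As a rough stand-in for the histogram-based unsupervised classification (the paper analyzes color histograms hierarchically, which this does not reproduce), here is a small k-means over pixel colors:

```python
# Unsupervised color classification sketch: k-means on RGB pixels.
import numpy as np

def kmeans_colors(pixels, k=3, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(n_iter):
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)                     # nearest-center assignment
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(0)
    return labels, centers

pixels = np.random.randint(0, 256, (5000, 3))    # stand-in RGB pixels
labels, centers = kmeans_colors(pixels)
print(np.round(centers))                         # dominant color classes
```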

  15. Investigating the Magnetic Interaction with Geomag and Tracker Video Analysis: Static Equilibrium and Anharmonic Dynamics

    Science.gov (United States)

    Onorato, P.; Mascheretti, P.; DeAmbrosis, A.

    2012-01-01

    In this paper, we describe how simple experiments, realizable using easily found and low-cost materials, allow students to explore the magnetic interaction quantitatively with the help of an Open Source Physics tool, the Tracker video analysis software. The static equilibrium of a "column" of permanent magnets is carefully investigated by…

  16. Evaluating the Evidence Base of Video Analysis: A Special Education Teacher Development Tool

    Science.gov (United States)

    Nagro, Sarah A.; Cornelius, Kyena E.

    2013-01-01

    Special education teacher development is continually studied to determine best practices for improving teacher quality and promoting student learning. Video analysis is commonly included in teacher development targeting both teacher thinking and practice intended to improve learning opportunities for students. Positive research findings support…

  17. Amusement Machine Playing in Childhood and Adolescence: A Comparative Analysis of Video Games and Fruit Machines.

    Science.gov (United States)

    Griffiths, Mark D.

    1991-01-01

    Attempts to put ongoing U.S. and United Kingdom amusement machine debates into empirical perspective. Conducts comparative analysis of video games and fruit machines (slot machines) by examining incidence of play, sex differences and psychological characteristics of machine players, observational findings in arcade settings, alleged negative…

  18. XbD Video 3, The SEEing process of qualitative data analysis

    DEFF Research Database (Denmark)

    2013-01-01

    This is the third video in the Experience-based Designing series. It presents a live classroom demonstration of a nine-step qualitative data analysis process called SEEing. The process is useful for uncovering or discovering deeper layers of 'meaning' and meaning structures in an experience...

  19. Estimation of low back moments from video analysis: A validation study

    NARCIS (Netherlands)

    Coenen, P.; Kingma, I.; Boot, C.R.L.; Faber, G.S.; Xu, X.; Bongers, P.M.; Dieën, J.H. van

    2011-01-01

    This study aimed to develop, compare and validate two versions of a video analysis method for the assessment of low back moments during occupational lifting tasks, since relatively cheap and easily applicable methods to assess low back loads are needed for epidemiological studies and ergonomic practice.

  20. A novel automated image analysis method for accurate adipocyte quantification

    OpenAIRE

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...
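
    A minimal sketch of fully automated adipocyte cross-section measurement, assuming scikit-image and a synthetic image; real histology would need considerably more preprocessing:

```python
# Threshold, label connected components, and measure per-cell areas.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(2)
img = rng.normal(0.2, 0.05, (256, 256))      # synthetic background
img[40:90, 40:90] += 0.6                     # two bright "cells"
img[150:230, 120:200] += 0.6

mask = img > filters.threshold_otsu(img)     # global Otsu threshold
labels = measure.label(mask)                 # connected components
areas = [r.area for r in measure.regionprops(labels)]
print("cell cross-sectional areas (px):", sorted(areas, reverse=True)[:5])
```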

  1. CRITICAL ASSESSMENT OF AUTOMATED FLOW CYTOMETRY DATA ANALYSIS TECHNIQUES

    OpenAIRE

    Aghaeepour, Nima; Finak, Greg; Hoos, Holger; Mosmann, Tim R; Gottardo, Raphael; Brinkman, Ryan; Scheuermann, Richard H.

    2013-01-01

    Traditional methods for flow cytometry (FCM) data processing rely on subjective manual gating. Recently, several groups have developed computational methods for identifying cell populations in multidimensional FCM data. The Flow Cytometry: Critical Assessment of Population Identification Methods (FlowCAP) challenges were established to compare the performance of these methods on two tasks – mammalian cell population identification to determine if automated algorithms can reproduce expert manu...

  2. Alert management for home healthcare based on home automation analysis.

    Science.gov (United States)

    Truong, T T; de Lamotte, F; Diguet, J-Ph; Said-Hocine, F

    2010-01-01

    Rising healthcare costs for elderly and disabled people can be contained by offering people autonomy at home by means of information technology. In this paper, we present an original, sensorless alert management solution that discriminates between multimedia and home automation services and extracts highly regular home activities to serve as virtual sensors for alert management. Results on simulation data, based on a real context, allow us to evaluate our approach before application to real data.

  3. Object Type Recognition for Automated Analysis of Protein Subcellular Location

    OpenAIRE

    Zhao, Ting; Velliste, Meel; Boland, Michael V.; Murphy, Robert F.

    2005-01-01

    The new field of location proteomics seeks to provide a comprehensive, objective characterization of the subcellular locations of all proteins expressed in a given cell type. Previous work has demonstrated that automated classifiers can recognize the patterns of all major subcellular organelles and structures in fluorescence microscope images with high accuracy. However, since some proteins may be present in more than one organelle, this paper addresses a more difficult task: recognizing a pa...

  4. Analysis of Defense Language Institute automated student questionnaire data

    OpenAIRE

    Strycharz, Theodore M.

    1996-01-01

    This thesis explores the dimensionality of the Defense Language Institute's (DLI) primary student feedback tool, the Automated Student Questionnaire (ASQ). In addition, a data set from ASQ 2.0 (the newest version) is analyzed for trends in student satisfaction across the sub-scales of sex, pay grade, and Defense Language Proficiency Test (DLPT) results. The method of principal components is used to derive initial factors. Although an interpretation of those factors seems plausible, these are...

  5. Methods of automated cell analysis and their application in radiation biology

    International Nuclear Information System (INIS)

    The present review is concerned with methods for the automated analysis of biological microobjects and covers the two groups into which all systems of automated analysis can be divided: flow-type systems (flow cytometry) and scanning-type systems (image analysis systems). Particular emphasis is placed on their use in radiobiological studies, namely in the micronucleus test, a cytogenetic assay commonly used at present for monitoring the clastogenic action of ionizing radiation. Examples of the use of the described methods and actual setups in other biomedical research are given. An analysis of the advantages and disadvantages of the methods of automated cell analysis makes it possible to choose appropriately between flow- and scanning-type systems for particular research applications.

  6. A content analysis of smoking fetish videos on YouTube: regulatory implications for tobacco control.

    Science.gov (United States)

    Kim, Kyongseok; Paek, Hye-Jin; Lynn, Jordan

    2010-03-01

    This study examined the prevalence, accessibility, and characteristics of eroticized smoking portrayal, also referred to as smoking fetish, on YouTube. The analysis of 200 smoking fetish videos revealed that the smoking fetish videos are prevalent and accessible to adolescents on the website. They featured explicit smoking behavior by sexy, young, and healthy females, with the content corresponding to PG-13 and R movie ratings. We discuss a potential impact of the prosmoking image on youth according to social cognitive theory, and implications for tobacco control. PMID:20390676

  7. Extending and automating a Systems-Theoretic hazard analysis for requirements generation and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, John (Massachusetts Institute of Technology)

    2012-05-01

    Systems Theoretic Process Analysis (STPA) is a powerful new hazard analysis method designed to go beyond traditional safety techniques - such as Fault Tree Analysis (FTA) - that overlook important causes of accidents like flawed requirements, dysfunctional component interactions, and software errors. While STPA has proven very effective on real systems, no formal structure has been defined for it, and its application has been ad hoc, with no rigorous procedures or model-based design tools. This report defines a formal mathematical structure underlying STPA and describes a procedure for systematically performing an STPA analysis based on that structure. A method for using the results of the hazard analysis to generate formal safety-critical, model-based system and software requirements is also presented. Techniques to automate both the analysis and the requirements generation are introduced, as well as a method to detect conflicts between the safety requirements and other functional model-based requirements during early development of the system.

  8. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of the emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity in understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on manual annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and is therefore of practical significance.

  9. Analysis of Temporal Effects in Quality Assessment of High Definition Video

    Directory of Open Access Journals (Sweden)

    M. Slanina

    2012-04-01

    The paper deals with the temporal properties of a scoring session when assessing the subjective quality of full HD video sequences using continuous video quality tests. The experiment uses a modification of the standard test methodology described in ITU-R Rec. BT.500. It focuses on reaction times and the time needed for user ratings to stabilize at the beginning of a video sequence. In order to compare the subjective scores with objective quality measures, we also provide an analysis of PSNR and VQM for the considered sequences, finding that the correlation of the objective metric results with user scores, recorded during playback and after playback, differs significantly.

  10. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmarks tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisors, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  11. Correcting Students' Misconceptions about Automobile Braking Distances and Video Analysis Using Interactive Program Tracker

    Science.gov (United States)

    Hockicko, Peter; Trpišová, Beáta; Ondruš, Ján

    2014-12-01

    The present paper reports on an analysis of students' conceptions about car braking distances and also presents a novel method of learning: the interactive computer program Tracker, which we used to analyse the braking of a car. The analysis of the students' conceptions consisted of obtaining their estimates of braking distances before and after watching a video recording of a car braking from various initial speeds to a complete stop, and subsequently applying mathematical statistics to the obtained sets of students' answers. The results revealed that the difference between the braking distance estimated before watching the video and the real value of this distance was not caused by a random error but by a systematic error, which was due to the students' incorrect conceptions of the braking process. Watching the video significantly improved the students' estimates of the braking distance, and we show that in this case the difference between the estimated and real values was due only to a random error, i.e. the students' conceptions about the braking process were corrected. Some of the students subsequently performed video analysis of the braking of cars of various brands under various conditions by means of Tracker, which gave them exact knowledge of the physical quantities that characterize motor vehicle braking. Interviews with some of these students brought very positive reactions to this novel method of learning.
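
    The physics behind such estimates is compact enough to show as a worked example: constant-deceleration braking distance d = v²/(2μg), with an assumed friction coefficient μ:

```python
# Braking distance under constant deceleration a = mu * g.
def braking_distance_m(speed_kmh, mu=0.7, g=9.81):
    v = speed_kmh / 3.6                   # km/h -> m/s
    return v * v / (2 * mu * g)

for v in (50, 90, 130):
    print(f"{v} km/h -> {braking_distance_m(v):.1f} m")
```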

  12. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers seeking to improve the system and the finished-goods output of a production or assembly line. A mathematical and statistical analysis method for the productivity rate is required to give industry a visual overview of the failure factors and of further improvements within the production line, especially for automated flow lines, which are complicated. A mathematical model of the productivity rate of a linear-arrangement, serial-structure automated flow line with different failure rates and bottleneck machining-time parameters is the basic model for this productivity analysis. This paper presents the engineering mathematical analysis method applied in an automotive company in Malaysia that operates an automated flow assembly line within its final assembly line to produce motorcycles. The DCAS engineering and mathematical analysis method, which consists of four stages known as data collection, calculation and comparison, analysis, and sustainable improvement, is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates that cause loss of productivity and the bottleneck machining time are presented explicitly in mathematical form, and a sustainable solution for productivity improvement of this final assembly automated flow line is given.

  13. Automated analysis of short responses in an interactive synthetic tutoring system for introductory physics

    Science.gov (United States)

    Nakamura, Christopher M.; Murphy, Sytil K.; Christel, Michael G.; Stevens, Scott M.; Zollman, Dean A.

    2016-06-01

    Computer-automated assessment of students' text responses to short-answer questions represents an important enabling technology for online learning environments. We have investigated the use of machine learning to train computer models capable of automatically classifying short-answer responses and assessed the results. Our investigations are part of a project to develop and test an interactive learning environment designed to help students learn introductory physics concepts. The system is designed around an interactive video tutoring interface. We analyzed nine question items, each with about 150 responses or fewer. For four of the nine, we observe automated assessment with interrater agreement of 70% or better with the human rater. This level of agreement may represent a baseline for practical utility in instruction and indicates that the method warrants further investigation for use in this type of application. Our results also suggest strategies that may be useful for writing activities and questions that are more appropriate for automated assessment. These strategies include building activities that have relatively few conceptually distinct ways of perceiving the physical behavior of relatively few physical objects. Further success in this direction may allow us to promote interactivity and better provide feedback in online learning systems. These capabilities could enable our system to function more like a real tutor.
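
    A minimal sketch of machine-learned short-answer classification, assuming scikit-learn; the responses and labels are invented stand-ins for the study's data:

```python
# TF-IDF features plus logistic regression for short-answer classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

responses = ["the ball speeds up because gravity pulls it down",
             "it slows down going up and speeds up coming down",
             "the ball moves at the same speed the whole time",
             "velocity is constant after it leaves the hand"]
labels = ["correct", "correct", "misconception", "misconception"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(responses, labels)
print(clf.predict(["gravity makes it accelerate downward"]))
```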

  14. Automated frame selection process for high-resolution microendoscopy

    Science.gov (United States)

    Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-04-01

    We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
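
    A minimal sketch of one plausible frame-scoring rule: sharpness (variance of the Laplacian) penalized by frame-to-frame motion, assuming OpenCV; the weighting and synthetic frames are illustrative, not the authors' algorithm:

```python
# Score each grayscale frame by sharpness minus a motion penalty and
# return the index of the best (most representative) frame.
import cv2
import numpy as np

def best_frame(gray_frames, motion_weight=0.5):
    scores = []
    for i, f in enumerate(gray_frames):
        sharp = cv2.Laplacian(f, cv2.CV_64F).var()       # focus measure
        motion = (np.abs(f.astype(float) - gray_frames[i - 1]).mean()
                  if i else 0.0)                          # frame difference
        scores.append(sharp - motion_weight * motion)
    return int(np.argmax(scores))

frames = [np.random.randint(0, 255, (120, 120), np.uint8) for _ in range(20)]
print("selected frame:", best_frame(frames))
```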

  15. Automated Live Forensics Analysis for Volatile Data Acquisition

    Directory of Open Access Journals (Sweden)

    Bharath B,

    2015-03-01

    The increase in sophisticated attacks on computers calls for the assistance of live forensics to uncover evidence, since traditional forensic methods do not collect volatile data. Volatile data can ease the difficulty of an investigation; in fact, it can provide the investigator with rich information for solving a case. Here we aim to eliminate the complexity of the usual process by automating acquisition and analysis, at the same time providing integrity for the evidence data through Python scripting.
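
    The integrity step can be sketched with straightforward hashing of each acquired artifact; the artifact names and contents below are hypothetical:

```python
# Hash each collected artifact immediately so later tampering is detectable.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

evidence = {"process_list": b"pid 1 init\npid 42 sshd\n",   # hypothetical
            "open_ports": b"22/tcp LISTEN\n"}
manifest = {name: sha256_of(blob) for name, blob in evidence.items()}
for name, digest in manifest.items():
    print(name, digest[:16], "...")
```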

  16. Application of quantum dots as analytical tools in automated chemical analysis: A review

    Energy Technology Data Exchange (ETDEWEB)

    Frigerio, Christian; Ribeiro, David S.M.; Rodrigues, S. Sofia M.; Abreu, Vera L.R.G.; Barbosa, Joao A.C.; Prior, Joao A.V.; Marques, Karine L. [REQUIMTE, Laboratory of Applied Chemistry, Department of Chemical Sciences, Faculty of Pharmacy of Porto University, Rua Jorge Viterbo Ferreira, 228, 4050-313 Porto (Portugal); Santos, Joao L.M., E-mail: joaolms@ff.up.pt [REQUIMTE, Laboratory of Applied Chemistry, Department of Chemical Sciences, Faculty of Pharmacy of Porto University, Rua Jorge Viterbo Ferreira, 228, 4050-313 Porto (Portugal)

    2012-07-20

    Highlights: ► Review on quantum dots application in automated chemical analysis. ► Automation by using flow-based techniques. ► Quantum dots in liquid chromatography and capillary electrophoresis. ► Detection by fluorescence and chemiluminescence. ► Electrochemiluminescence and radical generation. - Abstract: Colloidal semiconductor nanocrystals or quantum dots (QDs) are one of the most relevant developments in the fast-growing world of nanotechnology. Initially proposed as luminescent biological labels, they are finding important new fields of application in analytical chemistry, where their photoluminescent properties have been exploited in environmental monitoring, pharmaceutical and clinical analysis and food quality control. Despite the enormous variety of applications that have been developed, the automation of QD-based analytical methodologies by resorting to tools such as continuous flow analysis and related techniques is hitherto very limited. Automation would make it possible to exploit particular features of the nanocrystals, such as their versatile surface chemistry and ligand-binding ability, their aptitude to generate reactive species, and the possibility of encapsulation in different materials while retaining native luminescence, providing the means to implement renewable chemosensors or even to use more drastic, stability-impairing reaction conditions. In this review, we provide insights into the analytical potential of quantum dots, focusing on prospects for their utilisation in automated flow-based and flow-related approaches, and on the future outlook of QD applications in chemical analysis.

  17. An automated system for whole microscopic image acquisition and analysis.

    Science.gov (United States)

    Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús

    2014-09-01

    The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VM systems are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. In addition to the default color camera, the system has been built with a digital monochrome camera and LED transmitted illumination (RGB); monochrome cameras are the preferred acquisition method for fluorescence microscopy. The system is able to digitize correctly and form large, high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus, and validated on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article focuses on the hardware set-up and the acquisition software, although results of the image processing techniques included in the software, applied to the different tissue samples, are also presented.
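    The article does not reproduce its three quality metrics here, but common stand-ins convey the idea: Tenengrad sharpness, normalized RMS contrast, and variance-of-Laplacian focus can be computed per tile to accept or reject acquisitions. The snippet below is a sketch under those assumptions, not the authors' exact formulas.

```python
# Common image-quality metrics as stand-ins for the article's sharpness/contrast/focus scores.
import cv2
import numpy as np

def quality_metrics(gray):
    g = gray.astype(np.float64)
    gx = cv2.Sobel(g, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(g, cv2.CV_64F, 0, 1)
    sharpness = np.mean(gx**2 + gy**2)           # Tenengrad sharpness
    contrast = g.std() / (g.mean() + 1e-9)       # normalized RMS contrast
    focus = cv2.Laplacian(g, cv2.CV_64F).var()   # variance of Laplacian
    return sharpness, contrast, focus

# tile = cv2.imread("wsi_tile_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical tile
# print(quality_metrics(tile))
```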

  18. Automated detection and measurement of isolated retinal arterioles by a combination of edge enhancement and cost analysis.

    Directory of Open Access Journals (Sweden)

    José A Fernández

    Full Text Available Pressure myography studies have played a crucial role in our understanding of vascular physiology and pathophysiology. Such studies depend upon the reliable measurement of changes in the diameter of isolated vessel segments over time. Although several software packages are available to carry out such measurements on small arteries and veins, no such software exists to study smaller vessels (<50 µm in diameter). We provide here a new, freely available open-source algorithm, MyoTracker, to measure and track changes in the diameter of small isolated retinal arterioles. The program has been developed as an ImageJ plug-in and uses a combination of cost analysis and edge enhancement to detect the vessel walls. In tests performed on a dataset of 102 images, automatic measurements were found to be comparable to manual ones. The program was also able to track both fast and slow constrictions and dilations during intraluminal pressure changes and following application of several drugs. Variability in automated measurements during analysis of videos, as well as processing times, was also investigated and is reported. MyoTracker is new software to assist with pressure myography experiments on small isolated retinal arterioles. It provides fast and accurate measurements with low levels of noise and works with both individual images and videos. Although the program was developed to work with small arterioles, it is also capable of tracking the walls of other types of microvessels, including venules and capillaries. It also works well with larger arteries, and may therefore provide an alternative to other packages developed for larger vessels when its features are considered advantageous.
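    MyoTracker itself is an ImageJ plug-in; as a language-neutral illustration of the measurement step, the toy sketch below estimates an inner diameter along a single image row by locating the two strongest opposing intensity edges. Real arteriole images additionally need the cost-analysis step the authors describe to stay locked onto the walls across frames.

```python
# Toy diameter measurement along one scanline: find the two strongest opposing edges.
import numpy as np

def diameter_on_row(row, pixel_size_um):
    grad = np.gradient(row.astype(np.float64))
    left = int(np.argmax(grad))    # dark-to-bright transition (one wall)
    right = int(np.argmin(grad))   # bright-to-dark transition (other wall)
    return abs(right - left) * pixel_size_um

# Synthetic test: a 30 px wide bright lumen on a dark background, 1 um/px.
row = np.zeros(100)
row[35:65] = 200.0
print(diameter_on_row(row, 1.0))  # ~30 um
```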

  19. Mass asymmetry and tricyclic wobble motion assessment using automated launch video analysis

    Directory of Open Access Journals (Sweden)

    Ryan Decker

    2016-04-01

    Examination of the pitch and yaw histories clearly indicates that, in addition to the nutation and precession oscillations of epicyclic motion, an even faster wobble oscillation is present during each spin revolution, even though some of the oscillation amplitudes are smaller than 0.02 degree. The results are compared to a sequence of shots in which no appreciable mass asymmetries were present, and where only the nutation and precession frequencies are predominantly apparent in the motion history results. Magnitudes of the wobble motion are estimated and compared to product-of-inertia measurements of the asymmetric projectiles.
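    Nutation, precession and the per-revolution wobble appear as distinct spectral lines in a pitch or yaw history, so a frequency-domain check is the natural companion to the time-domain inspection above. The sketch below builds a synthetic yaw history with three invented frequencies and amplitudes and recovers them with an FFT; none of the numbers come from the paper.

```python
# Recover nutation, precession and wobble frequencies from a synthetic yaw history.
import numpy as np

fs = 2000.0                                 # assumed camera frame rate, frames/s
t = np.arange(0, 1.0, 1 / fs)
f_prec, f_nut, f_spin = 6.0, 40.0, 150.0    # invented frequencies, Hz
yaw = (1.0 * np.sin(2 * np.pi * f_prec * t)
       + 0.3 * np.sin(2 * np.pi * f_nut * t)
       + 0.02 * np.sin(2 * np.pi * f_spin * t))  # small wobble, cf. 0.02 degree

spec = 2 * np.abs(np.fft.rfft(yaw)) / len(yaw)
freqs = np.fft.rfftfreq(len(yaw), 1 / fs)
peaks = sorted(freqs[np.argsort(spec)[-3:]])
print(peaks)  # ~[6.0, 40.0, 150.0]: precession, nutation, spin-rate wobble
```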

  20. Quantitative analysis of spider locomotion employing computer-automated video tracking

    DEFF Research Database (Denmark)

    Baatrup, E; Bayley, M

    1993-01-01

    …consecutive 12-h periods, alternating between white and red (lambda > 600 nm) illumination. Male spiders were significantly more locomotor active than female spiders under both lighting conditions. They walked, on average, twice the distance of females, employed higher velocities, and spent less time… in quiescence. Both male and female P. amentata were significantly less active in red light (simulated dark environment) than in white light. The results also revealed that P. amentata distributes its walking velocity and periods of quiescence according to consistent distributions, which can be approximated…

  1. Facilitating Reflexivity in Preservice Science Teacher Education Using Video Analysis and Cogenerative Dialogue in Field-Based Methods Courses

    Science.gov (United States)

    Siry, Christina; Martin, Sonya N.

    2014-01-01

    This paper presents an approach to preservice science teacher education coupling video analysis with dialogue as tools for fostering teachers' ability to notice and reflexively interpret events captured during teaching practicum with the intent of transforming classroom practice. In this approach, video becomes a tool with which teachers…

  2. The Use of Video Analysis in a Personnel Preparation Program for Teachers of Students Who Are Visually Impaired

    Science.gov (United States)

    Gale, Elaine; Trief, Ellen; Lengel, James

    2010-01-01

    Video analysis affords the observer the opportunity to capture and analyze videos of teaching practices, so that the observer can review, analyze, and synthesize specific examples of teaching in authentic classroom settings. The student teaching experience is the prime opportunity during the personnel preparation program in which student teachers…

  3. Power consumption analysis of constant bit rate video transmission over 3G networks

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Wang, Le;

    2012-01-01

    This paper presents an analysis of the power consumption of video data transmission with constant bit rate over 3G mobile wireless networks. The work includes a description of the radio resource control transition state machine in 3G networks, followed by a detailed power consumption analysis… and measurements of the radio link power consumption. Based on this description and analysis, we propose our power consumption model. The power model was evaluated on a Nokia N900 smartphone, which follows 3GPP Release 5 and 6 supporting HSDPA/HSUPA data bearers. We also propose a method for parameter selection… for the 3GPP transition state machine that allows power consumption on a mobile device to be decreased, taking signaling traffic, buffer size and latency restrictions into account. Furthermore, we discuss the gain in power consumption vs. PSNR for transmitted video and show the possibility of performing power…

  4. Object-oriented database design for the contaminant analysis automation project

    International Nuclear Information System (INIS)

    The Contaminant Analysis Automation project's automated soil analysis laboratory uses an Object-Oriented database for storage of runtime and archive information. Data which is generated by the processing of a sample, and is relevant for verification of the specifics of that process, is retained in the database. The database also contains intermediate results which are generated by one step of the process and used for decision making by later steps. The description of this database reveals design considerations of the objects used to model the behavior of the chemical laboratory and its components

  5. Automated Image Processing for the Analysis of DNA Repair Dynamics

    CERN Document Server

    Riess, Thorsten; Tomas, Martin; Ferrando-May, Elisa; Merhof, Dorit

    2011-01-01

    The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which allows the accumulation reaction to be identified and quantified without any user interaction. The image processing pipeline comprises a preprocessing stage where the ima...

  6. 3D Assembly Group Analysis for Cognitive Automation

    Directory of Open Access Journals (Sweden)

    Christian Brecher

    2012-01-01

    Full Text Available A concept that allows the cognitive automation of robotic assembly processes is introduced. An assembly cell comprised of two robots was designed to verify the concept. For the purpose of validation a customer-defined part group consisting of Hubelino bricks is assembled. One of the key aspects for this process is the verification of the assembly group. Hence a software component was designed that utilizes the Microsoft Kinect to perceive both depth and color data in the assembly area. This information is used to determine the current state of the assembly group and is compared to a CAD model for validation purposes. In order to efficiently resolve erroneous situations, the results are interactively accessible to a human expert. The implications for an industrial application are demonstrated by transferring the developed concepts to an assembly scenario for switch-cabinet systems.

  7. AUTOMATION OF MORPHOMETRIC MEASUREMENTS FOR PLANETARY SURFACE ANALYSIS AND CARTOGRAPHY

    Directory of Open Access Journals (Sweden)

    A. A. Kokhanov

    2016-06-01

    Full Text Available For automation of measurements of morphometric parameters of surface relief, various tools were developed and integrated into GIS. We have created a tool that calculates statistical characteristics of the surface: the interquartile ranges of heights and slopes, as well as second derivatives of height fields, as measures of topographic roughness. Other tools were created for morphological studies of craters. One of them allows automatic placement of topographic profiles through the geometric center of a crater. Another tool was developed, using the C++ programming language, for calculation of small crater depths and shape estimation. Additionally, we have prepared a tool for calculating volumes of relief features from DTM rasters. The created software modules and models will be available in a newly developed web-GIS system operating in a distributed cloud environment.

  8. Automation of Morphometric Measurements for Planetary Surface Analysis and Cartography

    Science.gov (United States)

    Kokhanov, A. A.; Bystrov, A. Y.; Kreslavsky, M. A.; Matveev, E. V.; Karachevtseva, I. P.

    2016-06-01

    For automation of measurements of morphometric parameters of surface relief, various tools were developed and integrated into GIS. We have created a tool that calculates statistical characteristics of the surface: the interquartile ranges of heights and slopes, as well as second derivatives of height fields, as measures of topographic roughness. Other tools were created for morphological studies of craters. One of them allows automatic placement of topographic profiles through the geometric center of a crater. Another tool was developed, using the C++ programming language, for calculation of small crater depths and shape estimation. Additionally, we have prepared a tool for calculating volumes of relief features from DTM rasters. The created software modules and models will be available in a newly developed web-GIS system operating in a distributed cloud environment.
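    The roughness statistics named above are easy to reproduce outside a GIS. The sketch below computes the interquartile range of heights and of slopes for a DTM held as a NumPy array, assuming a square grid; it is an illustration of the measures, not the authors' GIS tools.

```python
# Interquartile range of heights and slopes from a DTM array (square grid assumed).
import numpy as np

def iqr(a):
    q1, q3 = np.percentile(a, [25, 75])
    return q3 - q1

def roughness(dtm, cell_size):
    dzdy, dzdx = np.gradient(dtm.astype(np.float64), cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return iqr(dtm), iqr(slope_deg)

rng = np.random.default_rng(0)
demo = rng.normal(0, 5, (200, 200)).cumsum(axis=0)  # toy correlated surface
print(roughness(demo, cell_size=100.0))             # (IQR of heights, IQR of slopes)
```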

  9. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design--a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  10. Research Prototype: Automated Analysis of Scientific and Engineering Semantics

    Science.gov (United States)

    Stewart, Mark E. M.; Follen, Greg (Technical Monitor)

    2001-01-01

    Physical and mathematical formulae and concepts are fundamental elements of scientific and engineering software. These classical equations and methods are time tested, universally accepted, and relatively unambiguous. The existence of this classical ontology suggests an ideal problem for automated comprehension. This problem is further motivated by the pervasive use of scientific code and high code development costs. To investigate code comprehension in this classical knowledge domain, a research prototype has been developed. The prototype incorporates scientific domain knowledge to recognize code properties (including units and physical and mathematical quantities). The procedure also implements programming language semantics to propagate these properties through the code. This prototype's ability to elucidate code and detect errors will be demonstrated with state-of-the-art scientific codes.
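    Unit propagation of the kind the prototype performs can be miniaturized to dimension vectors: each quantity carries exponents of the base dimensions, multiplication and division add and subtract them, and addition demands equality. The toy checker below is in that spirit only; it is not the prototype.

```python
# Toy dimensional analysis: propagate dimension exponents and flag mismatches.
# Dimensions are (length, mass, time); a full checker covers all seven SI bases.
class Quantity:
    def __init__(self, value, dims):
        self.value, self.dims = value, tuple(dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        [a + b for a, b in zip(self.dims, other.dims)])

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        [a - b for a, b in zip(self.dims, other.dims)])

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"unit mismatch: {self.dims} + {other.dims}")
        return Quantity(self.value + other.value, self.dims)

metre = Quantity(1.0, (1, 0, 0))
second = Quantity(1.0, (0, 0, 1))
velocity = metre / second        # dims (1, 0, -1)
print((velocity * second).dims)  # back to (1, 0, 0): a length
# velocity + metre               # would raise TypeError: unit mismatch
```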

  11. A community of curious souls: an analysis of commenting behavior on TED talks videos.

    Science.gov (United States)

    Tsou, Andrew; Thelwall, Mike; Mongeon, Philippe; Sugimoto, Cassidy R

    2014-01-01

    The TED (Technology, Entertainment, Design) Talks website hosts video recordings of various experts, celebrities, academics, and others who discuss their topics of expertise. Funded by advertising and members but provided free online, TED Talks have been viewed over a billion times and are a science communication phenomenon. Although the organization has been derided for its populist slant and emphasis on entertainment value, no previous research has assessed audience reactions in order to determine the degree to which presenter characteristics and platform affect the reception of a video. This article addresses this issue via a content analysis of comments left on both the TED website and the YouTube platform (on which TED Talks videos are also posted). It was found that commenters were more likely to discuss the characteristics of a presenter on YouTube, whereas commenters tended to engage with the talk content on the TED website. In addition, people tended to be more emotional when the speaker was a woman (by leaving comments that were either positive or negative). The results can inform future efforts to popularize science amongst the public, as well as to provide insights for those looking to disseminate information via Internet videos. PMID:24718634

  12. A community of curious souls: an analysis of commenting behavior on TED talks videos.

    Directory of Open Access Journals (Sweden)

    Andrew Tsou

    Full Text Available The TED (Technology, Entertainment, Design) Talks website hosts video recordings of various experts, celebrities, academics, and others who discuss their topics of expertise. Funded by advertising and members but provided free online, TED Talks have been viewed over a billion times and are a science communication phenomenon. Although the organization has been derided for its populist slant and emphasis on entertainment value, no previous research has assessed audience reactions in order to determine the degree to which presenter characteristics and platform affect the reception of a video. This article addresses this issue via a content analysis of comments left on both the TED website and the YouTube platform (on which TED Talks videos are also posted). It was found that commenters were more likely to discuss the characteristics of a presenter on YouTube, whereas commenters tended to engage with the talk content on the TED website. In addition, people tended to be more emotional when the speaker was a woman (by leaving comments that were either positive or negative). The results can inform future efforts to popularize science amongst the public, as well as to provide insights for those looking to disseminate information via Internet videos.

  13. Automated reduction and interpretation of multidimensional mass spectra for analysis of complex peptide mixtures

    Science.gov (United States)

    Gambin, Anna; Dutkowski, Janusz; Karczmarski, Jakub; Kluge, Boguslaw; Kowalczyk, Krzysztof; Ostrowski, Jerzy; Poznanski, Jaroslaw; Tiuryn, Jerzy; Bakun, Magda; Dadlez, Michal

    2007-01-01

    Here we develop a fully automated procedure for the analysis of liquid chromatography-mass spectrometry (LC-MS) datasets collected during the analysis of complex peptide mixtures. We present the underlying algorithm and outcomes of several experiments justifying its applicability. The novelty of our approach is to exploit the multidimensional character of the datasets. It is common knowledge that highly complex peptide mixtures can be analyzed by liquid chromatography coupled with mass spectrometry, but we are not aware of any existing automated MS spectra interpretation procedure designed to take into account the multidimensional character of the data. Our work fills this gap by providing an effective algorithm for this task, allowing for automated conversion of raw data to the list of masses of peptides.

  14. Automated analysis for scintigraphic evaluation of gastric emptying using invariant moments.

    Science.gov (United States)

    Abutaleb, A; Delalic, Z J; Ech, R; Siegel, J A

    1989-01-01

    This study introduces a method for automated analysis of the standard solid-meal gastric emptying test. The purpose was to develop a diagnostic tool to characterize abnormalities of solid-phase gastric emptying more reproducibly. The processing of gastric emptying images is automated using geometrical moments that are invariant to scaling, rotation, and shift. Twenty subjects were studied. The first step was to obtain images of the stomach using a nuclear gamma camera immediately after the subject had eaten a radio-labeled meal. The second step was to process and analyze the images by a recently developed automated gastric emptying analysis (AGEA) method, which determines the gastric contour and geometrical properties, including such parameters as area, centroid, orientation, and moments of inertia. Statistical tests showed that some of the moments were sensitive to the patient's gastric status (normal versus abnormal). The difference between the normal and abnormal patients became noticeable approximately 1 h after meal ingestion. PMID:18230536
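    Moments invariant to shift, scale and rotation are exactly what Hu moments provide, so the contour-description step can be sketched directly with OpenCV: segment a region, then compute its area, centroid and seven invariant moments. The Otsu threshold and the synthetic blob below are placeholders for the study's scintigraphic contour detection.

```python
# Shift/scale/rotation-invariant description of a segmented region via Hu moments.
import cv2
import numpy as np

def hu_descriptor(gray_image):
    # Placeholder segmentation; the study derives the gastric contour from scintigrams.
    _, mask = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask, binaryImage=True)
    area = m["m00"]
    centroid = (m["m10"] / area, m["m01"] / area)
    hu = cv2.HuMoments(m).flatten()  # 7 invariant moments
    return area, centroid, hu

# Synthetic elliptical blob as a stand-in for a gastric region of interest.
img = np.zeros((128, 128), np.uint8)
cv2.ellipse(img, (64, 64), (30, 18), 25, 0, 360, 255, -1)
print(hu_descriptor(img)[2])
```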

  15. Big data extraction with adaptive wavelet analysis (Presentation Video)

    Science.gov (United States)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which physical features are difficult to see without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-Time Fourier Transform, Wavelet Transform, and Hilbert Transform, chosen for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency into the continuous Morlet wavelet transform so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, the finite time-frequency resolution limitations of the traditional wavelet transform are introduced; such limitations greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short-Time Wavelet Transform (STWT), analogous to the Short-Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA adapts the time-frequency resolution to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground-penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.

  16. Functional MRI Preprocessing in Lesioned Brains: Manual Versus Automated Region of Interest Analysis.

    Science.gov (United States)

    Garrison, Kathleen A; Rogalsky, Corianne; Sheng, Tong; Liu, Brent; Damasio, Hanna; Winstein, Carolee J; Aziz-Zadeh, Lisa S

    2015-01-01

    Functional magnetic resonance imaging (fMRI) has significant potential in the study and treatment of neurological disorders and stroke. Region of interest (ROI) analysis in such studies allows for testing of strong a priori clinical hypotheses with improved statistical power. A commonly used automated approach to ROI analysis is to spatially normalize each participant's structural brain image to a template brain image and define ROIs using an atlas. However, in studies of individuals with structural brain lesions, such as stroke, the gold standard approach may be to manually hand-draw ROIs on each participant's non-normalized structural brain image. Automated approaches to ROI analysis are faster and more standardized, yet are susceptible to preprocessing error (e.g., normalization error) that can be greater in lesioned brains. The manual approach to ROI analysis has high demand for time and expertise, but may provide a more accurate estimate of brain response. In this study, commonly used automated and manual approaches to ROI analysis were directly compared by reanalyzing data from a previously published hypothesis-driven cognitive fMRI study, involving individuals with stroke. The ROI evaluated is the pars opercularis of the inferior frontal gyrus. Significant differences were identified in task-related effect size and percent-activated voxels in this ROI between the automated and manual approaches to ROI analysis. Task interactions, however, were consistent across ROI analysis approaches. These findings support the use of automated approaches to ROI analysis in studies of lesioned brains, provided they employ a task interaction design.

  17. Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos

    OpenAIRE

    Shahroudy, Amir; Ng, Tian-Tsong; Gong, Yihong; Wang, Gang

    2016-01-01

    Single modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of the RGB+D videos can help us to better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder based shared-specific feature factorizat...

  18. Sagittal Plane Analysis of Adolescent Idiopathic Scoliosis after VATS (Video-Assisted Thoracoscopic Surgery) Anterior Instrumentations

    OpenAIRE

    Kim, Hak-Sun; Lee, Chong-Suh; Jeon, Byoung-Ho; Park, Jin-Oh

    2007-01-01

    Radiographic sagittal plane analysis of VATS (video-assisted thoracoscopic surgery) anterior instrumentation for adolescent idiopathic scoliosis. This is a retrospective study. Its purpose is to report in detail the effects of VATS anterior instrumentation on the sagittal plane. Evaluations of the surgical outcome of scoliosis have primarily studied coronal plane correction and functional and cosmetic aspects. Sagittal balance, as well as coronal balance, is important in a functional spine. Recently, scoli...

  19. An Evaluation on the Usage of Intelligent Video Analysis Software for Marketing Strategies

    Directory of Open Access Journals (Sweden)

    Kadri Gökhan Yılmaz

    2013-12-01

    Full Text Available This study investigates the historical development of the relationship between companies and technology. In particular, it focuses on the adoption of new technology in the retail industry, owing both to the widespread use of technology in this sector and to its role in guiding technology. The usage of one of the current new technologies, intelligent video analysis software systems, in the retail industry is evaluated and measures for such systems are determined.

  20. Analysis of Decorrelation Transform Gain for Uncoded Wireless Image and Video Communication.

    Science.gov (United States)

    Ruiqin Xiong; Feng Wu; Jizheng Xu; Xiaopeng Fan; Chong Luo; Wen Gao

    2016-04-01

    An uncoded transmission scheme called SoftCast has recently shown great potential for wireless video transmission. Unlike conventional approaches, SoftCast processes input images only by a series of transformations and modulates the coefficients directly to a dense constellation for transmission. The transmission is uncoded and lossy in nature, with its noise level commensurate with the channel condition. This paper presents a theoretical analysis of uncoded visual communication, focusing on developing a quantitative measurement of the efficiency of the decorrelation transform in a generalized uncoded transmission framework. Our analysis reveals that the energy distribution among signal elements is critical for the efficiency of uncoded transmission. A decorrelation transform can potentially bring a significant performance gain by boosting the energy diversity in the signal representation. Numerical results on a Markov random process and on real image and video signals are reported to evaluate the performance gain of using different transforms in uncoded transmission. The analysis presented in this paper is verified by simulated SoftCast transmissions, providing guidelines for designing efficient uncoded video transmission schemes. PMID:26930682
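    The energy-diversity claim can be checked numerically. A standard result for SoftCast-style uncoded transmission is that, with optimal power allocation, total distortion scales with (Σ √λ)² over the coefficient variances λ, so a transform that concentrates energy lowers the sum. The sketch below compares that proxy for an AR(1) source sent raw versus after a DCT; the setup follows the common analysis and is not necessarily the paper's exact derivation.

```python
# Compare uncoded-transmission distortion proxies, (sum of sqrt of variances)^2,
# for a raw AR(1) source versus its DCT coefficients.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
n, rho, blocks = 16, 0.95, 20000

# Generate AR(1) vectors and their DCT-domain counterparts.
x = np.zeros((blocks, n))
x[:, 0] = rng.normal(size=blocks)
for k in range(1, n):
    x[:, k] = rho * x[:, k - 1] + np.sqrt(1 - rho**2) * rng.normal(size=blocks)
y = dct(x, type=2, norm="ortho", axis=1)

def distortion_proxy(v):
    lam = v.var(axis=0)               # per-coefficient variance
    return np.sum(np.sqrt(lam)) ** 2  # D proportional to this under optimal power allocation

d_raw, d_dct = distortion_proxy(x), distortion_proxy(y)
print(f"decorrelation transform gain ~ {10 * np.log10(d_raw / d_dct):.2f} dB")
```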

  1. Using Video Analysis and Biomechanics to Engage Life Science Majors in Introductory Physics

    Science.gov (United States)

    Stephens, Jeff

    There is an interest in Introductory Physics for the Life Sciences (IPLS) as a way to better engage students in what may be their only physical science course. In this talk I will present some low-cost and readily available technologies for video analysis and how they have been implemented in classes and in student research projects. The technologies include software like Tracker and LoggerPro for video analysis and low-cost high-speed cameras for capturing real-world events. The focus of the talk will be on content created by students, including two biomechanics research projects performed over the summer by pre-physical therapy majors. One project involved assessing medial knee displacement (MKD), a situation in which the subject's knee becomes misaligned during a squatting motion, a contributing factor in ACL and other knee injuries. The other project looks at the difference in landing forces experienced by gymnasts and cheerleaders while performing on foam mats versus spring floors. The goal of this talk is to demonstrate how easy it can be to engage life science majors through the use of video analysis and topics like biomechanics, and to encourage others to try it for themselves.

  2. Automated analysis of small animal PET studies through deformable registration to an atlas

    International Nuclear Information System (INIS)

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is

  3. Automated analysis of small animal PET studies through deformable registration to an atlas

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Daniel F. [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva 4 (Switzerland); Geneva University, Geneva Neuroscience Center, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands)

    2012-11-15

    This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. A non-rigid registration technique is used to put into correspondence relevant anatomical regions of rodent CT images from combined PET/CT studies to corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit allowing the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT to CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake performed. The Dice coefficient and Hausdorff distance show fair to excellent agreement and a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques with relative errors below 10 % in most of the organs considered. The proposed automated quantification technique is
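    The two accuracy metrics used in both records above are concrete and easy to state. The sketch below computes the Dice coefficient of two binary masks and a symmetric Hausdorff distance from their voxel coordinates with SciPy's KD-tree, assuming isotropic voxel spacing for simplicity.

```python
# Dice coefficient and symmetric Hausdorff distance between two binary masks.
import numpy as np
from scipy.spatial import cKDTree

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    pa = np.argwhere(a)  # voxel coordinates; isotropic spacing assumed
    pb = np.argwhere(b)
    d_ab = cKDTree(pb).query(pa)[0].max()  # farthest point of A from B
    d_ba = cKDTree(pa).query(pb)[0].max()
    return max(d_ab, d_ba)

a = np.zeros((40, 40, 40), bool)
a[10:30, 10:30, 10:30] = True
b = np.roll(a, 3, axis=0)  # a misregistered copy
print(f"Dice = {dice(a, b):.3f}, Hausdorff = {hausdorff(a, b):.1f} voxels")
```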

  4. Semi-automated analysis of EEG spikes in the preterm fetal sheep using wavelet analysis

    International Nuclear Information System (INIS)

    Full text: Perinatal hypoxia plays a key role in the cause of brain injury in premature infants. Cerebral hypothermia commenced in the latent phase of evolving injury (first 6-8 h post hypoxic-ischemic insult) is the lead candidate for treatment; however, there is currently no means to identify which infants can benefit from treatment. Recent studies suggest that epileptiform transients in the latent phase are predictive of neural outcome. To quantify this, an automated means of EEG analysis is required, as EEG monitoring produces vast amounts of data that is time-consuming to analyse manually. We have developed a semi-automated EEG spike detection method which employs a discretized version of the continuous wavelet transform (CWT). EEG data was obtained from a fetal sheep at approximately 0.7 of gestation. Fetal asphyxia was maintained for 25 min and the EEG recorded for 8 h before and after asphyxia. The CWT was calculated, followed by the power of the wavelet transform coefficients. Areas of high power corresponded to spike waves, so thresholding was employed to identify the spikes. The method was found to have good sensitivity and selectivity, demonstrating that it is a simple, robust and potentially effective spike detection algorithm.
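    The detection scheme described, wavelet power followed by thresholding, can be sketched with a hand-rolled complex Morlet convolution. The centre frequency, threshold and sampling rate below are illustrative assumptions, not values from the study.

```python
# EEG spike detection sketch: complex Morlet wavelet power + threshold.
import numpy as np

def morlet(fs, f0, n_cycles=6):
    sigma = n_cycles / (2 * np.pi * f0)  # envelope width in seconds
    t = np.arange(-n_cycles / f0, n_cycles / f0, 1 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

def detect_spikes(eeg, fs, f0=8.0, thresh_sd=4.0):
    power = np.abs(np.convolve(eeg, morlet(fs, f0), mode="same")) ** 2
    threshold = power.mean() + thresh_sd * power.std()
    return np.flatnonzero(power > threshold)

fs = 256.0
t = np.arange(0, 10, 1 / fs)
eeg = np.random.default_rng(2).normal(0, 1, t.size)
eeg[1280:1288] += 8.0                # injected spike-like transient at 5 s
print(detect_spikes(eeg, fs) / fs)   # detection times cluster near 5 s
```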

  5. The Narrative Analysis of the Discourse on Homosexual BDSM Pornograhic Video Clips of The Manhunt Variety

    Directory of Open Access Journals (Sweden)

    Milica Vasić

    2016-02-01

    Full Text Available In this paper we analyze the ideal-type model of the story which represents the basic framework of action in Manhunt-category pornographic internet video clips, using the narrative analysis methods of Claude Bremond. The results have shown that it is possible to apply the theoretical model to elements of visual and mass culture, with certain modifications and taking into account the wider context of the narrative itself. The narrative analysis indicated the significance of researching categories of pornography on the internet, because it leads to a deeper analysis of the distribution of power in the relations between the categories of heterosexual and homosexual within a virtual environment.

  6. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    Science.gov (United States)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  7. Chapter 2: Predicting Newcomer Integration in Online Knowledge Communities by Automated Dialog Analysis

    NARCIS (Netherlands)

    Nistor, Nicolae; Dascalu, Mihai; Stavarache, Lucia; Tarnai, Christian; Trausan-Matu, Stefan

    2016-01-01

    Nistor, N., Dascalu, M., Stavarache, L.L., Tarnai, C., & Trausan-Matu, S. (2015). Predicting Newcomer Integration in Online Knowledge Communities by Automated Dialog Analysis. In Y. Li, M. Chang, M. Kravcik, E. Popescu, R. Huang, Kinshuk & N.-S. Chen (Eds.), State-of-the-Art and Future Directions of

  8. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    NARCIS (Netherlands)

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.

    2012-01-01

    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with M

  9. Application of fluorescence-based semi-automated AFLP analysis in barley and wheat

    DEFF Research Database (Denmark)

    Schwarz, G.; Herz, M.; Huang, X.Q.;

    2000-01-01

    of semi-automated codominant analysis for hemizygous AFLP markers in an F-2 population was too low, proposing the use of dominant allele-typing defaults. Nevertheless, the efficiency of genetic mapping, especially of complex plant genomes, will be accelerated by combining the presented genotyping...

  10. Development of a novel and automated fluorescent immunoassay for the analysis of beta-lactam antibiotics

    NARCIS (Netherlands)

    Benito-Pena, E.; Moreno-Bondi, M.C.; Orellana, G.; Maquieira, K.; Amerongen, van A.

    2005-01-01

    An automated immunosensor for the rapid and sensitive analysis of penicillin-type β-lactam antibiotics has been developed and optimized. An immunogen was prepared by coupling the common structure of the penicillanic β-lactam antibiotics, i.e., 6-aminopenicillanic acid, to keyhole limpet hemocyanin. Pol

  11. Automated data acquisition and analysis system for inventory verification

    International Nuclear Information System (INIS)

    A real-time system is proposed which would allow CLO Safeguards Branch to conduct a meaningful inventory verification using a variety of NDA instruments. The overall system would include the NDA instruments, automated data handling equipment, and a vehicle to house and transport the instruments and equipment. For the purpose of the preliminary cost estimate a specific data handling system and vehicle were required. A Tracor Northern TN-11 data handling system including a PDP-11 minicomputer and a measurement vehicle similar to the Commission's Regulatory Region I van were used. The basic system is currently estimated to cost about $100,000, and future add-ons which would expand the system's capabilities are estimated to cost about $40,000. The concept of using a vehicle in order to permanently rack mount the data handling equipment offers a number of benefits, such as control of the equipment environment and allowance for improvements, expansion, and flexibility in the system. Justification is also presented for local design and assembly of the overall system. A summary of the demonstration system which illustrates the advantages and feasibility of the overall system is included in this discussion. Two ideas are discussed which are not considered to be viable alternatives to the proposed system: addition of the data handling capabilities to the semiportable "cart" and use of a telephone link to a large computer center.

  12. An Analysis of Intelligent Automation Demands in Taiwanese Firms

    Directory of Open Access Journals (Sweden)

    Ying-Mei Tai

    2016-03-01

    Full Text Available To accurately elucidate the production deployment, process intelligent automation (IA), and production bottlenecks of Taiwanese companies, as well as the status of IA application, this research conducted a structured questionnaire survey of the participants in the IA promotion activities arranged by the Industrial Development Bureau, Ministry of Economic Affairs. A total of 35 valid questionnaires were recovered. Research findings indicated that the majority of participants were large-scale enterprises. These enterprises anticipated adding production bases in Taiwan and China to transform and upgrade their operations or strengthen their influence in the domestic market. The degrees of process IA, and of the associated production bottlenecks, were relatively low, which was associated with the tendency toward small volumes of diversified products. The majority of sub-categories of hardware equipment and simulation technologies have reached maturity, and the effective application of these technologies can enhance production efficiency. Intelligent software technologies remain immature and need further development and application. More importantly, they can meet customer values and create new business models, thereby supporting sustainable development.

  13. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and the nucleus, which provide useful markers for morphometric analysis; however they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor groove binding (4′,6-diamidino-2-phenylindole; DAPI) and a base pair intercalating (propidium iodide (PI) or SYBR green) fluorescent stain, followed by color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of automatically measuring kinetoplast and nucleus DNA content, size and position, and cell body shape, length and width. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerates analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  14. Automated red blood cell analysis compared with routine red blood cell morphology by smear review

    Directory of Open Access Journals (Sweden)

    Dr. Poonam Radadiya

    2015-01-01

    Full Text Available The RBC histogram is an integral part of automated haematology analysis and is now routinely available on all automated cell counters. This histogram and other associated complete blood count (CBC) parameters have been found to be abnormal in various haematological conditions and may provide major clues in the diagnosis and management of significant red cell disorders. Performing manual blood smears is important to ensure the quality of blood count results and to make a presumptive diagnosis. In this article we compare, in 100 samples, RBC histograms obtained by an automated haematology analyzer with peripheral blood smear review. The article discusses some morphological features of dimorphism and the ensuing characteristic changes in RBC histograms.

  15. Trend Analysis on the Automation of the Notebook PC Production Process

    Directory of Open Access Journals (Sweden)

    Chin-Ching Yeh

    2012-09-01

    Full Text Available Notebook PCs are among the Taiwanese electronic products that generate the highest production value and market share. According to ITRI IEK statistics, the domestic notebook PC production value in 2011 was about NT$2.3 trillion. Of about 200 million notebook PCs in global markets in 2011, Taiwan's notebook PC output accounted for more than 90%, meaning that nine out of every ten notebook PCs in the world were manufactured by Taiwanese companies. For such a large industry, in both output value and quantity, the degree of automation in production processes is not high. This means either that there is still great room for automating the notebook PC production process, or that the degree of automation cannot easily be enhanced. This paper presents an analysis of the situation.

  16. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Sims, A.J. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom)]. E-mail: a.j.sims@newcastle.ac.uk; Murray, A. [Regional Medical Physics Department, Freeman Hospital, Newcastle upon Tyne (United Kingdom); Bennett, M.K. [Department of Histopathology, Newcastle upon Tyne Hospitals NHS Trust, Newcastle upon Tyne (United Kingdom)

    2002-04-21

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time-consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma, and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours, which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automated repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to between 400 and 1400 points to achieve the same repeatability as the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)

  17. Comparison of semi-automated image analysis and manual methods for tissue quantification in pancreatic carcinoma

    International Nuclear Information System (INIS)

    Objective measurements of tissue area during histological examination of carcinoma can yield valuable prognostic information. However, such measurements are not made routinely because the current manual approach is time-consuming and subject to large statistical sampling error. In this paper, a semi-automated image analysis method for measuring tissue area in histological samples is applied to the measurement of stromal tissue, cell cytoplasm and lumen in samples of pancreatic carcinoma, and compared with the standard manual point counting method. Histological samples from 26 cases of pancreatic carcinoma were stained using the sirius red, light-green method. Images from each sample were captured using two magnifications. Image segmentation based on colour cluster analysis was used to subdivide each image into representative colours, which were classified manually into one of three tissue components. Area measurements made using this technique were compared to corresponding manual measurements and used to establish the comparative accuracy of the semi-automated image analysis technique, with a quality assurance study to measure the repeatability of the new technique. For both magnifications and for each tissue component, the quality assurance study showed that the semi-automated image analysis algorithm had better repeatability than its manual equivalent. No significant bias was detected between the measurement techniques for any of the comparisons made using the 26 cases of pancreatic carcinoma. The ratio of manual to semi-automated repeatability errors varied from 2.0 to 3.6. Point counting would need to be increased to between 400 and 1400 points to achieve the same repeatability as the semi-automated technique. The results demonstrate that semi-automated image analysis is suitable for measuring tissue fractions in histological samples prepared with coloured stains and is a practical alternative to manual point counting. (author)
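    The colour-cluster step at the heart of both records above can be sketched with k-means in RGB space: cluster the pixels, let an operator assign each cluster to stroma, cytoplasm or lumen, then report area fractions as pixel counts. The cluster count and the assignment map below are placeholders for the operator's manual classification.

```python
# Sketch of colour-cluster tissue quantification: k-means in RGB, then area fractions.
import numpy as np
from sklearn.cluster import KMeans

def tissue_fractions(rgb_image, cluster_to_tissue, n_clusters=6):
    pixels = rgb_image.reshape(-1, 3).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    fractions = {}
    for tissue in set(cluster_to_tissue.values()):
        members = [c for c, t in cluster_to_tissue.items() if t == tissue]
        fractions[tissue] = np.isin(labels, members).mean()
    return fractions

# Hypothetical operator assignment of six clusters to three tissue components.
assignment = {0: "stroma", 1: "stroma", 2: "cytoplasm", 3: "cytoplasm", 4: "lumen", 5: "lumen"}
demo = np.random.default_rng(3).integers(0, 255, (64, 64, 3), dtype=np.uint8)
print(tissue_fractions(demo, assignment))
```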

  18. Video Analysis and Modeling Performance Task to promote becoming like scientists in classrooms

    CERN Document Server

    Wee, Loo Kang

    2015-01-01

    This paper aims to share the use of Tracker, a free open-source video analysis and modeling tool that is increasingly used as a pedagogical tool for the effective learning and teaching of physics for Grade 9 (Secondary 3) students in Singapore schools, to make physics relevant to the real world. We discuss the pedagogical use of Tracker, guided by the Framework for K-12 Science Education by the National Research Council, USA, to help students to be more like scientists. For a period of 6 to 10 weeks, students use video analysis coupled with the 8 practices of science, such as: 1. ask questions, 2. use models, 3. plan and carry out investigations, 4. analyse and interpret data, 5. use mathematical and computational thinking, 6. construct explanations, 7. argue from evidence and 8. communicate information. This paper focuses on discussing some of the performance task design ideas, such as 3.1 flip video, 3.2 starting with simple classroom activities, 3.3 primer science activity, 3.4 integrative dynamics and kinematics l...

  19. Writing/Thinking in Real Time: Digital Video and Corpus Query Analysis

    Directory of Open Access Journals (Sweden)

    Park, Kwanghyun

    2010-10-01

    Full Text Available The advance of digital video technology in the past two decades facilitates empirical investigation of learning in real time. The focus of this paper is the combined use of real-time digital video and a networked linguistic corpus for exploring the ways in which these technologies enhance our capability to investigate the cognitive process of learning. A perennial challenge to research using digital video (e.g., screen recordings has been the method for interfacing the captured behavior with the learners’ cognition. An exploratory proposal in this paper is that with an additional layer of data (i.e., corpus search queries, analyses of real-time data can be extended to provide an explicit representation of learner’s cognitive processes. This paper describes the method and applies it to an area of SLA, specifically writing, and presents an in-depth, moment-by-moment analysis of an L2 writer’s composing process. The findings show that the writer’s composing process is fundamentally developmental, and that it is facilitated in her dialogue-like interaction with an artifact (i.e., the corpus. The analysis illustrates the effectiveness of the method for capturing learners’ cognition, suggesting that L2 learning can be more fully explicated by interpreting real-time data in concert with investigation of corpus search queries.

  20. HIERARCHICAL ADAPTIVE ROOD PATTERN SEARCH FOR MOTION ESTIMATION AT VIDEO SEQUENCE ANALYSIS

    Directory of Open Access Journals (Sweden)

    V. T. Nguyen

    2016-05-01

    Full Text Available Subject of Research. The paper deals with motion estimation algorithms for the analysis of video sequences in the MPEG-4 Visual and H.264 compression standards. A new algorithm is offered based on analysis of the advantages and disadvantages of existing algorithms. Method. The algorithm is called hierarchical adaptive rood pattern search (Hierarchical ARPS, HARPS). This new algorithm combines the classic adaptive rood pattern search (ARPS) with a hierarchical mean-pyramid search (MP). All motion estimation algorithms were implemented using the MATLAB package and tested with several video sequences. Main Results. The criteria for evaluating the algorithms were speed, peak signal-to-noise ratio, mean square error and mean absolute deviation. The proposed method showed much better performance at a comparable error and deviation. The peak signal-to-noise ratio in different video sequences shows both better and worse results than known algorithms, so it requires further investigation. Practical Relevance. Application of this algorithm in MPEG-4 and H.264 codecs instead of the standard search can significantly reduce compression time. This feature makes it suitable for telecommunication systems for multimedia data storage, transmission and processing.
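    All the methods compared in this record are block-matching searches; as a baseline for what ARPS-style searches accelerate, the sketch below implements exhaustive full-search block matching with a sum-of-absolute-differences (SAD) cost. ARPS and HARPS aim to find nearly the same motion vector while evaluating far fewer candidate positions.

```python
# Baseline full-search block matching with SAD cost (what ARPS-style searches speed up).
import numpy as np

def full_search(ref, cur, bx, by, block=16, search=7):
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cand - target).sum()  # sum of absolute differences
            if sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best

rng = np.random.default_rng(4)
ref = rng.integers(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))  # frame shifted down 2, left 3
print(full_search(ref, cur, 24, 24))      # expected motion vector (3, -2), SAD 0
```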

  1. Temporal structure analysis of broadcast tennis video using hidden Markov models

    Science.gov (United States)

    Kijak, Ewa; Oisel, Lionel; Gros, Patrick

    2003-01-01

    This work aims at recovering the temporal structure of a broadcast tennis video from an analysis of the raw footage. Our method relies on a statistical model of the interleaving of shots, in order to group shots into predefined classes representing structural elements of a tennis video. This stochastic modeling is performed in the global framework of Hidden Markov Models (HMMs). The fundamental units are shots and transitions. In the first step, color and motion attributes of segmented shots are used to map shots into 2 classes: game (view of the full tennis court) and not game (medium and close-up views, and commercials). In the second step, a trained HMM is used to analyze the temporal interleaving of shots. This analysis results in the identification of more complex structures, such as first missed services, short rallies that could be aces or services, long rallies, breaks that signal the end of a game, and replays that highlight interesting points. These higher-level unit structures can be used either to create summaries, or to allow non-linear browsing of the video.
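
    For illustration only, the following sketch decodes a two-class shot sequence ("game" vs. "not game") with the Viterbi algorithm; the transition and emission values are invented placeholders, not the trained HMM parameters of the paper.

```python
import numpy as np

states = ["game", "not_game"]
log_A = np.log([[0.7, 0.3],    # placeholder P(next shot class | current class)
                [0.4, 0.6]])
log_B = np.log([[0.8, 0.2],    # placeholder P(discretized shot features | class)
                [0.3, 0.7]])
log_pi = np.log([0.5, 0.5])

def viterbi(obs):
    """Most likely state sequence for a list of discrete observations."""
    T, N = len(obs), len(states)
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # N x N predecessor scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # backtrack best predecessors
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in path[::-1]]

# e.g. 0 = "court-like colours / low motion", 1 = "other"
print(viterbi([0, 0, 1, 1, 0, 1]))
```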

  2. Automating case reports for the analysis of digital evidence

    OpenAIRE

    Cassidy, Regis H. Friend

    2005-01-01

    The reporting process during computer analysis is critical in the practice of digital forensics. Case reports are used to review the process and results of an investigation and serve multiple purposes. The investigator may refer to these reports to monitor the progress of his analysis throughout the investigation. When acting as an expert witness, the investigator will refer to organized documentation to recall past analysis. A lot of time can elapse between the analysis and the actual testim...

  3. Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding.

    Science.gov (United States)

    Cohn, J F; Zlochower, A J; Lien, J; Kanade, T

    1999-01-01

    The face is a rich source of information about human behavior. Available methods for coding facial displays, however, are human-observer dependent, labor intensive, and difficult to standardize. To enable rigorous and efficient quantitative measurement of facial displays, we have developed an automated method of facial display analysis. In this report, we compare the results of this automated system with those of manual FACS (Facial Action Coding System, Ekman & Friesen, 1978a) coding. One hundred university students were videotaped while performing a series of facial displays. The image sequences were coded from videotape by certified FACS coders. Fifteen action units and action unit combinations that occurred a minimum of 25 times were selected for automated analysis. Facial features were automatically tracked in digitized image sequences using a hierarchical algorithm for estimating optical flow. The measurements were normalized for variation in position, orientation, and scale. The image sequences were randomly divided into a training set and a cross-validation set, and discriminant function analyses were conducted on the feature point measurements. In the training set, average agreement with manual FACS coding was 92% or higher for action units in the brow, eye, and mouth regions. In the cross-validation set, average agreement was 91%, 88%, and 81% for action units in the brow, eye, and mouth regions, respectively. Automated face analysis by feature point tracking demonstrated high concurrent validity with manual FACS coding.
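
    A minimal sketch of the core tracking step, using OpenCV's pyramidal Lucas-Kanade optical flow in place of the authors' hierarchical flow estimator; the file name, corner seeding and parameter values are assumptions for the example (in the study, the tracked points are facial landmarks).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("face_sequence.avi")        # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Stand-in for manually seeded facial feature points: trackable corners.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=30,
                              qualityLevel=0.01, minDistance=7)

tracks = [pts.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal (hierarchical) Lucas-Kanade flow; status filtering omitted
    # for brevity, so this assumes all points remain trackable.
    nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                winSize=(15, 15), maxLevel=3)
    pts, prev_gray = nxt, gray
    tracks.append(nxt.reshape(-1, 2))

trajectories = np.stack(tracks)    # frames x points x 2
# Per the paper, displacements of such trajectories are normalized for
# position/orientation/scale and fed to discriminant function analysis.
```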

  4. Automated image analysis for space debris identification and astrometric measurements

    Science.gov (United States)

    Piattoni, Jacopo; Ceruti, Alessandro; Piergentili, Fabrizio

    2014-10-01

    Space debris is a challenging problem for human activity in space. Observation campaigns are conducted around the globe to detect and track uncontrolled space objects. One of the main problems in optical observation is obtaining useful information about the debris dynamical state from the collected images. For orbit determination, the most relevant information embedded in optical observation is the precise angular position, which can be evaluated by astrometry procedures, comparing the stars inside the image with star catalogs. This is typically a time-consuming process if done by a human operator, which makes this task impractical when dealing with large amounts of data, on the order of thousands of images per night, generated by routinely conducted observations. An automated procedure is investigated in this paper that is capable of recognizing the debris track inside a picture, calculating the celestial coordinates of the image's center and using this information to compute the debris angular position in the sky. This procedure has been implemented in a software code that does not require human interaction and works without any supplemental information besides the image itself, detecting space objects and solving for their angular position without a priori information. The algorithm for object detection was developed inside the research team. For the star field computation, the astrometry.net code, released under the GPL v2 license, was used. The complete procedure was validated by extensive testing, using the images obtained in the observation campaign performed in a joint project between the Italian Space Agency (ASI) and the University of Bologna at the Broglio Space Center, Kenya.
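
    A hedged sketch of such a pipeline: a crude streak (track) detector based on connected-component elongation, followed by a call to the astrometry.net solver the paper mentions. The threshold, minimum size and CLI flags shown are generic assumptions, not the authors' settings.

```python
import subprocess
import numpy as np
from astropy.io import fits       # assumption: frames are FITS images
from scipy import ndimage

img = fits.getdata("obs_frame.fits").astype(float)   # hypothetical file name
mask = img > img.mean() + 5.0 * img.std()            # assumed detection threshold
labels, n = ndimage.label(mask)

for i in range(1, n + 1):
    ys, xs = np.nonzero(labels == i)
    if len(ys) < 20:                                 # ignore tiny blobs
        continue
    evals = np.linalg.eigvalsh(np.cov(np.vstack([ys, xs])))
    if evals[1] / max(evals[0], 1e-9) > 10:          # elongated => streak
        print("candidate track, centroid:", ys.mean(), xs.mean())

# Blind plate solve via the astrometry.net CLI; the solution gives the image
# centre's celestial coordinates used to convert centroids to angular positions.
subprocess.run(["solve-field", "--overwrite", "--no-plots", "obs_frame.fits"],
               check=True)
```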

  5. Video-tracker trajectory analysis: who meets whom, when and where

    Science.gov (United States)

    Jäger, U.; Willersinn, D.

    2010-04-01

    Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare. Thus, due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event embodies great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis for detecting potential situations where, e.g., money, weapons or drugs are handed over from one person to another in crowded environments like railway stations, airports or busy streets and places. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real-time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence and finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
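
    A minimal sketch of this rule-based stage: frame-by-frame inter-distances computed from tracker output, with an event emitted when two IDs stay within a radius for a minimum number of frames. The data layout and thresholds are illustrative assumptions, not the IOSB system's rules.

```python
from itertools import combinations
import math

def detect_meetings(tracks, radius=1.5, min_frames=25):
    """tracks: {frame_number: {person_id: (x, y)}} from the video tracker."""
    streak, events = {}, []
    for frame in sorted(tracks):
        pos = tracks[frame]
        for a, b in combinations(sorted(pos), 2):
            key = (a, b)
            if math.dist(pos[a], pos[b]) <= radius:
                streak[key] = streak.get(key, 0) + 1
                if streak[key] == min_frames:        # sustained proximity
                    meet = ((pos[a][0] + pos[b][0]) / 2,
                            (pos[a][1] + pos[b][1]) / 2)
                    events.append({"frame": frame, "ids": key, "where": meet})
            else:
                streak[key] = 0                      # contact broken: reset
    return events
```

    Each emitted event carries exactly the outputs the abstract names: the frame number, the two tracker IDs, and the coordinates of the meeting position.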

  6. DEFINITION AND ANALYSIS OF MOTION ACTIVITY AFTER-STROKE PATIENT FROM THE VIDEO STREAM

    Directory of Open Access Journals (Sweden)

    M. Yu. Katayev

    2014-01-01

    Full Text Available This article describes an approach to the assessment of the motion activity of a person in the after-stroke period, allowing the doctor to obtain new information and give more informed recommendations on rehabilitation treatment than traditional approaches allow. We describe a hardware-software complex for determining and analyzing the motion activity of after-stroke patients from a video stream. The article provides a description of the complex, its algorithmic content and the results of its work on an example of processing actual data. The algorithms and technology significantly accelerate gait analysis and improve the quality of diagnostics of post-stroke patients.

  7. Automated acquisition and analysis of small angle X-ray scattering data

    International Nuclear Information System (INIS)

    Small Angle X-ray Scattering (SAXS) is a powerful tool in the study of biological macromolecules providing information about the shape, conformation, assembly and folding states in solution. Recent advances in robotic fluid handling make it possible to perform automated high throughput experiments including fast screening of solution conditions, measurement of structural responses to ligand binding, changes in temperature or chemical modifications. Here, an approach to full automation of SAXS data acquisition and data analysis is presented, which advances automated experiments to the level of a routine tool suitable for large scale structural studies. The approach links automated sample loading, primary data reduction and further processing, facilitating queuing of multiple samples for subsequent measurement and analysis and providing means of remote experiment control. The system was implemented and comprehensively tested in user operation at the BioSAXS beamlines X33 and P12 of EMBL at the DORIS and PETRA storage rings of DESY, Hamburg, respectively, but is also easily applicable to other SAXS stations due to its modular design.

  8. Fully Automated Sample Preparation for Ultrafast N-Glycosylation Analysis of Antibody Therapeutics.

    Science.gov (United States)

    Szigeti, Marton; Lew, Clarence; Roby, Keith; Guttman, Andras

    2016-04-01

    There is a growing demand in the biopharmaceutical industry for high-throughput, large-scale N-glycosylation profiling of therapeutic antibodies in all phases of product development, but especially during clone selection, when hundreds of samples should be analyzed in a short period of time to assure their glycosylation-based biological activity. Our group has recently developed a magnetic bead-based protocol for N-glycosylation analysis of glycoproteins to alleviate the hard-to-automate centrifugation and vacuum-centrifugation steps of the currently used protocols. Glycan release, fluorophore labeling, and cleanup were all optimized, resulting in a workflow automating all steps of the magnetic bead-based protocol, from endoglycosidase digestion through fluorophore labeling and cleanup, with high-throughput sample processing in 96-well plate format using an automated laboratory workstation. Capillary electrophoresis analysis of the fluorophore-labeled glycans was also optimized for rapid turnaround to match the automated sample preparation workflow. Ultrafast N-glycosylation analyses of several commercially relevant antibody therapeutics are also shown and compared to their biosimilar counterparts, addressing the biological significance of the differences.

  9. Composite behavior analysis for video surveillance using hierarchical dynamic Bayesian networks

    Science.gov (United States)

    Cheng, Huanhuan; Shan, Yong; Wang, Runsheng

    2011-03-01

    Analyzing composite behaviors involving objects from multiple categories in surveillance videos is a challenging task due to the complicated relationships among humans and objects. This paper presents a novel behavior analysis framework using a hierarchical dynamic Bayesian network (DBN) for video surveillance systems. The model is built for extracting objects' behaviors and their relationships by representing behaviors using spatial-temporal characteristics. The recognition of object behaviors is processed by the DBN at multiple levels: features of objects at the low level, objects and their relationships at the middle level, and events at the high level, where an event refers to behaviors of a single type of object as well as behaviors consisting of several types of objects, such as "a person getting in a car." Furthermore, to reduce the complexity, a simple model selection criterion is addressed, by which the appropriate model is picked out from a pool of candidate models. Experiments demonstrate that the proposed framework can efficiently recognize and semantically describe composite object and human activities in surveillance videos.

  10. GenePublisher: automated analysis of DNA microarray data

    DEFF Research Database (Denmark)

    Knudsen, Steen; Workman, Christopher; Sicheritz-Ponten, T.;

    2003-01-01

    GenePublisher, a system for automatic analysis of data from DNA microarray experiments, has been implemented with a web interface at http://www.cbs.dtu.dk/services/GenePublisher. Raw data are uploaded to the server together with a specification of the data. The server performs normalization, statistical analysis and visualization of the data. The results are run against databases of signal transduction pathways, metabolic pathways and promoter sequences in order to extract more information. The results of the entire analysis are summarized in report form and returned to the user.

  11. Are short Economics teaching videos liked? Analysis of features driving “Likes” in Youtube

    OpenAIRE

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2015-01-01

    We analyze the factors that determine the number of clicks on the Like button in online teaching videos. We perform a study on a sample of Spanish-language teaching videos in the area of Microeconomics. The results show that users prefer short online teaching videos. Moreover, some other features of the videos show a significant impact on the number of “likes”: videos produced by entities other than universities, conducted by female instructors, where the instructor appears on the screen ...

  12. Infrascope: Full-Spectrum Phonocardiography with Automated Signal Analysis Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Using digital signal analysis tools, we will generate a repeatable output from the infrascope and compare it to the output of a traditional electrocardiogram, and...

  13. Automation of Safety Analysis with SysML Models Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This project was a small proof-of-concept case study, generating SysML model information as a side effect of safety analysis. A prototype FMEA Assistant was...

  14. Implicit media frames: Automated analysis of public debate on artificial sweeteners

    CERN Document Server

    Hellsten, Iina; Leydesdorff, Loet

    2010-01-01

    The framing of issues in the mass media plays a crucial role in the public understanding of science and technology. This article contributes to research concerned with diachronic analysis of media frames by making an analytical distinction between implicit and explicit media frames, and by introducing an automated method for analysing diachronic changes of implicit frames. In particular, we apply a semantic maps method to a case study on the newspaper debate about artificial sweeteners, published in The New York Times (NYT) between 1980 and 2006. Our results show that the analysis of semantic changes enables us to filter out the dynamics of implicit frames, and to detect emerging metaphors in public debates. Theoretically, we discuss the relation between implicit frames in public debates and codification of information in scientific discourses, and suggest further avenues for research interested in the automated analysis of frame changes and trends in public debates.

  15. A unified and efficient framework for court-net sports video analysis using 3D camera modeling

    Science.gov (United States)

    Han, Jungong; de With, Peter H. N.

    2007-01-01

    The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.
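
    To make the two-plane idea concrete, here is a hedged sketch using OpenCV's pose solver with reference points on a ground plane and a perpendicular net plane; all coordinates, intrinsics and pixel positions below are invented stand-ins, not the paper's calibration data.

```python
import cv2
import numpy as np

object_pts = np.array([            # 3-D reference points (X, Y, Z) in metres
    [0.0, 0.0, 0.0], [6.1, 0.0, 0.0], [6.1, 13.4, 0.0], [0.0, 13.4, 0.0],  # court plane (Z = 0)
    [0.0, 6.7, 0.0], [6.1, 6.7, 0.0], [0.0, 6.7, 1.55], [6.1, 6.7, 1.55],  # net plane (vertical)
], dtype=np.float64)
image_pts = np.array([             # their detected pixel positions (u, v)
    [102, 540], [618, 534], [590, 148], [130, 152],
    [115, 300], [606, 296], [118, 227], [602, 224],
], dtype=np.float64)

K = np.array([[800.0, 0.0, 360.0],  # assumed camera intrinsics
              [0.0, 800.0, 288.0],
              [0.0, 0.0, 1.0]])

# Because the points span two perpendicular planes, the pose solve has a
# genuine 3-D reference rather than a single homography plane.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())
# rvec/tvec then let one back-project a player's foot to the ground plane or
# recover height information, in the spirit of the multi-level analysis above.
```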

  16. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique used to create images of the human body for medical purposes. Nowadays, medical image processing plays a major role and offers a challenging solution for critical stages in the medical line. Several studies have been done in this area to enhance the techniques for medical image processing. However, due to the demerits of some advanced technologies, there are still many aspects that need further development. An existing study evaluated the efficacy of medical image analysis with level-set shape along with fractal texture and intensity features to discriminate PF (posterior fossa) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, an automated subjective optimality model was devised for the segmentation of images, based on different sets of features selected from the unsupervised learning model of extracted features. After segmentation, classification of images is done. The classification is processed by adapting the multiple classifier framework of the previous work, based on the mutual information coefficient of the features selected for the image segmentation procedures. In this study, to enhance the classification strategy, we implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework for medical image analysis are multiple-class intensity, image quality, and time consumption.

  17. A qualitative analysis of methotrexate self-injection education videos on YouTube.

    Science.gov (United States)

    Rittberg, Rebekah; Dissanayake, Tharindri; Katz, Steven J

    2016-05-01

    The aim of this study is to identify and evaluate the quality of videos for patients available on YouTube for learning to self-administer subcutaneous methotrexate. Using the search term "Methotrexate injection," two clinical reviewers analyzed the first 60 videos on YouTube. Source and search rank of video, audience interaction, video duration, and time since video was uploaded on YouTube were recorded. Videos were classified as useful, misleading, or a personal patient view. Videos were rated for reliability, comprehensiveness, and global quality scale (GQS). Reasons for misleading videos were documented, and patient videos were documented as being either positive or negative towards methotrexate (MTX) injection. Fifty-one English videos overlapped between the two geographic locations; 10 videos were classified as useful (19.6 %), 14 misleading (27.5 %), and 27 personal patient view (52.9 %). Total views of videos were 161,028: 19.2 % useful, 72.8 % patient, and 8.0 % misleading. Mean GQS: 4.2 (±1.0) useful, 1.6 (±1.1) misleading, and 2.0 (±0.9) for patient videos (p < 0.0001). Mean reliability: 3.3 (±0.6) useful, 0.9 (±1.2) misleading, and 1.0 (±0.7) for patient videos (p < 0.0001). Comprehensiveness: 2.2 (±1.9) useful, 0.1 (±0.3) misleading, and 1.5 (±1.5) for patient view videos (p = 0.0027). This study demonstrates a minority of videos are useful for teaching MTX injection. Further, video quality does not correlate with video views. While web video may be an additional educational tool available, clinicians need to be familiar with specific resources to help guide and educate their patients to ensure best outcomes.

  19. AMAB: Automated measurement and analysis of body motion

    NARCIS (Netherlands)

    Poppe, Ronald; Zee, van der Sophie; Heylen, Dirk K.J.; Taylor, Paul J.

    2014-01-01

    Technologies that measure human nonverbal behavior have existed for some time, and their use in the analysis of social behavior has become more popular following the development of sensor technologies that record full-body movement. However, a standardized methodology to efficiently represent and an

  20. Automated analysis of security requirements through risk-based argumentation

    NARCIS (Netherlands)

    Yu, Yijun; Franqueira, Virginia N.L.; Tun, Thein Tan; Wieringa, Roel J.; Nuseibeh, Bashar

    2015-01-01

    Computer-based systems are increasingly being exposed to evolving security threats, which often reveal new vulnerabilities. A formal analysis of the evolving threats is difficult due to a number of practical considerations such as incomplete knowledge about the design, limited information about atta

  1. Automated analysis of three-dimensional stress echocardiography

    NARCIS (Netherlands)

    K.Y.E. Leung (Esther); M. van Stralen (Marijn); M.G. Danilouchkine (Mikhail); G. van Burken (Gerard); M.L. Geleijnse (Marcel); J.H.C. Reiber (Johan); N. de Jong (Nico); A.F.W. van der Steen (Ton); J.G. Bosch (Johan)

    2011-01-01

    textabstractReal-time three-dimensional (3D) ultrasound imaging has been proposed as an alternative for two-dimensional stress echocardiography for assessing myocardial dysfunction and underlying coronary artery disease. Analysis of 3D stress echocardiography is no simple task and requires considera

  2. Analysis of the automated systems of planning of spatial constructions

    Directory of Open Access Journals (Sweden)

    М.С. Барабаш

    2004-04-01

    Full Text Available The article is devoted to the analysis of existing CAD (SAPR) systems and to the development of new information technologies for the design of spatial structures, based on the integration of software packages through the use of a unified information-logical model of the object.

  3. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Science.gov (United States)

    Mallard, François; Le Bourlot, Vincent; Tully, Thomas

    2013-01-01

    1. Because of recent technological improvements in the performance of computers and digital cameras, the potential use of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms. It can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method has been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms. PMID:23734199
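
    Independently of the authors' ImageJ plugin, the core trick can be sketched in a few lines of NumPy/SciPy: estimate the fixed background as the per-pixel median of several frames, then label and count whatever differs from it. The threshold and minimum-area values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def count_moving(frames, thresh=25, min_area=5):
    """frames: list of 2-D grayscale arrays of the same microcosm."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    background = np.median(stack, axis=0)   # fixed substrate + motionless/dead
    counts, sizes = [], []
    for f in stack:
        moving = np.abs(f - background) > thresh   # pixels unlike the background
        labels, n = ndimage.label(moving)
        areas = ndimage.sum_labels(moving, labels, index=range(1, n + 1))
        keep = areas >= min_area                   # drop specks of noise
        counts.append(int(keep.sum()))             # organisms in this frame
        sizes.append(areas[keep])                  # per-organism pixel areas
    return counts, sizes
```

    The median works because each moving animal occupies any given pixel in only a minority of frames, so the substrate dominates the per-pixel statistics.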

  4. An automated image analysis system to measure and count organisms in laboratory microcosms.

    Directory of Open Access Journals (Sweden)

    François Mallard

    Full Text Available 1. Because of recent technological improvements in the performance of computers and digital cameras, the potential use of imaging for contributing to the study of communities, populations or individuals in laboratory microcosms has risen enormously. However, its use has been limited by difficulties in the automation of image analysis. 2. We present an accurate and flexible method of image analysis for detecting, counting and measuring moving particles on a fixed but heterogeneous substrate. This method has been specifically designed to follow individuals, or entire populations, in experimental laboratory microcosms. It can be used in other applications. 3. The method consists in comparing multiple pictures of the same experimental microcosm in order to generate an image of the fixed background. This background is then used to extract, measure and count the moving organisms, leaving out the fixed background and the motionless or dead individuals. 4. We provide different examples (springtails, ants, nematodes, daphnia) to show that this non-intrusive method is efficient at detecting organisms under a wide variety of conditions, even on faintly contrasted and heterogeneous substrates. 5. The repeatability and reliability of this method has been assessed using experimental populations of the Collembola Folsomia candida. 6. We present an ImageJ plugin to automate the analysis of digital pictures of laboratory microcosms. The plugin automates the successive steps of the analysis and recursively analyses multiple sets of images, rapidly producing measurements from a large number of replicated microcosms.

  5. Molecular Detection of Bladder Cancer by Fluorescence Microsatellite Analysis and an Automated Genetic Analyzing System

    Directory of Open Access Journals (Sweden)

    Sarel Halachmi

    2007-01-01

    Full Text Available To investigate the ability of an automated fluorescent analyzing system to detect microsatellite alterations in patients with bladder cancer, we investigated 11 patients with pathology-proven bladder transitional cell carcinoma (TCC) for microsatellite alterations in blood, urine, and tumor biopsies. DNA was prepared by standard methods from blood, urine and resected tumor specimens, and was used for microsatellite analysis. After the primers were fluorescently labeled, amplification of the DNA was performed with PCR. The PCR products were placed into the automated genetic analyzer (ABI Prism 310, Perkin Elmer, USA) and were subjected to fluorescent scanning with argon ion laser beams. From the fluorescent signal intensity, the genetic analyzer measured the product size in terms of base pairs. We found loss of heterozygosity (LOH) or microsatellite alterations (a loss or gain of nucleotides, which alters the original normal locus size) in all the patients by using fluorescent microsatellite analysis and the automated analyzing system. In each case the genetic changes found in urine samples were identical to those found in the resected tumor sample. The studies demonstrated the ability to detect bladder tumors non-invasively by fluorescent microsatellite analysis of urine samples. Our study supports the worldwide trend in the search for non-invasive methods to detect bladder cancer. We have overcome major obstacles that prevented the clinical use of an experimental system. With our newly tested system, microsatellite analysis can be done more cheaply, faster, more easily and with higher scientific accuracy.

  6. Automated Performance Monitoring Data Analysis and Reporting within the Open Source R Environment

    Science.gov (United States)

    Kennel, J.; Tonkin, M. J.; Faught, W.; Lee, A.; Biebesheimer, F.

    2013-12-01

    Environmental scientists encounter quantities of data at a rate that in many cases outpaces our ability to appropriately store, visualize and convey the information. The free software environment, R, provides a framework for efficiently processing, analyzing, depicting and reporting on data from a multitude of formats in the form of traceable and quality-assured data summary reports. Automated data summary reporting leverages document markup languages such as markdown, HTML, or LaTeX using R-scripts capable of completing a variety of simple or sophisticated data processing, analysis and visualization tasks. Automated data summary reports seamlessly integrate analysis into report production with calculation outputs - such as plots, maps and statistics - included alongside report text. Once a site-specific template is set up, including data types, geographic base data and reporting requirements, reports can be (re-)generated trivially as the data evolve. The automated data summary report can be a stand-alone report, or it can be incorporated as an attachment to an interpretive report prepared by a subject-matter expert, thereby providing the technical basis to report on and efficiently evaluate large volumes of data resulting in a concise interpretive report. Hence, the data summary report does not replace the scientist, but relieves them of repetitive data processing tasks, facilitating a greater level of analysis. This is demonstrated using an implementation developed for monthly groundwater data reporting for a multi-constituent contaminated site, highlighting selected analysis techniques that can be easily incorporated in a data summary report.

  7. Rocket engine plume diagnostics using video digitization and image processing - Analysis of start-up

    Science.gov (United States)

    Disimile, P. J.; Shoe, B.; Dhawan, A. P.

    1991-01-01

    Video digitization techniques have been developed to analyze the exhaust plume of the Space Shuttle Main Engine. Temporal averaging and a frame-by-frame analysis provide data used to evaluate the capabilities of image processing techniques for use as measurement tools. These capabilities include the determination of the time required for the Mach disk to reach a fully-developed state. Other results show that the Mach disk tracks the nozzle for short time intervals, and that dominant frequencies exist for the nozzle and Mach disk movement.

  8. Performance analysis of medical video streaming over mobile WiMAX.

    Science.gov (United States)

    Alinejad, Ali; Philip, N; Istepanian, R H

    2010-01-01

    Wireless medical ultrasound streaming is considered one of the emerging applications within the broadband mobile healthcare domain. These applications are considered bandwidth-demanding services that require high data rates with acceptable diagnostic quality of the transmitted medical images. In this paper, we present the performance analysis of medical ultrasound video streaming acquired via a special robotic ultrasonography system over an emulated WiMAX wireless network. The experimental set-up of this application is described together with the performance of the relevant medical quality of service (m-QoS) metrics. PMID:21097263

  10. Analysis of Head Mounted Wireless Camera Videos for Early Diagnosis of Autism

    OpenAIRE

    Zolnierek, Andrej; Wozniak, Michal; Puchala, Edward; Kurzynski, Marek; Noris, B.; Benmachiche, K.; Meynet, Julien; Thiran, Jean-Philippe; Billard, A.

    2007-01-01

    In this paper we present a computer based approach to analysis of social interaction experiments for the diagnosis of autism spectrum disorders in young children of 6-18 months of age. We apply face detection on videos from a head-mounted wireless camera to measure the time a child spends looking at people. In-Plane rotation invariant Face Detection is used to detect faces from the diverse directions of the children’s head. Skin color detection is used to render the system more robust...

  11. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Science.gov (United States)

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
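
    A brief sketch of the two per-vesicle measurements named above, the projected (area-equivalent) diameter and the isoperimetric quotient IQ = 4πA/P², computed from OpenCV contours; the Otsu threshold and debris cut-off are assumptions for the example, not the authors' pipeline.

```python
import cv2
import numpy as np

def vesicle_stats(gray):
    """gray: 8-bit grayscale micrograph; returns (diameter_px, IQ) per vesicle."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    stats = []
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, closed=True)
        if area < 20 or perim == 0:            # ignore debris / degenerate contours
            continue
        diameter = 2.0 * np.sqrt(area / np.pi)        # projected diameter
        iq = 4.0 * np.pi * area / perim ** 2          # 1.0 for a perfect circle
        stats.append((diameter, iq))
    return stats
```

    Aggregating these tuples over many frames yields exactly the population-level size and roundness distributions the abstract describes.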

  12. Lipid vesicle shape analysis from populations using light video microscopy and computer vision.

    Directory of Open Access Journals (Sweden)

    Jernej Zupanc

    Full Text Available We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1-50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected.

  13. BitTorrent Swarm Analysis through Automation and Enhanced Logging

    OpenAIRE

    Răzvan Deaconescu; Marius Sandu-Popa; Adriana Drăghici; Nicolae Tăpus

    2011-01-01

    Peer-to-Peer protocols currently form the most heavily used protocol class in the Internet, with BitTorrent, the most popular protocol for content distribution, as its flagship. A high number of studies and investigations have been undertaken to measure, analyse and improve the inner workings of the BitTorrent protocol. Approaches such as tracker message analysis, network probing and packet sniffing have been deployed to understand and enhance BitTorrent's internal behaviour. In this paper we...

  14. Automated analysis of protein subcellular location in time series images

    OpenAIRE

    Hu, Yanhua; Osuna-Highley, Elvira; Hua, Juchang; Nowicki, Theodore Scott; Stolz, Robert; McKayle, Camille; Murphy, Robert F.

    2010-01-01

    Motivation: Image analysis, machine learning and statistical modeling have become well established for the automatic recognition and comparison of the subcellular locations of proteins in microscope images. By using a comprehensive set of features describing static images, major subcellular patterns can be distinguished with near perfect accuracy. We now extend this work to time series images, which contain both spatial and temporal information. The goal is to use temporal features to improve...

  15. Automated analysis of image mammogram for breast cancer diagnosis

    Science.gov (United States)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases that attack the inside of the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal breast tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine the boundary of the organ and calculate the area of the segmented organ. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of abnormal tissue can be determined.
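
    One common way to obtain such a Fourier-based fractal measure (assumed here; the abstract does not spell out its exact estimator) is the slope of the radially averaged 2D power spectrum, with D ≈ (8 + β)/2 under the convention P(f) ~ f^β for a fractal surface, β being the fitted (negative) slope.

```python
import numpy as np

def spectrum_slope(img):
    """Fit log-log slope of the radially averaged 2-D power spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)         # radial frequency bin
    radial = (np.bincount(r.ravel(), power.ravel())
              / np.maximum(np.bincount(r.ravel()), 1))
    freqs = np.arange(1, min(cy, cx))                # skip DC, stay in-image
    beta = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
    return beta, (8.0 + beta) / 2.0                  # slope, derived dimension
```

    Denser (more abnormal) tissue would then show up as a different spectral slope, which is the density discrimination the study relies on.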

  16. BitTorrent Swarm Analysis through Automation and Enhanced Logging

    CERN Document Server

    Deaconescu, Răzvan; Drăghici, Adriana; Tăpus, Nicolae

    2011-01-01

    Peer-to-Peer protocols currently form the most heavily used protocol class in the Internet, with BitTorrent, the most popular protocol for content distribution, as its flagship. A high number of studies and investigations have been undertaken to measure, analyse and improve the inner workings of the BitTorrent protocol. Approaches such as tracker message analysis, network probing and packet sniffing have been deployed to understand and enhance BitTorrent's internal behaviour. In this paper we present a novel approach that aims to collect, process and analyse large amounts of local peer information in BitTorrent swarms. We classify the information as periodic status information able to be monitored in real time and as verbose logging information to be used for subsequent analysis. We have designed and implemented a retrieval, storage and presentation infrastructure that enables easy analysis of BitTorrent protocol internals. Our approach can be employed both as a comparison tool, as well as a measurement syste...

  17. Automated condition classification of a reciprocating compressor using time frequency analysis and an artificial neural network

    Science.gov (United States)

    Lin, Yih-Hwang; Wu, Hsien-Chang; Wu, Chung-Yung

    2006-12-01

    The purpose of this study is to develop an automated system for condition classification of a reciprocating compressor. Various time-frequency analysis techniques will be examined for decomposition of the vibration signals. Because a time-frequency distribution is a 3D data map, data reduction is indispensable for subsequent analysis. The extraction of the system characteristics using three indices, namely the time index, frequency index, and amplitude index, will be presented and examined for their applicability. The probability neural network is applied for automated condition classification using a combination of the three indices. The study reveals that a proper choice of the index combination and the time-frequency band can provide excellent classification accuracy for the machinery conditions examined in this work.
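
    An illustrative reduction of a time-frequency map to the three scalar indices mentioned above, using a spectrogram as the stand-in distribution; the parameters and the choice of the dominant cell are assumptions, not the authors' exact extraction.

```python
import numpy as np
from scipy.signal import spectrogram

def tf_indices(x, fs):
    """Reduce a vibration record to (time index, frequency index, amplitude index)."""
    f, t, S = spectrogram(x, fs=fs, nperseg=256)
    k = np.unravel_index(np.argmax(S), S.shape)   # dominant time-frequency cell
    return np.array([t[k[1]],                     # time index (s)
                     f[k[0]],                     # frequency index (Hz)
                     S[k]])                       # amplitude index

# Each labelled vibration record yields one 3-vector; a probabilistic neural
# network (or any density-based classifier) is then trained on these vectors.
```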

  18. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    Science.gov (United States)

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with the unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images were discriminated between groups of non-diabetic and diabetic subjects at different stages of DR. The automated method's discrimination rates were higher than those determined by human observers. The method allows sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
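
    A sketch of the two-stage analysis in that spirit, with invented feature extraction: an ordinary least squares fit summarizes each vessel profile, and Fisher LDA separates the groups. The data shapes and features are assumptions for the example, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ols_features(profile):
    """Fit y = a + b*x to a 1-D vessel profile; use (a, b, residual) as features."""
    x = np.arange(len(profile), dtype=float)
    A = np.vstack([np.ones_like(x), x]).T
    coef, res, _, _ = np.linalg.lstsq(A, np.asarray(profile, float), rcond=None)
    resid = float(res[0]) if res.size else 0.0
    return np.array([coef[0], coef[1], resid])

def discriminate(profiles, labels):
    """profiles: list of 1-D vessel measurements; labels: 0 = non-diabetic, 1+ = DR stage."""
    X = np.stack([ols_features(p) for p in profiles])
    return LinearDiscriminantAnalysis().fit(X, labels)   # .predict() discriminates
```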

  19. An Empirical Study on the Impact of Automation on the Requirements Analysis Process

    Institute of Scientific and Technical Information of China (English)

    Giuseppe Lami; Robert W. Ferguson

    2007-01-01

    Requirements analysis is an important phase in a software project. The analysis is often performed in an informal way by specialists who review documents looking for ambiguities, technical inconsistencies and incomplete parts. Automation is still far from being applied in requirements analyses, above all since natural languages are informal and thus difficult to treat automatically. There are only a few tools that can analyse texts. One of them, called QuARS, was developed by the Istituto di Scienza e Tecnologie dell'Informazione and can analyse texts in terms of ambiguity. This paper describes how QuARS was used in a formal empirical experiment to assess the impact, in terms of effectiveness and efficacy, of the automation in the requirements review process of a software company.

  20. Statistical model, analysis and approximation of rate-distortion function in MPEG-4 FGS videos

    Science.gov (United States)

    Sun, Jun; Gao, Wen; Zhao, Debin; Huang, Qingming

    2005-07-01

    Fine-granular scalability (FGS) has been accepted as the streaming profile of MPEG-4 to provide a flexible foundation for scaling the enhancement layer (EL) to accommodate variable network capacity. To support smooth quality reconstruction under different rate constraints during transmission, it is important to acquire the actual rate-distortion functions (RDF) or curves (RDC) of each frame in MPEG-4 FGS videos. In this paper, firstly, we use zero-mean generalized Gaussian distributions (GGD) to model the distributions of the 64 (8*8) different discrete cosine transform (DCT) coefficients of the FGS EL in a frame. Secondly, we decompose and analyze the FGS coding system using quantization theory and rate-distortion theory, and then combine the analysis of each component to form a complete RDF of the EL. Finally, guided by the above analysis, we introduce a simple and effective rate-distortion (RD) model to approximate the actual RDF of the EL in MPEG-4 FGS videos. Extensive experimental results show that our statistical model, composition and approximation of the actual RDF are efficient and effective. What is more, our analysis methods are general, and the RDF model can also be used in more widely related R-D areas such as rate control algorithms.
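
    For reference, the zero-mean generalized Gaussian density commonly used for DCT-coefficient modeling (presumably the form meant here), with scale α and shape β, is:

```latex
p(x) = \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)}
       \exp\!\left[-\left(\frac{|x|}{\alpha}\right)^{\beta}\right],
\qquad \alpha > 0,\ \beta > 0
```

    Here β = 2 recovers the Gaussian and β = 1 the Laplacian special cases, with α controlling the width; one such pair (α, β) is fitted per DCT coefficient position.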

  1. Two-Dimensional Video Analysis of Youth and Adolescent Pitching Biomechanics: A Tool For the Common Athlete.

    Science.gov (United States)

    DeFroda, Steven F; Thigpen, Charles A; Kriz, Peter K

    2016-01-01

    Three-dimensional (3D) motion analysis is the gold standard for analyzing the biomechanics of the baseball pitching motion. Historically, 3D analysis has been available primarily to elite athletes, requiring advanced cameras and sophisticated facilities with expensive software. The advent of newer technology and the increased affordability of video recording devices and smartphone/tablet-based applications have led to increased access to this technology for youth/amateur athletes and sports medicine professionals. Two-dimensional (2D) video analysis is an emerging tool for the kinematic assessment and observational measurement of pitching biomechanics. It is important for providers, coaches, and players to be aware of this technology, its application in identifying causes of arm pain and preventing injury, as well as its limitations. This review provides an in-depth assessment of 2D video analysis studies for pitching, a direct comparison of 2D video versus 3D motion analysis, and a practical introduction to assessing pitching biomechanics using 2D video analysis. PMID:27618245

  2. A Multi-Wavelength Analysis of Active Regions and Sunspots by Comparison of Automated Detection Algorithms

    OpenAIRE

    Verbeeck, Cis; Higgins, Paul A.; Colak, Tufan; Watson, Fraser T.; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami

    2011-01-01

    Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction...

  3. SCHUBOT: Machine Learning Tools for the Automated Analysis of Schubert’s Lieder

    OpenAIRE

    Nagler, Dylan Jeremy

    2014-01-01

    This paper compares various methods for automated musical analysis, applying machine learning techniques to gain insight about the Lieder (art songs) of composer Franz Schubert (1797-1828). Known as a rule-breaking, individualistic, and adventurous composer, Schubert produced hundreds of emotionally-charged songs that have challenged music theorists to this day. The algorithms presented in this paper analyze the harmonies, melodies, and texts of these songs. This paper begins with an explor...

  4. Pharmacokinetic analysis of topical tobramycin in equine tears by automated immunoassay

    OpenAIRE

    Czerwinski Sarah L; Lyon Andrew W; Skorobohach Brian; Léguillette Renaud

    2012-01-01

    Abstract Background Ophthalmic antibiotic therapy in large animals is often used empirically because of the lack of pharmacokinetic studies. The purpose of the study was to determine the pharmacokinetics of topical tobramycin 0.3% ophthalmic solution in the tears of normal horses using an automated immunoassay analysis. Results The mean tobramycin concentrations in the tears at 5, 10, 15, 30 minutes and 1, 2, 4, 6 hours after administration were 759 (±414), 489 (±237), 346 (±227), 147 (±264)...

  5. Spectral analysis for automated exploration and sample acquisition

    Science.gov (United States)

    Eberlein, Susan; Yates, Gigi

    1992-05-01

    Future space exploration missions will rely heavily on the use of complex instrument data for determining the geologic, chemical, and elemental character of planetary surfaces. One important instrument is the imaging spectrometer, which collects complete images in multiple discrete wavelengths in the visible and infrared regions of the spectrum. Extensive computational effort is required to extract information from such high-dimensional data. A hierarchical classification scheme allows multispectral data to be analyzed for purposes of mineral classification while limiting the overall computational requirements. The hierarchical classifier exploits the tunability of a new type of imaging spectrometer which is based on an acousto-optic tunable filter. This spectrometer collects a complete image in each wavelength passband without spatial scanning. It may be programmed to scan through a range of wavelengths or to collect only specific bands for data analysis. Spectral classification activities employ artificial neural networks, trained to recognize a number of mineral classes. Analysis of the trained networks has proven useful in determining which subsets of spectral bands should be employed at each step of the hierarchical classifier. The network classifiers are capable of recognizing all mineral types which were included in the training set. In addition, the major components of many mineral mixtures can also be recognized. This capability may prove useful for a system designed to evaluate data in a strange environment where details of the mineral composition are not known in advance.

  6. Bittorrent Swarm Analysis Through Automation and Enhanced Logging

    Directory of Open Access Journals (Sweden)

    Răzvan Deaconescu

    2011-01-01

    Full Text Available Peer-to-Peer protocols currently form the most heavily used protocol class in the Internet, with BitTorrent, the most popular protocol for content distribution, as its flagship. A high number of studies and investigations have been undertaken to measure, analyse and improve the inner workings of the BitTorrent protocol. Approaches such as tracker message analysis, network probing and packet sniffing have been deployed to understand and enhance BitTorrent’s internal behaviour. In this paper we present a novel approach that aims to collect, process and analyse large amounts of local peer information in BitTorrent swarms. We classify the information as periodic status information able to be monitored in real time and as verbose logging information to be used for subsequent analysis. We have designed and implemented a retrieval, storage and presentation infrastructure that enables easy analysis of BitTorrent protocol internals. Our approach can be employed both as a comparison tool, as well as a measurement system of how network characteristics and protocol implementation influence the overall BitTorrent swarm performance. We base our approach on a framework that allows easy swarm creation and control for different BitTorrent clients. With the help of a virtualized infrastructure and a client-server software layer we are able to create, command and manage large-sized BitTorrent swarms. The framework allows a user to run, schedule, start and stop clients within a swarm and collect information regarding their behavior.

  7. A 3D-video-based computerized analysis of social and sexual interactions in rats.

    Directory of Open Access Journals (Sweden)

    Jumpei Matsumoto

    Full Text Available A large number of studies have analyzed social and sexual interactions between rodents in relation to neural activity. Computerized video analysis has been successfully used to detect numerous behaviors quickly and objectively; however, to date only 2D video recording has been used, which cannot determine the 3D locations of animals and encounters difficulties in tracking animals when they are overlapping, e.g., when mounting. To overcome these limitations, we developed a novel 3D video analysis system for examining social and sexual interactions in rats. A 3D image was reconstructed by integrating images captured by multiple depth cameras at different viewpoints. The 3D positions of body parts of the rats were then estimated by fitting skeleton models of the rats to the 3D images using a physics-based fitting algorithm, and various behaviors were recognized based on the spatio-temporal patterns of the 3D movements of the body parts. Comparisons between the data collected by the 3D system and those by visual inspection indicated that this system could precisely estimate the 3D positions of body parts for 2 rats during social and sexual interactions with few manual interventions, and could compute the traces of the 2 animals even during mounting. We then analyzed the effects of AM-251 (a cannabinoid CB1 receptor antagonist) on male rat sexual behavior, and found that AM-251 decreased movements and trunk height before sexual behavior, but increased the duration of head-head contact during sexual behavior. These results demonstrate that the use of this 3D system in behavioral studies could open the door to new approaches for investigating the neuroscience of social and sexual behavior.

  8. Dynamics at the Holuhraun eruption based on high speed video data analysis

    Science.gov (United States)

    Witt, Tanja; Walter, Thomas R.

    2016-04-01

    The 2014/2015 Holuhraun eruption was a gas-rich fissure eruption with high fountains. The magma was transported by a horizontal dyke over a distance of 45 km. On the first day the fountains occurred over a distance of 1.5 km and focused at isolated vents during the following days. Based on video analysis of the fountains we obtained a detailed view of the velocities of the eruption, the propagation path of magma, communication between vents and complexities in the magma paths. We collected videos of the Holuhraun eruption with 2 high-speed cameras and one DSLR camera from 31st August, 2015 to 4th September, 2015 for several hours. The fountains at adjacent vents visually seemed to be related on all days. Hence, we calculated the fountain height as a function of time from the video data. All fountains show a pulsating regime, with sporadic alternations in height from metres to several tens of metres. Using a time-dependent cross-correlation approach developed within the FUTUREVOLC project, we are able to compare the pulses in height at adjacent vents. We find that in most cases there is a time lag between the pulses. From the calculated time lags between the pulses and the distance between the correlated vents, we calculate the apparent speed of magma pulses. The fountain frequencies and the eruption and rest times between fountains are quite similar across vents, suggesting a connecting and controlling process in the feeder below. At the Holuhraun eruption 2014/2015 (Iceland) we find a significant time shift between the individual pulses of adjacent vents on all days. The mean velocity over all days is 30-40 km/hr, which can be interpreted as a magma flow velocity along the dike at depth. Comparison of the velocities derived from the video data analysis to the assumed magma flow velocity in the dike based on seismic data shows very good agreement, implying that surface expressions of pulsating vents provide an insight into the
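
    The lag estimation implied above can be sketched as follows: cross-correlate the height time series of two adjacent vents and convert the best-matching lag into an apparent speed. The frame rate, vent distance and sign convention are assumptions to verify against one's own data, not values from the study.

```python
import numpy as np

def lag_seconds(h1, h2, fps=30.0):
    """Best-matching lag between two fountain-height series, in seconds.

    With np.correlate's convention, a positive lag means the pulses appear
    later in h1 than in h2 (an assumption worth checking on real data).
    """
    a = (h1 - h1.mean()) / h1.std()       # normalize so amplitudes don't bias
    b = (h2 - h2.mean()) / h2.std()
    xc = np.correlate(a, b, mode="full")
    lag = int(xc.argmax()) - (len(b) - 1) # shift of the correlation peak
    return lag / fps

# Apparent pulse speed between two vents, e.g. 120 m apart (invented number):
# speed_m_per_s = 120.0 / abs(lag_seconds(height_vent1, height_vent2))
```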

  9. Development of automated preparation system for isotopocule analysis of N2O in various air samples

    Science.gov (United States)

    Toyoda, Sakae; Yoshida, Naohiro

    2016-05-01

    Nitrous oxide (N2O), an increasingly abundant greenhouse gas in the atmosphere, is the most important stratospheric ozone-depleting gas of this century. Natural abundance ratios of isotopocules of N2O, NNO molecules substituted with stable isotopes of nitrogen and oxygen, are a promising index of various sources or production pathways of N2O and of its sink or decomposition pathways. Several automated methods have been reported to improve the analytical precision for the isotopocule ratios of atmospheric N2O and to reduce the labor necessary for complicated sample preparation procedures related to mass spectrometric analysis. However, no method accommodates flask samples with limited volume or pressure. Here we present an automated preconcentration system which offers flexibility with respect to the available gas volume, pressure, and N2O concentration. The shortest processing time for a single analysis of a typical atmospheric sample is 40 min. Precision values of isotopocule ratio analysis are < 0.1 ‰ for δ15Nbulk (average abundance of 14N15N16O and 15N14N16O relative to 14N14N16O), < 0.2 ‰ for δ18O (relative abundance of 14N14N18O), and < 0.5 ‰ for site preference (SP; difference between the relative abundances of 14N15N16O and 15N14N16O). This precision is comparable to that of other automated systems, but better than that of our previously reported manual measurement system.
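    The reported quantities follow standard delta notation. As a worked illustration with hypothetical sample ratios (the reference ratio shown is the commonly used atmospheric-N2 15N/14N value), δ15Nbulk is the mean of the two site-specific δ15N values and SP is their difference:

```python
def delta_permil(r_sample, r_standard):
    """Delta value in per mil: relative deviation of an isotope ratio from a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_STD = 0.0036765                            # 15N/14N of atmospheric N2 (reference)
d15N_alpha = delta_permil(0.003690, R_STD)   # central N site (hypothetical ratio)
d15N_beta = delta_permil(0.003622, R_STD)    # terminal N site (hypothetical ratio)
d15N_bulk = 0.5 * (d15N_alpha + d15N_beta)   # average of the two 15N sites
sp = d15N_alpha - d15N_beta                  # site preference (SP) as defined above
```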

  10. Tool for automated method design in activation analysis

    International Nuclear Information System (INIS)

    A computational approach to the optimization of the adjustable parameters of nuclear activation analysis has been developed for use in comprehensive method design calculations. An estimate of sample composition is used to predict the gamma-ray spectra to be expected for given sets of values of experimental parameters. These spectra are used to evaluate responses such as detection limits and measurement precision for application to optimization by the simplex method. This technique has been successfully implemented for the simultaneous determination of sample size and irradiation, decay and counting times by the optimization of either detection limit or precision. Both single-element and multielement determinations can be designed with the aid of these calculations. The combination of advance prediction and simplex optimization is both flexible and efficient and produces numerical results suitable for use in further computations
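    The simplex step can be illustrated with a toy objective: the predicted detection limit as a function of irradiation, decay and counting times for a single hypothetical activation product. The real tool evaluates full predicted gamma-ray spectra; this stand-in keeps only the time dependence and a Currie-style detection limit, with all constants assumed.

```python
import numpy as np
from scipy.optimize import minimize

LAM = np.log(2) / 600.0  # decay constant of a hypothetical 10-min half-life product

def detection_limit(x):
    """Currie-style detection limit (arbitrary units) vs. the three time settings."""
    t_irr, t_dec, t_cnt = np.abs(x)  # keep times positive
    # saturation during irradiation, decay before counting, counts accumulated
    sensitivity = (1 - np.exp(-LAM * t_irr)) * np.exp(-LAM * t_dec) \
                  * (1 - np.exp(-LAM * t_cnt))
    background = 0.5 * t_cnt  # assumed constant background count rate
    return (2.71 + 4.65 * np.sqrt(background)) / max(sensitivity, 1e-12)

result = minimize(detection_limit, x0=[300.0, 60.0, 300.0], method="Nelder-Mead")
print(result.x)  # optimized irradiation, decay and counting times (seconds)
```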

  11. A new web-based method for automated analysis of muscle histology

    Directory of Open Access Journals (Sweden)

    Pertl Cordula

    2013-01-01

    Full Text Available Abstract Background Duchenne Muscular Dystrophy is an inherited degenerative neuromuscular disease characterised by rapidly progressive muscle weakness. Currently, curative treatment is not available. Approaches for new treatments that improve muscle strength and quality of life depend on preclinical testing in animal models. The mdx mouse model is the most frequently used animal model for preclinical studies in muscular dystrophy research. Standardised pathology-relevant parameters of dystrophic muscle in mdx mice for histological analysis have been developed in international, collaborative efforts, but automation has not been accessible to most research groups. A standardised and mainly automated quantitative assessment of histopathological parameters in the mdx mouse model is desirable to allow an objective comparison between laboratories. Methods Immunological and histochemical reactions were used to obtain a double staining for fast and slow myosin. Additionally, fluorescence staining of the myofibre membranes allows defining the minimal Feret’s diameter. The staining of myonuclei with the fluorescence dye bisbenzimide H was utilised to identify nuclei located internally within myofibres. Relevant structures were extracted from the image as single objects and assigned to different object classes using web-based image analysis (MyoScan). Quantitative and morphometric data were analysed, e.g. the number of nuclei per fibre and minimal Feret’s diameter in 6-month-old wild-type C57BL/10 mice and mdx mice. Results In the current version of the module “MyoScan”, essential parameters for histologic analysis of muscle sections were implemented including the minimal Feret’s diameter of the myofibres and the automated calculation of the percentage of internally nucleated myofibres. Morphometric data obtained in the present study were in good agreement with previously reported data in the literature and with data obtained from manual
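    The minimal Feret's diameter used here is the smallest caliper width of a fibre outline over all directions. A minimal sketch, assuming fibre boundary coordinates have already been segmented:

```python
import numpy as np

def min_feret_diameter(boundary_xy, n_angles=180):
    """Minimal Feret's diameter of a 2D boundary point set (e.g. a myofibre).

    The Feret diameter along a direction is the width of the projection onto
    that direction; the minimum over sampled directions approximates the
    minimal Feret's diameter."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # unit vectors
    proj = boundary_xy @ dirs.T               # (N points, n_angles) projections
    return (proj.max(axis=0) - proj.min(axis=0)).min()
```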

  12. Automation of Axisymmetric Drop Shape Analysis Using Digital Image Processing

    Science.gov (United States)

    Cheng, Philip Wing Ping

    The Axisymmetric Drop Shape Analysis - Profile (ADSA-P) technique, as initiated by Rotenberg, is a user-oriented scheme to determine liquid-fluid interfacial tensions and contact angles from the shape of axisymmetric menisci, i.e., from sessile as well as pendant drops. The ADSA-P program requires as input several coordinate points along the drop profile, the value of the density difference between the bulk phases, and gravity. The solution yields interfacial tension and contact angle. Although the ADSA-P technique was in principle complete, it was found to be of very limited practical use. The major difficulty with the method is the need for very precise coordinate points along the drop profile, which, up to now, could not be obtained readily. In the past, the coordinate points along the drop profile were obtained by manual digitization of photographs or negatives. From manual digitization data, the surface tension values obtained had an average error of +/-5% when compared with literature values. Another problem with the ADSA-P technique was that the computer program failed to converge for the case of very elongated pendant drops. To acquire the drop profile coordinates automatically, a technique which utilizes recent developments in digital image acquisition and analysis was developed. In order to determine the drop profile coordinates as precisely as possible, the errors due to optical distortions were eliminated. In addition, determination of drop profile coordinates to pixel and sub-pixel resolution was developed. It was found that high precision could be obtained through the use of sub-pixel resolution and a spline fitting method. The results obtained using the automatic digitization technique in conjunction with ADSA-P not only compared well with the conventional methods, but also outstripped the precision of conventional methods considerably. To solve the convergence problem of very elongated pendant drops, it was found that the reason for the

  13. Conventional Versus Automated Implantation of Loose Seeds in Prostate Brachytherapy: Analysis of Dosimetric and Clinical Results

    Energy Technology Data Exchange (ETDEWEB)

    Genebes, Caroline, E-mail: genebes.caroline@claudiusregaud.fr [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Filleron, Thomas; Graff, Pierre [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France); Jonca, Frédéric [Department of Urology, Clinique Ambroise Paré, Toulouse (France); Huyghe, Eric; Thoulouzan, Matthieu; Soulie, Michel; Malavaud, Bernard [Department of Urology and Andrology, CHU Rangueil, Toulouse (France); Aziza, Richard; Brun, Thomas; Delannes, Martine; Bachaud, Jean-Marc [Radiation Oncology Department, Institut Claudius Regaud, Toulouse (France)

    2013-11-15

    Purpose: To review the clinical outcome of I-125 permanent prostate brachytherapy (PPB) for low-risk and intermediate-risk prostate cancer and to compare 2 techniques of loose-seed implantation. Methods and Materials: 574 consecutive patients underwent I-125 PPB for low-risk and intermediate-risk prostate cancer between 2000 and 2008. Two successive techniques were used: conventional implantation from 2000 to 2004 and automated implantation (Nucletron, FIRST system) from 2004 to 2008. Dosimetric and biochemical recurrence-free (bNED) survival results were reported and compared for the 2 techniques. Univariate and multivariate analyses were performed to identify independent predictors of bNED survival. Results: 419 (73%) and 155 (27%) patients with low-risk and intermediate-risk disease, respectively, were treated (median follow-up time, 69.3 months). The 60-month bNED survival rates were 95.2% and 85.7%, respectively, for patients with low-risk and intermediate-risk disease (P=.04). In univariate analysis, patients treated with automated implantation had worse bNED survival rates than did those treated with conventional implantation (P<.0001). By day 30, patients treated with automated implantation showed lower values of dose delivered to 90% of the prostate volume (D90) and volume of prostate receiving 100% of the prescribed dose (V100). In multivariate analysis, implantation technique, Gleason score, and V100 on day 30 were independent predictors of recurrence-free status. Grade 3 urethritis and urinary incontinence were observed in 2.6% and 1.6% of the cohort, respectively, with no significant differences between the 2 techniques. No grade 3 proctitis was observed. Conclusion: Satisfactory 60-month bNED survival rates (93.1%) and acceptable toxicity (grade 3 urethritis <3%) were achieved by loose-seed implantation. Automated implantation was associated with worse dosimetric and bNED survival outcomes.
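    The dosimetric endpoints compared on day 30 are standard dose-volume metrics. A minimal sketch of how D90 and V100 could be computed from per-voxel prostate doses, assuming equal voxel volumes and doses expressed in the same units as the prescription:

```python
import numpy as np

def d90_v100(voxel_doses, prescribed_dose):
    """D90: minimum dose covering the best-treated 90% of the volume
    (the 10th percentile of voxel doses). V100: percent of the volume
    receiving at least the prescribed dose."""
    doses = np.sort(np.asarray(voxel_doses, dtype=float))
    d90 = doses[int(0.10 * len(doses))]
    v100 = 100.0 * np.mean(doses >= prescribed_dose)
    return d90, v100
```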

  14. Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0

    Directory of Open Access Journals (Sweden)

    Kevin A. Huck

    2008-01-01

    Full Text Available The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments presents a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.

  15. ADGEN: a system for automated sensitivity analysis of predictive models

    International Nuclear Information System (INIS)

    A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modelling and model validation studies to avoid "over modelling," in site characterization planning to avoid "over collection of data," and in performance assessment to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed.
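    ADGEN itself instruments FORTRAN codes; as a language-agnostic illustration of the underlying idea of propagating derivatives through a computation automatically, here is a minimal forward-mode (dual-number) sketch. It is not ADGEN's machinery, just the simplest automated-sensitivity mechanism:

```python
class Dual:
    """Forward-mode autodiff value: carries f and df/dp through arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

p = Dual(4.0, 1.0)            # seed: dp/dp = 1
m = 3.0 * p * p + 2.0 * p     # toy model m(p) = 3p^2 + 2p
print(m.val, m.der)           # 56.0 and sensitivity dm/dp = 6p + 2 = 26.0
```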

  16. A Review of Machine-Vision-Based Analysis of Wireless Capsule Endoscopy Video

    Directory of Open Access Journals (Sweden)

    Yingju Chen

    2012-01-01

    Full Text Available Wireless capsule endoscopy (WCE) enables a physician to diagnose a patient's digestive system without surgical procedures. However, it takes 1-2 hours for a gastroenterologist to examine the video. To speed up the review process, a number of analysis techniques based on machine vision have been proposed by computer science researchers. In order to train a machine to understand the semantics of an image, the image contents need to be translated into numerical form first. The numerical form of the image is known as image abstraction. The process of selecting relevant image features is often determined by the modality of medical images and the nature of the diagnoses. For example, there are radiographic projection-based images (e.g., X-rays and PET scans), tomography-based images (e.g., MRT and CT scans), and photography-based images (e.g., endoscopy, dermatology, and microscopic histology). Each modality imposes unique image-dependent restrictions for automatic and medically meaningful image abstraction processes. In this paper, we review the current development of machine-vision-based analysis of WCE video, focusing on the research that identifies specific gastrointestinal (GI) pathology and methods of shot boundary detection.
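    Shot boundary detection, the last topic mentioned, is commonly bootstrapped from frame-to-frame histogram distances. A generic sketch, not any specific method from the review; the 0.5 threshold is an assumed value:

```python
import cv2

def shot_boundaries(video_path, threshold=0.5):
    """Flag abrupt shot boundaries where successive HSV histograms diverge."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, cuts, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: ~0 for similar frames, ~1 across a cut
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```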

  17. An innovative experiment on superconductivity, based on video analysis and non-expensive data acquisition

    Science.gov (United States)

    Bonanno, A.; Bozzo, G.; Camarca, M.; Sapia, P.

    2015-07-01

    In this paper we present a new experiment on superconductivity, designed for university undergraduate students, based on the high-speed video analysis of a magnet falling through a ceramic superconducting cylinder (Tc = 110 K). The use of an Atwood’s machine allows us to vary the magnet’s speed and acceleration during its interaction with the superconductor. In this way, we highlight the existence of two interaction regimes: for low crossing energy, the magnet is levitated by the superconductor after a transient oscillatory damping; for higher crossing energy, the magnet passes through the superconducting cylinder. The use of a commercial-grade high-speed imaging system, together with video analysis performed using the Tracker software, allows us to attain good precision in space and time measurements. Four sensing coils, mounted inside and outside the superconducting cylinder, allow us to study the magnetic flux variations associated with the magnet’s passage through the superconductor, permitting us to shed light on a didactically relevant topic such as the behaviour of magnetic field lines in the presence of a superconductor. The critical discussion of experimental data allows undergraduate university students to grasp useful insights into the basic phenomenology of superconductivity as well as relevant conceptual topics such as the difference between the Meissner effect and the Faraday-like ‘perfect’ induction.

  18. Performance Task using Video Analysis and Modelling to promote K12 eight practices of science

    CERN Document Server

    Wee, Loo Kang

    2015-01-01

    We will share on the use of Tracker as a pedagogical tool in the effective learning and teaching of physics performance tasks taking root in some Singapore Grade 9 (Secondary 3) schools. We discuss how the pedagogical use of Tracker helps students to be like scientists in these 6 to 10 weeks, during which all Grade 9 students conduct a personal video analysis applying, where appropriate, the 8 practices of science (1. ask questions, 2. use models, 3. plan and carry out investigations, 4. analyse and interpret data, 5. use mathematical and computational thinking, 6. construct explanations, 7. argue from evidence and 8. communicate information). We will situate our sharing on actual students' work and discuss how Tracker could be an effective pedagogical tool. Initial research findings suggest that allowing learners to conduct performance tasks using Tracker, a free open source video analysis and modelling tool, guided by the 8 practices of science and engineering, could be an innovative and effective way to mentor authent...

  19. An innovative experiment on superconductivity, based on video analysis and non-expensive data acquisition

    International Nuclear Information System (INIS)

    In this paper we present a new experiment on superconductivity, designed for university undergraduate students, based on the high-speed video analysis of a magnet falling through a ceramic superconducting cylinder (Tc = 110 K). The use of an Atwood’s machine allows us to vary the magnet’s speed and acceleration during its interaction with the superconductor. In this way, we highlight the existence of two interaction regimes: for low crossing energy, the magnet is levitated by the superconductor after a transient oscillatory damping; for higher crossing energy, the magnet passes through the superconducting cylinder. The use of a commercial-grade high-speed imaging system, together with video analysis performed using the Tracker software, allows us to attain good precision in space and time measurements. Four sensing coils, mounted inside and outside the superconducting cylinder, allow us to study the magnetic flux variations associated with the magnet’s passage through the superconductor, permitting us to shed light on a didactically relevant topic such as the behaviour of magnetic field lines in the presence of a superconductor. The critical discussion of experimental data allows undergraduate university students to grasp useful insights into the basic phenomenology of superconductivity as well as relevant conceptual topics such as the difference between the Meissner effect and the Faraday-like ‘perfect’ induction. (paper)

  20. Multi-scale AM-FM motion analysis of ultrasound videos of carotid artery plaques

    Science.gov (United States)

    Murillo, Sergio; Murray, Victor; Loizou, C. P.; Pattichis, C. S.; Pattichis, Marios; Barriga, E. Simon

    2012-03-01

    An estimated 82 million American adults have one or more types of cardiovascular disease (CVD). CVD is the leading cause of death (1 of every 3 deaths) in the United States. When considered separately from other CVDs, stroke ranks third among all causes of death behind diseases of the heart and cancer. Stroke accounts for 1 out of every 18 deaths and is the leading cause of serious long-term disability in the United States. Motion estimation of ultrasound videos (US) of carotid artery (CA) plaques provides important information regarding plaque deformation that should be considered for distinguishing between symptomatic and asymptomatic plaques. In this paper, we present the development of verifiable methods for the estimation of plaque motion. Our methodology is tested on a set of 34 (5 symptomatic and 29 asymptomatic) ultrasound videos of carotid artery plaques. Plaque and wall motion analysis provides information about plaque instability and is used in an attempt to differentiate between symptomatic and asymptomatic cases. The final goal of motion estimation and analysis is to identify pathological conditions that can be detected from motion changes due to changes in tissue stiffness.

  1. Automated validation of patient safety clinical incident classification: macro analysis.

    Science.gov (United States)

    Gupta, Jaiprakash; Patrick, Jon

    2013-01-01

    Patient safety is the buzz word in healthcare. The Incident Information Management System (IIMS) is electronic software that stores clinical mishap narratives in places where patients are treated. It is estimated that in one state alone over one million electronic text documents are available in IIMS. In this paper we investigate the data density available in the fields entered to notify an incident and the validity of the built-in classification used by clinicians to categorise the incidents. Waikato Environment for Knowledge Analysis (WEKA) software was used to test the classes. Four statistical classifiers based on the J48, Naïve Bayes (NB), Naïve Bayes Multinomial (NBM) and Support Vector Machine using radial basis function (SVM_RBF) algorithms were used to validate the classes. The data pool was 10,000 clinical incidents drawn from 7 hospitals in one state in Australia. In the first part of the study 1000 clinical incidents were selected to determine the type and number of fields worth investigating, and in the second part another 5448 clinical incidents were randomly selected to validate 13 clinical incident types. Results show 74.6% of the cells were empty and only 23 fields had content over 70% of the time. The percentage of correctly classified classes across the four algorithms ranged from 42 to 49% using the categorical dataset, from 65% to 77% using the free-text datasets, and from 72% to 79% using both datasets. The kappa statistic ranged from 0.36 to 0.4 for categorical data, from 0.61 to 0.74 for free-text, and from 0.67 to 0.77 for both datasets. Similar increases in performance across the 3 experiments were noted in true positive rate, precision, F-measure and area under curve (AUC) of receiver operating characteristics (ROC) scores. The study demonstrates only 14 of 73 fields in IIMS have data that is usable for machine learning experiments. Irrespective of the type of algorithm used, performance was better when all datasets were used. Classifier NBM showed the best performance. We think the

  2. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman's Sleep at Home.

    Science.gov (United States)

    Nishihara, Kyoko; Ohki, Noboru; Kamata, Hideo; Ryo, Eiji; Horiuchi, Shigeko

    2015-01-01

    Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK) including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy.
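    For two categories, PABAK reduces to a simple function of the observed agreement, PABAK = 2·p_o − 1, which is what makes it robust to prevalence and rater bias. A minimal sketch for epoch-wise movement/no-movement labels:

```python
def pabak(rater_a, rater_b):
    """Prevalence-adjusted bias-adjusted kappa for two binary raters."""
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    return 2.0 * agree / len(rater_a) - 1.0

# e.g. 10-sec epochs labelled by manual vs automated analysis (hypothetical data)
print(pabak([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]))  # 2*(5/6) - 1 = 0.67
```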

  3. Development of automated high throughput single molecular microfluidic detection platform for signal transduction analysis

    Science.gov (United States)

    Huang, Po-Jung; Baghbani Kordmahale, Sina; Chou, Chao-Kai; Yamaguchi, Hirohito; Hung, Mien-Chie; Kameoka, Jun

    2016-03-01

    Signal transductions including multiple protein post-translational modifications (PTM), protein-protein interactions (PPI), and protein-nucleic acid interactions (PNI) play critical roles in cell proliferation and differentiation that are directly related to cancer biology. Traditional methods, like mass spectrometry, immunoprecipitation, fluorescence resonance energy transfer, and fluorescence correlation spectroscopy, require a large amount of sample and long processing times. The "microchannel for multiple-parameter analysis of proteins in single-complex" (mMAPS) approach we proposed can reduce the processing time and sample volume because this system is composed of microfluidic channels, fluorescence microscopy, and computerized data analysis. In this paper, we will present an automated mMAPS including an integrated microfluidic device, automated stage and electrical relay for high-throughput clinical screening. Based on this result, we estimated that this automated detection system will be able to screen approximately 150 patient samples in a 24-hour period, providing a practical application to analyze tissue samples in a clinical setting.

  4. Automated Software Analysis of Fetal Movement Recorded during a Pregnant Woman's Sleep at Home.

    Directory of Open Access Journals (Sweden)

    Kyoko Nishihara

    Full Text Available Fetal movement is an important biological index of fetal well-being. Since 2008, we have been developing an original capacitive acceleration sensor and device that a pregnant woman can easily use to record fetal movement by herself at home during sleep. In this study, we report a newly developed automated software system for analyzing recorded fetal movement. This study will introduce the system and compare its results to those of a manual analysis of the same fetal movement signals (Experiment I). We will also demonstrate an appropriate way to use the system (Experiment II). In Experiment I, fetal movement data reported previously for six pregnant women at 28-38 gestational weeks were used. We evaluated the agreement of the manual and automated analyses for the same 10-sec epochs using prevalence-adjusted bias-adjusted kappa (PABAK), including quantitative indicators for prevalence and bias. The mean PABAK value was 0.83, which can be considered almost perfect. In Experiment II, twelve pregnant women at 24-36 gestational weeks recorded fetal movement at night once every four weeks. Overall, mean fetal movement counts per hour during maternal sleep significantly decreased along with gestational weeks, though individual differences in fetal development were noted. This newly developed automated analysis system can provide important data throughout late pregnancy.

  5. Automated cell colony counting and analysis using the circular Hough image transform algorithm (CHiTA)

    Energy Technology Data Exchange (ETDEWEB)

    Bewes, J M; Suchowerska, N; McKenzie, D R [School of Physics, University of Sydney, Sydney, NSW (Australia)], E-mail: jbewes@physics.usyd.edu.au

    2008-11-07

    We present an automated cell colony counting method that is flexible, robust and capable of providing more in-depth clonogenic analysis than existing manual and automated approaches. The full form of the Hough transform without approximation has been implemented, for the first time. Improvements in computing speed have facilitated this approach. Colony identification was achieved by pre-processing the raw images of the colonies in situ in the flask, including images of the flask edges, by erosion, dilation and Gaussian smoothing processes. Colony edges were then identified by intensity gradient field discrimination. Our technique eliminates the need for specialized hardware for image capture and enables the use of a standard desktop scanner for distortion-free image acquisition. Additional parameters evaluated included regional colony counts, average colony area, nearest neighbour distances and radial distribution. This spatial and qualitative information extends the utility of the clonogenic assay, allowing analysis of spatially-variant cytotoxic effects. To test the automated system, two flask types and three cell lines with different morphology, cell size and plating density were examined. A novel Monte Carlo method of simulating cell colony images, as well as manual counting, were used to quantify algorithm accuracy. The method was able to identify colonies with unusual morphology, to successfully resolve merged colonies and to correctly count colonies adjacent to flask edges.

  6. Automated cell colony counting and analysis using the circular Hough image transform algorithm (CHiTA)

    Science.gov (United States)

    Bewes, J. M.; Suchowerska, N.; McKenzie, D. R.

    2008-11-01

    We present an automated cell colony counting method that is flexible, robust and capable of providing more in-depth clonogenic analysis than existing manual and automated approaches. The full form of the Hough transform without approximation has been implemented, for the first time. Improvements in computing speed have facilitated this approach. Colony identification was achieved by pre-processing the raw images of the colonies in situ in the flask, including images of the flask edges, by erosion, dilation and Gaussian smoothing processes. Colony edges were then identified by intensity gradient field discrimination. Our technique eliminates the need for specialized hardware for image capture and enables the use of a standard desktop scanner for distortion-free image acquisition. Additional parameters evaluated included regional colony counts, average colony area, nearest neighbour distances and radial distribution. This spatial and qualitative information extends the utility of the clonogenic assay, allowing analysis of spatially-variant cytotoxic effects. To test the automated system, two flask types and three cell lines with different morphology, cell size and plating density were examined. A novel Monte Carlo method of simulating cell colony images, as well as manual counting, were used to quantify algorithm accuracy. The method was able to identify colonies with unusual morphology, to successfully resolve merged colonies and to correctly count colonies adjacent to flask edges.
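    For orientation, circle detection via the Hough transform is available off the shelf, although OpenCV's HoughCircles uses the gradient-based approximation that CHiTA deliberately avoids. A sketch with assumed parameter values and a hypothetical scan file:

```python
import cv2

# Locate roughly circular colonies in a flatbed-scanned flask image.
img = cv2.imread("flask_scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
img = cv2.GaussianBlur(img, (5, 5), 0)                    # smooth before edges
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=20, minRadius=3, maxRadius=30)
count = 0 if circles is None else circles.shape[1]
print(f"colonies detected: {count}")
```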

  7. Towards the Procedure Automation of Full Stochastic Spectral Based Fatigue Analysis

    Directory of Open Access Journals (Sweden)

    Khurram Shehzad

    2013-05-01

    Full Text Available Fatigue is one of the most significant failure modes for marine structures such as ships and offshore platforms. Among the numerous methods for fatigue life estimation, the spectral method is considered the most reliable one due to its ability to cater for different sea states as well as their probabilities of occurrence. However, the spectral-based simulation procedure itself is quite complex and numerically intensive owing to various critical technical details. The present research study is focused on the application and automation of the spectral-based fatigue analysis procedure for ship structures using ANSYS software with the 3D linear seakeeping code AQWA. ANSYS Parametric Design Language (APDL) macros are created and subsequently implemented to automate the workflow of the simulation process by reducing the time spent on non-value-added repetitive activities. A MATLAB program based on the direct calculation procedure of spectral fatigue is developed to calculate total fatigue damage. The automation procedure is employed to predict the fatigue life of a ship structural detail using wave scatter data for the North Atlantic and Worldwide trade. The current work will provide a system for efficient implementation of the stochastic spectral fatigue analysis procedure for ship structures.
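    For a flavour of the direct-calculation step, the classic narrow-band closed form gives the fatigue damage rate for one sea state from the spectral moments of the stress response; the full procedure then sums damage over all sea states weighted by their probabilities of occurrence. A sketch under the narrow-band Rayleigh assumption, with an S-N curve N = K·S^(-m) in stress-range form:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gamma

def narrowband_damage_rate(freq_hz, stress_psd, K, m):
    """Fatigue damage per second for one sea state from a one-sided stress PSD."""
    m0 = trapezoid(stress_psd, freq_hz)               # zeroth spectral moment
    m2 = trapezoid(stress_psd * freq_hz**2, freq_hz)  # second spectral moment
    nu0 = np.sqrt(m2 / m0)                            # zero up-crossing rate (Hz)
    return nu0 * (2.0 * np.sqrt(2.0 * m0))**m * gamma(1.0 + m / 2.0) / K
```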

  8. RootGraph: a graphic optimization tool for automated image analysis of plant roots.

    Science.gov (United States)

    Cai, Jinhai; Zeng, Zhanghui; Connor, Jason N; Huang, Chun Yuan; Melino, Vanessa; Kumar, Pankaj; Miklavcic, Stanley J

    2015-11-01

    This paper outlines a numerical scheme for accurate, detailed, and high-throughput image analysis of plant roots. In contrast to existing root image analysis tools that focus on root system-average traits, a novel, fully automated and robust approach for the detailed characterization of root traits, based on a graph optimization process is presented. The scheme, firstly, distinguishes primary roots from lateral roots and, secondly, quantifies a broad spectrum of root traits for each identified primary and lateral root. Thirdly, it associates lateral roots and their properties with the specific primary root from which the laterals emerge. The performance of this approach was evaluated through comparisons with other automated and semi-automated software solutions as well as against results based on manual measurements. The comparisons and subsequent application of the algorithm to an array of experimental data demonstrate that this method outperforms existing methods in terms of accuracy, robustness, and the ability to process root images under high-throughput conditions.

  9. Examining Feedback in an Instructional Video Game Using Process Data and Error Analysis. CRESST Report 817

    Science.gov (United States)

    Buschang, Rebecca E.; Kerr, Deirdre S.; Chung, Gregory K. W. K.

    2012-01-01

    Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student…

  10. Automated DNA extraction of single dog hairs without roots for mitochondrial DNA analysis.

    Science.gov (United States)

    Bekaert, Bram; Larmuseau, Maarten H D; Vanhove, Maarten P M; Opdekamp, Anouschka; Decorte, Ronny

    2012-03-01

    Dogs are intensely integrated in human social life and their shed hairs can play a major role in forensic investigations. The overall aim of this study was to validate a semi-automated extraction method for mitochondrial DNA analysis of telogenic dog hairs. Extracted DNA was amplified with a 95% success rate from 43 samples using two new experimental designs in which the mitochondrial control region was amplified as a single large (± 1260 bp) amplicon or as two individual amplicons (HV1 and HV2; ± 650 and 350 bp) with tailed-primers. The results prove that the extraction of dog hair mitochondrial DNA can easily be automated to provide sufficient DNA yield for the amplification of a forensically useful long mitochondrial DNA fragment or alternatively two short fragments with minimal loss of sequence in case of degraded samples.

  11. Automated IR determination of petroleum products in water based on sequential injection analysis.

    Science.gov (United States)

    Falkova, Marina; Vakh, Christina; Shishov, Andrey; Zubakina, Ekaterina; Moskvin, Aleksey; Moskvin, Leonid; Bulatov, Andrey

    2016-02-01

    A simple and easily performed automated method for the IR determination of petroleum products (PP) in water using extraction-chromatographic cartridges has been developed. The method comprises two stages: on-site extraction of PP during sampling by using extraction-chromatographic cartridges, and subsequent determination of the extracted PP using sequential injection analysis (SIA) with IR detection. The appropriate experimental conditions for extraction of the PP dissolved in water and for the automated SIA procedure were investigated. The calibration plot constructed using the developed procedure was linear in the range of 3-200 μg L(-1). The limit of detection (LOD), calculated from a blank test based on 3σ, was 1 µg L(-1). The sample volume was 1 L. The system throughput was found to be 12 h(-1).

  12. Automated Bifurcation Analysis for Nonlinear Elliptic Partial Difference Equations on Graphs

    CERN Document Server

    Neuberger, John M; Swift, James W

    2010-01-01

    We seek solutions $u \in \mathbb{R}^n$ to the semilinear elliptic partial difference equation $-Lu + f_s(u) = 0$, where $L$ is the matrix corresponding to the Laplacian operator on a graph $G$ and $f_s$ is a one-parameter family of nonlinear functions. This article combines the ideas introduced by the authors in two papers: a) Nonlinear Elliptic Partial Difference Equations on Graphs (J. Experimental Mathematics, 2006), which introduces analytical and numerical techniques for solving such equations, and b) Symmetry and Automated Branch Following for a Semilinear Elliptic PDE on a Fractal Region, which presents some of our recent advances concerning symmetry, bifurcation, and automation. We apply the symmetry analysis found in the SIAM paper to arbitrary graphs in order to obtain better initial guesses for Newton's method, create informative graphics, and exploit the underlying variational structure. We use two modified implementations of the gradient Newton-Galerkin algorithm (GNGA, Neuberger and Swift) ...
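    A bare-bones version of the Newton iteration for $-Lu + f_s(u) = 0$ on a small graph, with an illustrative nonlinearity $f_s(u) = su + u^3$ (the GNGA adds Galerkin structure and symmetry-informed initial guesses on top of this):

```python
import numpy as np

def newton_on_graph(L, s, u0, tol=1e-10, max_iter=50):
    """Newton's method for -L u + s*u + u**3 = 0; Jacobian -L + diag(s + 3u^2)."""
    u = u0.copy()
    for _ in range(max_iter):
        r = -L @ u + s * u + u**3
        if np.linalg.norm(r) < tol:
            break
        J = -L + np.diag(s + 3.0 * u**2)
        u -= np.linalg.solve(J, r)
    return u

# Laplacian L = D - A of the path graph on 4 vertices
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A
u = newton_on_graph(L, s=-2.0, u0=np.array([1.0, -1.0, 1.0, -1.0]))
```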

  13. Skin-color Based Videos Categorization

    OpenAIRE

    Rehanullah Khan; Asad Maqsood; Zeeshan Khan; Muhammad Ishaq; Arsalan Arif

    2012-01-01

    On dedicated websites, people can upload videos and share them with the rest of the world. Currently these videos are categorized manually with the help of the user community. In this paper, we propose a combination of color spaces with the Bayesian network approach for robust detection of skin color, followed by an automated video categorization. Experimental results show that our method can achieve satisfactory performance for categorizing videos based on skin color.
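    As a stand-in for the paper's colour-space-plus-Bayesian-network detector, a Gaussian naive Bayes pixel classifier over concatenated colour features shows the general shape of the approach; the training pixels below are hypothetical:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Per-pixel features: [R, G, B, Cb, Cr] from two colour spaces (hypothetical data)
X_train = np.array([[200, 120, 90, 140, 160],
                    [90, 140, 200, 170, 100],
                    [180, 130, 110, 145, 150],
                    [40, 60, 50, 130, 120]], dtype=float)
y_train = np.array([1, 0, 1, 0])  # 1 = skin, 0 = non-skin

clf = GaussianNB().fit(X_train, y_train)

def skin_ratio(pixels):
    """Fraction of pixels classified as skin; videos whose ratio exceeds a
    chosen threshold are routed to the corresponding category."""
    return clf.predict(pixels).mean()
```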

  14. Video-based Analysis of Motivation and Interaction in Science Classrooms

    DEFF Research Database (Denmark)

    Andersen, Hanne Moeller; Nielsen, Birgitte Lund

    2013-01-01

    in groups. Subsequently, the framework was used for an analysis of students’ motivation in the whole class situation. A cross-case analysis was carried out illustrating characteristics of students’ motivation dependent on the context. This research showed that students’ motivation to learn science... is stimulated by a range of different factors, with autonomy, relatedness and belonging apparently being the main sources of motivation. The teacher’s combined use of questions, uptake and high-level evaluation was very important for students’ learning processes and motivation, especially students’ self-efficacy. By coding and analysing video excerpts from science classrooms, we were able to demonstrate that the analytical framework helped us gain new insights into the effect of teachers’ communication and other elements on students’ motivation...

  15. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high-quality web video streaming content from the small screen of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change for various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
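    The two temporal metrics are straightforward once per-frame presentation timestamps have been extracted. A minimal sketch; the 0.2 s stall threshold is an assumed value, not from the paper:

```python
import numpy as np

def freeze_metrics(frame_times_s, stall_gap_s=0.2):
    """Freeze Time Ratio and Rate of Freeze Events from display timestamps."""
    gaps = np.diff(frame_times_s)
    freezes = gaps[gaps > stall_gap_s]            # inter-frame gaps counted as stalls
    duration = frame_times_s[-1] - frame_times_s[0]
    return freezes.sum() / duration, len(freezes) / duration  # ratio, events/s
```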

  16. OpenComet: An automated tool for comet assay image analysis

    Directory of Open Access Journals (Sweden)

    Benjamin M. Gyori

    2014-01-01

    Full Text Available Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time.

  17. AGAPE (Automated Genome Analysis PipelinE) for pan-genome analysis of Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Giltae Song

    Full Text Available The characterization and public release of genome sequences from thousands of organisms is expanding the scope for genetic variation studies. However, understanding the phenotypic consequences of genetic variation remains a challenge in eukaryotes due to the complexity of the genotype-phenotype map. One approach to this is the intensive study of model systems for which diverse sources of information can be accumulated and integrated. Saccharomyces cerevisiae is an extensively studied model organism, with well-known protein functions and thoroughly curated phenotype data. To develop and expand the available resources linking genomic variation with function in yeast, we aim to model the pan-genome of S. cerevisiae. To initiate the yeast pan-genome, we newly sequenced or re-sequenced the genomes of 25 strains that are commonly used in the yeast research community using advanced sequencing technology at high quality. We also developed a pipeline for automated pan-genome analysis, which integrates the steps of assembly, annotation, and variation calling. To assign strain-specific functional annotations, we identified genes that were not present in the reference genome. We classified these according to their presence or absence across strains and characterized each group of genes with known functional and phenotypic features. The functional roles of novel genes not found in the reference genome and associated with strains or groups of strains appear to be consistent with anticipated adaptations in specific lineages. As more S. cerevisiae strain genomes are released, our analysis can be used to collate genome data and relate it to lineage-specific patterns of genome evolution. Our new tool set will enhance our understanding of genomic and functional evolution in S. cerevisiae, and will be available to the yeast genetics and molecular biology community.

  18. AGAPE (Automated Genome Analysis PipelinE) for pan-genome analysis of Saccharomyces cerevisiae.

    Science.gov (United States)

    Song, Giltae; Dickins, Benjamin J A; Demeter, Janos; Engel, Stacia; Gallagher, Jennifer; Choe, Kisurb; Dunn, Barbara; Snyder, Michael; Cherry, J Michael

    2015-01-01

    The characterization and public release of genome sequences from thousands of organisms is expanding the scope for genetic variation studies. However, understanding the phenotypic consequences of genetic variation remains a challenge in eukaryotes due to the complexity of the genotype-phenotype map. One approach to this is the intensive study of model systems for which diverse sources of information can be accumulated and integrated. Saccharomyces cerevisiae is an extensively studied model organism, with well-known protein functions and thoroughly curated phenotype data. To develop and expand the available resources linking genomic variation with function in yeast, we aim to model the pan-genome of S. cerevisiae. To initiate the yeast pan-genome, we newly sequenced or re-sequenced the genomes of 25 strains that are commonly used in the yeast research community using advanced sequencing technology at high quality. We also developed a pipeline for automated pan-genome analysis, which integrates the steps of assembly, annotation, and variation calling. To assign strain-specific functional annotations, we identified genes that were not present in the reference genome. We classified these according to their presence or absence across strains and characterized each group of genes with known functional and phenotypic features. The functional roles of novel genes not found in the reference genome and associated with strains or groups of strains appear to be consistent with anticipated adaptations in specific lineages. As more S. cerevisiae strain genomes are released, our analysis can be used to collate genome data and relate it to lineage-specific patterns of genome evolution. Our new tool set will enhance our understanding of genomic and functional evolution in S. cerevisiae, and will be available to the yeast genetics and molecular biology community.

  19. Analysis of Complexity Evolution Management and Human Performance Issues in Commercial Aircraft Automation Systems

    Science.gov (United States)

    Vakil, Sanjay S.; Hansman, R. John

    2000-01-01

    Autoflight systems in the current generation of aircraft have been implicated in several recent incidents and accidents. A contributory aspect to these incidents may be the manner in which aircraft transition between differing behaviours or 'modes.' The current state of aircraft automation was investigated and the incremental development of the autoflight system was tracked through a set of aircraft to gain insight into how these systems developed. This process appears to have resulted in a system without a consistent global representation. In order to evaluate and examine autoflight systems, a 'Hybrid Automation Representation' (HAR) was developed. This representation was used to examine several specific problems known to exist in aircraft systems. Cyclomatic complexity is an analysis tool from computer science which counts the number of linearly independent paths through a program graph. This approach was extended to examine autoflight mode transitions modelled with the HAR. A survey was conducted of pilots to identify those autoflight mode transitions which airline pilots find difficult. The transitions identified in this survey were analyzed using cyclomatic complexity to gain insight into the apparent complexity of the autoflight system from the perspective of the pilot. Mode transitions which had been identified as complex by pilots were found to have a high cyclomatic complexity. Further examination was made into a set of specific problems identified in aircraft: the lack of a consistent representation of automation, concern regarding appropriate feedback from the automation, and the implications of physical limitations on the autoflight systems. Mode transitions involved in changing to and leveling at a new altitude were identified across multiple aircraft by numerous pilots. Where possible, evaluation and verification of the behaviour of these autoflight mode transitions was investigated via aircraft-specific high fidelity simulators. Three solution
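    Cyclomatic complexity of a transition graph is M = E − N + 2P, with E edges, N nodes and P connected components. A small sketch on a hypothetical altitude-capture mode graph (the mode names are illustrative, not taken from the survey):

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe complexity M = E - N + 2P; P counted via union-find."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in edges:
        parent[find(a)] = find(b)
    p = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + 2 * p

modes = ["VS", "FLCH", "ALT_CAP", "ALT_HOLD"]          # hypothetical modes
transitions = [("VS", "ALT_CAP"), ("FLCH", "ALT_CAP"),
               ("ALT_CAP", "ALT_HOLD"), ("ALT_HOLD", "VS"), ("ALT_HOLD", "FLCH")]
print(cyclomatic_complexity(transitions, modes))        # 5 - 4 + 2*1 = 3
```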

  20. Reproducibility of In Vivo Corneal Confocal Microscopy Using an Automated Analysis Program for Detection of Diabetic Sensorimotor Polyneuropathy.

    Directory of Open Access Journals (Sweden)

    Ilia Ostrovski

    Full Text Available In vivo Corneal Confocal Microscopy (IVCCM) is a validated, non-invasive test for diabetic sensorimotor polyneuropathy (DSP) detection, but its utility is limited by the image analysis time and expertise required. We aimed to determine the inter- and intra-observer reproducibility of a novel automated analysis program compared to manual analysis. In a cross-sectional diagnostic study, 20 non-diabetes controls (mean age 41.4±17.3 y, HbA1c 5.5±0.4%) and 26 participants with type 1 diabetes (42.8±16.9 y, 8.0±1.9%) underwent two separate IVCCM examinations by one observer and a third by an independent observer. Along with nerve density and branch density, corneal nerve fibre length (CNFL) was obtained by manual analysis (CNFLManual), a protocol in which images were manually selected for automated analysis (CNFLSemi-Automated), and one in which selection and analysis were performed electronically (CNFLFully-Automated). Reproducibility of each protocol was determined using intraclass correlation coefficients (ICC) and, as a secondary objective, the method of Bland and Altman was used to explore agreement between protocols. Mean CNFLManual was 16.7±4.0 and 13.9±4.2 mm/mm2 for non-diabetes controls and diabetes participants, while CNFLSemi-Automated was 10.2±3.3 and 8.6±3.0 mm/mm2 and CNFLFully-Automated was 12.5±2.8 and 10.9±2.9 mm/mm2. Inter-observer ICC and 95% confidence intervals (95%CI) were 0.73 (0.56, 0.84), 0.75 (0.59, 0.85), and 0.78 (0.63, 0.87), respectively (p = NS for all comparisons). Intra-observer ICC and 95%CI were 0.72 (0.55, 0.83), 0.74 (0.57, 0.85), and 0.84 (0.73, 0.91), respectively (p<0.05 for CNFLFully-Automated compared to the others). The other IVCCM parameters had substantially lower ICC compared to those for CNFL. CNFLSemi-Automated and CNFLFully-Automated underestimated CNFLManual by a mean and 95%CI of 35.1 (-4.5, 67.5)% and 21.0 (-21.6, 46.1)%, respectively. Despite an apparent measurement (underestimation) bias in comparison to the manual strategy of image
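    The Bland-Altman comparison reported above reduces to the bias (mean paired difference) and its 95% limits of agreement. A minimal sketch with hypothetical CNFL values:

```python
import numpy as np

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two measurement protocols."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # 1.96 SD of the paired differences
    return bias, (bias - half_width, bias + half_width)

# CNFL (mm/mm2) by manual vs fully automated analysis (hypothetical values)
bias, limits = bland_altman([16.1, 14.8, 17.3, 12.9], [12.4, 11.0, 13.6, 10.2])
```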

  1. Bullet Retarding Forces in Ballistic Gelatin by Analysis of High Speed Video

    CERN Document Server

    Gaylord, Steven; Courtney, Michael; Courtney, Amy

    2013-01-01

    Though three distinct wounding mechanisms (permanent cavity, temporary cavity, and ballistic pressure wave) are described in the wound ballistics literature, they all have their physical origin in the retarding force between bullet and tissue as the bullet penetrates. If the bullet path is the same, larger retarding forces produce larger wounding effects and a greater probability of rapid incapacitation. By Newton's third law, the force of the bullet on the tissue is equal in magnitude and opposite in direction to the force of the tissue on the bullet. For bullets penetrating with constant mass, the retarding force on the bullet can be determined by frame by frame analysis of high speed video of the bullet penetrating a suitable tissue simulant such as calibrated 10% ballistic gelatin. Here the technique is demonstrated with 9mm NATO bullets, 32 cm long blocks of gelatin, and a high speed video camera operating at 20,000 frames per second. It is found that different 9mm NATO bullets have a wide variety of pot...
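    Given digitized positions along the shot line, the retarding force follows directly from Newton's second law via finite differences. A minimal sketch; the position samples below are illustrative, not measured data:

```python
import numpy as np

def retarding_force(positions_m, fps, mass_kg):
    """Per-frame retarding force on a constant-mass bullet: F = m*a.

    positions_m: penetration depth per video frame; fps: camera frame rate."""
    dt = 1.0 / fps
    velocity = np.gradient(positions_m, dt)     # m/s
    acceleration = np.gradient(velocity, dt)    # m/s^2, negative while slowing
    return -mass_kg * acceleration              # positive retarding force, N

# e.g. an 8.0 g bullet filmed at 20,000 frames per second (illustrative samples)
forces = retarding_force(np.array([0.0, 0.018, 0.033, 0.045, 0.055]), 20000.0, 0.008)
```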

  2. Exposure to violent video games and aggression in German adolescents: a longitudinal analysis.

    Science.gov (United States)

    Möller, Ingrid; Krahé, Barbara

    2009-01-01

    The relationship between exposure to violent electronic games and aggressive cognitions and behavior was examined in a longitudinal study. A total of 295 German adolescents completed the measures of violent video game usage, endorsement of aggressive norms, hostile attribution bias, and physical as well as indirect/relational aggression cross-sectionally, and a subsample of N=143 was measured again 30 months later. Cross-sectional results at T1 showed a direct relationship between violent game usage and aggressive norms, and an indirect link to hostile attribution bias through aggressive norms. In combination, exposure to game violence, normative beliefs, and hostile attribution bias predicted physical and indirect/relational aggression. Longitudinal analyses using path analysis showed that violence exposure at T1 predicted physical (but not indirect/relational) aggression 30 months later, whereas aggression at T1 was unrelated to later video game use. Exposure to violent games at T1 influenced physical (but not indirect/relational) aggression at T2 via an increase of aggressive norms and hostile attribution bias. The findings are discussed in relation to social-cognitive explanations of long-term effects of media violence on aggression.

  3. Automated motion imagery exploitation for surveillance and reconnaissance

    Science.gov (United States)

    Se, Stephen; Laliberte, France; Kotamraju, Vinay; Dutkiewicz, Melanie

    2012-06-01

    Airborne surveillance and reconnaissance are essential for many military missions. Such capabilities are critical for troop protection, situational awareness, mission planning and others, such as post-operation analysis / damage assessment. Motion imagery gathered from both manned and unmanned platforms provides surveillance and reconnaissance information that can be used for pre- and post-operation analysis, but these sensors can gather large amounts of video data. It is extremely labour-intensive for operators to analyse hours of collected data without the aid of automated tools. At MDA Systems Ltd. (MDA), we have previously developed a suite of automated video exploitation tools that can process airborne video, including mosaicking, change detection and 3D reconstruction, within a GIS framework. The mosaicking tool produces a geo-referenced 2D map from the sequence of video frames. The change detection tool identifies differences between two repeat-pass videos taken of the same terrain. The 3D reconstruction tool creates calibrated geo-referenced photo-realistic 3D models. The key objectives of the on-going project are to improve the robustness, accuracy and speed of these tools, and make them more user-friendly to operational users. Robustness and accuracy are essential to provide actionable intelligence, surveillance and reconnaissance information. Speed is important to reduce operator time on data analysis. We are porting some processor-intensive algorithms to run on a Graphics Processing Unit (GPU) in order to improve throughput. Many aspects of video processing are highly parallel and well-suited for optimization on GPUs, which are now commonly available on computers. Moreover, we are extending the tools to handle video data from various airborne platforms and developing the interface to the Coalition Shared Database (CSD). The CSD server enables the dissemination and storage of data from different sensors among NATO countries. The CSD interface allows
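    The mosaicking step registers each incoming frame to the growing geo-referenced map. As a generic sketch of one such registration (not MDA's implementation), feature matching plus RANSAC homography estimation with OpenCV:

```python
import cv2
import numpy as np

def register_frame(mosaic_gray, frame_gray):
    """Homography mapping a new frame into mosaic coordinates (ORB + RANSAC)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(mosaic_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H  # then warp with cv2.warpPerspective(frame, H, mosaic_size)
```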

  4. Automated Production of Movies on a Cluster of Computers

    Science.gov (United States)

    Nail, Jasper; Le, Duong; Nail, William L.; Nail, William

    2008-01-01

    A method of accelerating and facilitating production of video and film motion-picture products, and software and generic designs of computer hardware to implement the method, are undergoing development. The method provides for automation of most of the tedious and repetitive tasks involved in editing and otherwise processing raw digitized imagery into final motion-picture products. The method was conceived to satisfy requirements, in industrial and scientific testing, for rapid processing of multiple streams of simultaneously captured raw video imagery into documentation in the form of edited video imagery and video derived data products for technical review and analysis. In the production of such video technical documentation, unlike in production of motion-picture products for entertainment, (1) it is often necessary to produce multiple video derived data products, (2) there are usually no second chances to repeat acquisition of raw imagery, (3) it is often desired to produce final products within minutes rather than hours, days, or months, and (4) consistency and quality, rather than aesthetics, are the primary criteria for judging the products. In the present method, the workflow has both serial and parallel aspects: processing can begin before all the raw imagery has been acquired, each video stream can be subjected to different stages of processing simultaneously on different computers that may be grouped into one or more cluster(s), and the final product may consist of multiple video streams. Results of processing on different computers are shared, so that workers can collaborate effectively.

  5. Automated kidney morphology measurements from ultrasound images using texture and edge analysis

    Science.gov (United States)

    Ravishankar, Hariharan; Annangi, Pavan; Washburn, Michael; Lanning, Justin

    2016-04-01

    In a typical ultrasound scan, a sonographer measures kidney morphology to assess renal abnormalities. Kidney morphology can also help to discriminate between chronic and acute kidney failure. The caliper placements and volume measurements are often time consuming, and an automated solution will help to improve accuracy, repeatability and throughput. In this work, we developed an automated kidney morphology measurement solution from long-axis ultrasound scans. Automated kidney segmentation is challenging due to wide variability in kidney shape and size, weak contrast of the kidney boundaries, and the presence of strong edges such as the diaphragm and fat layers. To address these challenges and accurately localize and detect kidney regions, we present a two-step algorithm that makes use of edge and texture information in combination with anatomical cues. First, we use an edge analysis technique to localize the kidney region by matching the edge map with predefined templates. To accurately estimate the kidney morphology, we use textural information in a machine learning framework based on Haar features and a gradient boosting classifier. We have tested the algorithm on 45 unseen cases; performance against ground truth is measured by computing the Dice overlap and the % error in the major and minor axes of the kidney. The algorithm shows successful performance on 80% of cases.

  6. An architecture and model for cognitive engineering simulation analysis - Application to advanced aviation automation

    Science.gov (United States)

    Corker, Kevin M.; Smith, Barry R.

    1993-01-01

    The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.

  7. Automated High-Throughput Permethylation for Glycosylation Analysis of Biologics Using MALDI-TOF-MS.

    Science.gov (United States)

    Shubhakar, Archana; Kozak, Radoslaw P; Reiding, Karli R; Royle, Louise; Spencer, Daniel I R; Fernandes, Daryl L; Wuhrer, Manfred

    2016-09-01

    Monitoring glycoprotein therapeutics for changes in glycosylation throughout the drug's life cycle is vital, as glycans significantly modulate the stability, biological activity, serum half-life, safety, and immunogenicity. Biopharma companies are increasingly adopting Quality by Design (QbD) frameworks for measuring, optimizing, and controlling drug glycosylation. Permethylation of glycans prior to analysis by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) is a valuable tool for glycan characterization and for screening of large numbers of samples in QbD drug realization. However, the existing protocols for manual permethylation and liquid-liquid extraction (LLE) steps are labor intensive and are thus not practical for high-throughput (HT) studies. Here we present a glycan permethylation protocol, based on 96-well microplates, that has been developed into a kit suitable for HT work. The workflow is largely automated using a liquid handling robot and includes N-glycan release, enrichment of N-glycans, permethylation, and LLE. The kit has been validated according to industry analytical performance guidelines and applied to characterize biopharmaceutical samples, including IgG4 monoclonal antibodies (mAbs) and recombinant human erythropoietin (rhEPO). The HT permethylation enabled glycan characterization and relative quantitation with minimal side reactions: the MALDI-TOF-MS profiles obtained were in good agreement with hydrophilic liquid interaction chromatography (HILIC) and ultrahigh performance liquid chromatography (UHPLC) data. Automated permethylation and extraction of 96 glycan samples was achieved in less than 5 h, and automated data acquisition on MALDI-TOF-MS took on average less than 1 min per sample. This automated and HT glycan preparation and permethylation proved to be convenient, fast, and reliable and can be applied for drug glycan profiling and clinical glycan biomarker studies. PMID:27479043

  8. Bulk velocity measurements by video analysis of dye tracer in a macro-rough channel

    Science.gov (United States)

    Ghilardi, T.; Franca, M. J.; Schleiss, A. J.

    2014-03-01

    Steep mountain rivers have hydraulic and morphodynamic characteristics that hinder velocity measurements. The high spatial variability of hydraulic parameters, such as water depth (WD), river width and flow velocity, makes the choice of a representative cross-section for detailed velocity measurement challenging. Additionally, sediment transport and rapidly changing bed morphology exclude the use of standard, often intrusive, velocity measurement techniques. The limited technical choices are further reduced in the presence of macro-roughness elements, such as large, relatively immobile boulders. Tracer tracking techniques are among the few reliable methods that can be used under these conditions to evaluate the mean flow velocity. However, most tracer tracking techniques calculate bulk flow velocities between two or more fixed cross-sections. In the presence of intense sediment transport resulting in important temporal variability of the bed morphology, dead water zones may appear in the few selected measurement sections. Thus a technique based on the analysis of an entire channel reach was needed in this study. A dye tracer measurement technique in which a single camcorder visualizes a long flume reach is described and developed, allowing us to overcome the problem of dead water zones. To validate this video analysis technique, velocity measurements were carried out on a laboratory flume simulating a torrent, with a relatively gentle slope of 1.97% and without sediment transport, using several commonly used velocity measurement instruments. In the absence of boulders, salt injections, WD and ultrasonic velocity profiler measurements were carried out along with the dye injection technique. When boulders were present, the dye tracer technique was validated only by comparison with the salt tracer. Several video analysis techniques used to infer velocities were developed and compared, showing that dye tracking is a valid technique for bulk velocity measurements.
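
    To make the bulk-velocity computation concrete, the Python sketch below is a simplified illustration, not the authors' method: it assumes the dye appears as bright pixels after background subtraction (threshold and helper names are ours), tracks the downstream edge of the dye cloud, and takes the slope of position versus time as the reach-averaged velocity:

        import numpy as np

        def dye_front_positions(frames, m_per_px, threshold=30):
            """Downstream position of the dye front in each frame, from simple
            background subtraction against the first (dye-free) frame."""
            background = frames[0].astype(np.int32)
            fronts = []
            for f in frames[1:]:
                diff = np.abs(f.astype(np.int32) - background)
                cols = np.where((diff > threshold).any(axis=0))[0]  # columns with dye
                fronts.append(cols.max() * m_per_px if cols.size else np.nan)
            return np.array(fronts)

        def bulk_velocity(fronts, fps):
            """Reach-averaged velocity as the least-squares slope of front
            position versus time."""
            t = np.arange(1, len(fronts) + 1) / fps
            ok = ~np.isnan(fronts)
            slope, _ = np.polyfit(t[ok], fronts[ok], 1)
            return slope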

  9. Bulk velocity measurements by video analysis of dye tracer in a macro-rough channel

    International Nuclear Information System (INIS)

    Steep mountain rivers have hydraulic and morphodynamic characteristics that hinder velocity measurements. The high spatial variability of hydraulic parameters, such as water depth (WD), river width and flow velocity, makes the choice of a representative cross-section for detailed velocity measurement challenging. Additionally, sediment transport and rapidly changing bed morphology exclude the use of standard, often intrusive, velocity measurement techniques. The limited technical choices are further reduced in the presence of macro-roughness elements, such as large, relatively immobile boulders. Tracer tracking techniques are among the few reliable methods that can be used under these conditions to evaluate the mean flow velocity. However, most tracer tracking techniques calculate bulk flow velocities between two or more fixed cross-sections. In the presence of intense sediment transport resulting in important temporal variability of the bed morphology, dead water zones may appear in the few selected measurement sections. Thus a technique based on the analysis of an entire channel reach was needed in this study. A dye tracer measurement technique in which a single camcorder visualizes a long flume reach is described and developed, allowing us to overcome the problem of dead water zones. To validate this video analysis technique, velocity measurements were carried out on a laboratory flume simulating a torrent, with a relatively gentle slope of 1.97% and without sediment transport, using several commonly used velocity measurement instruments. In the absence of boulders, salt injections, WD and ultrasonic velocity profiler measurements were carried out along with the dye injection technique. When boulders were present, the dye tracer technique was validated only by comparison with the salt tracer. Several video analysis techniques used to infer velocities were developed and compared, showing that dye tracking is a valid technique for bulk velocity measurements.

  10. A Psycholinguistic Model for Simultaneous Translation, and Proficiency Assessment by Automated Acoustic Analysis of Discourse.

    Science.gov (United States)

    Yaghi, Hussein M.

    Two separate but related issues are addressed: how simultaneous translation (ST) works on a cognitive level, and how such translation can be objectively assessed. Both of these issues are discussed in the light of qualitative and quantitative analyses of a large corpus of recordings of ST and shadowing. The proposed ST model utilises knowledge derived from a discourse analysis of the data, many accepted facts in the psychology tradition, and evidence from controlled experiments that are carried out here. This model has three advantages: (i) it is based on analyses of extended spontaneous speech rather than word-, syllable-, or clause-bound stimuli; (ii) it draws equally on linguistic and psychological knowledge; and (iii) it adopts a non-traditional view of language called 'the linguistic construction of reality'. The discourse-based knowledge is also used to develop three computerised systems for the assessment of simultaneous translation: one is a semi-automated system that treats the content of the translation, and two are fully automated, one of which is based on the time structure of the acoustic signals whilst the other is based on their cross-correlation. For each system, several parameters of performance are identified, and they are correlated with assessments rendered by the traditional, subjective, qualitative method. Using signal processing techniques, the acoustic analysis of discourse leads to the conclusion that quality in simultaneous translation can be assessed quantitatively with varying degrees of automation. It identifies as measures of performance (i) three content-based standards; (ii) four time management parameters that reflect the influence of the source on the target language time structure; and (iii) two types of acoustical signal coherence. Proficiency in ST is shown to be directly related to coherence and speech rate but inversely related to omission and delay. High proficiency is associated with a high degree of simultaneity.

  11. Semantic Concept Mining Based on Hierarchical Event Detection for Soccer Video Indexing

    Directory of Open Access Journals (Sweden)

    Maheshkumar H. Kolekar

    2009-10-01

    Full Text Available In this paper, we present a novel automated indexing and semantic labeling method for broadcast soccer video sequences. The proposed method automatically extracts salient events from the video and classifies each event sequence into a concept by sequential association mining. The paper makes three new contributions to multimodal sports video indexing and summarization. First, we propose a novel hierarchical framework for soccer (football) video event sequence detection and classification. Unlike most existing video classification approaches, which focus on shot detection followed by shot clustering for classification, the proposed scheme performs a top-down video scene classification that avoids shot clustering. This improves the classification accuracy and also maintains the temporal order of shots. Second, we compute the association for the events of each excitement clip using the Apriori mining algorithm. We propose a novel sequential association distance to classify the association of the excitement clip into semantic concepts. For soccer video, we have considered goal scored by team-A, goal scored by team-B, goal saved by team-A and goal saved by team-B as semantic concepts. Third, the extracted excitement clips with semantic concept labels help us to summarize many hours of video into a collection of soccer highlights such as goals, saves, corner kicks, etc. We show promising results, with correctly indexed soccer scenes, enabling structural and temporal analysis such as video retrieval, highlight extraction, and video skimming.

  12. Acoustic Neuroma Educational Video

    Medline Plus


  13. A method for the automated detection of phishing websites through both site characteristics and image analysis

    Science.gov (United States)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screenshot of the rendered page image, compute a hash of the image, and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
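
    The visual-comparison step can be illustrated with a conventional "average hash": shrink the screenshot, binarize against the mean intensity, and compare bit strings by Hamming distance. The paper does not specify its exact hash, so the Python sketch below shows one common variant, not the authors' code:

        import numpy as np
        from PIL import Image

        def average_hash(path, size=8):
            """64-bit 'average hash' of a page screenshot: downsample, then set
            one bit per pixel above the mean intensity."""
            img = Image.open(path).convert("L").resize((size, size))
            px = np.asarray(img, dtype=np.float32)
            return (px > px.mean()).flatten()

        def hamming(h1, h2):
            """Number of differing bits; small distances suggest near-identical pages."""
            return int(np.count_nonzero(h1 != h2))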

  14. Development and Applications of a Prototypic SCALE Control Module for Automated Burnup Credit Analysis

    International Nuclear Information System (INIS)

    Consideration of the depletion phenomena and isotopic uncertainties in burnup-credit criticality analysis places an increasing reliance on computational tools and significantly increases the overall complexity of the calculations. An automated analysis and data management capability is essential for practical implementation of large-scale burnup credit analyses that can be performed in a reasonable amount of time. STARBUCS is a new prototypic analysis sequence being developed for the SCALE code system to perform automated criticality calculations of spent fuel systems employing burnup credit. STARBUCS is designed to help analyze the dominant burnup credit phenomena including spatial burnup gradients and isotopic uncertainties. A search capability also allows STARBUCS to iterate to determine the spent fuel parameters (e.g., enrichment and burnup combinations) that result in a desired keff for a storage configuration. Although STARBUCS was developed to address the analysis needs for spent fuel transport and storage systems, it provides sufficient flexibility to allow virtually any configuration of spent fuel to be analyzed, such as storage pools and reprocessing operations. STARBUCS has been used extensively at Oak Ridge National Laboratory (ORNL) to study burnup credit phenomena in support of the NRC Research program.
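
    The search capability described above amounts to a one-dimensional root find on keff. A minimal Python sketch, assuming keff decreases monotonically with burnup over the bracket; `compute_keff` is a hypothetical wrapper around a criticality solver, not a STARBUCS API:

        def find_burnup_for_keff(k_target, burnup_lo, burnup_hi, compute_keff, tol=1e-4):
            """Bisection search for the burnup (e.g., GWd/MTU) giving a desired keff.
            Assumes compute_keff(burnup_lo) > k_target > compute_keff(burnup_hi)."""
            for _ in range(60):
                mid = 0.5 * (burnup_lo + burnup_hi)
                k = compute_keff(mid)
                if abs(k - k_target) < tol:
                    return mid
                if k > k_target:      # still too reactive: need more burnup
                    burnup_lo = mid
                else:                 # over-depleted: back off
                    burnup_hi = mid
            return 0.5 * (burnup_lo + burnup_hi)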

  15. Integrating automated structured analysis and design with Ada programming support environments

    Science.gov (United States)

    Hecht, Alan; Simmons, Andy

    1986-01-01

    Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification, derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It also promotes the creation of reusable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors; however, they cannot detect errors in specifications or poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software system development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.

  16. Gender (In)equality in Internet Pornography: A Content Analysis of Popular Pornographic Internet Videos.

    Science.gov (United States)

    Klaassen, Marleen J E; Peter, Jochen

    2015-01-01

    Although Internet pornography is widely consumed and researchers have started to investigate its effects, we still know little about its content. This has resulted in contrasting claims about whether Internet pornography depicts gender (in)equality and whether this depiction differs between amateur and professional pornography. We conducted a content analysis of three main dimensions of gender (in)equality (i.e., objectification, power, and violence) in 400 popular pornographic Internet videos from the most visited pornographic Web sites. Objectification was depicted more often for women through instrumentality, but men were more frequently objectified through dehumanization. Regarding power, men and women did not differ in social or professional status, but men were more often shown as dominant and women as submissive during sexual activities. Except for spanking and gagging, violence occurred rather infrequently. Nonconsensual sex was also relatively rare. Overall, amateur pornography contained more gender inequality at the expense of women than professional pornography did.

  17. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of the supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  18. Gender (In)equality in Internet Pornography: A Content Analysis of Popular Pornographic Internet Videos.

    Science.gov (United States)

    Klaassen, Marleen J E; Peter, Jochen

    2015-01-01

    Although Internet pornography is widely consumed and researchers have started to investigate its effects, we still know little about its content. This has resulted in contrasting claims about whether Internet pornography depicts gender (in)equality and whether this depiction differs between amateur and professional pornography. We conducted a content analysis of three main dimensions of gender (in)equality (i.e., objectification, power, and violence) in 400 popular pornographic Internet videos from the most visited pornographic Web sites. Objectification was depicted more often for women through instrumentality, but men were more frequently objectified through dehumanization. Regarding power, men and women did not differ in social or professional status, but men were more often shown as dominant and women as submissive during sexual activities. Except for spanking and gagging, violence occurred rather infrequently. Nonconsensual sex was also relatively rare. Overall, amateur pornography contained more gender inequality at the expense of women than professional pornography did. PMID:25420868

  19. Open-Ended Interaction in Cooperative Prototyping: A Video-based Analysis

    DEFF Research Database (Denmark)

    Bødker, Susanne; Grønbæk, Kaj; Trigg, Randal

    1991-01-01

    Cooperative prototyping can be characterized as the use and development of prototypes as catalysts during discussions between designers and potential users – the overall intention being one of mutual learning. On the one hand, the designers learn more about the work practices of the users in ways that are tied concretely to some current version of the prototype. On the other hand, the users learn more about the potential for change in their work practice, whether computer-based or otherwise. This paper presents the results of a field study of the cooperative prototyping process. The study is based on … and how cooperative prototyping can be successful with users who are reluctant to “play in the future.” The paper also discusses issues in applying video analysis to system design.

  20. A feasibility assessment of automated FISH image and signal analysis to assist cervical cancer detection

    Science.gov (United States)

    Wang, Xingwei; Li, Yuhua; Liu, Hong; Li, Shibo; Zhang, Roy R.; Zheng, Bin

    2012-02-01

    Fluorescence in situ hybridization (FISH) technology provides a promising molecular imaging tool to detect cervical cancer. Since manual FISH analysis is difficult, time-consuming, and inconsistent, automated FISH image scanning systems have been developed. Due to the limited focal depth of the scanned microscopic images, a FISH-probed specimen needs to be scanned in multiple layers, which generates huge volumes of image data. To improve the diagnostic efficiency of automated FISH image analysis, we developed a computer-aided detection (CAD) scheme. In this experiment, four Pap-smear specimen slides were scanned by a dual-detector fluorescence image scanning system that acquired two spectrum images simultaneously, representing images of interphase cells and of FISH-probed chromosome X. During image scanning, once a cell signal was detected, the system captured nine image slices by automatically adjusting the optical focus. Based on a sharpness index and maximum intensity measurement, cells and FISH signals distributed in 3-D space were projected into a 2-D confocal image. The CAD scheme was applied to each confocal image to detect analyzable interphase cells using an adaptive multiple-threshold algorithm, and to detect FISH-probed signals using a top-hat transform. The ratio of abnormal cells was calculated to detect positive cases. In the four scanned specimen slides, CAD generated 1676 confocal images that depicted analyzable cells. FISH-probed signals were independently detected by our CAD algorithm and by an observer. The Kappa coefficients for agreement between CAD and observer ranged from 0.69 to 1.0 in detecting/counting FISH signal spots. The study demonstrated the feasibility of applying automated FISH image and signal analysis to assist cytogeneticists in detecting cervical cancers.
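
    The FISH-signal step can be illustrated with a white top-hat transform, which lifts small bright spots off the slowly varying cell background before thresholding and labeling. A minimal Python sketch of that general technique (the footprint size and threshold factor are illustrative, not the study's values):

        import numpy as np
        from scipy import ndimage as ndi

        def detect_fish_spots(confocal, spot_size=7, k=4.0):
            """White top-hat to remove the smooth cell background, then threshold
            and label the remaining small bright FISH spots."""
            img = confocal.astype(np.float32)
            footprint = np.ones((spot_size, spot_size), dtype=bool)
            tophat = ndi.white_tophat(img, footprint=footprint)
            labels, n_spots = ndi.label(tophat > tophat.mean() + k * tophat.std())
            return labels, n_spots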

  1. Performance characterization of image and video analysis systems at Siemens Corporate Research

    Science.gov (United States)

    Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael

    2000-06-01

    There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating algorithmic improvements into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and of how those limits affect total system performance. The research community has recognized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have focused on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper covers performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.

  2. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  3. Bacterial growth on surfaces: Automated image analysis for quantification of growth rate-related parameters

    DEFF Research Database (Denmark)

    Møller, S.; Sternberg, Claus; Poulsen, L. K.;

    1995-01-01

    …species-specific hybridizations with fluorescence-labelled ribosomal probes to estimate the single-cell concentration of RNA. By automated analysis of digitized images of stained cells, we determined four independent growth rate-related parameters: cellular RNA and DNA contents, cell volume, and the frequency of dividing cells in a cell population. These parameters were used to compare the physiological states of liquid-suspended and surface-growing Pseudomonas putida KT2442 in chemostat cultures. The major finding is that the correlation between substrate availability and cellular growth rate found …

  4. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    Full Text Available The World Wide Web has become an enormously large repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern WEB 2.0 web pages, where the HTML tree is often dynamically modified by various JavaScript codes, new data are added by asynchronous requests to the web server, and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. (Article in Lithuanian.)

  5. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    Science.gov (United States)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J,

    2004-01-01

    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high-oxygen-content media into the surrounding media, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  6. Software Tool for Automated Failure Modes and Effects Analysis (FMEA) of Hydraulic Systems

    DEFF Research Database (Denmark)

    Stecki, J. S.; Conrad, Finn; Oh, B.

    2002-01-01

    Offshore, marine, aircraft and other complex engineering systems operate in harsh environmental and operational conditions and must meet stringent requirements of reliability, safety and maintainability. To reduce the high costs of developing new systems in these fields, improved design management techniques and a vast array of computer-aided techniques are applied during the design and testing stages. The paper presents and discusses the research and development of a software tool for automated failure mode and effects analysis (FMEA) of hydraulic systems. The paper explains the underlying …

  7. A cross-sectional analysis of video games and attention deficit hyperactivity disorder symptoms in adolescents

    OpenAIRE

    Rabinowitz Terry; Chan Philip A

    2006-01-01

    Background: Excessive use of the Internet has been associated with attention deficit hyperactivity disorder (ADHD), but the relationship between video games and ADHD symptoms in adolescents is unknown. Methods: A survey of adolescents and parents (n = 72 adolescents, 72 parents) was performed assessing daily time spent on the Internet, television, console video games, and Internet video games, and their association with academic and social functioning. Subjects were high school students...

  8. What makes a blockbuster video game? An empirical analysis of US sales data

    OpenAIRE

    Cox, Joe

    2014-01-01

    This study uses a unique data set of individual video game titles to estimate the effect of an exhaustive set of observable characteristics on the likelihood of a video game becoming a blockbuster title. Due to the long-tailed distribution of the sales data, both ordinary least squares and logistic regression models are estimated. The results consistently show that blockbuster video games are more likely to be released by one of the major publishers for popular hardware platforms. Results al...

  9. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Full text: Video surveillance is a crucial component of safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities, such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low false-alarm rate and reduced archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating transmission and storage. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools such as automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, and gesture recognition makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional-vision-capable, all-weather, day/night surveillance a reality.

  10. VAPI: low-cost, rapid automated visual inspection system for Petri plate analysis

    Science.gov (United States)

    Chatburn, L. T.; Kirkup, B. C.; Polz, M. F.

    2007-09-01

    Most culture-based microbiology tasks utilize a petri plate during processing, but rarely do scientists capture the full information available from the plate. In particular, visual analysis of plates is an under-developed, rich source of data that can be rapid and non-invasive. However, collecting this data has been limited by the difficulty of standardizing and quantifying human observations, by scientist fatigue, and by the cost of automating the process. The availability of specialized counting equipment and intelligent camera systems has not changed this: such systems are prohibitively expensive for many laboratories, only process a limited number of plate types, are often destructive to the sample, and have limited accuracy. This paper describes an automated visual inspection solution, VAPI, that employs inexpensive consumer computing hardware and digital cameras along with custom cross-platform open-source software written in C++, combining Trolltech's Qt GUI toolkit with Intel's OpenCV computer vision library. The system is more accurate than common commercial systems costing many times as much, while being flexible in use and offering comparable responsiveness. VAPI not only counts colonies but also sorts and enumerates colonies by morphology, tracks colony growth by time-series analysis, and provides other analytical resources. Output to XML files or directly to a database provides data that can be easily maintained and manipulated by the end user, offering ready access for system enhancement, interaction with other software systems, and rapid development of advanced analysis applications.
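
    A bare-bones version of the counting stage, with Python/OpenCV standing in for the Qt/OpenCV stack described above, is shown below. This is a generic sketch, not VAPI's pipeline, and it assumes bright colonies on a darker plate:

        import cv2

        def count_colonies(plate_bgr, min_area=15):
            """Otsu-threshold a plate photo and count connected components of
            colony size (label 0 is the background)."""
            gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
            blur = cv2.GaussianBlur(gray, (5, 5), 0)
            _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            n, _, stats, _ = cv2.connectedComponentsWithStats(bw)
            areas = stats[1:, cv2.CC_STAT_AREA]
            return int((areas >= min_area).sum())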

  11. Automated static image analysis as a novel tool in describing the physical properties of dietary fiber

    Directory of Open Access Journals (Sweden)

    Marcin Andrzej KUREK

    2015-01-01

    Full Text Available The growing interest in the use of dietary fiber in food has created the need for precise tools to describe its physical properties. This research examined two dietary fibers, from oats and beets respectively, at various particle sizes. The application of automated static image analysis for describing the hydration properties and particle size distribution of dietary fiber was analyzed. Conventional tests for water holding capacity (WHC) were conducted. The particles were measured at two points: dry and after water soaking. The highest water holding capacity (7.00 g water/g solid) was achieved by the smaller-sized oat fiber; conversely, for beet fiber the water holding capacity was highest (4.20 g water/g solid) in the larger size fraction. There was evidence of water absorption increasing with decreasing particle size for the same fiber source. Very strong correlations were found between particle shape parameters, such as fiber length, straightness and width, and the hydration properties measured conventionally. The regression analysis provided the opportunity to estimate whether the automated static image analysis method could be an efficient tool for describing the hydration properties of dietary fiber. The application of the method was validated using a mathematical model that was verified against conventional WHC measurement results.
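
    The regression step can be sketched as an ordinary least-squares fit of WHC on particle-shape descriptors. The Python sketch below uses purely illustrative placeholder numbers, not data from the study:

        import numpy as np

        # Hypothetical per-fraction shape descriptors (mean fiber length and
        # width in micrometres, straightness) versus measured WHC; the values
        # are illustrative only.
        X = np.array([
            [310.0, 42.0, 0.81],
            [250.0, 38.0, 0.84],
            [190.0, 35.0, 0.86],
            [150.0, 31.0, 0.88],
            [120.0, 28.0, 0.90],
        ])
        whc = np.array([4.2, 4.9, 5.6, 6.3, 7.0])  # g water / g solid

        # Ordinary least-squares fit of WHC on the shape parameters (with intercept).
        A = np.hstack([X, np.ones((len(X), 1))])
        coef, residuals, rank, _ = np.linalg.lstsq(A, whc, rcond=None)
        print("coefficients:", coef)
        print("fitted WHC:", A @ coef)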

  12. StormVideo - digital video in the field of meteorology

    OpenAIRE

    Nybø, Olav; Hartvigsen, Gunnar; Johansen, Dag

    1994-01-01

    Visual observations constitute important input to various meteorological tasks. Previously, these observations were made by human observers and manually reported to the meteorologists. With the employment of video technology, this kind of observation can be automated. This paper presents one approach to visual weather observations, involving software compression and transmission of digital video. The paper presents a modified version of the PVC algorithm and shows that this algorithm i...

  13. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    Science.gov (United States)

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  14. Improving evaluation of the distribution and density of immunostained cells in breast cancer using computerized video image analysis

    International Nuclear Information System (INIS)

    Quantitation of cell density in tissues has proven problematic over the years. The manual microscopic methodology, in which an investigator visually samples multiple areas within slides of tissue sections, has long remained the basic 'standard' for many studies and for routine histopathologic reporting. Nevertheless, novel techniques that may provide a more standardized approach to the quantitation of cells in tissue sections have been made possible by computerized video image analysis methods in recent years. The present study describes a novel, computer-assisted video image analysis method for quantitating immunostained cells within tissue sections, providing continuous graphical data. This technique enables the measurement of both the distribution and the density of cells within tissue sections. Specifically, the study considered immunoperoxidase-stained tumor-infiltrating lymphocytes within breast tumor specimens, using the number of immunostained pixels within tissue sections to determine cellular density and number. Comparison was made between standard manual graded quantitation methods and video image analysis, using the same tissue sections. The study demonstrates that video image techniques and computer analysis can provide continuous data on cell density and number in immunostained tissue sections, which compares favorably with standard visual quantitation methods, and may offer an alternative.
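
    The pixel-counting idea reduces to classifying each pixel as stained or unstained and reporting the stained fraction. A minimal Python sketch follows; the hue/saturation cut-offs for brown immunoperoxidase (DAB) staining are illustrative assumptions, not the study's calibration:

        import numpy as np
        from PIL import Image

        def stained_fraction(path, hue_range=(5, 30), min_sat=60):
            """Fraction of pixels classified as DAB-stained (brownish) in an RGB
            field image; hue is on PIL's 0-255 scale."""
            hsv = np.asarray(Image.open(path).convert("HSV"))
            h = hsv[..., 0].astype(int)
            s = hsv[..., 1].astype(int)
            mask = (h >= hue_range[0]) & (h <= hue_range[1]) & (s >= min_sat)
            return float(mask.mean())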

  15. Exploring the Nonformal Adult Educator in Twenty-First Century Contexts Using Qualitative Video Data Analysis Techniques

    Science.gov (United States)

    Alston, Geleana Drew; Ellis-Hervey, Nina

    2015-01-01

    This study examined how "YouTube" creates a unique, nonformal cyberspace for Black females to vlog about natural hair. Specifically, we utilized qualitative video data analysis techniques to understand how using "YouTube" as a facilitation tool has the ability to collectively capture and maintain an audience of more than a…

  16. Effects of video interaction analysis training on nurse-patient communication in the care of the elderly.

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.M.; Grypdonck, M.H.F.

    2000-01-01

    This paper describes an empirical evaluation of communication skills training for nurses in elderly care. The training programme was based on Video Interaction Analysis and aimed to improve nurses' communication skills such that they pay attention to patients' physical, social and emotional needs and support self care in elderly people.

  17. Effects of Video Interaction Analysis Training on Nurse-Patient Communication in the Care of the Elderly.

    Science.gov (United States)

    Caris-Verhallen, Wilma M. C. M.; Kerkstra, Ada; Bensing, Jozien M.; Grypdonck, Mieke H. F.

    2000-01-01

    Describes an empirical evaluation of training based on Video Interaction Analysis. The training aimed to improve nurses' (N=40) communication skills such that they pay attention to patients' physical, social, and emotional needs and support self care in elderly people. Limitations of this study and topics for further research are discussed.…

  18. Effects of video interaction analysis training on nurse–patient communication in the care of the elderly

    NARCIS (Netherlands)

    Caris-Verhallen, W.M.C.M.; Kerkstra, A.; Bensing, J.; Grypdonck, M.H.F.

    2000-01-01

    This paper describes an empirical evaluation of communication skills training for nurses in elderly care. The training programme was based on Video Interaction Analysis and aimed to improve nurses’ communication skills such that they pay attention to patients’ physical, social and emotional needs and support self care in elderly people.

  19. Development of Students' Conceptual Thinking by Means of Video Analysis and Interactive Simulations at Technical Universities

    Science.gov (United States)

    Hockicko, Peter; Krišták, Luboš; Nemec, Miroslav

    2015-01-01

    Video analysis, using the program Tracker (Open Source Physics), in the educational process introduces a new creative method of teaching physics and makes natural sciences more interesting for students. This way of exploring the laws of nature can amaze students because this illustrative and interactive educational software inspires them to think…

  20. Motmot, an open-source toolkit for realtime video acquisition and analysis

    Directory of Open Access Journals (Sweden)

    Dickinson Michael H

    2009-07-01

    Full Text Available Abstract Background Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Results Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface); (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo); (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat); (4) a pluggable framework for custom analysis of images in realtime; and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Conclusion Motmot enables realtime image processing and display using the Python computer language.

  1. Fully automated quantitative analysis of breast cancer risk in DCE-MR images

    Science.gov (United States)

    Jiang, Luan; Hu, Xiaoxin; Gu, Yajia; Li, Qiang

    2015-03-01

    The amount of fibroglandular tissue (FGT) and background parenchymal enhancement (BPE) in dynamic contrast-enhanced magnetic resonance (DCE-MR) images are two important indices for breast cancer risk assessment in clinical practice. The purpose of this study is to develop and evaluate a fully automated scheme for quantitative analysis of FGT and BPE in DCE-MR images. Our fully automated method consists of three steps, i.e., segmentation of the whole breast, the fibroglandular tissues, and the enhanced fibroglandular tissues. Based on the volume of interest extracted automatically, a dynamic programming method was applied in each 2-D slice of a 3-D MR scan to delineate the chest wall and breast skin line for segmenting the whole breast. This step took advantage of the continuity of the chest wall and breast skin line across adjacent slices. We then used a fuzzy c-means clustering method with automatic selection of the cluster number for segmenting the fibroglandular tissues within the segmented whole-breast area. Finally, a statistical method was used to set a threshold based on the estimated noise level for segmenting the enhanced fibroglandular tissues in the subtraction images of pre- and post-contrast MR scans. Based on the segmented whole breast, fibroglandular tissues, and enhanced fibroglandular tissues, FGT and BPE were computed automatically. Preliminary results of technical evaluation and clinical validation showed that our fully automated scheme could obtain good segmentation of the whole breast, fibroglandular tissues, and enhanced fibroglandular tissues to achieve accurate assessment of FGT and BPE for quantitative analysis of breast cancer risk.
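
    The fibroglandular-tissue step relies on fuzzy c-means clustering. A plain-NumPy Python sketch of the standard algorithm on 1-D intensities is shown below; it is a generic illustration without the paper's automatic selection of the cluster number:

        import numpy as np

        def fuzzy_cmeans(x, c=2, m=2.0, n_iter=100, seed=0):
            """Standard fuzzy c-means on a 1-D array of voxel intensities:
            returns cluster centers and the membership matrix u (c x n)."""
            rng = np.random.default_rng(seed)
            u = rng.random((c, x.size))
            u /= u.sum(axis=0)                       # memberships sum to 1 per voxel
            for _ in range(n_iter):
                um = u ** m
                centers = (um @ x) / um.sum(axis=1)  # weighted cluster means
                d = np.abs(x[None, :] - centers[:, None]) + 1e-9
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=0)
            return centers, u

        # Usage sketch: with c=2 on breast voxel intensities, the cluster with
        # the appropriate center would be taken as fibroglandular tissue.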

  2. Automated, Ultra-Sterile Solid Sample Handling and Analysis on a Chip

    Science.gov (United States)

    Mora, Maria F.; Stockton, Amanda M.; Willis, Peter A.

    2013-01-01

    There are no existing ultra-sterile lab-on-a-chip systems that can accept solid samples and perform complete chemical analyses without human intervention. The proposed solution is to demonstrate completely automated lab-on-a-chip manipulation of powdered solid samples, followed by on-chip liquid extraction and chemical analysis. This technology utilizes a newly invented glass micro-device for solid manipulation, which mates with existing lab-on-a-chip instrumentation. Devices are fabricated in a Class 10 cleanroom at the JPL MicroDevices Lab, and are plasma-cleaned before and after assembly. Solid samples enter the device through a drilled hole in the top. Existing micro-pumping technology is used to transfer milligrams of powdered sample into an extraction chamber where it is mixed with liquids to extract organic material. Subsequent chemical analysis is performed using portable microchip capillary electrophoresis systems (CE). These instruments have been used for ultra-highly sensitive (parts-per-trillion, pptr) analysis of organic compounds including amines, amino acids, aldehydes, ketones, carboxylic acids, and thiols. Fully autonomous amino acid analyses in liquids were demonstrated; however, to date there have been no reports of completely automated analysis of solid samples on chip. This approach utilizes an existing portable instrument that houses optics, high-voltage power supplies, and solenoids for fully autonomous microfluidic sample processing and CE analysis with laser-induced fluorescence (LIF) detection. Furthermore, the entire system can be sterilized and placed in a cleanroom environment for analyzing samples returned from extraterrestrial targets, if desired. This is an entirely new capability never demonstrated before. The ability to manipulate solid samples, coupled with lab-on-a-chip analysis technology, will enable ultraclean and ultrasensitive end-to-end analysis of samples that is orders of magnitude more sensitive than the ppb goal given

  3. Good clean fun? A content analysis of profanity in video games and its prevalence across game systems and ratings.

    Science.gov (United States)

    Ivory, James D; Williams, Dmitri; Martins, Nicole; Consalvo, Mia

    2009-08-01

    Although violent video game content and its effects have been examined extensively by empirical research, verbal aggression in the form of profanity has received less attention. Building on preliminary findings from previous studies, an extensive content analysis of profanity in video games was conducted using a sample of the 150 top-selling video games across all popular game platforms (including home consoles, portable consoles, and personal computers). The frequency of profanity, both in general and across three profanity categories, was measured and compared to games' ratings, sales, and platforms. Generally, profanity was found in about one in five games and appeared primarily in games rated for teenagers or above. Games containing profanity, however, tended to contain it frequently. Profanity was not found to be related to games' sales or platforms. PMID:19514818

  4. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver [Technical Univ. of Darmstadt (Germany)

    2009-11-20

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and with the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration…

  5. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver

    2009-12-01

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and with the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration…

  6. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    International Nuclear Information System (INIS)

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and with the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration…

  7. Automated Nanofiber Diameter Measurement in SEM Images Using a Robust Image Analysis Method

    Directory of Open Access Journals (Sweden)

    Ertan Öznergiz

    2014-01-01

    Full Text Available Due to their high surface area, porosity, and rigidity, applications of nanofibers and nanosurfaces have expanded in recent years. Nanofibers and nanosurfaces are typically produced by the electrospinning method. In the production process, determination of the average fiber diameter is crucial for quality assessment. The average fiber diameter is determined by manually measuring the diameters of randomly selected fibers in scanning electron microscopy (SEM) images. However, as the number of images increases, manual fiber diameter determination becomes a tedious and time-consuming task, as well as being sensitive to human error. Therefore, an automated fiber diameter measurement system is desired. In the literature, this task is achieved using image analysis algorithms. Typically, these methods first isolate each fiber in the image and measure the diameter of each isolated fiber. Fiber isolation is an error-prone process. In this study, automated calculation of nanofiber diameter is achieved without fiber isolation, using image processing and analysis algorithms. The performance of the proposed method was tested on real data. The effectiveness of the proposed method is shown by comparing automatically and manually measured nanofiber diameter values.
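
    One common way to measure diameters without isolating fibers, in the spirit described above, is to read the Euclidean distance transform along the fiber skeleton, where its value equals the local fiber radius. The Python sketch below follows that assumption and is not necessarily the paper's algorithm:

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.morphology import skeletonize

        def mean_fiber_diameter(sem_gray, nm_per_px, thresh=None):
            """Binarize the SEM image, compute the Euclidean distance transform,
            and sample it along the skeleton; twice the mean sampled radius is
            the average fiber diameter."""
            if thresh is None:
                thresh = sem_gray.mean()        # crude global threshold
            fibers = sem_gray > thresh
            dist = ndi.distance_transform_edt(fibers)
            skel = skeletonize(fibers)
            radii_px = dist[skel]
            return 2.0 * radii_px.mean() * nm_per_px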

  8. Automated Aflatoxin Analysis Using Inline Reusable Immunoaffinity Column Cleanup and LC-Fluorescence Detection.

    Science.gov (United States)

    Rhemrev, Ria; Pazdanska, Monika; Marley, Elaine; Biselli, Scarlett; Staiger, Simone

    2015-01-01

    A novel reusable immunoaffinity cartridge containing monoclonal antibodies to aflatoxins coupled to a pressure resistant polymer has been developed. The cartridge is used in conjunction with a handling system inline to LC with fluorescence detection to provide fully automated aflatoxin analysis for routine monitoring of a variety of food matrixes. The handling system selects an immunoaffinity cartridge from a tray and automatically applies the sample extract. The cartridge is washed, then aflatoxins B1, B2, G1, and G2 are eluted and transferred inline to the LC system for quantitative analysis using fluorescence detection with postcolumn derivatization using a KOBRA® cell. Each immunoaffinity cartridge can be used up to 15 times without loss in performance, offering increased sample throughput and reduced costs compared to conventional manual sample preparation and cleanup. The system was validated in two independent laboratories using samples of peanuts and maize spiked at 2, 8, and 40 μg/kg total aflatoxins, and paprika, nutmeg, and dried figs spiked at 5, 20, and 100 μg/kg total aflatoxins. Recoveries exceeded 80% for both aflatoxin B1 and total aflatoxins. The between-day repeatability ranged from 2.1 to 9.6% for aflatoxin B1 for the six levels and five matrixes. Satisfactory Z-scores were obtained with this automated system when used for participation in proficiency testing (FAPAS®) for samples of chilli powder and hazelnut paste containing aflatoxins. PMID:26651571

  9. Wine analysis to check quality and authenticity by fully-automated 1H-NMR

    Directory of Open Access Journals (Sweden)

    Spraul Manfred

    2015-01-01

    Full Text Available Fully-automated high resolution 1H-NMR spectroscopy offers unique screening capabilities for food quality and safety by combining non-targeted and targeted screening in one analysis (15–20 min from acquisition to report). The advantage of high resolution 1H-NMR is its absolute reproducibility and transferability from laboratory to laboratory, which is not equaled by any other method currently used in food analysis. NMR reproducibility allows statistical investigations, e.g. for detection of variety, geographical origin and adulterations, where the smallest changes in many ingredients at the same time must be recorded. The reproducibility and transferability of the solutions shown are user-, instrument- and laboratory-independent. Sample preparation, measurement and processing are based on strict standard operating procedures, which are essential for this fully automated solution. The non-targeted approach to the data allows detection of even unknown deviations, if they are visible in the 1H-NMR spectra of e.g. fruit juice, wine or honey. The same data, acquired in high-throughput mode, are also subjected to quantification of multiple compounds. The 1H-NMR methodology will be briefly introduced, then results on wine will be presented and the advantages of the solutions shown. The method has been proven on juice, honey and wine, where so far unknown frauds could be detected while targeted parameters are obtained at the same time.

  10. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    Science.gov (United States)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
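
    The backwall-dropout test described above reduces to comparing each scan position's backwall echo against a reference level. In the sketch below, a median-based adaptive reference and a fixed dB drop stand in for the adaptive call criteria; both are assumptions for illustration, not the authors' exact algorithm.

      # Flag backwall amplitude dropout in a C-scan (illustrative only).
      import numpy as np

      def backwall_dropout_map(backwall_amp, drop_db=6.0):
          """backwall_amp: 2-D array of backwall echo amplitudes (linear scale)."""
          ref = np.median(backwall_amp)               # adaptive reference level
          drop = 20.0 * np.log10(backwall_amp / ref)  # dB relative to reference
          return drop < -drop_db                      # True where the echo dropped

      amp = np.random.default_rng(0).uniform(0.8, 1.0, (64, 64))
      amp[20:28, 30:40] = 0.3                         # simulated flaw region
      print(backwall_dropout_map(amp).sum(), "pixels flagged")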

  11. Possibilities for retracing of copyright violations on current video game consoles by optical disk analysis

    Science.gov (United States)

    Irmler, Frank; Creutzburg, Reiner

    2014-02-01

    This paper deals with the possibilities of retracing copyright violations on current video game consoles (e.g. Microsoft Xbox, Sony PlayStation, ...) by studying the corresponding optical storage media, DVD and Blu-ray. The possibilities of forensic investigation of DVDs and Blu-ray Discs are presented. It is shown which information can be read using freeware and commercial software for forensic examination. A detailed analysis is given of the visualization of hidden content and of the possibility of finding out information about the burning hardware used for writing to the optical discs. In connection with a forensic analysis of the Windows registry of a suspect's PC, a detailed overview of the crime scene for forged DVDs and Blu-ray Discs can be obtained. Optical discs are examined under forensic aspects, and the obtained results are implemented in automatic analysis scripts for the commercial forensics program EnCase Forensic. It is shown that, for optical storage media, the drive used for writing can be identified; in particular, Blu-ray Discs contain the serial number of the burner. These and other findings were incorporated into various EnCase scripts for professional forensic investigation with EnCase Forensic. Furthermore, a detailed flowchart for a forensic investigation of copyright infringement was developed.

  12. Agreement Between Face-to-Face and Free Software Video Analysis for Assessing Hamstring Flexibility in Adolescents.

    Science.gov (United States)

    Moral-Muñoz, José A; Esteban-Moreno, Bernabé; Arroyo-Morales, Manuel; Cobo, Manuel J; Herrera-Viedma, Enrique

    2015-09-01

    The objective of this study was to determine the level of agreement between face-to-face hamstring flexibility measurements and free software video analysis in adolescents. Reduced hamstring flexibility is common in adolescents (75% of boys and 35% of girls aged 10). The length of the hamstring muscle has an important role in both the effectiveness and the efficiency of basic human movements, and reduced hamstring flexibility is related to various musculoskeletal conditions. There are various approaches to measuring hamstring flexibility with high reliability; the most commonly used approaches in the scientific literature are the sit-and-reach test, hip joint angle (HJA), and active knee extension. The assessment of hamstring flexibility using video analysis could help with adolescent flexibility follow-up. Fifty-four adolescents from a local school participated in a descriptive study of repeated measures using a crossover design. Active knee extension and HJA were measured with an inclinometer and were simultaneously recorded with a video camera. Each video was downloaded to a computer and subsequently analyzed using Kinovea 0.8.15, a free software application for movement analysis. All outcome measures showed reliability estimates with α > 0.90. The lowest reliability was obtained for HJA (α = 0.91). The preliminary findings support the use of a free software tool for assessing hamstring flexibility, offering health professionals a useful tool for adolescent flexibility follow-up. PMID:26313580

  14. Automated retinofugal visual pathway reconstruction with multi-shell HARDI and FOD-based analysis.

    Science.gov (United States)

    Kammen, Alexandra; Law, Meng; Tjan, Bosco S; Toga, Arthur W; Shi, Yonggang

    2016-01-15

    Diffusion MRI tractography provides a non-invasive modality to examine the human retinofugal projection, which consists of the optic nerves, optic chiasm, optic tracts, the lateral geniculate nuclei (LGN) and the optic radiations. However, the pathway has several anatomic features that make it particularly challenging to study with tractography, including its location near blood vessels and the bone-air interface at the base of the cerebrum, crossing fibers at the chiasm, a somewhat tortuous course around the temporal horn via Meyer's Loop, and multiple closely neighboring fiber bundles. To date, these unique complexities of the visual pathway have impeded the development of a robust and automated reconstruction method using tractography. To overcome these challenges, we develop a novel, fully automated system to reconstruct the retinofugal visual pathway from high-resolution diffusion imaging data. Using multi-shell, high angular resolution diffusion imaging (HARDI) data, we reconstruct precise fiber orientation distributions (FODs) with high order spherical harmonics (SPHARM) to resolve fiber crossings, which allows the tractography algorithm to successfully navigate the complicated anatomy surrounding the retinofugal pathway. We also develop automated algorithms for the identification of ROIs used for fiber bundle reconstruction. In particular, we develop a novel approach to extract the LGN region of interest (ROI) based on intrinsic shape analysis of a fiber bundle computed from a seed region at the optic chiasm to a target at the primary visual cortex. By combining automatically identified ROIs and FOD-based tractography, we obtain a fully automated system to compute the main components of the retinofugal pathway, including the optic tract and the optic radiation. We apply our method to the multi-shell HARDI data of 215 subjects from the Human Connectome Project (HCP). Through comparisons with post-mortem dissection measurements, we demonstrate the retinotopic

  15. Using Videos and Multimodal Discourse Analysis to Study How Students Learn a Trade

    Science.gov (United States)

    Chan, Selena

    2013-01-01

    The use of video to assist with ethnographically based research is not a new phenomenon. Recent advances in technology have reduced the costs and technical expertise required to use videos for gathering research data. Audio-visual records of learning activities as they take place allow for many non-vocal and inter-personal communication…

  16. How violent video games communicate violence: A literature review and content analysis of moral disengagement factors

    NARCIS (Netherlands)

    Hartmann, T.; Krakowiak, M.; Tsay-Vogel, M.

    2014-01-01

    Mechanisms of moral disengagement in violent video game play have recently received considerable attention among communication scholars. To date, however, no study has analyzed the prevalence of moral disengagement factors in violent video games. To fill this research gap, the present approach inclu…

  17. In Pursuit of Reciprocity: Researchers, Teachers, and School Reformers Engaged in Collaborative Analysis of Video Records

    Science.gov (United States)

    Curry, Marnie W.

    2012-01-01

    In the ideal, reciprocity in qualitative inquiry occurs when there is give-and-take between researchers and the researched; however, the demands of the academy and resource constraints often make the pursuit of reciprocity difficult. Drawing on two video-based, qualitative studies in which researchers utilized video records as resources to enhance…

  18. Multimedia in physics education: a video for the quantitative analysis of the Reynolds number

    International Nuclear Information System (INIS)

    A video of the Reynolds transition experiment, developed for physics teaching, shows the continuous transition from laminar to turbulent flow. Additionally, the critical Reynolds number of the experimental set-up is determined approximately. By watching the video, the user can measure all necessary data and then calculate a result.
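
    The calculation the video supports is a one-line formula, Re = rho*v*D/mu. In the sketch below, the fluid properties assume water at 20 °C, and the velocity and pipe diameter are placeholder values of the kind a viewer might measure from the video.

      # Reynolds number from quantities measured on screen (placeholder values).
      def reynolds_number(velocity_m_s, diameter_m, density=998.0, viscosity=1.0e-3):
          return density * velocity_m_s * diameter_m / viscosity

      Re = reynolds_number(velocity_m_s=0.12, diameter_m=0.02)
      print(f"Re = {Re:.0f}")  # ~2400, near the classic laminar-turbulent transition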

  19. The Effects of Violent Video Games on Aggression: A Meta-Analysis.

    Science.gov (United States)

    Sherry, John L.

    2001-01-01

    Cumulates findings across existing empirical research on the effects of violent video games to estimate overall effect size and discern important trends and moderating variables. Suggests there is a smaller effect of violent video games on aggression than has been found with television violence on aggression. (SG)

  20. A Gatekeeper Final Boss: An Analysis of MOGAI Representation in Video Games

    Directory of Open Access Journals (Sweden)

    Jared Talbert

    2016-07-01

    Full Text Available There have been MOGAI characters since near the beginning of video games, but their representation has been a matter of debate and controversy. This paper looks not only at the history of representing MOGAI characters but also at the dynamics of how these populations are represented within video games, and analyses how players feel regarding this subject.

  1. A Gatekeeper Final Boss: An Analysis of MOGAI Representation in Video Games

    OpenAIRE

    Jared Talbert

    2016-01-01

    There have been MOGAI characters since near the beginning of video games, but their representation has been a matter of debate and controversy. This paper looks not only at the history of representing MOGAI characters but also at the dynamics of how these populations are represented within video games, and analyses how players feel regarding this subject.

  2. Conducting Video Research in the Learning Sciences: Guidance on Selection, Analysis, Technology, and Ethics

    Science.gov (United States)

    Derry, Sharon J.; Pea, Roy D.; Barron, Brigid; Engle, Randi A.; Erickson, Frederick; Goldman, Ricki; Hall, Rogers; Koschmann, Timothy; Lemke, Jay L.; Sherin, Miriam Gamoran; Sherin, Bruce L.

    2010-01-01

    Focusing on expanding technical capabilities and new collaborative possibilities, we address 4 challenges for scientists who collect and use video records to conduct research in and on complex learning environments: (a) Selection: How can researchers be systematic in deciding which elements of a complex environment or extensive video corpus to…

  3. Content Based Video Retrieval

    Directory of Open Access Journals (Sweden)

    B.V.Patel

    2012-11-01

    Full Text Available Content based video retrieval is an approach for facilitating the searching and browsing of large image collections over the World Wide Web. In this approach, video analysis is conducted on low-level visual properties extracted from video frames. We believed that in order to create an effective video retrieval system, visual perception must be taken into account. We conjectured that a technique which employs multiple features for indexing and retrieval would be more effective in the discrimination and search tasks of videos. In order to validate this claim, content based indexing and retrieval systems were implemented using color histograms, various texture features and other approaches. Videos were stored in an Oracle 9i database, and a user study measured correctness of response.
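
    The color-histogram component of such a system can be sketched in a few lines: each keyframe is reduced to a normalized RGB histogram, and retrieval ranks frames by histogram intersection. The bin count and similarity measure below are illustrative choices, not necessarily those used in the study, which also employed texture and other features.

      # Rank video frames by RGB-histogram similarity to a query frame.
      import numpy as np

      def rgb_histogram(frame, bins=8):
          """frame: H x W x 3 uint8 array; returns a flattened, normalized histogram."""
          hist, _ = np.histogramdd(frame.reshape(-1, 3),
                                   bins=(bins, bins, bins),
                                   range=((0, 256),) * 3)
          return hist.ravel() / hist.sum()

      def intersection(h1, h2):
          return np.minimum(h1, h2).sum()  # 1.0 means identical distributions

      rng = np.random.default_rng(1)
      frames = [rng.integers(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(5)]
      query = frames[2]
      scores = [intersection(rgb_histogram(query), rgb_histogram(f)) for f in frames]
      print("best match:", int(np.argmax(scores)))  # frame 2 matches itself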

  5. Impact Analysis of Baseband Quantizer on Coding Efficiency for HDR Video

    Science.gov (United States)

    Wong, Chau-Wai; Su, Guan-Ming; Wu, Min

    2016-10-01

    Digitally acquired high dynamic range (HDR) video baseband signals can take 10 to 12 bits per color channel. It is economically important to be able to reuse the legacy 8- or 10-bit video codecs to efficiently compress HDR video. A linear or nonlinear mapping on the intensity can be applied to the baseband signal to reduce the dynamic range before the signal is sent to the codec; we refer to this range reduction step as baseband quantization. We show analytically, and verify using test sequences, that the use of the baseband quantizer lowers the coding efficiency. Experiments show that as the baseband quantizer is strengthened by 1.6 bits, the drop in PSNR at a high bitrate is up to 1.60 dB. Our result suggests that, in order to achieve high coding efficiency, information reduction of videos in terms of quantization error should be introduced in the video codec instead of on the baseband signal.
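
    The cost of the baseband step alone can be reproduced on synthetic samples. The sketch below uses a toy linear requantizer and excludes codec effects entirely; it illustrates the roughly 6 dB-per-bit PSNR penalty of the concept, not the paper's experimental pipeline.

      # PSNR cost of requantizing a 12-bit baseband signal (codec excluded).
      import numpy as np

      rng = np.random.default_rng(0)
      x12 = rng.integers(0, 4096, 10_000)    # 12-bit baseband samples

      def requantize(x, in_bits, out_bits):
          shift = in_bits - out_bits
          return (x >> shift) << shift       # linear mapping down and back up

      def psnr(ref, test, peak):
          mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
          return 10 * np.log10(peak ** 2 / mse)

      for out_bits in (10, 9, 8):
          db = psnr(x12, requantize(x12, 12, out_bits), 4095)
          print(f"{out_bits} bits: {db:.1f} dB")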

  6. Advances in Computer, Communication, Control and Automation

    CERN Document Server

    2011 International Conference on Computer, Communication, Control and Automation

    2012-01-01

    The volume includes a set of selected papers, extended and revised, from the 2011 International Conference on Computer, Communication, Control and Automation (3CA 2011), held in Zhuhai, China, November 19-20, 2011. Topics covered in this volume include signal and image processing, speech and audio processing, video processing and analysis, artificial intelligence, computing and intelligent systems, machine learning, sensor and neural networks, knowledge discovery and data mining, fuzzy mathematics and applications, knowledge-based systems, hybrid systems modeling and design, risk analysis and management, and system modeling and simulation. We hope that researchers, graduate students and other interested readers will benefit scientifically from the proceedings and also find them stimulating.

  7. Automated image analysis of the host-pathogen interaction between phagocytes and Aspergillus fumigatus.

    Directory of Open Access Journals (Sweden)

    Franziska Mech

    Full Text Available Aspergillus fumigatus is a ubiquitous airborne fungus and opportunistic human pathogen. In immunocompromised hosts, the fungus can cause life-threatening diseases like invasive pulmonary aspergillosis. Since the incidence of fungal systemic infections has drastically increased over the last years, it is a major goal to investigate the pathobiology of A. fumigatus and in particular the interactions of A. fumigatus conidia with immune cells. Many of these studies address the activity of immune effector cells, in particular of macrophages, when they are confronted with conidia of A. fumigatus wild-type and mutant strains. Here, we report the development of an automated analysis of confocal laser scanning microscopy images from macrophages coincubated with different A. fumigatus strains. At present, microscopy images are often analysed manually, including cell counting and determination of interrelations between cells, which is very time-consuming and error-prone. Automation of this process overcomes these disadvantages and standardises the analysis, which is a prerequisite for further systems-biological studies including mathematical modeling of the infection process. For this purpose, the cells in our experimental setup were differentially stained and monitored by confocal laser scanning microscopy. To perform the image analysis in an automatic fashion, we developed a ruleset that is generally applicable to phagocytosis assays and, in the present case, was processed by the software Definiens Developer XD. As a result of a complete image analysis we obtained features such as size, shape, number of cells and cell-cell contacts. The analysis reported here reveals that different mutants of A. fumigatus have a major influence on the ability of macrophages to adhere to and to phagocytose the respective conidia. In particular, we observe that the phagocytosis ratio and the aggregation behaviour of pksP mutant compared to wild-type conidia are both significantly
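
    The kind of interrelation such a ruleset extracts can be sketched once segmentation is done (in the study, by a Definiens Developer XD ruleset): count the conidia whose centroids fall inside a macrophage. The input masks and the centroid criterion below are illustrative assumptions, not the published pipeline.

      # Phagocytosis ratio from a macrophage mask and labeled conidia.
      import numpy as np
      from scipy import ndimage as ndi

      def phagocytosis_ratio(macrophage_mask, conidia_labels):
          """macrophage_mask: bool H x W; conidia_labels: int H x W (0 = background)."""
          n = int(conidia_labels.max())
          if n == 0:
              return 0.0
          centroids = ndi.center_of_mass(conidia_labels > 0, conidia_labels,
                                         index=list(range(1, n + 1)))
          inside = sum(macrophage_mask[int(r), int(c)] for r, c in centroids)
          return inside / n

      m = np.zeros((8, 8), bool); m[:4] = True            # macrophage in top half
      c = np.zeros((8, 8), int); c[1, 1], c[6, 6] = 1, 2  # two conidia
      print(phagocytosis_ratio(m, c))                     # 0.5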

  8. Automated Modular High Throughput Exopolysaccharide Screening Platform Coupled with Highly Sensitive Carbohydrate Fingerprint Analysis.

    Science.gov (United States)

    Rühmann, Broder; Schmid, Jochen; Sieber, Volker

    2016-01-01

    Many microorganisms are capable of producing and secreting exopolysaccharides (EPS), which have important implications in medical fields, in food applications, and in the replacement of petro-based chemicals. We describe an analytical platform, automated on a liquid handling system, that allows fast and reliable analysis of the type and the amount of EPS produced by microorganisms. It enables the user to identify novel natural microbial exopolysaccharide producers and to analyze the carbohydrate fingerprint of the corresponding polymers within one day in high throughput (HT). Using this platform, strain collections as well as libraries of strain variants that might be obtained in engineering approaches can be screened. The platform has a modular setup, which allows a separation of the protocol into two major parts. First, there is an automated screening system, which combines different polysaccharide detection modules: a semi-quantitative analysis of viscosity formation via a centrifugation step, an analysis of polymer formation via alcohol precipitation, and the determination of the total carbohydrate content via a phenol-sulfuric-acid transformation. Here, it is possible to screen up to 384 strains per run. The second part provides a detailed monosaccharide analysis for all the selected EPS producers identified in the first part by combining two essential modules: the analysis of the complete monomer composition via ultra-high performance liquid chromatography coupled with ultraviolet and electrospray ionization ion trap detection (UHPLC-UV-ESI-MS), and the determination of pyruvate as a polymer substituent (presence of pyruvate ketal) via enzymatic oxidation coupled to color formation. All the analytical modules of this screening platform can be combined in different ways and adjusted to individual requirements. Additionally, they can all be handled manually or performed with a liquid handling system. Thereby, the screening platform enables a huge

  9. Redevelopment and reliability study of simultaneously uranium and thorium analysis automation control system

    International Nuclear Information System (INIS)

    Full-text: This project refurbishes the Instrumental Delayed Neutron Activation Analysis System for the Simultaneous Determination of Uranium and Thorium, namely PAUTS. PAUTS uses nuclear techniques for the quantitative determination of uranium-235 (U-235) and thorium-232 (Th-232) radionuclide contents in samples. It consists of three main automation procedures, namely control sample handling, data acquisition for neutron counting, and the data handling and analysis program. The automation control technology for this project is based on a personal computer (PC), Ethernet communication support, a programmable automation control (PAC) module CFP 2220, infrared photo sensors and the LabVIEW software package. The analysis sample capsule is placed in a transfer container, or rabbit, and transferred by fast pneumatic sample handling for activation by neutron irradiation in the reactor core. Both radionuclides, as fission products, decay and emit delayed neutrons, which are counted using the nuclear counting electronics module. Studies on the reliability of the fast pneumatic sample handling using statistical methods show that a 95% confidence level was reached. Results show the mean transfer time of the sample from the loader to the reactor core is 3251 ± 210 ms, while the mean transfer time of the sample from the core to the counter chamber is 3264 ± 407 ms. The overall system reliability has been verified by analysis of calibration standard materials with known quantities of uranium and thorium: the IAEA-S17, the IAEA-ThO2 and the IAEA-S14 methods. At present the nuclear counting electronics are based on four neutron detectors, and the results were in line with the previous experiment. Results show average U and Th contents of 19.35 ppm and 432.25 ppm respectively, compared with known sample quantities of 29.0 ppm and 460 ppm. Studies on the effects of pneumatic sample handling on the irradiation time parameter indicated that the previous

  10. Automated preparation of Kepler time series of planet hosts for asteroseismic analysis

    CERN Document Server

    Handberg, R

    2014-01-01

    One of the tasks of the Kepler Asteroseismic Science Operations Center (KASOC) is to provide asteroseismic analyses on Kepler Objects of Interest (KOIs). However, asteroseismic analysis of planetary host stars presents some unique complications with respect to data preprocessing, compared to pure asteroseismic targets. If not accounted for, the presence of planetary transits in the photometric time series often greatly complicates or even hinders these asteroseismic analyses. This drives the need for specialised methods of preprocessing data to make them suitable for asteroseismic analysis. In this paper we present the KASOC Filter, which is used to automatically prepare data from the Kepler/K2 mission for asteroseismic analyses of solar-like planet host stars. The methods are very effective at removing unwanted signals of both instrumental and planetary origins and produce significantly cleaner photometric time series than the original data. The methods are automated and can therefore easily be applied to a ...
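
    The central preprocessing idea, estimating the slow trend only from out-of-transit points so that transits are neither distorted nor removed, can be sketched as below. A moving median stands in for the KASOC Filter's considerably more elaborate pipeline, and the window length is an arbitrary choice.

      # Detrend a light curve while protecting masked (in-transit) points.
      import numpy as np

      def detrend(time, flux, transit_mask, window=2.0):
          """Divide out a moving-median trend estimated from out-of-transit data."""
          trend = np.empty_like(flux)
          for i, t in enumerate(time):
              near = (np.abs(time - t) < window / 2) & ~transit_mask
              if not near.any():            # fall back if the window is all in-transit
                  near = np.abs(time - t) < window / 2
              trend[i] = np.median(flux[near])
          return flux / trend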

  11. A meta-analysis of active video games on health outcomes among children and adolescents.

    Science.gov (United States)

    Gao, Z; Chen, S; Pasco, D; Pope, Z

    2015-09-01

    This meta-analysis synthesizes current literature concerning the effects of active video games (AVGs) on children's and adolescents' health-related outcomes. A total of 512 published studies on AVGs were located, and 35 articles were included based on the following criteria: (i) data-based research articles published in English between 1985 and 2015; (ii) studied some type of AVG and related outcomes among children/adolescents; and (iii) had at least one comparison within each study. Data were extracted to conduct comparisons for outcome measures in three separate categories: AVGs and sedentary behaviours, AVGs and laboratory-based exercise, and AVGs and field-based physical activity. The effect size for each entry was calculated with the Comprehensive Meta-Analysis software in 2015. The mean effect size (Hedges' g) and standard deviation were calculated for each comparison. Compared with sedentary behaviours, AVGs had a large effect on health outcomes. The effect sizes for physiological outcomes were marginal when comparing AVGs with laboratory-based exercises. The comparison between AVGs and field-based physical activity had null to moderate effect sizes. AVGs could yield health benefits to children/adolescents equivalent to laboratory-based exercise or field-based physical activity. Therefore, AVGs can be a good alternative to sedentary behaviour and a good addition to traditional physical activity and sports in children/adolescents. PMID:25943852
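
    For reference, Hedges' g is Cohen's d with a small-sample bias correction; a textbook implementation (not code from the meta-analysis itself) looks like this:

      # Hedges' g for two independent groups.
      import numpy as np

      def hedges_g(group1, group2):
          g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
          n1, n2 = len(g1), len(g2)
          pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                              / (n1 + n2 - 2))
          d = (g1.mean() - g2.mean()) / pooled_sd
          return d * (1 - 3 / (4 * (n1 + n2) - 9))  # small-sample correction

      print(hedges_g([8.1, 7.9, 9.0, 8.5], [6.8, 7.2, 7.5, 6.9]))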

  12. Bringing Javanese Traditional Dance into Basic Physics Class: Exemplifying Projectile Motion through Video Analysis

    Science.gov (United States)

    Handayani, Langlang; Prasetya Aji, Mahardika; Susilo; Marwoto, Putut

    2016-08-01

    An alternative arts-based approach to instruction for a Basic Physics class has been developed through video analysis of a Javanese traditional dance, Bambangan Cakil. A particular movement of the dance, the throwing of a weapon, was analyzed with the LoggerPro software package to exemplify projectile motion. The results of the analysis indicated that the motion of the thrown weapon in the Bambangan Cakil dance helps explain several physics concepts of projectile motion, namely the object's path, velocity, and acceleration, in the form of pictures, graphs, and tables. The weapon's path and velocity can be shown in a picture or graph, while the decrease of the velocity in the y direction (the weapon moving downward and upward) due to the acceleration g can be represented in a table. It was concluded that a Javanese traditional dance contains many physics concepts which can be explored. The study recommends bringing traditional dance into the science class, which will enable students to gain a better understanding of both physics concepts and Indonesian cultural heritage.
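
    The analysis step itself, fitting the tracked vertical positions with a quadratic so the constant downward acceleration appears as twice the leading coefficient, is easy to sketch. The sample points below are invented, of the kind LoggerPro exports as (t, x, y) tables.

      # Fit y(t) = a2*t^2 + a1*t + a0 to tracked positions; 2*a2 estimates -g.
      import numpy as np

      t = np.array([0.00, 0.10, 0.20, 0.30, 0.40, 0.50])   # s
      y = np.array([1.00, 1.36, 1.62, 1.78, 1.85, 1.81])   # m (invented)

      a2, a1, a0 = np.polyfit(t, y, 2)
      print("acceleration =", round(2 * a2, 1), "m/s^2")   # ~ -10 for these data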

  13. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials

    Science.gov (United States)

    Zhang, Wenxiong; Yu, Dongliang; Jiang, Han; Xu, Jianjun; Wei, Yiping

    2016-01-01

    Objectives Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus over which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficiency and side effects of VTS of different segments in the treatment of PH. Methods A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with RevMan 5.3 software and SPSS 18.0. Results A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH had a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. Conclusions T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials. PMID:27187774

  14. Video-Assisted Thoracoscopic Sympathectomy for Palmar Hyperhidrosis: A Meta-Analysis of Randomized Controlled Trials.

    Directory of Open Access Journals (Sweden)

    Wenxiong Zhang

    Full Text Available Video-assisted thoracoscopic sympathectomy (VTS) is effective in treating palmar hyperhidrosis (PH). However, there is no consensus over which segment should undergo VTS to maximize efficacy and minimize the complications of compensatory hyperhidrosis (CH). This study was designed to compare the efficiency and side effects of VTS of different segments in the treatment of PH. A comprehensive search of PubMed, Ovid MEDLINE, EMBASE, Web of Science, ScienceDirect, the Cochrane Library, Scopus and Google Scholar was performed to identify studies comparing VTS of different segments for treatment of PH. The data were analyzed with RevMan 5.3 software and SPSS 18.0. A total of eight randomized controlled trials (RCTs) involving 1200 patients were included. Meta-analysis showed that single-segment/low-segment VTS could reduce the risk of moderate/severe CH compared with multiple-segment/high-segment VTS. The risk of total CH had a similar trend. In the subgroup analysis of single-segment VTS, no significant differences were found between T2/T3 VTS and other segments in postoperative CH and degree of CH. T4 VTS showed better efficacy in limiting CH compared with other segments. T4 appears to be the best segment for the surgical treatment of PH. Our findings require further validation in more high-quality, large-scale randomized controlled trials.

  16. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Full Text Available Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed an MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations, and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) was obtained for de novo feature prediction. These results suggest that high-dimensional profiling may advance the

  17. Development of an automated imaging pipeline for the analysis of the zebrafish larval kidney.

    Directory of Open Access Journals (Sweden)

    Jens H Westhoff

    Full Text Available The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line, in which the developing pronephros is highlighted by GFP expression. The consistent image data that were acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration-dependent effects of several compounds on nephrogenesis. In addition, the applicability of the imaging pipeline was further confirmed in a morpholino-based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects of zebrafish kidney research, and can be readily adapted for the analysis of other organ systems.

  18. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2013-01-01

    Full Text Available The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that can be identified using automated tools for source code analysis of software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and connection-oriented server socket programs, to discover, analyze the impact of, and remove the following software security vulnerabilities: (i) Hardcoded Password, (ii) Empty Password Initialization, (iii) Denial of Service, (iv) System Information Leak, (v) Unreleased Resource, (vi) Path Manipulation, and (vii) Resource Injection. For each of these vulnerabilities, we describe the potential risks associated with leaving them unattended in a software program, and provide the solutions (including the code snippets in Java) that can be incorporated to remove these vulnerabilities. The proposed solutions are very generic in nature, and can be suitably modified to correct any such vulnerabilities in software developed in any other programming language.

  19. Automated sample preparation and analysis using a sequential-injection-capillary electrophoresis (SI-CE) interface.

    Science.gov (United States)

    Kulka, Stephan; Quintás, Guillermo; Lendl, Bernhard

    2006-06-01

    A fully automated sequential-injection-capillary electrophoresis (SI-CE) system was developed using commercially available components such as the syringe pump, the selection and injection valves, and the high-voltage power supply. The interface connecting the SI with the CE unit consisted of two T-pieces, with the capillary inserted in one T-piece and a Pt electrode in the other (grounded) T-piece. By pressurising the whole system using the syringe pump, hydrodynamic injection was feasible. For characterisation, the system was applied to a mixture of adenosine and adenosine monophosphate at different concentrations. The calibration curve obtained gave a detection limit of 0.5 microg g(-1) (correlation coefficient of 0.997). The reproducibility of the injection was also assessed, resulting in an RSD value (5 injections) of 5.4%. The total time of analysis, from injection, conditioning and separation to cleaning the capillary again, was 15 minutes. In another application, employing the full power of the automated SI-CE system, myoglobin was mixed directly in the flow system with different concentrations of sodium dodecyl sulfate (SDS), a known denaturing agent. The different conformations obtained in this way were analysed with the CE system, and a distinct shift in migration time and a decrease of the native myoglobin (Mb) peak could be observed. The prepared protein samples were also analysed with off-line infrared (IR) spectroscopy, confirming these results. PMID:16732362

  20. A computer based, automated analysis of process and outcomes of diabetic care in 23 GP practices.

    LENUS (Irish Health Repository)

    Hill, F

    2012-02-01

    The predicted prevalence of diabetes in Ireland by 2015 is 190,000. Structured diabetes care in general practice has outcomes equivalent to secondary care, and good diabetes care has been shown to be associated with the use of electronic healthcare records (EHRs). This automated analysis of EHRs in 23 practices took 10 minutes per practice, compared with 15 hours per practice for manual searches. Data were extracted for 1901 type II diabetics. There were valid data for >80% of patients for 6 of the 9 key indicators in the previous year. 543 (34%) had an HbA1c > 7.5%, 142 (9%) had a total cholesterol > 6 mmol/l, 83 (6%) had an LDL cholesterol > 4 mmol/l, 367 (22%) had triglycerides > 2.2 mmol/l and 162 (10%) had a blood pressure > 160/100 mmHg. Data quality and key indicators of care compare well with manual audits in Ireland and the U.K. Electronic healthcare records and automated audits should be a feature of all chronic disease management programs.
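
    Once the EHR data are extracted to a table, the indicator audit itself reduces to a few lines. The column names below are hypothetical and the rows invented; the thresholds follow the figures quoted above.

      # Key-indicator audit over an extracted diabetes register (toy data).
      import pandas as pd

      thresholds = {"hba1c": 7.5, "total_chol": 6.0,   # %, mmol/l
                    "ldl": 4.0, "triglycerides": 2.2}  # mmol/l

      def audit(df):
          for col, limit in thresholds.items():
              valid = df[col].notna()
              pct = (df.loc[valid, col] > limit).mean() * 100
              print(f"{col}: {pct:.0f}% above {limit} "
                    f"({valid.mean() * 100:.0f}% with valid data)")

      audit(pd.DataFrame({
          "hba1c": [6.9, 8.2, 7.1, None],
          "total_chol": [5.0, 6.4, 4.8, 5.2],
          "ldl": [2.9, 4.2, None, 3.1],
          "triglycerides": [1.8, 2.5, 2.0, 1.6],
      }))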