WorldWideScience

Sample records for video processing algorithms

  1. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both the algorithms and the technologies of interactive video, so that businesses in IT and data management, scientists and software engineers in video processing and computer vision, coaches and instructors who use video technology in teaching, and finally end-users will all greatly benefit from it. The book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents is presented. The third part tackles a more challenging level of automatic video re-structuring: filtering of the video stream by extracting highlights, events, and meaningf...

  2. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    Science.gov (United States)

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, the risk of vascular injury and of conversion to open surgery could be reduced. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
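
    As a rough illustration of the Eulerian approach described above (a minimal single-scale sketch, not the authors' CRSMM algorithm or the reference EVM code), the following builds a temporal bandpass from two first-order IIR lowpass filters applied to spatially blurred frames and adds the amplified band back to the input. The frame format, filter cutoffs, and amplification factor are illustrative assumptions.

        import cv2
        import numpy as np

        def magnify_motion(frames, fps, f_lo=0.8, f_hi=3.0, alpha=20.0):
            # frames: list of float32 grayscale images scaled to [0, 1].
            # Two first-order IIR lowpass filters with different cutoffs;
            # their difference approximates a temporal bandpass filter.
            r_lo = 2 * np.pi * f_lo / fps
            r_hi = 2 * np.pi * f_hi / fps
            lo = hi = cv2.GaussianBlur(frames[0], (21, 21), 0)  # spatial lowpass
            out = []
            for f in frames:
                blur = cv2.GaussianBlur(f, (21, 21), 0)
                hi = hi + r_hi * (blur - hi)   # fast-decaying lowpass
                lo = lo + r_lo * (blur - lo)   # slow-decaying lowpass
                band = hi - lo                 # temporal bandpass response
                out.append(np.clip(f + alpha * band, 0.0, 1.0))
            return out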

  3. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image of the largest glottal area from the video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed that obtains the physiological parameters for normal vocal folds, vocal paralysis and vocal nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and with an image recognition improvement procedure after classification it rose to 98.7%. Hence, the proposed system has value in clinical practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
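
    As a hedged sketch of the classification stage only (the feature extraction from the largest-glottal-area frame is not shown, and the file names and feature layout here are hypothetical, not from the paper), a decision tree for the three classes could be trained as follows.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical feature matrix: one row per video, columns being
        # physiological parameters measured on the largest-glottal-area frame.
        X = np.load("vocal_fold_features.npy")   # shape (n_samples, n_features)
        y = np.load("vocal_fold_labels.npy")     # 0 normal, 1 paralysis, 2 nodules

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = DecisionTreeClassifier(max_depth=5, random_state=0)
        clf.fit(X_tr, y_tr)
        print("identification rate: %.1f%%" % (100.0 * clf.score(X_te, y_te)))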

  4. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++, together with its associated libraries and DirectShow filters.

  5. High Quality Real-Time Video with Scanning Electron Microscope Using Total Variation Algorithm on a Graphics Processing Unit

    Science.gov (United States)

    Ouarti, Nizar; Sauvet, Bruno; Régnier, Stéphane

    2012-04-01

    The scanning electron microscope (SEM) is usually dedicated to taking pictures of micro- and nanoscopic objects. In the present study, we asked whether an SEM can be turned into a real-time video device. To this end, we designed a new methodology. We use the slow mode of the SEM to acquire a high-quality reference image that can then be used to estimate the optimal parameters that regularize the signal for a given method. Here, we employ Total Variation, a method that minimizes the noise and regularizes the image. An optimal Lagrange multiplier can be computed that regularizes the image efficiently. We showed that a limited number of iterations of the Total Variation algorithm can lead to an acceptable quality of regularization. The algorithm is parallel and deployed on a Graphics Processing Unit to obtain real-time, high-quality video with an SEM. It opens the possibility of real-time interaction at micro- and nanoscales.
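
    For orientation, a minimal NumPy sketch of Total Variation regularization by gradient descent on the Rudin-Osher-Fatemi energy is given below; the weight lam plays the role of the Lagrange multiplier mentioned above, and a few tens of iterations often suffice, in line with the abstract. This is a CPU sketch under assumed parameter values, not the authors' GPU implementation.

        import numpy as np

        def tv_denoise(f, lam=0.1, tau=0.125, iters=30, eps=1e-6):
            # Gradient descent on the (smoothed) ROF energy
            #   E(u) = 0.5 * ||u - f||^2 + lam * sum |grad u|,
            # where lam acts as the regularization (Lagrange) multiplier.
            u = f.astype(np.float64)
            for _ in range(iters):
                ux = np.roll(u, -1, axis=1) - u          # forward differences
                uy = np.roll(u, -1, axis=0) - u
                mag = np.sqrt(ux * ux + uy * uy + eps)   # smoothed |grad u|
                px, py = ux / mag, uy / mag
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u = u - tau * ((u - f) - lam * div)      # descend the gradient
            return u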

  6. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    Science.gov (United States)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

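    A simplified, sharpness-weighted fusion in the spirit of LRF is sketched below. Assumptions: float32 grayscale frames, Laplacian magnitude as the sharpness metric, and a Gaussian region weight whose radius stands in for the Gaussian kernel radius discussed above; the published algorithm's exact region selection and frame buffering are not reproduced.

        import cv2
        import numpy as np

        def lucky_region_fuse(frames, kernel_radius=8):
            # frames: list of float32 grayscale short-exposure images.
            acc = np.zeros_like(frames[0])
            wsum = np.zeros_like(frames[0])
            k = 2 * kernel_radius + 1
            for f in frames:
                sharp = np.abs(cv2.Laplacian(f, cv2.CV_32F))   # local sharpness
                w = cv2.GaussianBlur(sharp, (k, k), 0) + 1e-8  # region weight
                acc += w * f
                wsum += w
            return acc / wsum   # sharpness-weighted fusion of "lucky" regions
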
  7. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    Science.gov (United States)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two-dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. Our implementation of the algorithm utilizes the Intel Performance Primitives and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an "intelligent robot" to "see" for path planning and obstacle avoidance.
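
    The histogram-reduction idea can be sketched as follows (a NumPy illustration under assumed conventions, not the IPP/OpenCV implementation above): for every image column, disparity values are histogrammed, and well-populated bins are marked as obstacles at that column and disparity, i.e. at that bearing and inverse range. The vote threshold is an assumption.

        import numpy as np

        def xh_map(disparity, d_max=256, min_votes=12):
            # disparity: H x W uint8 disparity image from a stereo pair.
            # A heavily populated bin means many pixels share one disparity
            # in that column, suggesting a vertical surface (an obstacle).
            h, w = disparity.shape
            obstacle_map = np.zeros((d_max, w), dtype=bool)
            for x in range(w):
                votes = np.bincount(disparity[:, x], minlength=d_max)
                obstacle_map[votes >= min_votes, x] = True
            return obstacle_map  # rows: disparity (inverse range); cols: bearing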

  8. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  9. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on the Internet, automatic video analysis has gained a lot of attention from academia as well as companies such as Facebook, Twitter and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  10. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual… rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can… be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights…

  11. An efficient video dehazing algorithm based on spectral clustering

    Science.gov (United States)

    Zhao, Fan; Yao, Zao; Song, XiaoFang; Yao, Yi

    2017-07-01

    Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects. Further, the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on customized spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes to ensure the same target has the same transmission value. Assuming that dehazed edge images have richer detail than before restoration, an edge cost function is added to the transmission model. The experimental results demonstrate that the proposed method provides higher dehazing quality and lower time complexity than the previous technique.

  12. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  13. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

    The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph will provide useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  14. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and compression standards. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced… algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of the perceived image and video and reduce power consumption… on local LED-LCD backlight; second, removing digital video codec artifacts such as blocking and ringing by post-processing algorithms. A novel algorithm based on image features with an optimal balance between visual quality and power consumption was developed. In addition, to remove flickering…

  15. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

    Establishing benchmark datasets, performance metrics and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a Haar-like feature set and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
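
    A minimal OpenCV sketch of this detect-then-greedily-match scheme is given below. The cascade file ships with OpenCV; the detector parameters here are assumptions, not the paper's tuned values.

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def detect_faces(gray):
            # Haar cascade detection; returns bounding boxes (x, y, w, h).
            return list(cascade.detectMultiScale(gray, scaleFactor=1.2,
                                                 minNeighbors=3))

        def greedy_match(prev_boxes, curr_boxes):
            # Tracking as continuous detection: link detections in two
            # consecutive frames by repeatedly taking the pair of
            # bounding-box centroids with the smallest distance.
            def centroid(b):
                x, y, w, h = b
                return (x + w / 2.0, y + h / 2.0)
            dists = sorted(
                (np.hypot(centroid(p)[0] - centroid(c)[0],
                          centroid(p)[1] - centroid(c)[1]), i, j)
                for i, p in enumerate(prev_boxes)
                for j, c in enumerate(curr_boxes))
            pairs, used_p, used_c = [], set(), set()
            for _, i, j in dists:
                if i not in used_p and j not in used_c:
                    pairs.append((i, j))
                    used_p.add(i)
                    used_c.add(j)
            return pairs   # list of (index in prev, index in curr)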

  16. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs cut detection using macroblock types and motion vectors.
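
    To illustrate the first idea in its simplest form (a hedged median-based sketch, not the paper's method), a global pan/tilt estimate can be taken robustly from the decoded motion vector field; parsing the vectors out of the MPEG-2 bitstream is assumed to happen elsewhere.

        import numpy as np

        def estimate_camera_motion(mv_field):
            # mv_field: H x W x 2 array of macroblock motion vectors (dx, dy)
            # parsed from an MPEG-2 P-frame (bitstream parsing not shown).
            dx = float(np.median(mv_field[..., 0]))   # robust global pan
            dy = float(np.median(mv_field[..., 1]))   # robust global tilt
            residual = np.hypot(mv_field[..., 0] - dx, mv_field[..., 1] - dy)
            coherence = float((residual < 1.0).mean())  # blocks following the pan
            return dx, dy, coherence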

  17. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard, developed jointly by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction (the first being the full- and sub-pixel motion search, the second the mode selection process, and the third the repetition of the first and second for inter-view prediction), then the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
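
    As an idea-level sketch only (the variance test and threshold are assumptions, not the paper's exact homogeneity criterion), the early-exit decision could look like this:

        import numpy as np

        def disable_interview_prediction(partition_mvs, var_thresh=4.0):
            # partition_mvs: N x 2 array of the motion vectors of the
            # partitions enclosed by a macroblock. A low total variance
            # indicates homogeneous motion, in which case inter-view
            # prediction is unlikely to pay off and can be skipped.
            mvs = np.asarray(partition_mvs, dtype=np.float64)
            return bool(mvs.var(axis=0).sum() < var_thresh)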

  18. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. First, the algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Second, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the visual perception analysis compared with other existing algorithms, while offering better saliency detection performance for videos and realizing fast saliency detection. It can be used as part of a video standard codec at medium-to-low bit rates or combined with other algorithms in fast video coding.

  19. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by a verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve the existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on Fourier-Mellin transform, which is less sensitive to pupil dilatations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflects, etc.), and finally, we introduce a fast and new criterion of suitable image selection from an iris video sequence for an accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  20. Multimedia image and video processing

    CERN Document Server

    Guan, Ling

    2012-01-01

    As multimedia applications have become part of contemporary daily life, numerous paradigm-shifting technologies in multimedia processing have emerged over the last decade. Substantially updated with 21 new chapters, Multimedia Image and Video Processing, Second Edition explores the most recent advances in multimedia research and applications. This edition presents a comprehensive treatment of multimedia information mining, security, systems, coding, search, hardware, and communications as well as multimodal information fusion and interaction. Clearly divided into seven parts, the book begins w

  1. Practical image and video processing using MATLAB

    CERN Document Server

    Marques, Oge

    2011-01-01

    "The book provides a practical introduction to the most important topics in image and video processing using MATLAB (and its Image Processing Toolbox) as a tool to demonstrate the most important techniques and algorithms. The contents are presented in a clear, technically accurate, objective way, with just enough mathematical detail. Most of the chapters are supported by figures, examples, illustrative problems, MATLAB scripts, suggestions for further reading, bibliographical references, useful Web sites, and exercises and computer projects to extend the understanding of their contents"--

  2. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Today, application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection require capturing, storing and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, for example) are involved in these applications. Such techniques often require a significant number of operations depending on the algorithm complexity and the video ...

  3. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  4. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating an effective policy and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely video footage captured in low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and describe a super-resolution reconstruction method based on manual feature registration and maximum a posteriori projection onto convex sets, which improves the quality of surveillance video. With this method, we make optimal use of the information contained in the LR video image while also controlling the image edges clearly as well as the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  5. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper investigates an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, with 26 original video sequences as the research background. It first extracts classification features from training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching can practically recover secret information embedded across all frames, as well as the carrier, in videos where only some frames carry embedded data. In addition, the VSA adopts a frame-by-frame method with strong robustness against attacks in the corresponding time domain.

  6. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

    MPEG video data includes three types of frames: I-frames, P-frames and B-frames. The I-frame records the main information of the video data, while the P-frame and the B-frame are just regarded as motion compensations of the I-frame. This paper presents an approach that analyzes the MPEG video stream in the compressed domain and finds the key frames of the MPEG video stream by extracting the I-frames. Experiments indicated that this method can be realized automatically on compressed MPEG video, and it lays a foundation for future video processing.
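
    For MPEG-1/2 elementary streams, picture types can be read straight from the bitstream without decoding: after the 4-byte picture start code come 10 bits of temporal_reference and then 3 bits of picture_coding_type (1 = I, 2 = P, 3 = B). The sketch below locates I-pictures this way; it assumes a raw video elementary stream rather than a program or transport stream, and it is an illustration rather than the paper's implementation.

        import sys

        PICTURE_START = b"\x00\x00\x01\x00"

        def iframe_offsets(path):
            # Scan an MPEG-1/2 video elementary stream for picture headers
            # and report byte offsets of intra-coded (I) pictures.
            data = open(path, "rb").read()
            offsets, pos = [], data.find(PICTURE_START)
            while pos != -1:
                if pos + 5 < len(data):
                    # data[pos+4]: high 8 bits of temporal_reference;
                    # data[pos+5]: 2 more bits of it, then 3 bits of type.
                    coding_type = (data[pos + 5] >> 3) & 0x07
                    if coding_type == 1:          # 1 = I-picture
                        offsets.append(pos)
                pos = data.find(PICTURE_START, pos + 4)
            return offsets

        if __name__ == "__main__":
            print(len(iframe_offsets(sys.argv[1])), "I-frames found")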

  7. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

    The algorithm presented in this paper is comprised of three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation; (2) change detection having as reference a fixed frame, an appropriately selected frame or a displaced frame; and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on colour similarity. Video object segmentation results are shown using the COST 211 data set.

  8. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    The lack of available wideband digital links as well as the complexity of implementing bandwidth-efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and therefore has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.

  9. System of video observation for electron beam welding process

    Science.gov (United States)

    Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.

    2016-04-01

    Equipment for a video observation system for the electron beam welding process was developed. The construction of the video observation system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.

  10. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. ...

  11. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

    This paper presents a new fast motion estimation (ME) algorithm targeting high-resolution digital videos, together with its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality when compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high-definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reaches a complexity reduction of more than 45 times compared to FS. The quality gains relative to DS cause an expected increase in DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed focused on high performance and low cost, targeting the processing of Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.

  12. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with the standard twin-comparison method.
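
    A bare-bones sketch of the twin-comparison core is shown below: histogram differences with a high threshold for cuts, and a low threshold plus accumulation for gradual transitions. The dominant-color-ratio term of the proposed algorithm is omitted, and the thresholds and 64-bin grayscale histograms are assumptions.

        import cv2
        import numpy as np

        def hist_diff(a, b):
            # a, b: uint8 grayscale frames; normalized L1 histogram distance.
            ha = cv2.calcHist([a], [0], None, [64], [0, 256]).ravel()
            hb = cv2.calcHist([b], [0], None, [64], [0, 256]).ravel()
            return float(np.abs(ha - hb).sum()) / a.size

        def twin_comparison(frames, t_high=0.5, t_low=0.15):
            # A difference above t_high is a cut; a run of differences above
            # t_low whose accumulated change exceeds t_high is a gradual
            # transition (fade, dissolve, wipe).
            cuts, grads, acc, start = [], [], 0.0, None
            for i in range(1, len(frames)):
                d = hist_diff(frames[i - 1], frames[i])
                if d >= t_high:
                    cuts.append(i)
                    acc, start = 0.0, None
                elif d >= t_low:
                    if start is None:
                        start = i
                    acc += d
                    if acc >= t_high:
                        grads.append((start, i))
                        acc, start = 0.0, None
                else:
                    acc, start = 0.0, None
            return cuts, grads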

  13. Toward real-time remote processing of laparoscopic video.

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B; Kwartowitz, David M

    2015-10-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery uses the images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, California). The video streams generate approximately 360 MB of data per second, demonstrating a trend toward increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We have performed image processing algorithms on a high-definition head phantom video (1920 × 1080 pixels) and transferred the video using a message passing interface. The total transfer time is around 53 ms or 19 fps. We will optimize and parallelize these algorithms to reduce the total time to 30 ms.

  14. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, there are three units in the block structure: the coding unit (CU), the prediction unit (PU), and the transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. In the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.

  15. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    slamet kurniawan fahrurozi

    2017-12-01

    The world of education is currently entering the era of media, where learning activities demand a reduction of lecture methods in favour of the use of many media. The function of instructional media can be summarized as follows: media serve as a tool to make learning more effective, to accelerate the teaching and learning process, and to improve its quality. This research aimed to develop a learning video for the basic programming material on algorithms that is appropriate as a learning resource in class X SMK, and to assess the feasibility of the developed video. The research method used was research and development, following the model of Alessi and Trollip (2001), which is divided into three stages: planning, design, and development. Data were collected through interviews, literature study, and instruments. The learning video was then validated by material experts and media experts and tested with 30 learners. The results show that a learning video consisting of 8 scenes was successfully produced for basic programming subjects. Based on the validation results, the feasibility of the learning video is 90.5% according to material experts, 95.9% according to media experts, and 84% according to users (learners). The testing results indicate that the developed videos can be used as learning resources or instructional media for basic programming subjects on algorithms.

  16. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly toward intelligent applications, and complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, image stabilization, and image enhancement into one system with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses video applications in security monitoring and related fields, making full use of the effectiveness of video surveillance and improving enterprise economic benefits.

  17. A Process Algebra Genetic Algorithm

    OpenAIRE

    Karaman, Sertac; Shima, Tal; Frazzoli, Emilio

    2011-01-01

    A genetic algorithm that utilizes process algebra for coding of solution chromosomes and for defining evolutionary based operators is presented. The algorithm is applicable to mission planning and optimization problems. As an example the high level mission planning for a cooperative group of uninhabited aerial vehicles is investigated. The mission planning problem is cast as an assignment problem, and solutions to the assignment problem are given in the form of chromosomes that are manipulate...

  18. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

    Two approaches are presented in this paper to improve the quality of digital images beyond the sensor resolution using super-resolution techniques: iterative super-resolution (ISR) and noniterative super-resolution (NISR) algorithms. The results show important improvements in image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available in the input images. These super-resolution algorithms have been implemented on a codesign video compression platform developed by Philips Research, requiring only minimal changes to the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources found in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the final impact of the sampling process on the quality of the super-resolved (SR) image, are also presented.

  19. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    … Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...

  20. Squint mode SAR processing algorithms

    Science.gov (United States)

    Chang, C. Y.; Jin, M.; Curlander, J. C.

    1989-01-01

    The unique characteristics of a spaceborne SAR (synthetic aperture radar) operating in a squint mode include large range walk and large variation in the Doppler centroid as a function of range. A pointing control technique to reduce the Doppler drift and a new processing algorithm to accommodate large range walk are presented. Simulations of the new algorithm for squint angles up to 20 deg and look angles up to 44 deg for the Earth Observing System (Eos) L-band SAR configuration demonstrate that it is capable of maintaining the resolution broadening within 20 percent and the ISLR within a fraction of a decibel of the theoretical value.

  1. A BitTorrent-Based Dynamic Bandwidth Adaptation Algorithm for Video Streaming

    Science.gov (United States)

    Hsu, Tz-Heng; Liang, You-Sheng; Chiang, Meng-Shu

    In this paper, we propose a BitTorrent-based dynamic bandwidth adaptation algorithm for video streaming. Two mechanisms to improve the original BitTorrent protocol are proposed: (1) the decoding order frame first (DOFF) frame selection algorithm and (2) the rarest I frame first (RIFF) frame selection algorithm. With the proposed algorithms, a peer can periodically check the number of downloaded frames in the buffer and then allocate the available bandwidth adaptively for video streaming. As a result, users can have a smooth video playout experience with the proposed algorithms.

  2. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  3. 3D video coding for embedded devices energy efficient algorithms and architectures

    CERN Document Server

    Zatt, Bruno; Bampi, Sergio; Henkel, Jörg

    2013-01-01

    This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices.  Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools for increasing compression efficiency at the cost of increasing computational complexity and, consequently, the energy consumption.  This book enables readers to reduce the multiview video coding energy consumption through jointly considering the algorithmic and architectural levels.  Coverage includes an introduction to 3D videos and an extensive discussion of the current state-of-the-art of 3D video coding, as well as energy-efficient algorithms for 3D video coding and energy-efficient hardware architecture for 3D video coding.     ·         Discusses challenges related to performance and power in 3D video coding for embedded devices; ·         Describes energy-efficient algorithms for reduci...

  4. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing spatial or temporal information from the available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works conceal video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are compared for both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
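
    A boundary-matching variant of block-matching concealment is sketched below (a hedged illustration, not either of the compared implementations): the ring of correctly received pixels around the lost block is matched against the previous frame, and the best-matching block is copied in. The block size, ring width, and search range are assumptions, and the lost block is assumed to lie away from the frame border.

        import numpy as np

        def conceal_block(prev, curr, y, x, bs=16, search=8, pad=4):
            # Replace a lost bs x bs block in `curr` at (y, x) with the block
            # in `prev` whose surrounding ring of pixels best matches the
            # intact pixels around the loss, searching +/- `search` pixels.
            top, left = y - pad, x - pad
            ring = curr[top:y + bs + pad, left:x + bs + pad].copy()
            mask = np.ones(ring.shape, dtype=bool)
            mask[pad:pad + bs, pad:pad + bs] = False   # ignore lost pixels
            best, best_sad = (y, x), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = prev[top + dy:top + dy + ring.shape[0],
                                left + dx:left + dx + ring.shape[1]]
                    if cand.shape != ring.shape:
                        continue                        # outside the frame
                    sad = np.abs(cand[mask].astype(np.int32)
                                 - ring[mask].astype(np.int32)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (y + dy, x + dx)
            by, bx = best
            curr[y:y + bs, x:x + bs] = prev[by:by + bs, bx:bx + bs]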

  5. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (down to 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BD-PSNR is observed for 18 sequences when the target complexity is around 40%.

  6. TRAFFIC SIGN RECOGNATION WITH VIDEO PROCESSING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Musa AYDIN

    2013-01-01

    In this study, we aim to recognize and identify traffic signs in video images taken by a video camera. To accomplish this, a traffic sign recognition program has been developed in the MATLAB/Simulink environment. The target traffic signs are recognized in the video image with the developed program.

  7. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  8. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences, and it is invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 seconds on a 3.2 GHz P4 machine.
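
    The first two stages (background subtraction followed by a skin-colour coarse face region) can be sketched as follows. The YCrCb skin range and the difference threshold are common heuristics assumed here rather than taken from the paper, and the dynamic template matching stage is not shown.

        import cv2
        import numpy as np

        def coarse_face_region(frame, background):
            # Subtract a static background image, then keep foreground
            # pixels that fall in a fixed YCrCb skin-colour range.
            fg = cv2.absdiff(frame, background).max(axis=2) > 25
            ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
            cr, cb = ycrcb[..., 1], ycrcb[..., 2]
            skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
            mask = (fg & skin).astype(np.uint8) * 255
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                    np.ones((5, 5), np.uint8))
            x, y, w, h = cv2.boundingRect(mask)   # coarse face bounding box
            return (x, y, w, h), mask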

  9. Algorithm-Architecture Matching for Signal and Image Processing

    CERN Document Server

    Gogniat, Guy; Morawiec, Adam; Erdogan, Ahmet

    2011-01-01

    Advances in signal and image processing together with increasing computing power are bringing mobile technology closer to applications in a variety of domains like automotive, health, telecommunication, multimedia, entertainment and many others. The development of these leading applications, involving a large diversity of algorithms (e.g. signal, image, video, 3D, communication, cryptography) is classically divided into three consecutive steps: a theoretical study of the algorithms, a study of the target architecture, and finally the implementation. Such a linear design flow is reaching its li

  10. A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

    Science.gov (United States)

    Kim, Jong-Nam

    In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as that of the full search algorithm. In the proposed algorithm, we reduce unnecessary computations efficiently by calculating the initial matching error point from the first 1/N partial errors, which increases the probability of hitting the minimum error point as early as possible. Our algorithm decreases the computational load by about 20% compared with the conventional PDE algorithm without any degradation of prediction quality. It should be useful in real-time video coding applications using the MPEG-2/4 AVC standards.
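
    The PDE (partial distortion elimination) core is the early abort inside the SAD computation; a minimal sketch follows. The block layout, the value of N, and the row-band partial ordering are assumptions, and the adaptive scan and search of the proposed method are not reproduced. A caller keeps the running best SAD and passes it in; a None return means the candidate was rejected early.

        import numpy as np

        def pde_sad(block, cand, best_sad, parts=8):
            # Accumulate the SAD over 1/N row bands and abort as soon as the
            # running sum already exceeds the best SAD found so far, skipping
            # the remaining rows entirely.
            rows = np.array_split(np.arange(block.shape[0]), parts)
            sad = 0
            for r in rows:
                sad += np.abs(block[r].astype(np.int32)
                              - cand[r].astype(np.int32)).sum()
                if sad >= best_sad:
                    return None   # early termination: cannot beat the best
            return sad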

  11. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Science.gov (United States)

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) is provided for each frame. A series of experiments were conducted to evaluate BS algorithms on this dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess each algorithm's ability to handle the different BS challenges represented in the dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video but apply to background subtraction in general. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  12. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Directory of Open Access Journals (Sweden)

    Guangle Yao

    2017-08-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) is provided for each frame. A series of experiments were conducted to evaluate BS algorithms on this dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess each algorithm's ability to handle the different BS challenges represented in the dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video but apply to background subtraction in general. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.
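
    For reference, running one of the commonly evaluated BS algorithms over a sequence takes only a few lines with OpenCV; a minimal sketch (the input path is a placeholder, and the parameters are OpenCV defaults rather than values from the paper):

        import cv2

        cap = cv2.VideoCapture("remote_scene_ir.avi")    # placeholder path
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                        varThreshold=16,
                                                        detectShadows=False)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg_mask = subtractor.apply(frame)    # 0 = background, 255 = foreground
            cv2.imshow("foreground", fg_mask)
            if cv2.waitKey(1) == 27:             # Esc quits
                break
        cap.release()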

  13. Designing with video focusing the user-centred design process

    CERN Document Server

    Ylirisku, Salu Pekka

    2007-01-01

    Digital video for user-centered co-design is an emerging field of design, gaining increasing interest in both industry and academia. It merges the techniques and approaches of design ethnography, participatory design, interaction analysis, scenario-based design, and usability studies. This book covers the complete user-centered design project. It illustrates in detail how digital video can be utilized throughout the design process, from early user studies to making sense of video content and envisioning the future with video scenarios to provoking change with video artifacts. The text includes

  14. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To deal effectively with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem. The proposed algorithm has been tested in several cases of motion and degradation, and is compared with a Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate its effectiveness.
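
    The role of the TV prior is easiest to see in the simplest setting: minimizing 0.5*||u - f||^2 + lambda*TV(u) for a single degraded frame f by gradient descent on a smoothed TV term. A minimal numpy sketch of that building block (an illustration of TV regularization only, not the paper's fixed-point/preconditioned solver for the full SR model):

        import numpy as np

        def tv_denoise(f, lam=0.1, step=0.2, iters=100, eps=1e-3):
            """Gradient descent on 0.5*||u-f||^2 + lam*sum(sqrt(|grad u|^2 + eps^2))."""
            u = f.astype(np.float64).copy()
            for _ in range(iters):
                ux = np.roll(u, -1, axis=1) - u        # forward differences
                uy = np.roll(u, -1, axis=0) - u
                mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
                px, py = ux / mag, uy / mag
                # Divergence of (px, py) via backward differences.
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - f) - lam * div)
            return u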

  15. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

    In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea by investigating the relative performance of different general game-playing algorithms. Seven game-playing algorithms were used to play several hand-designed, mutated and randomly generated VGDL game descriptions. The results discussed appear to support the conjecture that well-designed games have, on average, a higher performance difference between better and worse game-playing algorithms.

  16. Heterogeneous architecture to process swarm optimization algorithms

    Directory of Open Access Journals (Sweden)

    Maria A. Dávila-Guzmán

    2014-01-01

    In recent years, parallel processing has become commonplace in personal computers through co-processing units such as graphics processing units, resulting in heterogeneous platforms. This paper presents the implementation of swarm algorithms on such a platform to solve several functions from optimization problems, exploiting the algorithms' inherent parallelism and distributed control. In swarm algorithms, each individual and each problem dimension are parallelized at the granularity of the processing system, which also offers low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. Performance is measured as the speedup of the NVIDIA GeForce GTX480 heterogeneous platform over a typical sequential processing platform; the particle swarm algorithm achieved speedups of up to 36.82x and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated; both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of the individuals can converge to a local optimum.
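
    As a point of reference for the sequential baseline, a particle swarm optimizer fits in a few lines of numpy; a minimal sketch on the sphere test function (the inertia and acceleration coefficients are common textbook defaults, not the paper's settings):

        import numpy as np

        def pso(f, dim=10, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(-5, 5, (n_particles, dim))     # positions
            v = np.zeros_like(x)                           # velocities
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            g = pbest[pbest_val.argmin()].copy()           # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x += v
                vals = np.apply_along_axis(f, 1, x)
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g, f(g)

        best_x, best_val = pso(lambda z: np.sum(z ** 2))   # sphere function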

  17. Hardware implementation of machine vision systems: image and video processing

    Science.gov (United States)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution surveys the topics covered by the special issue titled 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

  18. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at real-time background identification on High Definition (HD, 1920 × 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and cannot meet real-time constraints on a general-purpose CPU. In this paper, the equations of the OpenCV GMM algorithm are optimized to obtain a lightweight, low-power implementation. The reported performance also results from the use of state-of-the-art truncated binary multipliers and ROM compression techniques for implementing the non-linear functions. The first circuit targets commercial FPGA devices and achieves speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.

  19. A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding

    Science.gov (United States)

    Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai

    2005-07-01

    3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to transmit first the most important information, i.e. that which yields the largest distortion reduction, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant bits within the same bit-plane, however, these algorithms neglect that coefficients in different subbands contribute differently to distortion. In this paper, we analyze the distortion contribution of significant bits of the same bit-plane in different subbands and propose a highly efficient significant-coefficient scanning algorithm. Experimental results with 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm improves the compression ability of traditional 3-D EWVC algorithms, yielding reconstructed videos with higher PSNR and better visual quality at the same bit rate than the original scanning orders.

  20. Delta modulation. [overshoot suppression algorithm for video data transmission

    Science.gov (United States)

    Schilling, D. L.

    1973-01-01

    The overshoot suppression algorithm has been more extensively studied. Computer generated test-pictures show a radical improvement due to the overshoot suppression algorithm. Considering the delta modulator link as a nonlinear digital filter, a formula that relates the minimum rise time that can be handled for given filter parameters and voltage swings has been developed. The settling time has been calculated for the case of overshoot suppression as well as when no suppression is employed. The results indicate a significant decrease in settling time when overshoot suppression is used. An algorithm for correcting channel errors has been developed. It is shown that pulse stuffing PCM words in the DM bit stream results in a significant reduction in error length.
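
    The underlying delta modulator is compact enough to show in full: a one-bit quantizer tracks the input with a fixed step, and overshoot arises wherever the staircase overruns a sharp edge. A minimal numpy sketch of plain linear delta modulation (without the overshoot suppression studied in the report):

        import numpy as np

        def delta_modulate(signal, step=0.1):
            """One-bit delta modulation: emit +1/-1 as the integrator tracks the input."""
            bits = np.empty(len(signal), dtype=np.int8)
            track = np.empty(len(signal))
            estimate = 0.0
            for i, s in enumerate(signal):
                bits[i] = 1 if s >= estimate else -1
                estimate += bits[i] * step    # overshoot appears here at sharp edges
                track[i] = estimate
            return bits, track

        def delta_demodulate(bits, step=0.1):
            return np.cumsum(bits.astype(np.float64)) * step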

  1. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01

    [Report text not fully recovered. The surviving fragments mention testing by AFRL of the June 22, 2011 software version delivered by Black River Systems, and cite work on parallelism in algorithms and architectures as well as the COCOA system for tracking in aerial imagery.]

  2. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in this paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed targeting commercial FPGA devices and achieves speed and logic resource occupation that surpass previously proposed implementations. The circuit, when implemented on Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.

  3. Optimization of image processing algorithms on mobile platforms

    Science.gov (United States)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms, and the system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of an asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems, which pairs an ARM Cortex-A8 with a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in template matching tasks such as face recognition. The algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE; the DSP, however, is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images, and performance results are presented measuring the speedup obtained by the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks while the DSP addresses performance-hungry algorithms.
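
    The correlation kernel being benchmarked corresponds to what OpenCV exposes as normalized template matching; a minimal host-side sketch (the image file names are placeholders):

        import cv2

        scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)       # placeholder files
        template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

        # Normalized cross-correlation surface; the peak marks the best match.
        response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(response)
        print("best match at", max_loc, "score", max_val)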

  4. Compressive Video Acquisition, Fusion and Processing

    Science.gov (United States)

    2010-12-14

    [Report text only partially recovered.] The approach exploits the fact that even though each measurement function φm is testing a different 2D image slice, the image slices are often related within the space-time cube. Temporal bandwidth is related to the spatial resolution of the camera and the speed of objects in the scene, and the findings are applied to processing performed directly on the compressive measurements without requiring a potentially expensive video reconstruction.

  5. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy.

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md Shamin; Wahid, Khan A

    2015-04-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4-7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial.
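
    The colourisation step can be sketched compactly: keep the chroma planes of the most recent colour keyframe and attach them to the luma of each grey-scale frame. A minimal OpenCV sketch of that idea (a plain chroma-transfer illustration, not the paper's dictionary-based scheme):

        import cv2

        def colourise(gray_frame, colour_keyframe):
            """Reuse the Cr/Cb chroma of the keyframe with the luma of the grey frame."""
            ycrcb = cv2.cvtColor(colour_keyframe, cv2.COLOR_BGR2YCrCb)
            ycrcb[:, :, 0] = gray_frame        # replace luma, keep keyframe chroma
            return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)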

  6. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a general framework for region-based active contours, with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can easily be adapted to various applications thanks to the introduction of functions, named descriptors, of the different regions. With this Eulerian method, based on shape optimization principles, we can easily handle descriptors that depend on features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information: the active contour evolves successively with three sets of descriptors, a temporal one and then two spatial ones. The third, spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  7. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405

  8. HTA – Algorithm or Process?

    Science.gov (United States)

    Culyer, Anthony J.

    2016-01-01

    Daniels, Porteny and Urrutia et al make a good case for the idea that public decisions ought to be made not only "in the light of" evidence but also "on the basis of" budget impact, financial protection and equity. Health technology assessment (HTA) should, they say, accordingly be expanded to consider matters additional to safety and cost-effectiveness. They also complain that most HTA reports fail to develop ethical arguments and generally do not even mention ethical issues. This comment argues that some of these defects are more apparent than real and are not inherent in HTA, as distinct from being common characteristics of poorly conducted HTAs. More generally, HTA does not need "extension", since (1) ethical issues are already embedded in HTA processes, not least in their scoping phases, and (2) HTA processes are already sufficiently flexible to accommodate evidence about a wide range of factors, and will not need fundamental change in order to accommodate the new forms of decision-relevant evidence about distributional impact and financial protection that are now starting to emerge. HTA and related techniques are there to support decision-makers who have authority to make decisions; analysts like us are there to support and advise them, not to assume the responsibilities for which they, and not we, are accountable. The required quality in HTA then becomes its effectiveness as a means of addressing the issues of concern to decision-makers. What is also required is adherence by competent analysts to a standard template of good analytical practice. The competencies include not merely those of the usual disciplines (particularly biostatistics, cognitive psychology, health economics, epidemiology, and ethics) but also the imaginative and interpersonal skills for exploring the "real" question behind the decision-maker's brief (actual or postulated) and eliciting the social values that necessarily pervade the entire analysis. The product of such

  9. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Yan Feng

    2014-08-01

    Background subtraction techniques are the basis for moving target detection and tracking in video surveillance, and robust, reliable detection and tracking algorithms for complex environments remain a challenging subject, so evaluations of the various background subtraction algorithms are of great significance. Nine state-of-the-art methods, ranging from simple to sophisticated, are discussed. The algorithms were implemented and tested on different videos with ground truth, covering baseline, dynamic background, camera jitter, intermittent object motion, and shadow scenarios. The background modeling methods best suited to each scenario are identified by comprehensive analysis of three measures: recall, precision and F-Measure, which facilitates more accurate target detection and tracking.
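
    The three measures reduce to simple counts over the binary foreground masks; a minimal numpy sketch:

        import numpy as np

        def bs_scores(pred_mask, gt_mask):
            """Recall, precision and F-Measure for boolean foreground masks."""
            pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
            tp = np.logical_and(pred, gt).sum()
            fp = np.logical_and(pred, ~gt).sum()
            fn = np.logical_and(~pred, gt).sum()
            recall = tp / (tp + fn) if tp + fn else 0.0
            precision = tp / (tp + fp) if tp + fp else 0.0
            f_measure = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
            return recall, precision, f_measure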

  10. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  11. Food processing optimization using evolutionary algorithms | Enitan ...

    African Journals Online (AJOL)

    Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the ...

  12. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

    Moving object detection and tracking is an active research direction in computer vision and image processing. Building on an analysis of commonly used moving target detection and tracking algorithms, this paper focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often deform during movement and may become occluded. The surge of media data makes fast search and query difficult, yet most users want to quickly extract the content and implicit knowledge (concepts, rules, patterns and correlations) of interest from multimedia data, exploit them for fast retrieval and query, and obtain hierarchical decision support for problem solving. Taking the moving objects in sports video as the object of study, the paper conducts systematic research at the theoretical level and in the technical framework, mining layer by layer from low-level motion features up to high-level video semantics; this not only helps users find information quickly, but can also provide decision support for solving their problems.

  13. Guided filtering for solar image/video processing

    Science.gov (United States)

    Xu, Long; Yan, Yihua; Cheng, Jun

    2017-06-01

    A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily pick out the important fine structures embedded in the recorded images/movies of solar observations. The proposed algorithm efficiently removes image noise, including Gaussian and impulse noise, while further highlighting fibrous structures on and beyond the solar disk. These fibrous structures can clearly reveal the progress of solar flares, prominence/coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm enhances the visual quality of solar images significantly beyond the original input and several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
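
    The guided filter at the core of such a method is short to implement with box filters, following the standard He-Sun-Tang formulation; a minimal grey-scale numpy/OpenCV sketch, with the radius and regularisation chosen illustratively rather than taken from the paper:

        import cv2
        import numpy as np

        def guided_filter(guide, src, radius=8, eps=1e-3):
            """Edge-preserving smoothing of src, steered by guide (both float in [0, 1])."""
            ksize = (2 * radius + 1, 2 * radius + 1)
            box = lambda img: cv2.blur(img, ksize)       # mean filter
            mean_g, mean_s = box(guide), box(src)
            var_g = box(guide * guide) - mean_g * mean_g
            cov_gs = box(guide * src) - mean_g * mean_s
            a = cov_gs / (var_g + eps)                   # local linear coefficients
            b = mean_s - a * mean_g
            return box(a) * guide + box(b)               # averaged coefficients applied to guide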

  14. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tied to the other teaching material) into a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  15. General simulation algorithm for autocorrelated binary processes

    Science.gov (United States)

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.

  16. A New Algorithm of Rain (Snow) Removal in Video

    OpenAIRE

    Chen Zhen; Shen Jihong

    2013-01-01

    Images acquired by an outdoor vision system in rain or snow have low contrast and are blurred, which can cause serious degradation. Traditional rain (snow) removal methods are restricted by the precipitation intensity, so their effect is not ideal. Exploiting the fact that the vision system acquires multiple differently degraded images within a short time, the paper processes multiple images to achieve restoration. Snow and rain have the dynamic characteristic that the direction, intensity and shape o...
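
    A classic multi-frame baseline for this kind of restoration is a per-pixel temporal median over a short window of aligned frames, since streaks rarely persist at the same pixel from frame to frame; a minimal sketch (a generic baseline, not necessarily the paper's method):

        import numpy as np

        def temporal_median(frames):
            """Per-pixel median over a list of aligned, same-sized frames."""
            stack = np.stack(frames, axis=0)     # (num_frames, H, W[, C])
            return np.median(stack, axis=0).astype(frames[0].dtype)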

  17. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    Science.gov (United States)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost-competitive for broadcast-quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast-quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
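
    DPCM itself is compact: transmit the quantized difference between each sample and a prediction (here simply the previous reconstructed sample), and let the decoder integrate the differences back. A minimal 1-D sketch with a uniform quantizer (an illustration of the principle; the codec in the paper uses a more elaborate predictor and quantizer):

        import numpy as np

        def dpcm_encode(samples, q=8):
            """Quantized prediction residuals; predictor = previous reconstruction."""
            codes, recon = [], 0.0
            for s in samples:
                code = int(round((s - recon) / q))   # quantized prediction error
                codes.append(code)
                recon += code * q                    # mirror the decoder exactly
            return codes

        def dpcm_decode(codes, q=8):
            return np.cumsum(np.asarray(codes, dtype=np.float64) * q)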

  18. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1 In that work, the authors analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.

  19. Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Viéron

    2004-03-01

    This paper introduces source-channel adaptive rate control (SARC), a new congestion control algorithm for layered video transmission in large multicast groups. In order to solve the well-known feedback implosion problem in large multicast groups, we first present a mechanism for filtering RTCP receiver reports sent from receivers to the whole session. The proposed filtering mechanism provides a classification of receivers according to a predefined similarity measure. An end-to-end source and FEC rate control based on this distributed feedback aggregation mechanism, coupled with a layered video coding system, is then described. The number of layers, their rates, and their levels of protection are adapted dynamically to the aggregated feedback. The algorithms have been validated with the NS2 network simulator.

  20. Algorithmic Design Tools in Design Process

    Directory of Open Access Journals (Sweden)

    Daryanto Daryanto

    2011-06-01

    This article explores algorithmic design methods in a design process that uses natural phenomena as the basis of its architectural morphology. It implements digital morphogenesis in response to ecology and the influential forces of the building environment. The paper is divided into two equally important sections: the process description and the project implementation. The description of the process demonstrates the methods used and the ideas involved in incorporating nature's influential elements into the creative task, while the project implementation shows a practical case of the outcome of that process. Tools for visualizing and simulating nature's environment are demonstrated using algorithmic design methods. The tools create transformations in NURBS-based surfaces through the translation of their respective control-point matrices, and generate several different alternatives to be tested and analyzed.

  1. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on moving target detection and tracking algorithms for aerial video monitoring. The study includes moving target detection, behavioural analysis of moving targets, and automatic target tracking. For moving target detection, the paper weighs the characteristics of background subtraction and frame differencing and uses a background reconstruction method to locate moving targets accurately. For behavioural analysis, the binary detection area is examined in MATLAB to determine whether a moving object is intruding and, if so, the direction of intrusion. For automatic tracking, a video tracking algorithm that predicts object centroids using Kalman filtering is proposed.
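
    The centroid prediction stage can be sketched with a textbook constant-velocity Kalman filter; a minimal numpy sketch (the process and measurement noise settings are illustrative assumptions):

        import numpy as np

        class CentroidKalman:
            """Constant-velocity Kalman filter over the state [x, y, vx, vy]."""
            def __init__(self, x0, y0, dt=1.0):
                self.x = np.array([x0, y0, 0.0, 0.0])
                self.P = np.eye(4) * 10.0
                self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                                   [0, 0, 1, 0], [0, 0, 0, 1]], float)
                self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
                self.Q = np.eye(4) * 0.01     # process noise (illustrative)
                self.R = np.eye(2) * 1.0      # measurement noise (illustrative)

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                return self.x[:2]             # predicted centroid

            def update(self, zx, zy):
                y = np.array([zx, zy]) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P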

  2. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse plus Gaussian) with edge and fine-detail preservation in images and videos. The algorithm includes the detection of corrupted pixels and the estimation of values to replace them. Its main advantage is that an appropriate filter is chosen for replacing each corrupted pixel based on the estimated noise variance in the filtering window, which reduces blurring and preserves fine detail better, even at high mixed-noise densities. Both spatial and temporal filtering are performed over the filter window for noise removal in videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which improves on other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.
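
    The detection-then-replacement structure is easy to prototype for salt-and-pepper noise: flag extreme-valued pixels and replace only those with a local median, leaving uncorrupted pixels and fine detail untouched. A minimal single-frame sketch (the paper's algorithm additionally adapts the filter to the estimated noise variance and filters temporally):

        import numpy as np
        from scipy.ndimage import median_filter

        def decision_based_median(img):
            """Replace only suspected impulse pixels (0 or 255) with a 3x3 median."""
            med = median_filter(img, size=3)
            corrupted = (img == 0) | (img == 255)   # crude impulse detector
            out = img.copy()
            out[corrupted] = med[corrupted]
            return out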

  3. Algorithmic errors. Cognitive processes and educational actions

    Directory of Open Access Journals (Sweden)

    Ana B. SÁNCHEZ GARCÍA

    2013-11-01

    In this paper we define the cognitive space of subtraction and place emphasis on procedural control and on the processes that the educational framework needs to improve for proper acquisition. We describe the theory behind error acquisition, considering the analysis of negative transfer processes induced by the educational context. The analysis lies at the intersection between educational theory and cognitive theories of algorithmic learning.

  4. Regularized algorithm for Raman lidar data processing.

    Science.gov (United States)

    Shcherbakov, Valery

    2007-08-01

    A regularized algorithm that has the potential to improve the quality of Raman lidar data processing is presented. Compared with the conventional scheme, the proposed algorithm has the advantage of being based on a well-posed procedure: the profile of the aerosol backscatter coefficient is computed directly, using explicit relationships, without numerical differentiation. Thereafter, the profile of the lidar ratio is retrieved as a regularized solution of a first-kind Volterra integral equation. Once these two steps have been completed, the profile of the aerosol extinction coefficient is computed by a straightforward multiplication. Numerical simulations demonstrated that the proposed algorithm provides good accuracy and resolution of aerosol profile retrievals, and the error analysis showed that the retrieved profiles are continuous functions of the measurement errors and of the a priori information uncertainties.

  5. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.

  6. Automated processing of massive audio/video content using FFmpeg

    Directory of Open Access Journals (Sweden)

    Kia Siang Hock

    2014-01-01

    Audio and video content forms an integral, important and expanding part of the digital collections in libraries and archives world-wide. While these memory institutions are familiar and well-versed in the management of more conventional materials such as books, periodicals, ephemera and images, the handling of audio (e.g., oral history recordings) and video content (e.g., audio-visual recordings, broadcast content) requires additional toolkits. In particular, a robust and comprehensive tool that provides a programmable interface is indispensable when dealing with tens of thousands of hours of audio and video content. FFmpeg is comprehensive and well-established open source software that is capable of the full range of audio/video processing tasks (such as encode, decode, transcode, mux, demux, stream and filter). It is also capable of handling a wide range of audio and video formats, a unique challenge in memory institutions. It comes with a command-line interface, as well as a set of developer libraries that can be incorporated into applications.
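
    For batch workflows of this scale, the FFmpeg command line is typically driven from a script; a minimal sketch that transcodes every file in a folder to an H.264/AAC MP4 access copy (the folder names and codec settings are illustrative):

        import pathlib
        import subprocess

        src_dir = pathlib.Path("masters")          # placeholder folders
        dst_dir = pathlib.Path("access_copies")
        dst_dir.mkdir(exist_ok=True)

        for src in src_dir.glob("*.mov"):
            dst = dst_dir / (src.stem + ".mp4")
            subprocess.run(
                ["ffmpeg", "-y", "-i", str(src),
                 "-c:v", "libx264", "-crf", "23",  # H.264 video at a default-quality CRF
                 "-c:a", "aac", str(dst)],
                check=True)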

  7. Spatial data processing for the purpose of video games

    Directory of Open Access Journals (Sweden)

    Chądzyńska Dominika

    2016-03-01

    Advanced terrain models are now commonly used in many video/computer games, and professional GIS technologies, existing spatial datasets and cartographic methodology are increasingly used in their development, allowing a realistic model of the world to be achieved. At the same time, so-called game engines are highly capable of spatial data visualization. Preparing terrain models for video games benefits from the knowledge and experience of GIS specialists and cartographers, although it is also accessible to non-professionals. The authors point out the prevalence and variety of terrain models in video games and the existence of a range of ready-made, advanced tools and procedures for creating them. Finally, the authors describe an experiment in data modeling for the “Condor Soar Simulator”.

  8. A comparison between optimisation algorithms for metal forming processes

    NARCIS (Netherlands)

    Bonte, M.H.A.; Do, D.T.D.; Fourment, L.; van den Boogaard, Antonius H.; Huetink, Han; Juster, N.; Rosochowski, A.

    2006-01-01

    Coupling optimisation algorithms to Finite Element (FEM) simulations is a very promising way to achieve optimal metal forming processes. However, many optimisation algorithms exist and it is not clear which of these algorithms to use. This paper compares an efficient Metamodel Assisted

  9. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

    High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra-high-definition videos. One of the important features of HEVC is the adoption of a quad-tree-based coding structure, in which each incoming frame is represented as a set of non-overlapping coding tree blocks (CTBs) processed by variable-block-size prediction and coding. To do this, each CTB is recursively partitioned into coding units (CU), prediction units (PU) and transform units (TU) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of quad-tree partitioning for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined using simple Sobel edge detection, is used to constrain the maximum quad-tree partition depth of the CTB. With the constrained partition depth, the proposed method can substantially reduce encoding time. Experimental results with HM10.1 show an average time saving of about 13.4% with an increase in BD-rate of only 0.02%, a smaller performance degradation than comparable methods.
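
    The edge-strength measure driving the depth decision can be prototyped directly: compute the Sobel gradient magnitude over a CTB and map it to a maximum partition depth through a set of thresholds. A minimal OpenCV sketch (the thresholds are illustrative placeholders, not the paper's values):

        import cv2
        import numpy as np

        def max_partition_depth(ctb_gray, thresholds=(5.0, 15.0, 30.0)):
            """Map the mean Sobel gradient magnitude of a CTB to a max quad-tree depth."""
            gx = cv2.Sobel(ctb_gray, cv2.CV_64F, 1, 0, ksize=3)
            gy = cv2.Sobel(ctb_gray, cv2.CV_64F, 0, 1, ksize=3)
            strength = np.mean(np.hypot(gx, gy))
            depth = 0
            for t in thresholds:       # stronger edges allow deeper splits
                if strength > t:
                    depth += 1
            return depth               # 0..3, capping the CU recursion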

  10. Sign Language Video Processing for Text Detection in Hindi Language

    Directory of Open Access Journals (Sweden)

    Rashmi B Hiremath

    2016-10-01

    Sign language is a way of expressing oneself through body language, in which expressions, intentions and sentiments are conveyed by physical behaviours such as facial expressions, body posture, gestures, eye movements, touch and the use of space. Non-verbal communication exists in both animals and humans, but this article concentrates on interpreting human sign language into Hindi text. The proposed implementation uses image processing methods and artificial intelligence strategies to achieve sign video recognition. To carry out the task, it applies image processing methods such as frame-analysis-based tracking, edge detection, wavelet transform, erosion, dilation, blur elimination and noise elimination to the training videos. It also uses elliptical Fourier descriptors and SIFT for shape feature extraction, and principal component analysis for feature-set optimization and reduction. For the result analysis, the paper uses videos of different categories, such as signs for weeks, months and relations. The database of extracted outcomes is compared with the signer's input video by a trained fuzzy inference system.

  11. Providing Memory Management Abstraction for Self-Reconfigurable Video Processing Platforms

    Directory of Open Access Journals (Sweden)

    Kurt Franz Ackermann

    2009-01-01

    This paper presents a concept for an SDRAM controller targeting video processing platforms with dynamically reconfigurable processing units (RPUs). A priority-arbitration algorithm provides the required QoS and supports high-bit-rate data streaming for multiple clients. Conforming to common video data structures, the controller organizes the memory in partitions, frames, lines, and pixels. The raised level of abstraction drastically reduces the complexity of the clients' addressing logic, and the uniform interface structure facilitates instantiation in systems with various clients. Beyond the demands on SDRAM controllers for regular applications, the special demands of reconfigurable platforms also have to be satisfied; the aim of this work is to minimize the number of required bus macros, leading to relaxed place-and-route constraints and fewer critical design paths. A suitable interface protocol is presented, and fundamental implementation issues are outlined.

  12. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    This paper provides an analysis of multiplexed video signal transmission through a Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where the degradation of transmission quality starts. On the basis of this algorithm, transmissions are simulated for specific values of the fiber parameters. The analysis of the results shows how the BER and Q-factor change depending on the length of the fiber, i.e. on the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of the DWDM systems is performed in the software package OptiSystem 7.0, which is designed for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.

  13. Reinforcement Learning: Stochastic Approximation Algorithms for Markov Decision Processes

    OpenAIRE

    Krishnamurthy, Vikram

    2015-01-01

    This article presents a short and concise description of stochastic approximation algorithms in reinforcement learning of Markov decision processes. The algorithms can also be used as a suboptimal method for partially observed Markov decision processes.
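
    A canonical stochastic approximation algorithm of this kind is tabular Q-learning, whose update is a Robbins-Monro iterate towards a sampled Bellman target; a minimal sketch on a generic finite MDP (the env.reset()/env.step() interface is an assumption for illustration):

        import numpy as np

        def q_learning(env, n_states, n_actions, episodes=500,
                       alpha=0.1, gamma=0.95, eps=0.1):
            """Tabular Q-learning: stochastic approximation of the Bellman fixed point."""
            rng = np.random.default_rng(0)
            Q = np.zeros((n_states, n_actions))
            for _ in range(episodes):
                s, done = env.reset(), False         # assumed environment interface
                while not done:
                    a = (rng.integers(n_actions) if rng.random() < eps
                         else int(Q[s].argmax()))    # epsilon-greedy exploration
                    s2, r, done = env.step(a)
                    target = r + gamma * (0.0 if done else Q[s2].max())
                    Q[s, a] += alpha * (target - Q[s, a])   # Robbins-Monro step
                    s = s2
            return Q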

  14. Iterative elimination algorithm for thermal image processing

    Directory of Open Access Journals (Sweden)

    A. H. Alkali

    2014-08-01

    Segmentation is employed in everyday image processing in order to remove unwanted objects from an image. There are scenarios, however, where segmentation alone does not do the intended job automatically. In such cases, subjective means are required to eliminate the remnants, which is time consuming, especially when multiple images are involved, and is not feasible for real-time applications. The problem is compounded in thermal imaging, where foreground and background objects can have similar thermal distributions, making it impossible for straightforward segmentation to distinguish between the two. In this study, a real-time Iterative Elimination Algorithm (IEA) was developed, and it was shown to remove false foreground in thermal images where segmentation failed to do so. The algorithm was tested on thermal images segmented using inter-variance thresholding. The thermal images contained human subjects as foreground, with some background objects having thermal distributions similar to the subject. Informed consent was obtained from the subject who voluntarily took part in the study. The IEA was only tested on thermal images and failed when a false background object was connected to the foreground after segmentation.

  15. A QoE Aware Fairness Bi-level Resource Allocation Algorithm for Multiple Video Streaming in WLAN

    Directory of Open Access Journals (Sweden)

    Hu Zhou

    2015-11-01

    With the growing number of smart devices such as mobile phones and tablets, the scenario of multiple users watching video streams simultaneously in one wireless local area network (WLAN) is becoming more and more common. However, the quality of experience (QoE) and the fairness among multiple users are seriously impacted by the limited bandwidth and shared resources of the WLAN. In this paper, we propose a novel bi-level resource allocation algorithm. To maximize the total throughput of the network, the WLAN is first tuned to its optimal operating point. The wireless resource is then carefully allocated at two levels: first between the AP and uplink background-traffic users, and second among the downlink video users. The simulation results show that the proposed algorithm can guarantee QoE and fairness for all video users, with little impact on the average throughput of the background-traffic users.

  16. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    The most important operation in granular mixed fodder production is the molding process, during which the properties of the granular fodder are defined; these properties determine the production process and the final product quality. The article analyses the possibility of using a digital video camera as a smart sensor in the control system for this production process. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed, and a mathematical model of the automatic control system (ACS) using a reference (etalon) video frame as the set point was built in the MATLAB environment. As the controlled parameter of the bundle molding process, it is proposed to use the value of the specific area obtained by mathematical processing of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the fodder mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions were determined for use with changes in the specific area as the adjustable parameter. Structural and functional diagrams of the system regulating the fodder bundle molding process with digital video cameras were built and analysed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the fodder mass was considered. The mathematical model of the ACS for the bundle molding process, allowing the study of transient processes occurring in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.

  17. Framework for Processing Videos in the Presence of Spatially Varying Motion Blur

    Science.gov (United States)

    2016-02-10

    AFRL-AFOSR-JP-TR-2016-0030, "Framework for Processing Videos in the Presence of Spatially Varying Motion Blur", Ambasamudram Rajagopalan (contract FA23861314138, grant 13RSZ116_134138). [Only the report documentation page was recovered; no abstract is available. Keywords: video analysis, image processing, information technology.]

  18. Video-rate processing in tomographic phase microscopy of biological cells using CUDA.

    Science.gov (United States)

    Dardikman, Gili; Habaza, Mor; Waller, Laura; Shaked, Natan T

    2016-05-30

    We suggest a new implementation for rapid reconstruction of three-dimensional (3-D) refractive index (RI) maps of biological cells acquired by tomographic phase microscopy (TPM). The TPM computational reconstruction process is extremely time consuming, making the analysis of large data sets unreasonably slow and the real-time 3-D visualization of the results impossible. Our implementation uses new phase extraction, phase unwrapping and Fourier slice algorithms, suitable for efficient CPU or GPU implementations. The experimental setup includes an external off-axis interferometric module connected to an inverted microscope illuminated coherently. We used single cell rotation by micro-manipulation to obtain interferometric projections from 73 viewing angles over a 180° angular range. Our parallel algorithms were implemented using Nvidia's CUDA C platform, running on Nvidia's Tesla K20c GPU. This implementation yields, for the first time to our knowledge, a 3-D reconstruction rate higher than video rate of 25 frames per second for 256 × 256-pixel interferograms with 73 different projection angles (64 × 64 × 64 output). This allows us to calculate additional cellular parameters, while still processing faster than video rate. This technique is expected to find uses for real-time 3-D cell visualization and processing, while yielding fast feedback for medical diagnosis and cell sorting.

  19. Recognising safety critical events: can automatic video processing improve naturalistic data analyses?

    Science.gov (United States)

    Dozza, Marco; González, Nieves Pañeda

    2013-11-01

    New trends in research on traffic accidents include Naturalistic Driving Studies (NDS). NDS are based on large scale data collection of driver, vehicle, and environment information in real world. NDS data sets have proven to be extremely valuable for the analysis of safety critical events such as crashes and near crashes. However, finding safety critical events in NDS data is often difficult and time consuming. Safety critical events are currently identified using kinematic triggers, for instance searching for deceleration below a certain threshold signifying harsh braking. Due to the low sensitivity and specificity of this filtering procedure, manual review of video data is currently necessary to decide whether the events identified by the triggers are actually safety critical. Such reviewing procedure is based on subjective decisions, is expensive and time consuming, and often tedious for the analysts. Furthermore, since NDS data is exponentially growing over time, this reviewing procedure may not be viable anymore in the very near future. This study tested the hypothesis that automatic processing of driver video information could increase the correct classification of safety critical events from kinematic triggers in naturalistic driving data. Review of about 400 video sequences recorded from the events, collected by 100 Volvo cars in the euroFOT project, suggested that drivers' individual reaction may be the key to recognize safety critical events. In fact, whether an event is safety critical or not often depends on the individual driver. A few algorithms, able to automatically classify driver reaction from video data, have been compared. The results presented in this paper show that the state of the art subjective review procedures to identify safety critical events from NDS can benefit from automated objective video processing. In addition, this paper discusses the major challenges in making such video analysis viable for future NDS and new potential

  20. Comparison Of Processing Time Of Different Size Of Images And Video Resolutions For Object Detection Using Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Yogesh Yadav

    2017-01-01

    Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, and video surveillance. With current advances in technology and the decreasing prices of image sensors and video cameras, the resolution of captured images exceeds 1 MP, with higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operations and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of object detections, and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
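
    A minimal CPU sketch of the kind of measurement loop described above, using OpenCV's MOG2 background subtractor to count moving objects per frame; the shadow threshold, morphology kernel and minimum blob area are illustrative choices, and CUDA-enabled OpenCV builds expose a GPU variant of the same subtractor.

    ```python
    import cv2

    def count_moving_objects(video_path, min_area=500):
        """Per-frame count of moving objects via MOG2 background subtraction."""
        cap = cv2.VideoCapture(video_path)
        mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        counts = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = mog2.apply(frame)
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow label (127)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove speckle noise
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            counts.append(sum(1 for c in contours if cv2.contourArea(c) >= min_area))
        cap.release()
        return counts
    ```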

  1. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation of interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.

  2. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited-bitrate coding. In order to improve video quality, a multi-frame motion-compensated filtering algorithm is reported, based on combining multiple pictures to form a single super-resolution picture and decimation...

  3. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    The "Atmosphere-Space Interactions Monitor" (ASIM) is a payload to be mounted on one of the external platforms of the Columbus module of the International Space Station (ISS). The instruments include six video cameras, six photometers and one X-ray detector. The main scientific objective of the mission is to study transient luminous events (TLE) above severe thunderstorms: the sprites, jets and elves. Other atmospheric phenomena are also studied, including aurora, gravity waves and meteors. As part of the ASIM Phase B study, on-board processing of data from the cameras is being developed...

  4. GOP-based channel rate allocation using genetic algorithm for scalable video streaming over error-prone networks.

    Science.gov (United States)

    Fang, Tao; Chau, Lap-Pui

    2006-06-01

    In this paper, we address the problem of unequal error protection (UEP) for scalable video transmission over wireless packet-erasure channels. Unequal amounts of protection are allocated to the different frames (I- or P-frames) of a group-of-pictures (GOP), and within each frame, unequal amounts of protection are allocated to the progressive bit-stream of scalable video to provide graceful degradation of video quality as the packet loss rate varies. We use a genetic algorithm (GA) to quickly find the allocation pattern, which is hard to obtain with conventional methods such as hill climbing. Theoretical analysis and experimental results both demonstrate the advantage of the proposed algorithm.
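
    The sketch below conveys the flavour of GA-based allocation rather than the paper's exact formulation: a chromosome assigns parity packets to the frames of a GOP under a fixed budget, and fitness rewards early (I-frame) protection more than late P-frames. The loss rate, packet counts, weights and GA settings are all assumed placeholders.

    ```python
    import random
    from math import comb

    random.seed(1)
    P_LOSS, K, BUDGET = 0.1, 8, 16         # assumed loss rate, source packets/frame, parity budget
    WEIGHTS = [5.0, 3.0, 2.0, 1.5, 1.0]    # assumed importance: I-frame first, then P-frames

    def decode_prob(parity):
        """RS(K + parity, K) decodes if at most `parity` of its packets are lost."""
        n = K + parity
        return sum(comb(n, i) * P_LOSS ** i * (1 - P_LOSS) ** (n - i) for i in range(parity + 1))

    def fitness(alloc):
        ok, total = 1.0, 0.0
        for w, parity in zip(WEIGHTS, alloc):
            ok *= decode_prob(parity)      # a P-frame is only useful if its references decoded
            total += w * ok
        return total

    def repair(alloc):
        alloc = [max(0, g) for g in alloc]
        while sum(alloc) != BUDGET:        # push a child back onto the parity budget
            i = random.randrange(len(alloc))
            if sum(alloc) > BUDGET and alloc[i] > 0:
                alloc[i] -= 1
            elif sum(alloc) < BUDGET:
                alloc[i] += 1
        return alloc

    pop = [repair([random.randint(0, BUDGET) for _ in WEIGHTS]) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        parents, children = pop[:10], []
        while len(children) < 30:
            a, b = random.sample(parents, 2)
            child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
            if random.random() < 0.3:                       # mutate one gene
                child[random.randrange(len(child))] += random.choice([-1, 1])
            children.append(repair(child))
        pop = parents + children
    best = max(pop, key=fitness)
    print(best, round(fitness(best), 4))
    ```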

  5. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? What are the inherent limitations of algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  6. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    Science.gov (United States)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention, as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion, since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system's sensitivity to changes in the scenario, this study classifies scenarios into four classes according to the relationship between the movements of the camera and the object, and then applies a different strategy to each scenario type. The proposed strategies efficiently extract the depth information from the different scenarios. In addition, the depth generation method for scenarios with no motion of either the object or the camera is also suitable for single images. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  7. A comparison between optimisation algorithms for metal forming processes

    NARCIS (Netherlands)

    Bonte, M.H.A.; Do, T.T.; Fourment, L.; van den Boogaard, Antonius H.; Huetink, Han; Habbal, A.

    2006-01-01

    Coupling optimisation algorithms to Finite Element (FEM) simulations is a very promising way to achieve optimal metal forming processes. However, many optimisation algorithms exist and it is not clear which of these algorithms to use. This paper compares an efficient Metamodel Assisted Evolutionary

  8. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

    This paper proposes a shot boundary detection approach using a Genetic Algorithm and fuzzy logic. The membership functions of the fuzzy system are calibrated using the Genetic Algorithm, taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of the shot boundary detection increases with the number of iterations or generations of the GA optimization process. The proposed system is compared to recent techniques and yields better results in terms of the F1-score.
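
    A small sketch of the detection side, with hand-picked membership breakpoints standing in for the GA-tuned values from the paper; the dissimilarity measure (one minus histogram correlation) and all thresholds are assumptions.

    ```python
    import cv2
    import numpy as np

    def frame_hist(frame, bins=64):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h = cv2.calcHist([gray], [0], None, [bins], [0, 256])
        return cv2.normalize(h, h).flatten()

    def mu_cut(d, lo=0.45, hi=0.65):
        """Rising 'abrupt cut' membership; breakpoints are GA-tuned in the paper."""
        return float(np.clip((d - lo) / (hi - lo), 0.0, 1.0))

    def mu_gradual(d, a=0.20, b=0.45, c=0.65):
        """Triangular 'gradual transition' membership (hand-picked breakpoints)."""
        if d <= a or d >= c:
            return 0.0
        return (d - a) / (b - a) if d < b else (c - d) / (c - b)

    def detect_transitions(path):
        cap = cv2.VideoCapture(path)
        prev, idx, cuts, graduals = None, 0, [], []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h = frame_hist(frame)
            if prev is not None:
                d = 1.0 - cv2.compareHist(prev, h, cv2.HISTCMP_CORREL)  # dissimilarity
                if mu_cut(d) > 0.5 and mu_cut(d) >= mu_gradual(d):
                    cuts.append(idx)
                elif mu_gradual(d) > 0.5:
                    graduals.append(idx)
            prev, idx = h, idx + 1
        cap.release()
        return cuts, graduals
    ```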

  9. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    In H.264/AVC, the encoding process can occur according to one of the 13 intraframe coding modes or one of the 8 available interframe block sizes, besides the SKIP mode. In the Joint Model reference software, the choice of the best mode is performed through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even forbids its use in real-time applications. Considering this context, this work proposes a set of heuristic algorithms targeting hardware architectures that lead to an earlier selection of one encoding mode. The number of repetitions of the encoding process is reduced by 47 times, at a relatively small cost in compression performance. When compared to other works, the fast hierarchical mode decision results are markedly more satisfactory in terms of computational complexity reduction, quality, and bit rate. The proposed low-complexity mode decision architecture is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile applications with support for multimedia systems, since it yields good compression rates and image quality with a very large reduction in encoder complexity.

  10. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  11. Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers

    Directory of Open Access Journals (Sweden)

    Abdullah M. Iliyasu

    2013-07-01

    Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms are proposed to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images, and recovery of their unmarked versions on quantum computers. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n qubits and one qubit dedicated to capturing the position and colour information of every pixel, respectively, the proposed algorithms utilise the flexibility inherent in the FRQI representation in order to confine the transformations on an image to any predetermined chromatic or spatial content (or a combination of both), as dictated by the watermark embedding, authentication, or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate, and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside longstanding classical computing equivalents. The work presented here, combined with the suggested extensions, provides the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework.

  12. Video-based eyetracking methods and algorithms in head-mounted displays

    Science.gov (United States)

    Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.

    2006-05-01

    Head pose is utilized to approximate a user’s line-of-sight for real-time image rendering and interaction in most of the 3D visualization applications using head-mounted displays (HMD). The eye often reaches an object of interest before the completion of most head movements. It is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked-HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.
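
    To make the pupil-glint idea concrete, here is a deliberately simplified dark-pupil sketch: threshold for the pupil blob, take the brightest pixel as the corneal glint, and map the pupil-glint vector to screen coordinates with a calibrated polynomial. The thresholds and calibration coefficients are assumptions; a real ET-HMD pipeline is considerably more robust.

    ```python
    import cv2
    import numpy as np

    def pupil_glint_vector(eye_gray, pupil_thresh=50):
        """Pupil centroid (darkest large blob) minus glint (brightest IR reflection)."""
        _, dark = cv2.threshold(eye_gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] == 0:
            return None
        px, py = m["m10"] / m["m00"], m["m01"] / m["m00"]
        _, _, _, (gx, gy) = cv2.minMaxLoc(cv2.GaussianBlur(eye_gray, (5, 5), 0))
        return gx - px, gy - py

    def gaze_point(vec, coeffs_x, coeffs_y):
        """Second-order polynomial mapping from pupil-glint vector to screen
        coordinates; the coefficients come from a prior calibration procedure."""
        dx, dy = vec
        feats = np.array([1.0, dx, dy, dx * dy, dx * dx, dy * dy])
        return float(feats @ coeffs_x), float(feats @ coeffs_y)
    ```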

  13. QoS control strategies for high-quality video processing

    NARCIS (Netherlands)

    Wüst, C.C.; Steffens, E.F.M.; Verhaegh, W.F.J.; Bril, R.J.; Hentschel, C.

    2005-01-01

    Video processing in software is often characterized by highly fluctuating, content-dependent processing times, and a limited tolerance for deadline misses. We present an approach that allows close-to-average-case resource allocation to a single video processing task, based on asynchronous, scalable

  14. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm that evaluates surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to establish the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampled at 1,024 Hz, and filtered with a 60 Hz noise detection algorithm before being processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, on average within 20 s of the electroclinical onset of generalized tonic activity as identified by video-EEG monitoring. Only one false-positive detection occurred, during the postictal period following a GTCS; false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including
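
    The statistic named in the abstract can be sketched as follows: band-power features from the three reported pass bands feed a Hotelling's T-squared score against a baseline (non-ictal) model. The window length, feature choice and thresholding policy below are assumptions, not Brain Sentinel's actual algorithm.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 1024                                  # sampling rate reported in the study
    BANDS = [(3, 40), (130, 240), (300, 400)]  # the three reported band-pass filters

    def band_powers(window):
        """Log band power in each pass band for one sEMG window."""
        feats = []
        for lo, hi in BANDS:
            sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
            y = sosfiltfilt(sos, window)
            feats.append(np.log(np.mean(y ** 2) + 1e-12))
        return np.array(feats)

    def fit_baseline(baseline_windows):
        """Mean and inverse covariance of non-ictal feature vectors."""
        F = np.array([band_powers(w) for w in baseline_windows])
        return F.mean(axis=0), np.linalg.pinv(np.cov(F, rowvar=False))

    def hotelling_t2(window, mu, s_inv):
        d = band_powers(window) - mu
        return float(d @ s_inv @ d)

    # A GTCS candidate could then be flagged when the T-squared score stays above
    # a threshold (e.g. referenced to the maximum-voluntary-contraction
    # calibration) for several consecutive seconds of sliding windows.
    ```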

  15. Dynamic Request Routing for Online Video-on-Demand Service: A Markov Decision Process Approach

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    We investigate the request routing problem in a CDN-based Video-on-Demand system. We model the system as a controlled queueing system including a dispatcher and several edge servers, and formulate it as a Markov decision process (MDP). Since the MDP formulation suffers from the so-called "curse of dimensionality", we then develop a greedy heuristic algorithm, which is simple and can be implemented online, to approximately solve the MDP model. However, we do not know how far it deviates from the optimal solution. To address this problem, we further aggregate the state space of the original MDP model and use a bounded-parameter MDP (BMDP) to reformulate the system. This allows us to obtain a suboptimal solution with a known performance bound. The effectiveness of the two approaches is evaluated in a simulation study.
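
    A toy rendering of the greedy heuristic's spirit: each arriving request goes to the edge server with the smallest expected delay, computed as queue backlog over service rate plus a cache-miss penalty. All field names and numbers are illustrative, not taken from the paper's MDP formulation.

    ```python
    def greedy_route(servers, request):
        """Route one request to the edge server with the lowest expected delay:
        queue backlog over service rate, plus a start-up penalty on a cache miss."""
        def expected_delay(s):
            wait = s["backlog_mb"] / s["rate_mb_per_s"]
            miss = 0.0 if request["video"] in s["cache"] else s["fetch_penalty_s"]
            return wait + miss
        best = min(servers, key=expected_delay)
        best["backlog_mb"] += request["size_mb"]     # the dispatched work joins the queue
        return best["name"]

    servers = [
        {"name": "edge-1", "backlog_mb": 120.0, "rate_mb_per_s": 40.0,
         "cache": {"v1", "v2"}, "fetch_penalty_s": 2.0},
        {"name": "edge-2", "backlog_mb": 30.0, "rate_mb_per_s": 25.0,
         "cache": {"v3"}, "fetch_penalty_s": 2.0},
    ]
    print(greedy_route(servers, {"video": "v1", "size_mb": 700.0}))
    ```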

  16. Mining business process variants: Challenges, scenarios, algorithms

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    During the last years a new generation of process-aware information systems has emerged, which enables process model configurations at buildtime as well as process instance changes during runtime. Respective model adaptations result in a large number of model variants that are derived from the same

  17. Digital image processing an algorithmic approach with Matlab

    CERN Document Server

    Qidwai, Uvais

    2009-01-01

    Contents include: Introduction to Image Processing and the MATLAB Environment (Introduction; Digital Image Definitions: Theoretical Account; Image Properties; MATLAB Algorithmic Account; MATLAB Code); Image Acquisition, Types, and File I/O (Image Acquisition; Image Types and File I/O; Basics of Color Images; Other Color Spaces; Algorithmic Account; MATLAB Code); Image Arithmetic (Introduction; Operator Basics; Theoretical Treatment; Algorithmic Treatment; Coding Examples); Affine and Logical Operations, Distortions, and Noise in Images (Introduction; Affine Operations; Logical Operators; Noise in Images; Distortions in Images; Algorithmic Account ...)

  18. Digital signal processing algorithms for automatic voice recognition

    Science.gov (United States)

    Botros, Nazeih M.

    1987-11-01

    Current digital signal analysis algorithms implemented in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on the digital signal analysis, rather than the linguistic analysis, of the speech signal. Several digital signal processing algorithms are available for voice recognition, among them Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these algorithms, LPC is the most widely used: it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with fewer assumptions, but they are not widely implemented or investigated. With the recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated in order to implement them in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
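
    As an illustration of the autocorrelation method singled out above, the sketch below computes LPC coefficients with the Levinson-Durbin recursion; the frame length, order and Hamming window are conventional choices rather than specifics from this report.

    ```python
    import numpy as np

    def lpc(frame, order=10):
        """LPC by the autocorrelation method with the Levinson-Durbin recursion.
        Returns prediction coefficients a (a[0] == 1) and the residual energy."""
        x = frame * np.hamming(len(frame))
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
        a = np.zeros(order + 1)
        a[0], err = 1.0, r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err                      # reflection coefficient
            a[1:i] = a[1:i] + k * a[i - 1:0:-1]
            a[i] = k
            err *= 1.0 - k * k                  # prediction error shrinks each order
        return a, err

    # e.g. one 30 ms frame at 8 kHz: a, e = lpc(signal[0:240], order=10)
    ```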

  20. Optimization of machining processes using pattern search algorithm

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2014-04-01

    Optimization of machining processes improves not only machining efficiency and economics but also end-product quality. In recent years, alongside the traditional optimization methods, stochastic direct search optimization methods such as meta-heuristic algorithms have been increasingly applied to machining optimization problems. Their ability to deal with complex, multi-dimensional and ill-behaved optimization problems has made them the preferred optimization tool of most researchers and practitioners. This paper introduces the use of the pattern search (PS) algorithm, a deterministic direct search optimization method, for solving machining optimization problems. To analyze the applicability and performance of the PS algorithm, six case studies of machining optimization problems, both single and multi-objective, were considered. The PS algorithm was employed to determine optimal combinations of machining parameters for different machining processes such as abrasive waterjet machining, turning, turn-milling, drilling, electrical discharge machining and wire electrical discharge machining. In each case study, the optimization solutions obtained by the PS algorithm were compared with the optimization solutions that had been determined by past researchers using meta-heuristic algorithms. Analysis of the obtained optimization results indicates that the PS algorithm is very applicable to machining optimization problems, showing good competitive potential against stochastic direct search methods such as meta-heuristic algorithms. Specific features and merits of the PS algorithm are also discussed.
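
    The core of PS is easy to state; below is a minimal compass-search sketch that polls one step up and down along each axis and halves the step when no poll improves. The quadratic test function merely stands in for a machining cost model fitted from experiments.

    ```python
    import numpy as np

    def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
        """Compass-style pattern search: poll +/- step along each axis; halve the
        step whenever no poll point improves the incumbent."""
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_iter):
            improved = False
            for i in range(x.size):
                for d in (step, -step):
                    trial = x.copy()
                    trial[i] += d
                    ft = f(trial)
                    if ft < fx:                    # accept the first improving poll
                        x, fx, improved = trial, ft, True
                        break
                if improved:
                    break
            if not improved:
                step *= shrink                     # no improvement: refine the mesh
                if step < tol:
                    break
        return x, fx

    # Toy stand-in for a machining cost model (e.g. roughness vs. feed and speed):
    print(pattern_search(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0]))
    ```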

  1. Automatic processing of CERN video, audio and photo archives

    Science.gov (United States)

    Kwiatek, M.

    2008-07-01

    The digitization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers, with streaming implemented using Windows Media Services.

  2. A probabilistic algorithm to process geolocation data.

    Science.gov (United States)

    Merkel, Benjamin; Phillips, Richard A; Descamps, Sébastien; Yoccoz, Nigel G; Moe, Børge; Strøm, Hallvard

    2016-01-01

    The use of light level loggers (geolocators) to understand movements and distributions in terrestrial and marine vertebrates, particularly during the non-breeding period, has increased dramatically in recent years. However, inferring positions from light data is not straightforward, often relies on assumptions that are difficult to test, or includes an element of subjectivity. We present an intuitive framework to compute locations from twilight events collected by geolocators from different manufacturers. The procedure uses an iterative forward step selection, weighting each possible position using a set of parameters that can be specifically selected for each analysis. The approach was tested on data from two wide-ranging seabird species - black-browed albatross Thalassarche melanophris and wandering albatross Diomedea exulans - tracked at Bird Island, South Georgia, during the two most contrasting periods of the year in terms of light regimes (solstice and equinox). Using additional information on travel speed, sea surface temperature and land avoidance, our approach was considerably more accurate than the traditional threshold method (errors reduced to medians of 185 km and 145 km for solstice and equinox periods, respectively). The algorithm computes stable results with uncertainty estimates, including around the equinoxes, and does not require calibration of solar angles. Accuracy can be increased by assimilating information on travel speed and behaviour, as well as environmental data. This framework is available through the open source R package probGLS, and can be applied in a wide range of biologging studies.

  3. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames containing text, text appearing on complex backgrounds, and differing font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection, and histograms of oriented gradients, character recognition of video subtitles is implemented. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.

  4. Genetic Algorithm Optimisation of PID Controllers for a Multivariable Process

    OpenAIRE

    Wael Alharbi; Barry Gomm

    2017-01-01

    This project is about the design of PID controllers and the improvement of outputs in multivariable processes. The optimisation of PID controllers for the Shell oil process is presented in this paper, using Genetic Algorithms (GAs). GAs are used to automatically tune PID controllers according to given specifications. They use an objective function, which is specially formulated and measures the performance of the controller in terms of time-domain bounds on the responses of cl...

  5. Textual and chemical information processing: different domains but similar algorithms

    Directory of Open Access Journals (Sweden)

    Peter Willett

    2000-01-01

    This paper discusses the extent to which algorithms developed for the processing of textual databases are also applicable to the processing of chemical structure databases, and vice versa. Applications discussed include: an algorithm for distribution sorting that has been applied to the design of screening systems for rapid chemical substructure searching; the use of measures of inter-molecular structural similarity for the analysis of hypertext graphs; a genetic algorithm for calculating term weights in relevance feedback searching, used to determine whether a molecule is likely to exhibit biological activity; and the use of data fusion to combine the results of different chemical similarity searches.

  6. Bistatic sAR data processing algorithms

    CERN Document Server

    Qiu, Xiaolan; Hu, Donghui

    2013-01-01

    Synthetic Aperture Radar (SAR) is critical for remote sensing. It works day and night, in good weather or bad. Bistatic SAR is a new kind of SAR system, where the transmitter and receiver are placed on two separate platforms. Bistatic SAR is one of the most important trends in SAR development, as the technology renders SAR more flexible and safer when used in military environments. Imaging is one of the most difficult and important aspects of bistatic SAR data processing. Although traditional SAR signal processing is fully developed, bistatic SAR has a more complex system structure, so sign

  7. Parameter optimization of electrochemical machining process using black hole algorithm

    Science.gov (United States)

    Singh, Dinesh; Shukla, Rajkamal

    2017-12-01

    Advanced machining processes are significant because higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the black hole algorithm (BHA). BHA builds on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process, using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
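
    For readers unfamiliar with BHA, the sketch below shows its two moves: pull every star toward the current best solution (the black hole) and randomly re-seed any star that crosses the event horizon. A positive-valued toy objective stands in for the MRR and overcut models, and all algorithm settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def black_hole(f, bounds, n_stars=30, iters=300):
        lo, hi = np.array(bounds, float).T
        stars = rng.uniform(lo, hi, (n_stars, len(lo)))
        fit = np.array([f(s) for s in stars])
        for _ in range(iters):
            bh = int(np.argmin(fit))                   # best star is the black hole
            stars += rng.random(stars.shape) * (stars[bh] - stars)
            stars = np.clip(stars, lo, hi)
            fit = np.array([f(s) for s in stars])
            bh = int(np.argmin(fit))
            radius = fit[bh] / (fit.sum() + 1e-12)     # event horizon (positive fitness assumed)
            dist = np.linalg.norm(stars - stars[bh], axis=1)
            absorbed = (dist < radius) & (np.arange(n_stars) != bh)
            if absorbed.any():                         # absorbed stars are re-seeded randomly
                stars[absorbed] = rng.uniform(lo, hi, (int(absorbed.sum()), len(lo)))
                fit[absorbed] = [f(s) for s in stars[absorbed]]
        bh = int(np.argmin(fit))
        return stars[bh], fit[bh]

    # Toy stand-in for a machining response model (kept strictly positive):
    print(black_hole(lambda x: 1.0 + np.sum((x - 1.5) ** 2), [(-4, 4)] * 3))
    ```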

  8. Graphical Representation of Parallel Algorithmic Processes

    Science.gov (United States)

    1990-12-01

    ... other research. It accepts performance data from PICL (described below) and displays it in many ways. It provides node activity and node CPU ... translation involves performing byte-order swaps; this is required because the Intel 80386 processor stores numbers low byte first, while the Sun SPARC ... basic information regarding message passing between processes, overall communications load, communications statistics for each node, and CPU ...

  9. Comparing Simple and Advanced Video Tools as Supports for Complex Collaborative Design Processes

    Science.gov (United States)

    Zahn, Carmen; Pea, Roy; Hesse, Friedrich W.; Rosen, Joe

    2010-01-01

    Working with digital video technologies, particularly advanced video tools with editing capabilities, offers new prospects for meaningful learning through design. However, it is also possible that the additional complexity of such tools does "not" advance learning. In an experiment, we compared the design processes and learning outcomes…

  10. Increasing Speed of Processing With Action Video Games

    Science.gov (United States)

    Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne

    2010-01-01

    In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453

  12. Video surveillance of epilepsy patients using color image processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Vilic, Adnan

    2014-01-01

    This paper introduces a method for tracking patients under video surveillance based on a color marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes e.g. occlusions, very fast movements, lighting issues and other mov...

  13. Video Surveillance of Epilepsy Patients using Color Image Processing

    DEFF Research Database (Denmark)

    Bager, Gitte; Vilic, Kenan; Alving, Jørgen

    2007-01-01

    This report introduces a method for tracking patients under video surveillance based on a marker system. The patients are not restricted in their movements, which requires a tracking system that can overcome non-ideal scenes, e.g. occlusions, very fast movements, lighting issues and other moving...

  14. Research and implementation of video image acquisition and processing based on Java and JMF

    Science.gov (United States)

    Qin, Jinlei; Li, Zheng; Niu, Yuguang

    2012-01-01

    The article puts forward a method for video image acquisition and processing, and a system based on the Java Media Framework (JMF) has been implemented with it. The method can be realised in both B/S and C/S modes, taking advantage of the strengths of the Java language. Some key issues, such as locating the video data source, playing video, and video image acquisition and processing, are expounded in detail. The operation of the system shows that this method is fully compatible with common video capture devices. At the same time, the system has many advantages: lower cost, more power, easier development, cross-platform operation, etc. Finally, the application prospects of the method based on Java and JMF are pointed out.

  15. Nonlinear spectrum compression for the hearing impaired via a frequency-domain processing algorithm.

    Science.gov (United States)

    Paarmann, Larry D

    2006-01-01

    In this paper, the results of both normal-hearing and profoundly hearing-impaired adults, tested with spectrum-compressed speech via the modified chirp-z algorithm, with and without visual stimuli, are reported. Ten normal-hearing adult listeners and five profoundly hearing-impaired adult listeners were asked to identify nonsense syllables presented auditorily and bimodally (audition and vision) via video tape in two conditions: lowpass filtered or unprocessed, and spectrum compressed. The lowpass-filtered and spectrum-compressed speech occupy the same spectrum width of 840 Hz; at 900 Hz and above, the attenuation is at least 60 dB. The spectrum compression is performed by means of a modified chirp-z algorithm, which is described in this paper. The testing results are significant and are reported here. While the signal processing approach is somewhat intensive, the real-time throughput delay is small. Recent advances in hardware speed suggest that realization in a hearing aid is feasible.

  16. Towards Possible Non-Extensive Thermodynamics of Algorithmic Processing — Statistical Mechanics of Insertion Sort Algorithm

    Science.gov (United States)

    Strzałka, Dominik; Grabowski, Franciszek

    Tsallis entropy, introduced in 1988, is considered to have opened new possibilities for constructing a generalized thermodynamical basis for statistical physics, expanding classical Boltzmann-Gibbs thermodynamics to nonequilibrium states. During the last two decades this q-generalized theory has been successfully applied to a considerable number of physically interesting complex phenomena. The authors would like to present a new view on the problem of algorithmic computational complexity analysis, using the example of a possible thermodynamical basis of the sorting process and its dynamical behavior. The classical approach to analyzing the resources needed for algorithmic computation is based on the assumption that the contact between the algorithm and the input data stream is a simple system, because only the worst-case time complexity is considered, in order to minimize the dependency on specific instances. Meanwhile, the article shows that this process can be governed by long-range dependencies, with a thermodynamical basis expressed by the specific shapes of probability distributions. The classical approach does not allow one to describe all properties of processes (especially the dynamical behavior of algorithms) that can appear during algorithmic processing, even if one takes average-case analysis in computational complexity into account. The importance of this problem is still neglected, especially if one realizes two important things. First, computer systems nowadays also work in an interactive mode, and a proper thermodynamical basis is needed for a better understanding of their possible behavior. Second, computers are, from a mathematical point of view, Turing machines, but in reality they have physical implementations that need energy for processing, so the problem of entropy production appears. That is why a thermodynamical analysis of the possible behavior of the simple insertion sort algorithm is given here.

  17. 78 FR 34370 - Revisions to Electric Quarterly Report Filing Process; Notice of Availability of Video Showing...

    Science.gov (United States)

    2013-06-07

    ... processes for filing EQRs allows an EQR seller and its agent to file using a web interface that generally replicates the Commission-distributed software used currently. A video showing how EQRs can be filed using...

  18. Using Digital Video Editing to Shape Novice Teachers: A Generative Process for Nurturing Professional Growth

    Science.gov (United States)

    Calandra, Brendan; Brantley-Dias, Laurie

    2010-01-01

    The authors describe the generative process for using video editing for teachers' professional development. The article provides a rationale, a theoretical framework, and a critical review of the authors' work over the past five years.

  19. Optimization of Process Design Problems Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Gujarathia

    2016-06-01

    A modified differential evolution algorithm (MDE) has been used for solving different process-related design problems, namely calculation of the NRTL and Two-Suffix Margules activity coefficient model parameters in 20 ternary extraction systems including different ionic liquids, and a reactor network design problem. The obtained results, in terms of root mean square deviations (rmsd), are satisfactory for these models, with overall values of 0.0023 and 0.0170 over 169 tie-lines for the NRTL and Two-Suffix Margules models, respectively. The results showed that the MDE algorithm produces better solutions than previous work based on a genetic algorithm (GA) for correlating liquid-liquid equilibrium (LLE) data in these systems. MDE also outperformed the DE algorithm when tested on the reactor network design problem, with respect to both convergence and speed.

  1. Loss-minimal Algorithmic Trading Based on Levy Processes

    Directory of Open Access Journals (Sweden)

    Farhad Kia

    2014-08-01

    In this paper we optimize portfolios assuming that the value of the portfolio follows a Lévy process. First we identify the parameters of the underlying Lévy process, and then portfolio optimization is performed by maximizing the probability of a positive return. The method has been tested by extensive performance analysis on Forex and S&P 500 historical time series. The proposed trading algorithm has achieved a 4.9% yearly return on average without leverage, which proves its applicability to algorithmic trading.

  2. USING GENETIC ALGORITHMS TO DESIGN ENVIRONMENTALLY FRIENDLY PROCESSES

    Science.gov (United States)

    Genetic algorithm calculations are applied to the design of chemical processes to achieve improvements in environmental and economic performance. By finding the set of Pareto (i.e., non-dominated) solutions one can see how different objectives, such as environmental and economic ...

  3. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    Science.gov (United States)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  4. Foundations of digital signal processing theory, algorithms and hardware design

    CERN Document Server

    Gaydecki, Patrick

    2005-01-01

    An excellent introductory text, this book covers the basic theoretical, algorithmic and real-time aspects of digital signal processing (DSP). Detailed information is provided on off-line, real-time and DSP programming and the reader is effortlessly guided through advanced topics such as DSP hardware design, FIR and IIR filter design and difference equation manipulation.

  5. Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms

    NARCIS (Netherlands)

    Van de Bovenkamp, R.

    2015-01-01

    Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS

  6. Image Processing Algorithms in the Secondary School Programming Education

    Science.gov (United States)

    Gerják, István

    2017-01-01

    Learning computer programming for students of the age of 14-18 is difficult and requires endurance and engagement. Being familiar with the syntax of a computer language and writing programs in it are challenges for youngsters, not to mention that understanding algorithms is also a big challenge. To help students in the learning process, teachers…

  7. Learning to diagnose using patient video case in paediatrics: perceptive and cognitive processes.

    Science.gov (United States)

    Balslev, Thomas

    2012-12-01

    Thomas Balslev, a paediatric neurologist and educational researcher, defended his thesis on 24 November 2011. The thesis included five published papers and investigated learning with authentic, brief patient video cases. Analysing a video case in a small group intensely stimulated learning processes and the sharing of knowledge. Small-group discussion followed by listening to an expert's think-aloud was a particularly effective approach to enhancing diagnostic accuracy among non-experts. In a descriptive study, expertise-related differences during analysis of patient video cases were characterized, and in a controlled study, different types of visual modelling were tested.

  8. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and the blur function using the Newton optimization method, and the estimates are then improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve the estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimate by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB can run functions written in C; the files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops; these eight loops utilize 60% of the total execution time of the entire program, and so the runtime should be
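
    The following sketch mirrors the estimate-then-denoise structure of the algorithm in plain Python rather than MATLAB/MEX: a frequency-domain Wiener step stands in for the Newton-based estimation, and wavelet soft-thresholding (via PyWavelets) stands in for the wavelet-domain MAP step with a local Laplace prior. The PSF, noise level and iteration count are assumed.

    ```python
    import numpy as np
    import pywt

    def wiener_deconv(img, psf, k=0.01):
        """Frequency-domain Wiener step (stand-in for the Newton-based estimation)."""
        H = np.fft.fft2(psf, s=img.shape)           # psf: small normalized blur kernel
        G = np.fft.fft2(img)
        return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

    def wavelet_denoise(img, sigma=0.02, wavelet="db4", level=3):
        """Soft-threshold detail coefficients (stand-in for the MAP/Laplace step)."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            out.append(tuple(pywt.threshold(d, 3 * sigma, mode="soft") for d in detail))
        return pywt.waverec2(out, wavelet)          # even-sized frames assumed

    def deblur_frame(frame, psf, iters=5):
        est = frame.astype(float)
        for _ in range(iters):                      # estimate, denoise, iterate
            est = wiener_deconv(est, psf)
            est = wavelet_denoise(est)
        return est
    ```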

  9. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting and tracking people in video. This first paper reports people segmentation in video using background subtraction, by two different approaches: frame differences, and blind signal separation based on the independent component analysis method. Results are commented on, along with perspectives for further work. (author)

  10. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    Science.gov (United States)

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, as implemented by the USGS.

  11. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2017-05-01

    Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation, based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.

  12. Visual analysis of trash bin processing on garbage trucks in low resolution video

    Science.gov (United States)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, together with meanshift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.
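
    In outline, the detection stage can look like the sketch below: one HOG descriptor per trash can size, each paired with a linear SVM trained offline on labelled truck-camera footage. The window geometries, SVM weights and score threshold are placeholders, and the meanshift tracking stage is omitted.

    ```python
    import cv2

    def make_hog(win_size):
        """HOG descriptor for one trash can size (window geometry is illustrative)."""
        return cv2.HOGDescriptor(win_size, (16, 16), (8, 8), (8, 8), 9)

    def detect_bins(frame, detectors, min_score=0.5):
        found = []
        for hog in detectors:
            rects, scores = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
            found += [tuple(r) for r, s in zip(rects, scores) if s > min_score]
        return found

    small, large = make_hog((48, 96)), make_hog((64, 128))
    # Each detector needs its offline-trained linear SVM weight vector before use:
    # small.setSVMDetector(small_bin_svm);  large.setSVMDetector(large_bin_svm)
    ```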

  13. A Selection Process for Genetic Algorithm Using Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Adam Chehouri

    2017-11-01

    This article presents a newly proposed selection process for genetic algorithms on a class of unconstrained optimization problems. The k-means genetic algorithm selection process (KGA) is composed of four essential stages: clustering, a membership phase, fitness scaling, and selection. Inspired by the hypothesis that clustering the population helps to preserve selection pressure throughout the evolution of the population, a membership probability index is assigned to each individual following the clustering phase. Fitness scaling converts the membership scores into a range suitable for the selection function, which selects the parents of the next generation. Two versions of the KGA process are presented: one using a fixed number of clusters K (KGAf) and one using an optimal partitioning Kopt (KGAo) determined by two different internal validity indices. The performance of each method is tested on seven benchmark problems.
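
    A compact sketch of the four KGA stages under stated assumptions: scikit-learn's KMeans for clustering, and a simplified membership score and min-max scaling in place of the paper's exact formulas.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def kga_select(pop, fitness, n_parents, k=4, seed=0):
        """KGA-style selection sketch: cluster, score membership, scale, select."""
        rng = np.random.default_rng(seed)
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pop)
        cluster_fit = np.array([fitness[labels == c].mean() for c in range(k)])
        score = fitness + cluster_fit[labels]     # blend individual and cluster quality
        score = (score - score.min()) / (np.ptp(score) + 1e-12)   # fitness scaling
        probs = score / score.sum()
        idx = rng.choice(len(pop), size=n_parents, replace=False, p=probs)
        return pop[idx]

    # Example with a population of 2-D genomes (maximizing a toy fitness):
    pop = np.random.default_rng(1).uniform(-5, 5, (60, 2))
    fit = -np.sum(pop ** 2, axis=1)               # higher is better
    parents = kga_select(pop, fit, n_parents=20)
    ```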

  14. Multimedia applications in nursing curriculum: the process of producing streaming videos for medication administration skills.

    Science.gov (United States)

    Sowan, Azizeh K

    2014-07-01

    Streaming videos (SVs) are commonly used multimedia applications in clinical health education. However, there are several negative aspects related to the production and delivery of SVs. Only a few published studies have included sufficient descriptions of the videos, the production process, and design innovations. This paper describes the production of innovative SVs for medication administration skills for undergraduate nursing students at a public university in Jordan, and focuses on the ethical and cultural issues in producing this type of learning resource. The curriculum development committee approved the modification of educational techniques for medication administration procedures to include SVs within an interactive web-based learning environment. The production process of the videos adhered to established principles for "protecting patients' rights when filming and recording" and included preproduction, production and postproduction phases. Medication administration skills were videotaped in a skills laboratory, where they are usually taught to students, and also in a hospital setting with real patients. The lab videos included critical points and do's and don'ts, and the hospital videos fostered real-world practices. The durations of the videos were kept reasonable to eliminate technical difficulty in access. Eight SVs were produced, covering different types of medication administration skills. The production of SVs required the collaborative efforts of experts in IT and multimedia, nursing and informatics educators, and nursing care providers. Results showed that the videos were well perceived by the students and the instructors who taught the course. The process of producing the videos in this project can be used as a valuable framework for schools considering utilizing multimedia applications in teaching. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capturing by non-professionals leads to unanticipated effects such as image distortion and image blurring; hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. Firstly, salient points in each frame of the input video are identified and processed, followed by optimization and stabilization of the video, where optimization concerns the quality of the video stabilization. This method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.
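
    A condensed sketch of the classical feature-point pipeline described above: track Shi-Tomasi corners with pyramidal Lucas-Kanade, fit a similarity transform per frame pair, low-pass the accumulated trajectory with a moving average, and re-warp each frame. The smoothing radius and feature parameters are assumptions, and frames are buffered in memory for brevity.

    ```python
    import cv2
    import numpy as np

    def stabilize(path, radius=15):
        cap, frames = cv2.VideoCapture(path), []
        while True:
            ok, f = cap.read()
            if not ok:
                break
            frames.append(f)
        cap.release()
        motion = [[0.0, 0.0, 0.0]]                   # (dx, dy, dangle) per frame pair
        for a, b in zip(frames, frames[1:]):
            ga = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
            gb = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(ga, maxCorners=200, qualityLevel=0.01, minDistance=30)
            m = None
            if pts is not None:
                nxt, st, _ = cv2.calcOpticalFlowPyrLK(ga, gb, pts, None)
                good = st.flatten() == 1
                m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
            motion.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])]
                          if m is not None else [0.0, 0.0, 0.0])
        motion = np.array(motion)
        traj = np.cumsum(motion, axis=0)             # accumulated camera trajectory
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        smooth = np.column_stack([np.convolve(t, kernel, mode="same") for t in traj.T])
        corrected = motion + smooth - traj           # motion that follows the smoothed path
        out = []
        for f, (dx, dy, da) in zip(frames, corrected):
            h, w = f.shape[:2]
            m = np.array([[np.cos(da), -np.sin(da), dx],
                          [np.sin(da),  np.cos(da), dy]])
            out.append(cv2.warpAffine(f, m, (w, h)))
        return out
    ```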

  16. VIPER: a general-purpose digital image-processing system applied to video microscopy.

    Science.gov (United States)

    Brunner, M; Ittner, W

    1988-01-01

    This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly, menu-driven and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a couple of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or by a simple call to the function-subroutine library in application programs. Examples are supplied in the area of biomedical research, e.g. in in-vivo microscopy.

  17. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    Science.gov (United States)

    Williams Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  18. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  19. Simulation of anaerobic digestion processes using stochastic algorithm.

    Science.gov (United States)

    Palanichamy, Jegathambal; Palani, Sundarambal

    2014-01-01

    The Anaerobic Digestion (AD) processes involve numerous complex biological and chemical reactions occurring simultaneously. Appropriate and efficient models are to be developed for the simulation of anaerobic digestion systems. Although several models have been developed, they mostly suffer from a lack of knowledge of constants, from complexity and from weak generalization. The basis of the deterministic approach for modelling the physico- and bio-chemical reactions occurring in the AD system is the law of mass action, which gives the simple relationship between the reaction rates and the species concentrations. The assumptions made in the deterministic models do not hold true for reactions involving chemical species of low concentration. The stochastic behaviour of the physicochemical processes can be modeled at the mesoscopic level by application of stochastic algorithms. In this paper a stochastic algorithm (the Gillespie tau-leap method) developed in MATLAB was applied to predict the concentration of glucose, acids and methane formation at different time intervals. In this way the performance of the digester system can be controlled. The processes given by ADM1 (Anaerobic Digestion Model 1) were taken for verification of the model. The proposed model was verified by comparing the results of Gillespie's algorithm with the deterministic solution for the conversion of glucose into methane through degraders. At higher values of τ (the time step), the computational time required to reach the steady state is greater, since the number of chosen reactions is smaller. When the simulation time step is reduced, the results are similar to those of the ODE solver. It was concluded that the stochastic algorithm is a suitable approach for the simulation of complex anaerobic digestion processes. The accuracy of the results depends on the optimal selection of the τ value.
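
    A minimal tau-leap sketch of the kind of simulation described, for a toy two-step chain glucose -> acids -> methane; the rate constants and step size are made up for illustration, and the real ADM1 network is far richer.

        import numpy as np

        rng = np.random.default_rng(1)
        X = np.array([1000, 0, 0])          # molecule counts: [glucose, acids, methane]
        k1, k2 = 0.05, 0.02                 # illustrative first-order rate constants
        tau, t, t_end = 0.5, 0.0, 100.0

        while t < t_end:
            a = np.array([k1 * X[0], k2 * X[1]])     # reaction propensities
            events = rng.poisson(a * tau)            # firings in [t, t + tau)
            X = X + events[0] * np.array([-1, 1, 0]) \
                  + events[1] * np.array([0, -1, 1])
            # Guard against negative counts, a standard tau-leap safeguard
            # (at the cost of exact mass balance in rare over-firing steps).
            X = np.maximum(X, 0)
            t += tau

        print(dict(zip(["glucose", "acids", "methane"], X)))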

  20. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from large computational cost. A parametric active contour method based on B-spline interpolation has previously been proposed to greatly reduce the computational cost, but that method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to make our method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving-object segmentation is preserved.
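
    The key trade-off, rigid interpolation versus a tunable amount of interpolation error, can be illustrated with SciPy's smoothing splines (a stand-in for the authors' smoothing-spline snake, not their implementation):

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        rng = np.random.default_rng(2)
        x = np.linspace(0, 2 * np.pi, 50)
        y = np.sin(x) + rng.normal(0, 0.15, x.size)    # noisy contour coordinate

        interp = UnivariateSpline(x, y, s=0)      # rigid interpolation (noise-sensitive)
        smooth = UnivariateSpline(x, y, s=0.5)    # tunable interpolation error

        # The smoothing spline trades a small fitting error (<= s) for a much
        # smoother curve; the interpolating spline has zero residual by design.
        print(interp.get_residual(), smooth.get_residual())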

  1. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

    As target tracking is attracting more and more interest, the necessity to reliably assess tracking algorithms in any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences and collecting the associated ground truth appears to be hardly possible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters, and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for low-level tracking algorithm evaluation in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for the evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborate scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.

  2. XbD Video 3, The SEEing process of qualitative data analysis

    DEFF Research Database (Denmark)

    2013-01-01

    This is the third video in the Experience-based Designing series. It presents a live classroom demonstration of a nine step qualitative data analysis process called SEEing: The process is useful for uncovering or discovering deeper layers of 'meaning' and meaning structures in an experience...

  3. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processings (F = 8.51, p < …): image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processings, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the same six pairs of

  4. Genetic Algorithm Optimisation of PID Controllers for a Multivariable Process

    Directory of Open Access Journals (Sweden)

    Wael Alharbi

    2017-03-01

    Full Text Available This project is about the design of PID controllers and the improvement of outputs in multivariable processes. The optimisation of PID controllers for the Shell oil process is presented in this paper, using Genetic Algorithms (GAs). Genetic Algorithms (GAs) are used to automatically tune PID controllers according to given specifications. They use an objective function, which is specially formulated and measures the performance of the controller in terms of time-domain bounds on the responses of the closed-loop process. A specific objective function is suggested that allows the designer, for a single-input, single-output (SISO) process, to explicitly specify the process performance specifications associated with the given problem in terms of time-domain bounds, and then experimentally evaluate the closed-loop responses. This is investigated using a simple two-term parametric PID controller tuning problem. The results are then analysed and compared with those obtained using a number of popular conventional controller tuning methods. The intention is to demonstrate that the proposed objective function is inherently capable of accurately quantifying complex performance specifications in the time domain. This is something that cannot normally be achieved with conventional controller design or tuning methods. Finally, the recommended objective function will be used to examine the control problems of Multi-Input-Multi-Output (MIMO) processes, and the results will be presented in order to determine the efficiency of the suggested control system.
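
    A toy sketch of the idea, GA-based tuning of PID gains against a time-domain objective with an explicit response bound, is shown below; the first-order plant, the gain bounds and the overshoot penalty are illustrative assumptions, not the Shell oil process model.

        import numpy as np

        rng = np.random.default_rng(3)

        def step_response_cost(gains, dt=0.01, t_end=10.0):
            kp, ki, kd = gains
            y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
            for _ in range(int(t_end / dt)):
                err = 1.0 - y                       # unit step setpoint
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                y += dt * (-y + u)                  # first-order plant dy/dt = -y + u
                prev_err = err
                cost += dt * err * err              # integral of squared error
                if y > 1.2:                         # time-domain bound: penalise overshoot
                    cost += 10.0
            return cost

        pop = rng.uniform(0, 5, size=(30, 3))       # 30 candidate (Kp, Ki, Kd) triples
        for gen in range(40):
            costs = np.array([step_response_cost(g) for g in pop])
            elite = pop[np.argsort(costs)[:10]]     # truncation selection
            children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 0.2, (20, 3))
            pop = np.vstack([elite, np.clip(children, 0, 5)])

        print("best gains:", pop[np.argmin([step_response_cost(g) for g in pop])])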

  5. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Directory of Open Access Journals (Sweden)

    Dong Hwa Kim

    2011-01-01

    Full Text Available One of the main features of sensor networks is the function that processes real-time state information after gathering needed data from many domains. The component technologies making up each node, called a sensor node, including physical sensors, processors, actuators and power, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have very little ability to process information compared to conventional computer systems. Thus query processing over the nodes must be constrained because of these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, and to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. At the same time, the simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm.

  6. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    Science.gov (United States)

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the function that processes real-time state information after gathering needed data from many domains. The component technologies making up each node, called a sensor node, including physical sensors, processors, actuators and power, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes in sensor networks are considerably constrained: with their limited energy and memory resources, they have very little ability to process information compared to conventional computer systems. Thus query processing over the nodes must be constrained because of these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) in sensor networks to reduce the overhead caused by moving a join pair to the final join node, and to minimize the communication cost that is the main consumer of the battery when processing distributed queries in sensor network environments. At the same time, the simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375

  7. A fast autofocus algorithm for synthetic aperture radar processing

    DEFF Research Database (Denmark)

    Dall, Jørgen

    1992-01-01

    High-resolution synthetic aperture radar (SAR) imaging requires the motion of the radar platform to be known very accurately. Otherwise, phase errors are induced in the processing of the raw SAR data, and bad focusing results. In particular, a constant error in the measured along-track velocity or the cross-track acceleration leads to a phase error that varies quadratically over the synthetic aperture. The process of estimating this quadratic phase error directly from the radar data is termed autofocus. A novel autofocus algorithm with a computational complexity which is at least an order...

  8. Action Video Games Do Not Improve the Speed of Information Processing in Simple Perceptual Tasks

    Science.gov (United States)

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U.; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2015-01-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks. PMID:24933517

  9. Action video games do not improve the speed of information processing in simple perceptual tasks.

    Science.gov (United States)

    van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan

    2014-10-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.

  10. A Time-Consistent Video Segmentation Algorithm Designed for Real-Time Implementation

    Directory of Open Access Journals (Sweden)

    M. El Hassani

    2008-01-01

    Temporal consistency of the segmentation is ensured by incorporating motion information through the use of an improved change-detection mask. This mask is designed using both illumination differences between frames and the region segmentation of the previous frame. By considering both the pixel and region levels, we obtain a particularly efficient algorithm at a low computational cost, allowing its implementation in real time on the TriMedia processor for CIF image sequences.
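
    A hedged sketch of such a change-detection mask is given below; the threshold, the morphology step and the way the previous frame's region segmentation is folded in are illustrative guesses at the approach, not the authors' exact design.

        import cv2
        import numpy as np

        def change_mask(prev_frame, frame, prev_regions, thresh=15):
            """prev_regions: binary (0/1) uint8 mask of moving regions
            from the previous frame's segmentation."""
            d = cv2.absdiff(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            pixel_level = (d > thresh).astype(np.uint8)      # pixel-level changes
            # Region level: keep changed pixels, plus pixels inside regions that
            # were already labelled as moving in the previous segmentation.
            mask = cv2.bitwise_or(pixel_level, prev_regions)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                    np.ones((3, 3), np.uint8))
            return mask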

  11. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    Science.gov (United States)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  12. Adaptive process control using fuzzy logic and genetic algorithms

    Science.gov (United States)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  13. PALMA, an improved algorithm for DOSY signal processing

    CERN Document Server

    Cherni, Afef; Delsuc, Marc-André

    2016-01-01

    NMR is a tool of choice for measuring diffusion coefficients of species in solution. The DOSY experiment, a 2D implementation of this measurement, has proven to be particularly useful for the study of complex mixtures, molecular interactions, polymers, etc. However, DOSY data analysis requires resorting to an inverse Laplace transform, in particular for polydisperse samples. This is a known difficult numerical task, for which we present here a novel approach. A new algorithm based on a splitting scheme and on the use of proximity operators is introduced. Used in conjunction with a hybrid Maximum Entropy and $\ell_1$ regularisation, this algorithm converges rapidly and produces results robust against experimental noise. This method has been called PALMA. It is able to reproduce faithfully monodisperse as well as polydisperse systems, and numerous simulated and experimental examples are presented. It has been implemented on the server http://palma.labo.igbmc.fr where users can have their datasets processed autom...

  14. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.

  15. Advances and applications of optimised algorithms in image processing

    CERN Document Server

    Oliva, Diego

    2017-01-01

    This book presents a study of the use of optimization algorithms in complex image processing problems. The problems selected explore areas ranging from the theory of image segmentation to the detection of complex objects in medical images. Furthermore, the concepts of machine learning and optimization are analyzed to provide an overview of the application of these tools in image processing. The material has been compiled from a teaching perspective. Accordingly, the book is primarily intended for undergraduate and postgraduate students of Science, Engineering, and Computational Mathematics, and can be used for courses on Artificial Intelligence, Advanced Image Processing, Computational Intelligence, etc. Likewise, the material can be useful for research from the evolutionary computation, artificial intelligence and image processing co.

  16. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening processed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
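
    For reference, the underlying per-pixel operation is a single 3x3 convolution, which is what makes the one-thread-per-pixel CUDA mapping natural; a CPU sketch (with a hypothetical input file name) looks like this:

        import cv2
        import numpy as np

        img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input
        # Sharpening kernel: identity minus the (negative-centre) Laplacian.
        kernel = np.array([[ 0, -1,  0],
                           [-1,  5, -1],
                           [ 0, -1,  0]], dtype=np.float32)
        sharpened = cv2.filter2D(img, -1, kernel)
        cv2.imwrite("sharpened.png", sharpened)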

  17. Video image processing to create a speed sensor

    Science.gov (United States)

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  18. Gaming to see: Action Video Gaming is associated with enhanced processing of masked stimuli

    Directory of Open Access Journals (Sweden)

    Carsten ePohl

    2014-02-01

    Full Text Available Recent research revealed that action video game players outperform non-players in a wide range of attentional, perceptual and cognitive tasks. Here we tested whether expertise in action video games is related to differences in the potential of briefly presented stimuli to bias behaviour. In a response priming paradigm, participants classified four animal pictures functioning as targets as being smaller or larger than a reference frame. Before each target, one of the same four animal pictures was presented as a masked prime to influence participants’ responses in a congruent or incongruent way. Masked primes induced congruence effects, that is, faster responses for congruent compared to incongruent conditions, indicating processing of barely visible primes. Results also suggested that action video game players showed a larger congruence effect than non-players for 20 ms primes, whereas there was no group difference for 60 ms primes. In addition, there was a tendency for action video game players to detect masked primes at some prime durations better than non-players. Thus, action video game expertise may be accompanied by faster and more efficient processing of briefly presented visual stimuli.

  19. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  20. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    OpenAIRE

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin; Khan A. Wahid

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At ...
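
    One plausible reading of the dictionary-based scheme, sketched here with numpy under stated assumptions: the first colour frame (in YCbCr) builds a luma-to-chroma lookup table, and subsequent grey-scale frames are recoloured from it. The details are guesses, as the record does not spell out the dictionary construction.

        import numpy as np

        def build_dictionary(ycbcr_frame):
            """Map each luma level to the mean chroma observed in the key frame."""
            y, cb, cr = (ycbcr_frame[..., i].ravel() for i in range(3))
            table = np.zeros((256, 2))
            for level in range(256):
                sel = y == level
                table[level] = ([cb[sel].mean(), cr[sel].mean()]
                                if sel.any() else [128, 128])   # neutral chroma
            return table

        def colourise(grey_frame, table):
            """grey_frame: uint8 array; returns a YCbCr colour frame."""
            cbcr = table[grey_frame]                 # look up chroma per pixel
            return np.dstack([grey_frame,
                              cbcr[..., 0], cbcr[..., 1]]).astype(np.uint8)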

  1. Building a medical image processing algorithm verification database

    Science.gov (United States)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans due to equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is the proof of viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  2. Selection of parameters for advanced machining processes using firefly algorithm

    Directory of Open Access Journals (Sweden)

    Rajkamal Shukla

    2017-02-01

    Full Text Available Advanced machining processes (AMPs) are widely utilized in industries for machining complex geometries and intricate profiles. In this paper, two significant processes, electric discharge machining (EDM) and abrasive water jet machining (AWJM), are considered to obtain the optimum values of responses for the given range of process parameters. The firefly algorithm (FA) is applied to the considered processes to obtain optimized parameters, and the results obtained are compared with the results given by previous researchers. The variation of process parameters with respect to the responses is plotted to confirm the optimum results obtained using FA. In the EDM process, the performance parameter “MRR” is increased from 159.70 gm/min to 181.6723 gm/min, while “Ra” and “REWR” are decreased from 6.21 μm to 3.6767 μm and 6.21% to 6.324 × 10⁻⁵% respectively. In the AWJM process, the values of the “kerf” and “Ra” are decreased from 0.858 mm to 0.3704 mm and 5.41 mm to 4.443 mm respectively. In both processes, the obtained results show a significant improvement in the responses.
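
    A generic firefly algorithm sketch (following Yang's standard formulation) is shown below; the quadratic objective is a stand-in for the paper's machining response models, which are not reproduced in the record.

        import numpy as np

        rng = np.random.default_rng(4)

        def objective(x):               # stand-in for e.g. a surface roughness model
            return np.sum((x - 0.3) ** 2)

        n, dim = 20, 2                  # fireflies, parameters
        beta0, gamma, alpha = 1.0, 1.0, 0.1
        X = rng.uniform(0, 1, (n, dim))

        for it in range(100):
            # Brightness is refreshed once per iteration for simplicity.
            f = np.array([objective(x) for x in X])
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:     # move firefly i toward brighter firefly j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
            X = np.clip(X, 0, 1)        # respect the parameter bounds

        print("best parameters:", X[np.argmin([objective(x) for x in X])])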

  3. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, which allows multiple scans of the stored video files during video processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
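
    A simple histogram-difference cut detector of the kind commonly used for key-frame extraction is sketched below; the bin count and threshold are illustrative, and this is not necessarily the exact detector used in the paper.

        import cv2
        import numpy as np

        def detect_cuts(path, bins=64, thresh=0.4):
            cap = cv2.VideoCapture(path)
            cuts, prev_hist, idx = [], None, 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                hist = cv2.calcHist([cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)],
                                    [0], None, [bins], [0, 256]).flatten()
                hist /= hist.sum()                   # normalise to a distribution
                # Half the L1 distance lies in [0, 1]; a large jump marks a cut.
                if prev_hist is not None and \
                   np.abs(hist - prev_hist).sum() / 2 > thresh:
                    cuts.append(idx)
                prev_hist, idx = hist, idx + 1
            return cuts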

  4. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  5. Effects of video-game play on information processing: a meta-analytic investigation.

    Science.gov (United States)

    Powers, Kasey L; Brooks, Patricia J; Aldrich, Naomi J; Palladino, Melissa A; Alfieri, Louis

    2013-12-01

    Do video games enhance cognitive functioning? We conducted two meta-analyses based on different research designs to investigate how video games impact information-processing skills (auditory processing, executive functions, motor skills, spatial imagery, and visual processing). Quasi-experimental studies (72 studies, 318 comparisons) compare habitual gamers with controls; true experiments (46 studies, 251 comparisons) use commercial video games in training. Using random-effects models, video games led to improved information processing in both the quasi-experimental studies, d = 0.61, 95% CI [0.50, 0.73], and the true experiments, d = 0.48, 95% CI [0.35, 0.60]. Whereas the quasi-experimental studies yielded small to large effect sizes across domains, the true experiments yielded negligible effects for executive functions, which contrasted with the small to medium effect sizes in other domains. The quasi-experimental studies appeared more susceptible to bias than were the true experiments, with larger effects being reported in higher-tier than in lower-tier journals, and larger effects reported by the most active research groups in comparison with other labs. The results are further discussed with respect to other moderators and limitations in the extant literature.

  6. The Use of Video Feedback in Teaching Process-Approach EFL Writing

    Science.gov (United States)

    Özkul, Sertaç; Ortaçtepe, Deniz

    2017-01-01

    This experimental study investigated the use of video feedback as an alternative to feedback with correction codes at an institution where the latter was commonly used for teaching process-approach English as a foreign language (EFL) writing. Over a 5-week period, the control and the experimental groups were provided with feedback based on…

  7. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.
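
    For reference, Euclid's algorithm itself, computing the greatest common divisor of two positive numbers, is short enough to state in full:

        def gcd(a: int, b: int) -> int:
            # Repeatedly replace the pair (a, b) by (b, a mod b) until b is zero.
            while b:
                a, b = b, a % b
            return a

        print(gcd(252, 105))   # prints 21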

  8. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel-based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t²) time while maintaining an O(t) memory footprint, compared to the O(t³) time and O(t²) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
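
    The Toeplitz trick mentioned in the second contribution can be sketched with SciPy's Levinson-type solver: on a regular time grid, a stationary kernel gives a Toeplitz covariance, so K⁻¹y (needed for the GP predictive mean) is obtained in O(t²) time without ever forming the t × t matrix. The kernel, length scale and data below are illustrative, not the paper's setup.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        t = 500
        times = np.arange(t, dtype=float)
        y = (np.sin(2 * np.pi * times / 46)
             + 0.1 * np.random.default_rng(5).normal(size=t))

        # First column of the stationary (squared-exponential) covariance matrix.
        ell, noise = 10.0, 0.1
        col = np.exp(-0.5 * (times / ell) ** 2)
        col[0] += noise ** 2              # observation noise on the diagonal

        alpha = solve_toeplitz(col, y)    # K^{-1} y via Levinson recursion, O(t^2)

        # Predictive mean at one test time t*: k(t*, X) @ alpha.
        t_star = 123.5
        k_star = np.exp(-0.5 * ((times - t_star) / ell) ** 2)
        print("GP mean at t*:", k_star @ alpha)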

  9. DSMC multicomponent aerosol dynamics: Sampling algorithms and aerosol processes

    Science.gov (United States)

    Palaniswaamy, Geethpriya

    The post-accident nuclear reactor primary and containment environments can be characterized by high temperatures and pressures, and fission products and nuclear aerosols. These aerosols evolve via natural transport processes as well as under the influence of engineered safety features. These aerosols can be hazardous and may pose risk to the public if released into the environment. Computations of their evolution, movement and distribution involve the study of various processes such as coagulation, deposition, condensation, etc., and are influenced by factors such as particle shape, charge, radioactivity and spatial inhomogeneity. These many factors make the numerical study of nuclear aerosol evolution computationally very complicated. The focus of this research is on the use of the Direct Simulation Monte Carlo (DSMC) technique to elucidate the role of various phenomena that influence the nuclear aerosol evolution. In this research, several aerosol processes such as coagulation, deposition, condensation, and source reinforcement are explored for a multi-component, aerosol dynamics problem in a spatially homogeneous medium. Among the various sampling algorithms explored the Metropolis sampling algorithm was found to be effective and fast. Several test problems and test cases are simulated using the DSMC technique. The DSMC results obtained are verified against the analytical and sectional results for appropriate test problems. Results show that the assumption of a single mean density is not appropriate due to the complicated effect of component densities on the aerosol processes. The methods developed and the insights gained will also be helpful in future research on the challenges associated with the description of fission product and aerosol releases.

  10. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We propose a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we model the user information-seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. We then calculate the correlation coefficient between the pulses dynamically detected at the local maxima of the user activity signal and the reference pulse. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary videos. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
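
    The pulse-matching step can be sketched in a few lines of numpy; the synthetic activity signal, pulse width and smoothing are stand-ins, since the record does not give the exact signal processing.

        import numpy as np

        rng = np.random.default_rng(6)
        activity = rng.poisson(1.0, 600).astype(float)   # Replay events per second
        activity[200:220] += 8                           # burst around an interesting segment
        width = 20                                       # fixed pulse width (assumed)

        # Local maximum of the smoothed activity signal.
        smoothed = np.convolve(activity, np.ones(width) / width, mode="same")
        peak = int(np.argmax(smoothed))

        # Reference pulse of fixed width, centred in a window twice as wide.
        win = 2 * width
        ref = np.zeros(win)
        ref[width // 2: width // 2 + width] = 1.0

        segment = activity[peak - win // 2: peak + win // 2]
        r = np.corrcoef(segment, ref)[0, 1]
        print("correlation with reference pulse at peak:", round(r, 2))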

  11. The Editing Process of Making Video Formats to DVD Using Sony Vegas Pro 8.0.

    OpenAIRE

    Franky Suprayitno; Yulina Yulina, SKom, MMSi

    2007-01-01

    In this modern era, the need for entertainment is very important, especially in the business world. Therefore, the application of digital systems technology is one of the appropriate solutions in the process of filmmaking. This paper may be helpful for learning the process of digital video editing using Sony Vegas Pro 8.0, a Sony Creative Software product. The need for entertainment is very important, especially in business fields such as advertising and movie theaters. T...

  12. A CCTV system with SMS alert (CMDSA): An implementation of pixel processing algorithm for motion detection

    Science.gov (United States)

    Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd

    2017-10-01

    Closed-Circuit TV (CCTV) systems are one of the technologies in the surveillance field that address the problem of detection and monitoring by providing extra features such as email alerts or motion detection. However, detecting and alerting the admin in a CCTV system can be complicated by the complexity of integrating the main program with an external Application Programming Interface (API). In this study, a pixel processing algorithm is applied due to its efficiency, and an SMS alert is added as an alternative solution for users who have opted out of an email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and the application code, were implemented with an SQLite database and the C# programming language, respectively. The main modules of CMDSA are motion detection, capturing and saving video, image processing and Short Message Service (SMS) alert functions. As a result, the system is able to reduce processing time, making the detection process faster, to reduce the space and memory used to run the program, and to alert the system admin instantly.

  13. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure

    OpenAIRE

    Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong

    2017-01-01

    Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed...

  14. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    Science.gov (United States)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey level histogram of an image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.

  15. Simulation-based algorithms for Markov decision processes

    CERN Document Server

    Chang, Hyeong Soo; Fu, Michael C; Marcus, Steven I

    2013-01-01

    Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences.  Many real-world problems modeled by MDPs have huge state and/or action spaces, giving an opening to the curse of dimensionality and so making practical solution of the resulting models intractable.  In other cases, the system of interest is too complex to allow explicit specification of some of the MDP model parameters, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based algorithms have been developed to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function.  Specific approaches include adaptive sampling, evolutionary policy iteration, evolutionary random policy search, and model reference adaptive search. This substantially enlarged new edition reflects the latest developments in novel ...

  16. Using Digital Time-Lapse Videos to Teach Geomorphic Processes to Undergraduates

    Science.gov (United States)

    Clark, D. H.; Linneman, S. R.; Fuller, J.

    2004-12-01

    We demonstrate the use of relatively low-cost, computer-based digital imagery to create time-lapse videos of two distinct geomorphic processes in order to help students grasp the significance of the rates, styles, and temporal dependence of geologic phenomena. Student interviews indicate that such videos help them to understand the relationship between processes and landform development. Time-lapse videos have been used extensively in some sciences (e.g., biology - http://sbcf.iu.edu/goodpract/hangarter.html, meteorology - http://www.apple.com/education/hed/aua0101s/meteor/, chemistry - http://www.chem.yorku.ca/profs/hempsted/chemed/home.html) to demonstrate gradual processes that are difficult for many students to visualize. Most geologic processes are slower still, and are consequently even more difficult for students to grasp, yet time-lapse videos are rarely used in earth science classrooms. The advent of inexpensive web-cams and computers provides a new means to explore the temporal dimension of earth surface processes. To test the use of time-lapse videos in geoscience education, we are developing time-lapse movies that record the evolution of two landforms: a stream-table delta and a large, natural, active landslide. The former involves well-known processes in a controlled, repeatable laboratory experiment, whereas the latter tracks the developing dynamics of an otherwise poorly understood slope failure. The stream-table delta is small and grows in ca. 2 days; we capture a frame on an overhead web-cam every 3 minutes. Before seeing the video, students are asked to hypothesize how the delta will grow through time. The final time-lapse video, ca. 20-80 MB, elegantly shows channel migration, progradation rates, and formation of major geomorphic elements (topset, foreset, bottomset beds). The web-cam can also be "zoomed-in" to show smaller-scale processes, such as bedload transfer, and foreset slumping. Post-lab tests and interviews with students indicate that

  17. Ubiquitous UAVs: a cloud based framework for storing, accessing and processing huge amount of video footage in an efficient way

    Science.gov (United States)

    Efstathiou, Nectarios; Skitsas, Michael; Psaroudakis, Chrysostomos; Koutras, Nikolaos

    2017-09-01

    Nowadays, video surveillance cameras are used for the protection and monitoring of a huge number of facilities worldwide. An important element in such surveillance systems is the use of aerial video streams originating from onboard sensors located on Unmanned Aerial Vehicles (UAVs). Video surveillance using UAVs produces a vast amount of video that must be transmitted, stored, analyzed and visualized in real time. As a result, the introduction and development of systems able to handle huge amounts of data becomes a necessity. In this paper, a new approach for the collection, transmission and storage of aerial videos and metadata is introduced. The objective of this work is twofold. First, the integration of the appropriate equipment in order to capture and transmit real-time video, including metadata (i.e. position coordinates, target), from the UAV to the ground and, second, the utilization of the ADITESS Versatile Media Content Management System (VMCMS-GE) for storing the video stream and the appropriate metadata. Beyond storage, VMCMS-GE provides other efficient management capabilities, such as searching and processing of videos, along with video transcoding. For the evaluation and demonstration of the proposed framework we execute a use case where surveillance of critical infrastructure and detection of suspicious activities are performed. Transcoding of the collected video is a subject of this evaluation as well.

  18. Video Conferencing for Opening Classroom Doors in Initial Teacher Education: Sociocultural Processes of Mimicking and Improvisation

    Directory of Open Access Journals (Sweden)

    Rolf Wiesemes

    2010-11-01

    Full Text Available In this article, we present an alternative framework for conceptualising video-conferencing uses in initial teacher education and in Higher Education (HE) more generally. This alternative framework takes into account the existing models in the field, but, based on a set of interviews conducted with teacher trainees and wider analysis of the related literature, we suggest that there is a need to add to existing models the notions of ‘mimicking’ (copying practice) and improvisation (unplanned and spontaneous personal learning moments). These two notions are considered to be vital, as they remain valid throughout teachers’ careers and constitute key affordances of video-conferencing uses in HE. In particular, we argue that improvisational processes can be considered as key for developing professional practice and lifelong learning and that video-conferencing uses in initial teacher education can contribute to an understanding of training and learning processes. Current conceptualisations of video conferencing as suggested by Coyle (2004) and Marsh et al. (2009) remain valid, but are also limited in their scope, focusing predominantly on pragmatic and instrumental teacher-training issues. Our article suggests that the theoretical conceptualisations of video conferencing should be expanded to include elements of mimicking and ultimately improvisation. This allows us to consider not just etic aspects of practice, but equally emic practices and related personal professional development. We locate these arguments more widely in a sociocultural-theory framework, as it enables us to describe interactions in dialectical rather than dichotomous terms (Lantolf & Poehner, 2008).

  19. Video Conferencing for Opening Classroom Doors in Initial Teacher Education: Sociocultural Processes of Mimicking and Improvisation

    Directory of Open Access Journals (Sweden)

    Wiesemes, Rolf

    2010-01-01

    Full Text Available In this article, we present an alternative framework for conceptualising video-conferencing uses in initial teacher education and in Higher Education (HE) more generally. This alternative framework takes into account the existing models in the field, but – based on a set of interviews conducted with teacher trainees and wider analysis of the related literature – we suggest that there is a need to add to existing models the notions of ‘mimicking’ (copying practice) and improvisation (unplanned and spontaneous personal learning moments). These two notions are considered to be vital, as they remain valid throughout teachers’ careers and constitute key affordances of video-conferencing uses in HE. In particular, we argue that improvisational processes can be considered as key for developing professional practice and lifelong learning and that video-conferencing uses in initial teacher education can contribute to an understanding of training and learning processes. Current conceptualisations of video conferencing as suggested by Coyle (2004) and Marsh et al. (2009) remain valid, but also are limited in their scope with respect to focusing predominantly on pragmatic and instrumental teacher-training issues. Our article suggests that the theoretical conceptualisations of video conferencing should be expanded to include elements of mimicking and ultimately improvisation. This allows us to consider not just etic aspects of practice, but equally emic practices and related personal professional development. We locate these arguments more widely in a sociocultural-theory framework, as it enables us to describe interactions in dialectical rather than dichotomous terms (Lantolf & Poehner, 2008).

  20. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunications engineering.

  1. Design and Implementation of Video Shot Detection on Field Programmable Gate Arrays

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-09-01

    Full Text Available Video has become an interactive medium of communication in everyday life. The sheer volume of video makes it extremely difficult to browse through and find the required data. Hence extraction of key frames from the video, which represent the abstract of the entire video, becomes necessary. The aim of video shot detection is to find the positions of the shot boundaries, so that key frames can be selected from each shot for subsequent processing such as video summarization, indexing etc. For most surveillance applications, like video summary, face recognition etc., hardware (real-time) implementation of these algorithms becomes necessary. In this paper we present an architecture for simultaneous access of consecutive frames, which is then used for the implementation of various video shot detection algorithms. We also present the real-time implementation of three video shot detection algorithms using the above-mentioned architecture on FPGAs (Field Programmable Gate Arrays).
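
    A common software baseline for the hard-cut shot detection described here compares histograms of consecutive frames. The sketch below is illustrative (not the paper's FPGA design); the threshold and bin count are assumed values.

```python
# Minimal shot-boundary detector: flag a cut when the histogram distance
# between consecutive frames jumps. Threshold/bins are illustrative choices.
import cv2

def detect_shot_boundaries(path, threshold=0.5, bins=64):
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [bins], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Large Bhattacharyya distance between consecutive frame
            # histograms suggests a hard cut at this frame index.
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```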

  2. A Gaussian process guided particle filter for tracking 3D human pose in video.

    Science.gov (United States)

    Sedai, Suman; Bennamoun, Mohammed; Huynh, Du Q

    2013-11-01

    In this paper, we propose a hybrid method that combines Gaussian process learning, a particle filter, and annealing to track the 3D pose of a human subject in video sequences. Our approach, which we refer to as annealed Gaussian process guided particle filter, comprises two steps. In the training step, we use a supervised learning method to train a Gaussian process regressor that takes the silhouette descriptor as an input and produces multiple output poses modeled by a mixture of Gaussian distributions. In the tracking step, the output pose distributions from the Gaussian process regression are combined with the annealed particle filter to track the 3D pose in each frame of the video sequence. Our experiments show that the proposed method does not require initialization and does not lose tracking of the pose. We compare our approach with a standard annealed particle filter using the HumanEva-I dataset and with other state of the art approaches using the HumanEva-II dataset. The evaluation results show that our approach can successfully track the 3D human pose over long video sequences and give more accurate pose tracking results than the annealed particle filter.
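
    A minimal sketch of the two-step idea, assuming scikit-learn and a user-supplied image likelihood; the annealing layers and the mixture-of-Gaussians GP output of the paper are simplified to a single Gaussian proposal.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def train_pose_regressor(descriptors, poses):
    # Training step: map silhouette descriptors (n, f) to 3D poses (n, d).
    return GaussianProcessRegressor().fit(descriptors, poses)

def gp_guided_step(gp, descriptor, log_likelihood, n_particles=200, proposal_std=0.05):
    # Tracking step: propose particles around the GP pose prediction,
    # then weight them by an image likelihood (annealing omitted here).
    mean = gp.predict(descriptor.reshape(1, -1))[0]
    particles = mean + proposal_std * np.random.randn(n_particles, mean.size)
    logw = np.array([log_likelihood(p) for p in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return w @ particles  # weighted-mean pose estimate for this frame
```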

  3. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
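
    The partitioning step can be illustrated with a vertical (column-sum) histogram of the binary foreground mask; this is a hedged sketch of the idea, not the authors' implementation, and the valley-based split rule is an assumption.

```python
# Split one foreground blob at the deepest interior valley of its vertical
# histogram, so a candidate shadow part can later be validated against the
# direction of the light source.
import numpy as np

def split_by_vertical_histogram(mask):
    """mask: 2D binary array for one foreground object (1 = foreground)."""
    col_hist = mask.sum(axis=0)          # vertical histogram over columns
    cols = np.flatnonzero(col_hist)
    if cols.size < 3:
        return mask, None                # too narrow to split
    interior = col_hist[cols[0] + 1:cols[-1]]
    cut = cols[0] + 1 + int(np.argmin(interior))  # deepest valley column
    left, right = mask.copy(), mask.copy()
    left[:, cut:] = 0
    right[:, :cut] = 0
    return left, right
```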

  4. Compressive Sensing in Signal Processing: Algorithms and Transform Domain Formulations

    Directory of Open Access Journals (Sweden)

    Irena Orović

    2016-01-01

    Full Text Available Compressive sensing has emerged as an area that opens new perspectives in signal acquisition and processing. It appears as an alternative to the traditional sampling theory, endeavoring to reduce the required number of samples for successful signal reconstruction. In practice, compressive sensing aims to provide saving in sensing resources, transmission, and storage capacities and to facilitate signal processing in the circumstances when certain data are unavailable. To that end, compressive sensing relies on the mathematical algorithms solving the problem of data reconstruction from a greatly reduced number of measurements by exploring the properties of sparsity and incoherence. Therefore, this concept includes the optimization procedures aiming to provide the sparsest solution in a suitable representation domain. This work, therefore, offers a survey of the compressive sensing idea and prerequisites, together with the commonly used reconstruction methods. Moreover, the compressive sensing problem formulation is considered in signal processing applications assuming some of the commonly used transformation domains, namely, the Fourier transform domain, the polynomial Fourier transform domain, Hermite transform domain, and combined time-frequency domain.
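
    One of the commonly used greedy reconstruction methods surveyed here is Orthogonal Matching Pursuit; a compact version is sketched below, with A the measurement matrix, y the measurements, and k the assumed sparsity.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the enlarged support by least squares.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```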

  5. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    Science.gov (United States)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  6. Design of a test system for the development of advanced video chips and software algorithms.

    Science.gov (United States)

    Falkinger, Marita; Kranzfelder, Michael; Wilhelm, Dirk; Stemp, Verena; Koepf, Susanne; Jakob, Judith; Hille, Andreas; Endress, Wolfgang; Feussner, Hubertus; Schneider, Armin

    2015-04-01

    Visual deterioration is a crucial point in minimally invasive surgery impeding surgical performance. Modern image processing technologies appear to be promising approaches for further image optimization by digital elimination of disturbing particles. To make them mature for clinical application, an experimental test environment for evaluation of possible image interferences would be most helpful. After a comprehensive review of the literature (MEDLINE, IEEE, Google Scholar), a test bed for generation of artificial surgical smoke and mist was developed. Smoke was generated by a fog machine and mist produced by a nebulizer. The size of the resulting droplets was measured microscopically and compared with biological smoke (electrocautery) and mist (ultrasound dissection) emerging during minimally invasive surgical procedures. The particles resulting from artificial generation are in the size range of the biological droplets. For surgical smoke, the droplet dimension produced by the fog machine was 4.19 µm compared with 4.65 µm generated by electrocautery during a surgical procedure. The size of artificial mist produced by the nebulizer ranged between 45.38 and 48.04 µm compared with the range between 30.80 and 56.27 µm that was generated during minimally invasive ultrasonic dissection. A suitable test bed for artificial smoke and mist generation was developed, revealing droplet characteristics almost identical to those produced during minimally invasive surgical procedures. The possibility to generate image interferences comparable to those occurring during laparoscopy (electrocautery and ultrasound dissection) provides a basis for the future development of image processing technologies for clinical applications. © The Author(s) 2014.

  7. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonably priced, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  8. Improving Video Game Development: Facilitating Heterogeneous Team Collaboration through Flexible Software Processes

    Science.gov (United States)

    Musil, Juergen; Schweda, Angelika; Winkler, Dietmar; Biffl, Stefan

    Based on our observations of Austrian video game software development (VGSD) practices we identified a lack of systematic process/method support and inefficient collaboration between the various disciplines involved, i.e. engineers and artists. VGSD includes heterogeneous disciplines, e.g. creative arts, game/content design, and software. Nevertheless, improving team collaboration and process support is an ongoing challenge to enable a comprehensive view on game development projects. Lessons learned from software engineering practices can help game developers to improve game development processes within a heterogeneous environment. Based on a state-of-the-practice survey in the Austrian games industry, this paper presents (a) first results with a focus on process/method support and (b) suggests a candidate flexible process approach based on Scrum to improve VGSD and team collaboration. Results showed (a) a trend towards highly flexible software processes involving various disciplines and (b) identified the suggested flexible process approach as feasible and useful for project application.

  9. Collaborative web-based annotation of video footage of deep-sea life, ecosystems and geological processes

    Science.gov (United States)

    Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.

    2012-04-01

    More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROV), Autonomous Underwater Vehicles (AUV), and towed camera systems. These produce many hours of video material which contains detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare, and which cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first solely web-based installation for ROV videos is set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed world-wide scientific community to collaboratively annotate videos anywhere at any time. It has several features fully implemented, among which are:

    • User login system for fine-grained permission and access control
    • Video watching
    • Video search using keywords, geographic position, depth and time range and any combination thereof
    • Video annotation organised in themes (tracks) such as biology and geology among others, in standard or full-screen mode
    • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets
    • Download of products for scientific use

    This unique web application system helps make costly ROV videos available online (estimated cost range between 5,000 and 10,000 Euros per hour depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantaneously available and valuable knowledge to otherwise uncharted video material.

  10. Representation and Processing Algorithms for Business Rules Systems

    Directory of Open Access Journals (Sweden)

    Vasile MAZILESCU

    2012-11-01

    Full Text Available Business Rules is a powerful technology for representing a business policy. It opens the chance to react fast to changes in the market. For example, new pricing strategies can be implemented without time-consuming re-programming of sales software. The knowledge expressed by rules can be easily understood by non-computer experts. Business experts are therefore able to handle Business Rules directly and make changes even faster. For this, the construction of a Global Semantic Graph (GSG) to support future information- and collaboration-centric applications and services is a very important subject. The GSG is a publish/subscribe (pub/sub) based architecture that supports publication of tuples and subscriptions with standing graph queries. An implementation of an efficient pattern matching algorithm such as Rete on top of a distributed environment might serve as a possible substrate for GSG's pub/sub facility. Knowledge description and exploitation within a Business Rule Management System (BRMS) are somewhat conflicting characteristics, since increasing the representation power of knowledge diminishes the efficiency of the system and increases the difficulty of carrying it out. Many challenges in the BRMS field are difficult to solve from a computational point of view. The use of variables in a Business Rule Management System knowledge representation allows factorising knowledge, as in classical knowledge-based systems. The language of first-order predicates facilitates the formulation of complex knowledge in a rigorous way, imposing appropriate reasoning techniques. It is, thus, necessary to define the description method of fuzzy knowledge, to justify the knowledge exploitation efficiency when the compiling technique is used, and to present the inference engine and highlight the functional features of the pattern matching and state space processes. This paper presents the main results of our project for designing such a compiler.

  11. Computer algorithm for analyzing and processing borehole strainmeter data

    Science.gov (United States)

    Langbein, John O.

    2010-01-01

    The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the use of simplifying assumptions that data are uncorrelated can lead to incorrect results and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent with these data. The technique described here uses least squares but employs data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected, tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
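
    The core numerical idea, least squares with a non-diagonal data covariance, can be sketched with Cholesky whitening. This is a generic illustration under assumed inputs, not the published program: G would hold columns for tides, pressure loading, offsets, rates and decay curves, and C encodes the temporal correlation model.

```python
import numpy as np

def generalized_least_squares(G, d, C):
    """Estimate m minimizing (d - G m)^T C^{-1} (d - G m)."""
    L = np.linalg.cholesky(C)      # C = L L^T, temporal correlation of the data
    Gw = np.linalg.solve(L, G)     # whitened design matrix
    dw = np.linalg.solve(L, d)     # whitened data
    m, *_ = np.linalg.lstsq(Gw, dw, rcond=None)
    # Parameter covariance under the correlated-noise model gives the
    # realistic standard errors emphasized in the abstract.
    cov_m = np.linalg.inv(Gw.T @ Gw)
    return m, cov_m
```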

  12. Algorithms

    Indian Academy of Sciences (India)

    Here, i is referred to as the loop-index and 'stat-body' is any sequence of statements: while i ≤ N do stat-body; i := i+1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  13. Body movement analysis during sleep for children with ADHD using video image processing.

    Science.gov (United States)

    Nakatani, Masahiro; Okada, Shima; Shimizu, Sachiko; Mohri, Ikuko; Ohno, Yuko; Taniike, Masako; Makikawa, Masaaki

    2013-01-01

    In recent years, the number of children with sleep disorders that cause arousal during sleep or light sleep has been increasing. Attention-deficit hyperactivity disorder (ADHD) is one cause of such sleep disorders; children with ADHD show frequent body movement during sleep. Therefore, we investigated the body movement during sleep of children with and without ADHD using video imaging. We analysed large gross body movements (GM) and obtained the GM rate and the rest duration. There were differences between the body movements of children with ADHD and normally developed children. The children with ADHD moved frequently, so their rest duration was shorter than that of the normally developed children. Additionally, the rate of gross body movement showed a significant difference during REM sleep (p < 0.05), suggesting that body movement during sleep can be evaluated non-intrusively using video image processing.
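
    A frame-differencing measure of gross body movement of the kind used in such studies can be sketched as follows; the pixel and area thresholds are placeholders, not values from this study.

```python
# Flag frames with gross body movement: a frame counts as "moving" when the
# fraction of strongly changed pixels exceeds a threshold.
import cv2
import numpy as np

def movement_flags(path, pixel_thresh=15, area_frac=0.01):
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    moving = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        changed = (diff > pixel_thresh).mean()  # fraction of changed pixels
        moving.append(changed > area_frac)      # True = movement in this frame
        prev = gray
    cap.release()
    return np.array(moving)  # GM rate = moving.mean(); rest durations = runs of False
```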

  14. A New Method to Improve Performance of Resampling Process in Particles Filter by Genetic Algorithm and Gamma Test Algorithm

    Science.gov (United States)

    Wang, Zhenwu; Hut, Rolf; van de Giesen, Nick

    2017-04-01

    Particle filtering is a nonlinear and non-Gaussian dynamical filtering method. It has found widespread applications in hydrological data assimilation. In order to address the loss of particle diversity existing in the resampling process of the particle filter, this research proposes an improved particle filter algorithm using genetic algorithm optimization and the Gamma test. This method combines the genetic algorithm and the Gamma test in the resampling procedure of the particle filter to improve the adaptability and performance of the particle filter in data assimilation. First, the particles are classified into three different groups based on the resampling method. The particles with high weight values remain unchanged. The genetic algorithm is then used to cross and mutate the rest of the particles. During the optimization, the Gamma test method is applied to monitor the quality of the newly generated particles. When the Gamma statistic stays stable, the algorithm ends the optimization and proceeds to the next observation in the particle filter. The algorithm is illustrated for the three-dimensional Lorenz model and the much more complex 40-dimensional Lorenz model. The results demonstrate that this method can keep the diversity of the particles and enhance the performance of the particle filter, leading to the promising conjecture that the method is applicable to realistic hydrological problems.

  15. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
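
    The messaging layer's dynamic publish/subscribe pattern with topic-based filtering can be illustrated with a minimal sketch; the class and method names are illustrative, not the paper's API.

```python
# Minimal topic-based publish/subscribe bus: messages published to a topic
# are routed only to handlers subscribed to that topic.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Topic-based filtering: only interested subscribers are invoked.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
bus.subscribe("frames/raw", lambda m: print("processing frame", m))
bus.publish("frames/raw", {"id": 1})
```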

  16. Maintenance of Process Control Algorithms based on Dynamic Program Slicing

    DEFF Research Database (Denmark)

    Hansen, Ole Fink; Andersen, Nils Axel; Ravn, Ole

    2010-01-01

    Today’s industrial control systems gradually lose performance after installation and must be regularly maintained by means of adjusting parameters and modifying the control algorithm, in order to regain high performance. Industrial control algorithms are complex software systems, and it is particularly difficult to locate causes of performance loss, while readjusting the algorithm once the cause of performance loss is actually realized and found is relatively simple. In this paper we present a software-engineering approach to the maintenance problem, which provides tools for exploring the behavior of a control algorithm, enables maintenance personnel to focus on only relevant parts of the algorithm and semi-automatically locate the part of the algorithm that is responsible for the reduced performance. The solution is tuning-free and can be applied to installed and running systems without interrupting their operation.

  17. Ameliorating mammograms by using novel image processing algorithms

    Science.gov (United States)

    Pillai, A.; Kwartowitz, D.

    2014-03-01

    Mammography is one of the most important tools for the early detection of breast cancer, typically through detection of characteristic masses and/or microcalcifications. Digital mammography has become commonplace in recent years. High quality mammogram images are large in size, providing high-resolution data. Estimates of the false negative rate for cancers in mammography are approximately 10%-30%. This may be due to observation error, but more frequently it is because the cancer is hidden by other dense tissue in the breast and, even after retrospective review of the mammogram, cannot be seen. In this study, we report on the results of novel image processing algorithms that enhance the images, providing decision support to reading physicians. Techniques such as Butterworth high-pass filtering and Gabor filters are applied to enhance images, followed by segmentation of the region of interest (ROI). Subsequently, textural features are extracted from the ROI, which are used to classify the ROIs as either masses or non-masses. Among the statistical methods most used for the characterization of textures, the co-occurrence matrix makes it possible to determine the frequency of appearance of two pixels separated by a given distance, at a given angle from the horizontal. This matrix contains a very large amount of complex information. Therefore, it is not used directly but through measurements known as texture indices, such as average, variance, energy, contrast, correlation, normalized correlation and entropy.
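
    Co-occurrence texture indices of this kind are readily computed with scikit-image; the sketch below assumes skimage >= 0.19 (where the functions are spelled graycomatrix/graycoprops), and the distance/angle settings are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """roi: 2D uint8 region of interest extracted from the mammogram."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p).mean())
             for p in ("contrast", "correlation", "energy", "homogeneity")}
    # Entropy is not provided by graycoprops; compute it from the matrix.
    p = glcm[glcm > 0]
    feats["entropy"] = float(-(p * np.log2(p)).sum())
    return feats
```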

  18. Automated process flowsheet synthesis for membrane processes using genetic algorithm: role of crossover operators

    KAUST Repository

    Shafiee, Alireza

    2016-06-25

    In optimization-based process flowsheet synthesis, optimization methods, including genetic algorithms (GA), are used as advantageous tools to select a high-performance flowsheet by ‘screening’ large numbers of possible flowsheets. In this study, we expand the role of the GA to include flowsheet generation by proposing a modified Greedy sub-tour crossover operator. Performance of the proposed crossover operator is compared with four other commonly used operators. The proposed GA optimization-based process synthesis method is applied to generate the optimum process flowsheet for a multicomponent membrane-based CO2 capture process. Within defined constraints and using the random-point crossover, a CO2 purity of 0.827 (equivalent to 0.986 on a dry basis) is achieved, which is an improvement (3.4%) over the simplest crossover operator applied. In addition, the least variability in the converged flowsheet and CO2 purity is observed for the random-point crossover operator, which approximately implies closeness of the solution to the global optimum, and hence the consistency of the algorithm. The proposed crossover operator is found to improve the convergence speed of the algorithm by 77.6%.

  19. Real-time medical video processing, enabled by hardware accelerated correlations

    DEFF Research Database (Denmark)

    Savarimuthu, T. R.; Kjaer-Nielsen, A.; Sorensen, A. S.

    2011-01-01

    Image processing involving correlation based filter algorithms has proved extremely useful for image enhancement, feature extraction and recognition in a wide range of medical applications, but is almost exclusively used with still images due to the amount of computations required by the correlations.

  20. Marketing Strategy Implementation Process in the Creative Industry of Video Games

    Directory of Open Access Journals (Sweden)

    Maryangela Drumond de Abreu Negrão

    2013-06-01

    Full Text Available This article contributes to the understanding of the marketing strategy process by presenting the organizational and human factors that support the processes of implementation, identified in a qualitative study conducted in the creative industry of video game development. The research, a case study applied to four video and computer game companies, was based on the Sashittal and Jassawalla (2001) marketing strategy model and on the concepts of creative behavior and innovation in organizations proposed by Amabile (1997). The analysis suggests that marketing strategy implementation is anchored in innovative administrative processes, creative skills and the adoption of modern control technologies. It was observed that a vision that associates production, process, market orientation and the delivery of added value is essential for the implementation of strategies in creative and innovative organizational structures. The research contributes to marketing strategy implementation studies in creative and innovative environments under the approach of smaller organizations. It also contributes to marketing strategy theory by suggesting that the analysis of the process, control and management skills be included as categories in the theoretical model in future investigations.

  1. Research on Agricultural Surveillance Video of Intelligent Tracking

    Science.gov (United States)

    Cai, Lecai; Xu, Jijia; Liangping, Jin; He, Zhiyong

    Intelligent video tracking technology is an important application field of digital video processing and analysis, with a wide range of applications in both civilian and military defense domains. In this paper, we present a systematic study of intelligent tracking in agricultural surveillance video, focusing in particular on the problems of target detection and tracking. We present, respectively, a moving-target detection and tracking algorithm for video sequences with a static background, a rapid detection and tracking algorithm for targets in agricultural production, and a Mean Shift-based translation and rotation target tracking algorithm. Experimental results show that the system can effectively and accurately track targets in surveillance video. Therefore, the study of intelligent video surveillance tracking in agriculture is very meaningful, whether from the point of view of environmental protection, social security or economic efficiency.
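
    The Mean Shift tracking component can be illustrated with the classic OpenCV hue-histogram back-projection recipe; this is a generic sketch, not the paper's system, and the initial window (x, y, w, h) is assumed to come from the detection stage.

```python
# Mean-shift tracking: back-project the target's hue histogram into each
# frame and shift the search window to the local density mode.
import cv2

def track(path, window):
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back, window, criteria)  # shift to mode
        print("target window:", window)
    cap.release()
```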

  2. Measurement of thigmomorphogenesis and gravitropism by non-intrusive computerized video image processing

    Science.gov (United States)

    Jaffe, M. J.

    1984-01-01

    A video image processing instrument, DARWIN (Digital Analyser of Resolvable Whole-pictures by Image Numeration), was developed. It was programmed to measure stem or root growth and bending, and coupled to a specially mounted video camera so as to automatically generate growth and bending curves during gravitropism. The growth of the plant is recorded on a video cassette recorder with a specially modified time-lapse function. At the end of the experiment, DARWIN analyses the growth or movement and prints out bending and growth curves. This system was used to measure thigmomorphogenesis in light-grown corn plants. If the plant is rubbed with an applied force load of 0.38 N, it grows faster than the unrubbed control, whereas 1.14 N retards its growth. Image analysis shows that most of the change in the rate of growth occurs in the first hour after rubbing. When DARWIN was used to measure gravitropism in dark-grown oat seedlings, it was found that the top side of the shoot contracts during the first hour of gravitational stimulus, whereas the bottom side begins to elongate after 10 to 15 minutes.

  3. Individual differences in the processing of smoking-cessation video messages: An imaging genetics study.

    Science.gov (United States)

    Shi, Zhenhao; Wang, An-Li; Aronowitz, Catherine A; Romer, Daniel; Langleben, Daniel D

    2017-09-01

    Studies testing the benefits of enriching smoking-cessation video ads with attention-grabbing sensory features have yielded variable results. The dopamine transporter gene (DAT1) has been implicated in attention deficits. We hypothesized that DAT1 polymorphism is partially responsible for this variability. Using functional magnetic resonance imaging, we examined brain responses to videos high or low in attention-grabbing features, indexed by "message sensation value" (MSV), in 53 smokers genotyped for DAT1. Compared to other smokers, 10/10 homozygotes showed greater neural response to High- vs. Low-MSV smoking-cessation videos in two a priori regions of interest: the right temporoparietal junction and the right ventrolateral prefrontal cortex. These regions are known to underlie stimulus-driven attentional processing. Exploratory analysis showed that the right temporoparietal response positively predicted follow-up smoking behavior indexed by urine cotinine. Our findings suggest that the response to attention-grabbing features in smoking-cessation messages is affected by the DAT1 genotype. Copyright © 2017. Published by Elsevier B.V.

  4. GPU accelerated OCT processing at megahertz axial scan rate and high resolution video rate volumetric rendering

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V.

    2013-03-01

    In this report, we describe how to highly optimize a CUDA based platform to perform real time optical coherence tomography data processing and 3D volumetric rendering using commercially-available cost-effective graphic processing units (GPUs). The maximum complete attainable axial scan processing rate (including memory transfer and rendering frame) was 2.2 megahertz for 16 bits pixel depth and 2048 pixels/A-scan, the maximum 3D volumetric rendering speed is 23 volumes/second (size:1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with single-chip GPU and the first implementation of real time video rate volumetric OCT processing and rendering that is capable of matching the ultrahigh-speed OCT acquisition rates.

  5. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure.

    Directory of Open Access Journals (Sweden)

    Yanling Liu

    Full Text Available Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces.

  6. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure.

    Science.gov (United States)

    Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong

    2017-01-01

    Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces.

  7. Facilitation or disengagement? Attention bias in facial affect processing after short-term violent video game exposure

    Science.gov (United States)

    Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong

    2017-01-01

    Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces. PMID:28249033

  8. Novel methodology for casting process optimization using Gaussian process regression and genetic algorithm

    Directory of Open Access Journals (Sweden)

    Yao Weixiong

    2009-08-01

    Full Text Available High pressure die casting (HPDC) is a versatile material processing method for mass-production of metal parts with complex geometries, and this method has been widely used in manufacturing various products of excellent dimensional accuracy and productivity. In order to ensure the quality of the components, a number of variables need to be properly set. A novel methodology for high pressure die casting process optimization was developed, validated and applied to the selection of optimal parameters, incorporating design of experiments (DOE), Gaussian process (GP) regression and genetic algorithms (GA). This new approach was applied to process optimization for a cast magnesium alloy notebook shell. After being trained using data generated by PROCAST (FEM-based simulation software), the GP model approximated the simulation well, extracting useful information from the simulation results. With the help of MATLAB, the GP/GA based approach achieved the optimum solution for the die casting process condition settings.
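
    A hedged sketch of the surrogate-based loop: a Gaussian process fitted to simulated DOE runs is searched by a simple GA. Bounds, population size, blend crossover and mutation scale are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ga_optimize(gp, bounds, pop=40, gens=60, mut=0.1):
    """Minimize the GP-predicted quality loss over box-constrained settings."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = lo + (hi - lo) * np.random.rand(pop, len(lo))
    for _ in range(gens):
        cost = gp.predict(X)                    # surrogate evaluation (cheap)
        parents = X[np.argsort(cost)[: pop // 2]]   # keep the better half
        mates = parents[np.random.permutation(len(parents))]
        alpha = np.random.rand(*parents.shape)
        children = alpha * parents + (1 - alpha) * mates   # blend crossover
        children += mut * (hi - lo) * np.random.randn(*children.shape)
        X = np.clip(np.vstack([parents, children]), lo, hi)
    return X[np.argmin(gp.predict(X))]          # best process setting found

# Training the surrogate on DOE simulation results (hypothetical arrays):
# gp = GaussianProcessRegressor().fit(doe_settings, simulated_quality_loss)
```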

  9. A Novel Algorithm to Scheduling Optimization of Melting-Casting Process in Copper Alloy Strip Production

    Directory of Open Access Journals (Sweden)

    Xiaohui Yan

    2015-01-01

    Full Text Available Melting-casting is the first process in copper alloy strip production. The schedule scheme for this process greatly affects the subsequent processes. In this paper, we build a multiobjective model of the melting-casting scheduling problem, which comprehensively considers minimizing the makespan and the total weighted earliness and tardiness penalties. A novel algorithm, which we call the Multiobjective Artificial Bee Colony/Decomposition (MOABC/D) algorithm, is proposed to solve this model. The algorithm combines the framework of the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) and the neighborhood search strategy of the Artificial Bee Colony algorithm. Results on test instances show that the proposed MOABC/D algorithm outperforms the two comparison algorithms both in the distribution of the Pareto front and in the quality of the selected optimal solutions.

  10. Real time video processing software for the analysis of endoscopic guided-biopsies

    Science.gov (United States)

    Ordoñez, C.; Bouchet, A.; Pastore, J.; Blotta, E.

    2011-12-01

    The severity of Barrett's esophagus lies, undoubtedly, in the possibility of its malignization. To make an early diagnosis in order to avoid possible complications, it is absolutely necessary to collect biopsies for histological analysis. This should be done under endoscopic control to avoid mucus areas that may co-exist within the columnar epithelium, which could lead to a false diagnosis. This paper presents real-time video processing software designed to delineate and enhance areas of interest in order to facilitate the work of the expert.

  11. Real-time interferometric monitoring and measuring of photopolymerization based stereolithographic additive manufacturing process: sensor model and algorithm

    Science.gov (United States)

    Zhao, X.; Rosen, D. W.

    2017-01-01

    As additive manufacturing is poised for growth and innovation, it faces barriers in the lack of in-process metrology and control needed to advance into wider industry applications. The exposure controlled projection lithography (ECPL) is a layerless mask-projection stereolithographic additive manufacturing process, in which parts are fabricated from photopolymers on a stationary transparent substrate. To improve the process accuracy with closed-loop control for ECPL, this paper develops an interferometric curing monitoring and measuring (ICM&M) method which addresses the sensor modeling and algorithm issues. A physical sensor model for ICM&M is derived based on interference optics utilizing the concept of instantaneous frequency. The associated calibration procedure is outlined for ICM&M measurement accuracy. To solve the sensor model, particularly in real time, an online evolutionary parameter estimation algorithm is developed adopting moving horizon exponentially weighted Fourier curve fitting and numerical integration. As a preliminary validation, simulated real-time measurement by offline analysis of a video of interferograms acquired in the ECPL process is presented. The agreement between the cured height estimated by ICM&M and that measured by microscope indicates that the measurement principle is promising as real-time metrology for global measurement and control of the ECPL process.
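
    For context, the instantaneous frequency on which such interferometric height estimates are built can be recovered from a sampled interference signal with the Hilbert transform. This is a generic illustration, not the authors' moving-horizon estimator.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(signal, fs):
    """signal: sampled interference intensity; fs: sampling rate in Hz."""
    analytic = hilbert(signal)                   # analytic signal
    phase = np.unwrap(np.angle(analytic))        # continuous interference phase
    return np.diff(phase) * fs / (2.0 * np.pi)   # Hz per sample interval

# Cured height grows with accumulated phase: integrating the instantaneous
# frequency (scaled by wavelength and refractive index) tracks cure depth.
```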

  12. [A new laser scan system for video ophthalmoscopy. Initial clinical experiences also in relation to digital image processing].

    Science.gov (United States)

    Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C

    1990-06-01

    The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.

  13. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),
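
    As a concrete instance of the Background Subtraction (BS) family of D&T methods mentioned here, OpenCV's built-in MOG2 model can be used as follows; the file name and area threshold are illustrative.

```python
# Background subtraction with OpenCV's Gaussian-mixture (MOG2) model:
# foreground pixels become candidate moving objects.
import cv2

cap = cv2.VideoCapture("input.avi")  # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground (127 marks shadows)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep sufficiently large blobs as detected moving objects.
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
cap.release()
```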

  14. Optimization of submerged arc welding process parameters using quasi-oppositional based Jaya algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)

    2017-05-15

    Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
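
    The basic Jaya iteration is simple enough to sketch in a few lines: each candidate moves toward the best solution and away from the worst, with no algorithm-specific tuning parameters. This is the plain Jaya update, not the paper's QO-Jaya variant.

```python
import numpy as np

def jaya_step(X, cost, lo, hi):
    """One Jaya iteration on population X (pop x dims) for a scalar cost()."""
    f = np.apply_along_axis(cost, 1, X)
    best, worst = X[np.argmin(f)], X[np.argmax(f)]
    r1, r2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
    # Move toward the best and away from the worst candidate.
    X_new = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                    lo, hi)
    # Greedy acceptance: keep a move only if it improves the candidate.
    f_new = np.apply_along_axis(cost, 1, X_new)
    improved = f_new < f
    X[improved] = X_new[improved]
    return X
```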

  15. The association between chronic exposure to video game violence and affective picture processing: an ERP study.

    Science.gov (United States)

    Bailey, Kira; West, Robert; Anderson, Craig A

    2011-06-01

    Exposure to video game violence (VGV) is known to result in desensitization to violent material and may alter the processing of positive emotion related to facial expressions. The present study was designed to address three questions: (1) Does the association between VGV and positive emotion extend to stimuli other than faces, (2) is the association between VGV and affective picture processing observed with a single presentation of the stimuli, and (3) is the association between VGV and the response to violent stimuli sensitive to the relevance of emotion for task performance? The data revealed that transient modulations of the event-related potentials (ERPs) related to attentional orienting and sustained modulations of the ERPs related to evaluative processing were sensitive to VGV exposure.

  16. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  17. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by the matrix C of dimension m × p, where…

  18. A low-light-level video recursive filtering technology based on the three-dimensional coefficients

    Science.gov (United States)

    Fu, Rongguo; Feng, Shu; Shen, Tianyu; Luo, Hao; Wei, Yifang; Yang, Qi

    2017-08-01

    Low-light-level video is an important method of observation under low illumination conditions, but its SNR is low and the resulting observation quality is poor, so noise reduction processing must be carried out. Low-light-level video noise mainly includes Gaussian noise, Poisson noise, impulse noise, fixed pattern noise and dark current noise. In order to remove the noise in low-light-level video effectively and improve its quality, this paper presents an improved time-domain recursive filtering algorithm with three-dimensional filtering coefficients. The algorithm exploits the temporal correlation between frames of the video sequence. Using motion estimation techniques, it adaptively adjusts the local-window filtering coefficients in space and time, applying different weighting coefficients to different pixels of the same frame. This reduces motion trailing while preserving the noise reduction effect. Before the noise reduction, a pretreatment based on a box filter is used to reduce the complexity of the algorithm and improve its speed. In order to enhance the visual effect of low-light-level video, an image enhancement algorithm based on the guided image filter is used to enhance the edge details of the video. Experimental results show that the hybrid algorithm can remove the noise of low-light-level video effectively, enhance edge features and improve the visual quality of the video.
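
    A motion-adaptive temporal recursive filter of this general kind can be sketched as follows; the blending law and constants are illustrative assumptions, not the paper's coefficients.

```python
# Recursive temporal denoising: smooth strongly where the local frame
# difference (a crude motion cue) is small, weakly where motion is detected,
# which limits trailing artifacts behind moving objects.
import cv2
import numpy as np

def recursive_denoise(frames, alpha_max=0.9, k=10.0, win=7):
    """frames: list of uint8 grayscale frames from a low-light sequence."""
    out, acc = [], frames[0].astype(np.float32)
    for f in frames[1:]:
        f = f.astype(np.float32)
        diff = cv2.boxFilter(np.abs(f - acc), -1, (win, win))  # local motion
        alpha = alpha_max / (1.0 + diff / k)   # smaller alpha where moving
        acc = alpha * acc + (1.0 - alpha) * f  # per-pixel recursive average
        out.append(acc.astype(np.uint8))
    return out
```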

  19. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory. This report by Kwok F Tom (Sensors and Electron Devices Directorate), covering the period 1 October 2016–30 September 2017, presents an automated energy detection algorithm based on morphological filter processing with a modified watershed transform.

  20. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure

    Science.gov (United States)

    2018-01-01

    ARL-TR-8271, January 2018, US Army Research Laboratory. This report by Kwok F Tom (Sensors and Electron Devices Directorate), covering the period ending 30 September 2017, presents an automated energy detection algorithm based on morphological filter processing with a semi-disk structure.

  1. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    Science.gov (United States)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to several unique advantages, such as high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimal heat-affected zone and green manufacturing. To achieve the best machining performance and high-quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, the fireworks algorithm and the cuckoo search (CS) algorithm are applied to single- as well as multi-response optimization of two laser machining processes. It is observed that although almost identical solutions are obtained for both algorithms, the CS algorithm outperforms the fireworks algorithm with respect to average computation time, convergence rate and performance consistency.

  2. High-Quality 800-b/s Voice Processing Algorithm.

    Science.gov (United States)

    1991-02-25

    …filter. The feedback gain of the low-pass filter is a critical factor. We recommend a feedback gain somewhere between 0.990 and 0.995, which is large… The algorithm discriminated the following word pairs more successfully than the 2400-b/s LPC: ZEE-THEE, JILT-GILT, JEST-GUEST, CHEEP-KEEP, SING-THING.

  3. Multichannel active control of nonlinear noise processes using diagonal structure bilinear FXLMS algorithm

    Science.gov (United States)

    Chen, Dong; Yuan, Ding; Li, Tan; Sidan, Du

    2015-12-01

    A novel nonlinear adaptive algorithm, the diagonal-structure bilinear filtered-x least mean square (DBFXLMS) algorithm, is proposed in this paper for multichannel nonlinear active noise control. The performance of the proposed algorithm and its computational complexity are compared with those of the second-order Volterra filtered-x LMS (VFXLMS) algorithm and the filtered-s least mean square (FSLMS) algorithm, in terms of normalized mean square error (NMSE), for multichannel active control of nonlinear noise processes. Both the simulations and the computational complexity analyses demonstrate that the proposed method improves upon these reference algorithms.
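    The bilinear, multichannel DBFXLMS structure is beyond a short example, but the single-channel filtered-x LMS core it extends can be sketched as follows (a minimal sketch; the filter length and step size are illustrative):

    ```python
    import numpy as np

    def fxlms(x, d, sec_path, L=32, mu=1e-3):
        """Single-channel filtered-x LMS (the linear core that bilinear
        variants such as DBFXLMS extend).

        x: reference signal, d: disturbance at the error sensor,
        sec_path: estimated secondary-path impulse response.
        Returns the error-signal history.
        """
        w = np.zeros(L)                          # adaptive control filter
        xf = np.convolve(x, sec_path)[:len(x)]   # reference filtered by secondary path
        xbuf = np.zeros(L)                       # raw reference buffer
        fbuf = np.zeros(L)                       # filtered reference buffer
        ybuf = np.zeros(len(sec_path))           # control-output buffer
        e = np.zeros(len(x))
        for n in range(len(x)):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            fbuf = np.roll(fbuf, 1); fbuf[0] = xf[n]
            y = w @ xbuf                         # control output
            ybuf = np.roll(ybuf, 1); ybuf[0] = y
            e[n] = d[n] - sec_path @ ybuf        # residual after the secondary path
            w += mu * e[n] * fbuf                # LMS update with filtered reference
        return e
    ```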

  4. Modified Firefly Algorithm based controller design for integrating and unstable delay processes

    Directory of Open Access Journals (Sweden)

    A. Gupta

    2016-03-01

    Full Text Available In this paper, a Modified Firefly Algorithm has been used for optimizing the controller parameters of a Smith predictor structure. The proposed algorithm modifies the position formula of the standard Firefly Algorithm in order to achieve a faster convergence rate. The performance criterion Integral Square Error (ISE) is optimized using this technique. Simulation results show that the Modified Firefly Algorithm achieves a higher convergence rate than the conventional Firefly Algorithm. Integrating and unstable delay processes are taken as examples to illustrate the performance of the proposed method.

  5. Marketing Strategy Implementation Process in the Creative Industry of Video Games

    National Research Council Canada - National Science Library

    Maryangela Drumond de Abreu Negrão; Ana Maria Machado Toaldo

    2013-01-01

    ... in a qualitative study conducted in the creative industry of video game development. The research, a case study applied to four video and computer game companies was based on the Sashittal and Jassawalla (2001...

  6. Basics of Polar-Format algorithm for processing Synthetic Aperture Radar images.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2012-05-01

    The purpose of this report is to provide a background to Synthetic Aperture Radar (SAR) image formation using the Polar Format (PFA) processing algorithm. This is meant to be an aid to those tasked to implement real-time image formation using the Polar Format processing algorithm.

  7. A note on a perfect simulation algorithm for marked Hawkes processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2004-01-01

    The usual straightforward simulation algorithm for (marked or unmarked) Hawkes processes suffers from edge effects. In this note we describe a perfect simulation algorithm which is partly derived as in Brix and Kendall (2002) and partly uses upper and lower processes as in the Propp-Wilson algorithm.

  8. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    Science.gov (United States)

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-05

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). Contrasting the GPU implementation with one running on central processing units (CPUs), we found a large difference in elapsed time, with the GPUs being the faster option. We tested two GPUs, one marketed for video games and one intended for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and one used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, for any of the tested GPU/CPU combinations. We found that a GPU marketed for video games can be used for our application without any problem, delivering remarkable performance; in fact, this GPU competes with the HPC GPU, particularly when single precision is used. © 2014 Wiley Periodicals, Inc.
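    A CPU-side sketch of the grid-based idea (illustrative only; the actual implementation and its refinement criteria are more elaborate) flags grid points where the numerical gradient of the density nearly vanishes. Since each grid point is tested independently, this is precisely the loop a GPU implementation distributes across threads:

    ```python
    import numpy as np

    def critical_point_candidates(rho, spacing, tol=1e-4):
        """Flag grid points where |grad rho| falls below tol: candidate critical
        points of a scalar field sampled on a regular 3-D grid.

        rho: 3-D array of electron-density values; spacing: grid step (scalar).
        Returns an (N, 3) array of candidate grid indices.
        """
        gx, gy, gz = np.gradient(rho, spacing)
        gnorm = np.sqrt(gx**2 + gy**2 + gz**2)
        # Each grid point is an independent test; a GPU version maps one
        # thread per point, which is why the method parallelizes so well.
        return np.argwhere(gnorm < tol)
    ```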

  9. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. It can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. The book explores recursive structures in algorithm architecture; implements recursive architectures in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  10. Good-Enough Language Processing: Evidence from Sentence-Video Matching

    Science.gov (United States)

    Kharkwal, Gaurav; Stromswold, Karin

    2014-01-01

    This paper investigates how detailed a linguistic representation is formed for descriptions of visual events. In two experiments, participants watched captioned videos and decided whether the captions accurately described the videos. In both experiments, videos depicted geometric shapes moving around the screen. In the first experiment, all of the…

  11. Examining Feedback in an Instructional Video Game Using Process Data and Error Analysis. CRESST Report 817

    Science.gov (United States)

    Buschang, Rebecca E.; Kerr, Deirdre S.; Chung, Gregory K. W. K.

    2012-01-01

    Appropriately designed technology-based learning environments such as video games can be used to give immediate and individualized feedback to students. However, little is known about the design and use of feedback in instructional video games. This study investigated how feedback used in a mathematics video game about fractions impacted student…

  12. Heuristic and algorithmic processing in English, mathematics, and science education.

    Science.gov (United States)

    Sharps, Matthew J; Hess, Adam B; Price-Sharps, Jana L; Teh, Jane

    2008-01-01

    Many college students experience difficulties in basic academic skills. Recent research suggests that much of this difficulty may lie in heuristic competency--the ability to use and successfully manage general cognitive strategies. In the present study, the authors evaluated this possibility. They compared participants' performance on a practice California Basic Educational Skills Test and on a series of questions in the natural sciences with heuristic and algorithmic performance on a series of mathematics and reading comprehension exercises. Heuristic competency in mathematics was associated with better scores in science and mathematics. Verbal and algorithmic skills were associated with better reading comprehension. These results indicate the importance of including heuristic training in educational contexts and highlight the importance of a relatively domain-specific approach to questions of cognition in higher education.

  13. Framework for Integrating Science Data Processing Algorithms Into Process Control Systems

    Science.gov (United States)

    Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.

    2011-01-01

    A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This innovative framework is really the unifying bridge between the execution of a step in the overall processing pipeline, and the available PCS component services as well as the information that they collectively manage.

  14. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    Science.gov (United States)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  15. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
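    A minimal sketch of radial-distortion correction by inverse mapping, assuming a simple polynomial distortion model with hypothetical coefficients k1 and k2 (the paper's actual fish-eye model and its FPGA implementation differ):

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def undistort(img, k1, k2, cx, cy):
        """Correct radial (barrel-type) distortion of a grayscale image by
        inverse mapping: for each undistorted pixel, compute where it came
        from in the distorted image, r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4),
        and resample there.
        """
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        x, y = xs - cx, ys - cy                  # coordinates relative to center
        r2 = x**2 + y**2
        scale = 1.0 + k1 * r2 + k2 * r2**2       # polynomial radial model
        src_x, src_y = cx + x * scale, cy + y * scale
        # Bilinear resampling at the computed source coordinates.
        return map_coordinates(img, [src_y, src_x], order=1, mode='nearest')
    ```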

  16. An integrated multispectral video and environmental monitoring system for the study of coastal processes and the support of beach management operations

    Science.gov (United States)

    Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim

    2016-04-01

    Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can

  17. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors propose a high-quality, small-capacity lecture-video-file creation system for distance e-learning. Examining the features of a typical lecturing scene, they employ two kinds of image-capturing equipment with complementary characteristics: a digital video camera with low resolution and a high frame rate, and a digital still camera with high resolution and a very low frame rate. By managing the two kinds of equipment and integrating their outputs through image processing, course materials can be produced with greatly reduced file size, while still satisfying the requirements both for the temporal resolution needed to follow the lecturer's pointing actions and for the high spatial resolution needed to read small written letters. A comparative experiment confirmed that an e-lecture using the proposed system was more effective than an ordinary lecture from the viewpoint of educational effect.

  18. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to preserve the interesting structures of the image. Such interesting structures often correspond to discontinuities (edges). In this paper, we present a new algorithm for image noise reduction based on the combination of a complex diffusion process and wavelet thresholding. In existing wavelet thresholding methods, the noise reduction is limited because the approximation coefficients, which contain the main information of the image, are kept unchanged. Since noise affects both the approximation and detail coefficients, the proposed algorithm applies the complex diffusion process to the approximation band in order to alleviate this deficiency of existing wavelet thresholding methods. The algorithm has been examined…
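    As context, a baseline wavelet soft-thresholding denoiser might look like the following sketch (using the PyWavelets package; the wavelet choice, decomposition level and universal threshold are illustrative). Note how the approximation band is left untouched, which is exactly the limitation the complex-diffusion step is meant to address:

    ```python
    import numpy as np
    import pywt

    def wavelet_soft_denoise(img, wavelet='db4', level=2):
        """Baseline wavelet denoising: soft-threshold the detail bands and keep
        the approximation band unchanged (the limitation the complex-diffusion
        step in the paper addresses)."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        cA, details = coeffs[0], coeffs[1:]
        # Robust noise estimate from the finest diagonal detail band,
        # then the universal threshold sigma * sqrt(2 * log N).
        sigma = np.median(np.abs(details[-1][2])) / 0.6745
        t = sigma * np.sqrt(2 * np.log(img.size))
        new_details = [tuple(pywt.threshold(d, t, mode='soft') for d in band)
                       for band in details]
        # The paper would additionally run a complex diffusion process on cA here.
        return pywt.waverec2([cA] + new_details, wavelet)
    ```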

  19. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    OpenAIRE

    Muhammad Nanda Kurniawan; Didit Widiyanto

    2014-01-01

    In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. Development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). Techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA) and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ).

  20. Implementation of Image Processing Algorithms and Glvq to Track an Object Using Ar.drone Camera

    OpenAIRE

    Kurniawan, Muhammad Nanda; Widiyanto, Didit

    2014-01-01

    In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. Development of this system utilized functions from the OpenCV library and the Robot Operating System (ROS). Techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA) and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The fina…

  1. Eye-Movement Tracking Using Compressed Video Images

    Science.gov (United States)

    Mulligan, Jeffrey B.; Beutter, Brent R.; Hull, Cynthia H. (Technical Monitor)

    1994-01-01

    Infrared video cameras offer a simple noninvasive way to measure the position of the eyes using relatively inexpensive equipment. Several commercial systems are available which use special hardware to localize features in the image in real time, but the constraint of realtime performance limits the complexity of the applicable algorithms. In order to get better resolution and accuracy, we have used off-line processing to apply more sophisticated algorithms to the images. In this case, a major technical challenge is the real-time acquisition and storage of the video images. This has been solved using a strictly digital approach, exploiting the burgeoning field of hardware video compression. In this paper we describe the algorithms we have developed for tracking the movements of the eyes in video images, and present experimental results showing how the accuracy is affected by the degree of video compression.

  2. Elaboration of some signal processing algorithms in ultrasonic techniques: application to materials NDT

    Science.gov (United States)

    Drai; Sellidj; Khelil; Benchaala

    2000-03-01

    In ultrasonic testing, extracting information for defect characterization has required the development of more advanced techniques than the classical methods. To obtain a high probability of defect detection, these methods use signal-processing algorithms to enhance the signal-to-noise ratio. The methods are also used to discriminate between planar and volumetric defects. In this paper, several signal-processing algorithms are developed and implemented on a computer to allow their use in real-time processing of ultrasonic NDT results.

  3. Designing a feedback control algorithm for the tube hydroforming process

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft; Cheng, Ming; Zhang, Shihong

    2013-01-01

    Tube hydroforming has a broad industrial appeal as the process enables production of geometrically complex parts within a single forming operation. The process is highly flexible with respect to adjustable process parameters (in the present context trajectories for the internal pressure and axial...

  4. A novel method to reduce time investment when processing videos from camera trap studies.

    Directory of Open Access Journals (Sweden)

    Kristijn R R Swinnen

    Full Text Available Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty or showed other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values) from frame to frame. Because of the size of the target species, we expected recordings containing it to show, on average, much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the number of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and
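    A minimal sketch of the underlying idea (assuming OpenCV; the frame subsampling and the plain mean absolute difference are illustrative simplifications of the paper's two filter methods) scores each clip by its average frame-to-frame pixel change, so low-activity clips can be discarded against a calibrated threshold:

    ```python
    import cv2
    import numpy as np

    def activity_score(path, step=5):
        """Mean absolute frame-to-frame pixel change of a video clip.
        Clips scoring below a calibrated threshold are likely non-target
        recordings and can be discarded without watching them."""
        cap = cv2.VideoCapture(path)
        prev, score, n, i = None, 0.0, 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % step == 0:                 # subsample frames for speed
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if prev is not None:
                    score += float(np.mean(cv2.absdiff(gray, prev)))
                    n += 1
                prev = gray
            i += 1
        cap.release()
        return score / max(n, 1)
    ```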

  5. Graphics processing unit accelerated optical coherence tomography processing at megahertz axial scan rate and high resolution video rate volumetric rendering.

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V

    2013-02-01

    In this report, we describe how to highly optimize a compute unified device architecture (CUDA) based platform to perform real-time processing of optical coherence tomography interferometric data and three-dimensional (3-D) volumetric rendering using a commercially available, cost-effective graphics processing unit (GPU). The maximum complete attainable axial scan processing rate, including memory transfer and display of the B-scan frame, was 2.24 MHz for 16-bit pixel depth and a 2048-point fast Fourier transform; the maximum 3-D volumetric rendering rate, including B-scan, en face view display, and 3-D rendering, was ~23 volumes/second (volume size: 1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with a single-chip GPU and the first implementation of real-time video-rate volumetric optical coherence tomography (OCT) processing and rendering that is capable of matching the acquisition rates of ultrahigh-speed OCT.

  6. Graphics processing unit accelerated optical coherence tomography processing at megahertz axial scan rate and high resolution video rate volumetric rendering

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V.

    2013-02-01

    In this report, we describe how to highly optimize a compute unified device architecture (CUDA) based platform to perform real-time processing of optical coherence tomography interferometric data and three-dimensional (3-D) volumetric rendering using a commercially available, cost-effective graphics processing unit (GPU). The maximum complete attainable axial scan processing rate, including memory transfer and display of the B-scan frame, was 2.24 MHz for 16-bit pixel depth and a 2048-point fast Fourier transform; the maximum 3-D volumetric rendering rate, including B-scan, en face view display, and 3-D rendering, was ~23 volumes/second (volume size: 1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with a single-chip GPU and the first implementation of real-time video-rate volumetric optical coherence tomography (OCT) processing and rendering that is capable of matching the acquisition rates of ultrahigh-speed OCT.

  7. Evaluation of security algorithms used for security processing on DICOM images

    Science.gov (United States)

    Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.

    2005-04-01

    In this paper, we developed a security approach to provide security measures and features for PACS image acquisition and teleradiology image transmission. The security processing on medical images was based on a public key infrastructure (PKI) and included digital signatures and data encryption to achieve confidentiality, privacy, authenticity, integrity, and non-repudiation. Many algorithms can be used in a PKI for data encryption and digital signatures. In this research, we selected several algorithms to perform security processing on different DICOM images in a PACS environment, evaluated the security processing performance of these algorithms, and examined how performance relates to image type, image size and implementation method.

  8. An algorithm for the exact Fisher information matrix of vector ARMAX time series processes

    NARCIS (Netherlands)

    Klein, A.; Melard, G.

    2011-01-01

    In this paper an algorithm is developed for the exact Fisher information matrix of a vector ARMAX Gaussian process (VARMAX). The algorithm is composed of recursion equations at a vector-matrix level, and some of these recursions involve derivatives. For that purpose

  9. Parametric Optimization of Nd:YAG Laser Beam Machining Process Using Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Rajarshi Mukherjee

    2013-01-01

    Full Text Available The Nd:YAG laser beam machining (LBM) process has great potential to manufacture intricately shaped microproducts thanks to its unique characteristics. In practical applications, such as drilling, grooving, cutting, or scribing, the optimal combination of Nd:YAG LBM process parameters needs to be sought out to provide the desired machining performance. Several mathematical techniques, like the Taguchi method, desirability functions, grey relational analysis, and genetic algorithms, have already been applied to parametric optimization of Nd:YAG LBM processes, but in most cases only suboptimal or near-optimal solutions have been reached. This paper focuses on the application of the artificial bee colony (ABC) algorithm to determine the optimal Nd:YAG LBM process parameters for both single- and multiobjective optimization of the responses. A comparative study with other population-based algorithms, such as the genetic algorithm, particle swarm optimization, and the ant colony optimization algorithm, demonstrates the general applicability and acceptability of the ABC algorithm for parametric optimization. In this algorithm, the exchange of information amongst the onlooker bees minimizes the number of search iterations needed to reach the global optimum and avoids the generation of suboptimal solutions. The results of paired t-tests also demonstrate its superiority over the other optimization algorithms.
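    A compact sketch of the standard ABC loop for box-constrained minimization follows (colony size, trial limit and iteration count are illustrative; the paper's multiobjective handling is omitted):

    ```python
    import numpy as np

    def abc_minimize(f, lo, hi, n_food=20, limit=30, iters=200):
        """Compact artificial bee colony sketch for box-constrained minimization."""
        dim = len(lo)
        X = np.random.uniform(lo, hi, (n_food, dim))   # food sources
        fx = np.apply_along_axis(f, 1, X)
        trials = np.zeros(n_food, dtype=int)

        def try_neighbor(i):
            k = np.random.choice([j for j in range(n_food) if j != i])
            d = np.random.randint(dim)
            v = X[i].copy()
            v[d] += np.random.uniform(-1, 1) * (X[i, d] - X[k, d])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fx[i]:
                X[i], fx[i], trials[i] = v, fv, 0      # greedy acceptance
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_food):                    # employed-bee phase
                try_neighbor(i)
            fit = 1.0 / (1.0 + fx - fx.min())          # fitness for roulette wheel
            p = fit / fit.sum()
            for _ in range(n_food):                    # onlooker bees share information
                try_neighbor(np.random.choice(n_food, p=p))
            for i in range(n_food):                    # scouts replace exhausted sources
                if trials[i] > limit:
                    X[i] = np.random.uniform(lo, hi)
                    fx[i], trials[i] = f(X[i]), 0
        return X[fx.argmin()], fx.min()
    ```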

  10. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  11. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
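    A pixel-domain analogue of this approach can be sketched as follows (illustrative only; the paper derives its one-dimensional difference curve from MPEG macroblock features rather than from decoded frame histograms): collapse the sequence into a 1D frame-difference curve and flag outlier peaks as scene changes:

    ```python
    import numpy as np

    def scene_changes(frames, bins=64, z=4.0):
        """Collapse a video into a 1-D frame-difference curve and flag scene
        changes as outlier peaks. `frames` is a sequence of grayscale arrays.
        (The paper computes an analogous curve from MPEG macroblock features.)"""
        curve, prev = [], None
        for f in frames:
            h, _ = np.histogram(f, bins=bins, range=(0, 255))
            h = h / h.sum()
            if prev is not None:
                curve.append(np.abs(h - prev).sum())   # histogram L1 distance
            prev = h
        curve = np.asarray(curve)
        # A cut is a difference far above the typical inter-frame variation.
        thr = curve.mean() + z * curve.std()
        return np.flatnonzero(curve > thr) + 1         # first frames of new shots
    ```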

  12. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data has made cloud computing more and more popular in various fields. Video resources are very useful and important in education, security monitoring, and other areas. However, their huge volume, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework that can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  13. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions

    National Research Council Canada - National Science Library

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language...

  14. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    Science.gov (United States)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, and (2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms: the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, and parameter uncertainty for the end products obtained from the different methods. The study was conducted at three study sites covering diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps applied to the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial differences. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, the high
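    The Richardson-Lucy baseline used in the comparison can be sketched in a few lines for a one-dimensional waveform (the iteration count and initialization are illustrative; the Gold algorithm uses a different multiplicative update):

    ```python
    import numpy as np

    def richardson_lucy_1d(d, psf, iters=50, eps=1e-12):
        """Minimal Richardson-Lucy deconvolution of a 1-D waveform.

        d: recorded waveform (non-negative), psf: system response (sums to 1).
        Multiplicative updates preserve non-negativity of the estimate."""
        u = np.full_like(d, d.mean(), dtype=np.float64)  # flat initial estimate
        psf_rev = psf[::-1]                              # adjoint (flipped) kernel
        for _ in range(iters):
            conv = np.convolve(u, psf, mode='same')      # forward model K*u
            ratio = d / np.maximum(conv, eps)            # data / prediction
            u *= np.convolve(ratio, psf_rev, mode='same')
        return u
    ```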

  15. Algorithm for applying interpolation in digital signal processing ...

    African Journals Online (AJOL)

    In many practical applications of Digital Signal Processing (DSP), one is faced with the problem of changing the sampling rate of a signal, either increasing it or decreasing it by some amount. Software-defined radios and test equipment use a variety of digital signal processing techniques to improve system performance.

  16. An Efficient Algorithm to Processing SAR Data on GPU

    Directory of Open Access Journals (Sweden)

    Meng Da-di

    2013-06-01

    Full Text Available Data processing is a time-consuming matter in the field of Synthetic Aperture Radar (SAR. In other ways, Graphic Processing Unit (GPU have tremendous float-point computational horsepower and very high memory bandwidth, and the developing Compute Unified Device Architecture (CUDA technology has enabled the GPU to be applied to the general purpose parallel computing. A new method of processing SAR data on GPU is presented in this paper. Compared with the nominal GPU based SAR processing method, number of data transfers between CPU/GPU are reduced from 4 to 1, and CPUs are exploited to cooperate with GPU synchronously. By the proposed method, data Processing is speeded up by 2.3 times, which is verified by the testing on the simulated SAR data.

  17. Particle detection algorithms for complex plasmas

    Science.gov (United States)

    Mohr, D. P.; Knapek, C. A.; Huber, P.; Zaehringer, E.

    2018-01-01

    The micrometer-sized particles in a complex plasma can be directly visualized and recorded by digital video cameras. To analyze the dynamics of single particles, reliable algorithms are required to accurately determine their positions to sub-pixel accuracy from the recorded images. Here, we combine the algorithms with common techniques for image processing, and we study several algorithms, pre- and post-processing methods, and the impact of the choice of threshold parameters.
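    One of the simplest sub-pixel position estimators studied in this context is the intensity-weighted centroid; a minimal sketch follows (the window radius and background threshold are illustrative, and the peak is assumed to lie at least r pixels from the image border):

    ```python
    import numpy as np

    def subpixel_centroid(img, peak, r=3, thresh=0.0):
        """Intensity-weighted centroid in a (2r+1)^2 window around an integer
        peak position: the classic sub-pixel particle position estimator."""
        y0, x0 = peak
        # Assumes the peak is at least r pixels away from the image border.
        win = img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(np.float64)
        win = np.clip(win - thresh, 0, None)     # suppress background
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
        m = win.sum()
        return (y0 + (win * ys).sum() / m, x0 + (win * xs).sum() / m)
    ```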

  18. Performance evaluation of simple linear iterative clustering algorithm on medical image processing.

    Science.gov (United States)

    Cong, Jinyu; Wei, Benzheng; Yin, Yilong; Xi, Xiaoming; Zheng, Yuanjie

    2014-01-01

    The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to different kinds of image processing because of its excellent, perceptually meaningful characteristics. In order to better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm in comparison with the Normalized Cuts and Turbopixels algorithms. Extensive experimental results show that SLIC is faster and less sensitive to image type and to the chosen superpixel number than similar algorithms such as Turbopixels and Normalized Cuts. It also performs well in terms of boundary recall, robustness to fuzzy boundaries, choice of superpixel size, and overall segmentation performance on medical images.
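    A minimal usage example with the scikit-image implementation of SLIC (parameter values are illustrative, not those used in the study):

    ```python
    from skimage import data, segmentation, color

    img = data.astronaut()                      # any RGB image
    labels = segmentation.slic(img, n_segments=300, compactness=10,
                               start_label=1)   # SLIC superpixels
    # Replace each superpixel by its mean color to inspect boundary adherence.
    mean_img = color.label2rgb(labels, img, kind='avg', bg_label=0)
    ```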

  19. A dataflow analysis tool for parallel processing of algorithms

    Science.gov (United States)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.

  20. GENERAL ALGORITHMIC SCHEMA OF THE PROCESS OF THE CHILL AUXILIARIES PROJECTION

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2006-01-01

    Full Text Available A general algorithmic diagram systematizing the existing approaches to the design process is offered, and the foundation is laid for a computer system for the construction of chill mold tooling.

  1. On Wiener-Masani's algorithm for finding the generating function of multivariate stochastic processes

    Science.gov (United States)

    Miamee, A. G.

    1988-01-01

    It is shown that the algorithms for determining the generating function and prediction error matrix of multivariate stationary stochastic processes developed by Wiener and Masani (1957), and later by Masani (1960) will work in some more general setting.

  2. Asymptotic equivalent analysis of the LMS algorithm under linearly filtered processes

    National Research Council Canada - National Science Library

    Rupp, Markus

    2016-01-01

    While the least mean square (LMS) algorithm has been widely explored for some specific statistics of the driving process, an understanding of its behavior under general statistics has not been fully achieved...
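    For reference, the basic LMS recursion under discussion can be sketched as follows (the filter length and step size are illustrative); feeding it a linearly filtered, i.e. correlated, reference signal x is precisely the regime the paper analyzes:

    ```python
    import numpy as np

    def lms(x, d, L=16, mu=0.01):
        """Minimal LMS adaptive filter: adapt w so that w'x(n) tracks d(n).
        Returns the final weights and the error history."""
        w = np.zeros(L)
        e = np.zeros(len(x))
        buf = np.zeros(L)                # sliding window of recent inputs
        for n in range(len(x)):
            buf = np.roll(buf, 1); buf[0] = x[n]
            y = w @ buf                  # filter output
            e[n] = d[n] - y              # a priori error
            w += mu * e[n] * buf         # stochastic-gradient update
        return w, e
    ```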

  3. Automatic digital document processing and management problems, algorithms and techniques

    CERN Document Server

    Ferilli, Stefano

    2011-01-01

    This text reviews the issues involved in handling and processing digital documents. Examining the full range of a document's lifetime, this book covers acquisition, representation, security, pre-processing, layout analysis, understanding, analysis of single components, information extraction, filing, indexing and retrieval. This title: provides a list of acronyms and a glossary of technical terms; contains appendices covering key concepts in machine learning, and providing a case study on building an intelligent system for digital document and library management; discusses issues of security,

  4. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The Sonar Simulator Integrated Environment (SSIE) is a tool for developing high-performance processing algorithms for single sonar images or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby to assessment of the algorithms tested. … A key prerequisite for the development of the algorithms is the availability of sonar images. To accommodate this problem, the SSIE has been equipped with a simulator capable of generating high-fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described, and examples of different processing steps are given…

  5. Playing with Process: Video Game Choice as a Model of Behavior

    Science.gov (United States)

    Waelchli, Paul

    2010-01-01

    Popular culture experience in video games creates avenues to practice information literacy skills and model research in a real-world setting. Video games create a unique popular culture experience where players can invest dozens of hours on one game, create characters to identify with, organize skill sets and plot points, collaborate with people…

  6. Exploring Novice Teachers' Cognitive Processes Using Digital Video Technology: A Qualitative Case Study

    Science.gov (United States)

    Sun-Ongerth, Yuelu

    2012-01-01

    This dissertation describes a qualitative case study that investigated novice teachers' video-aided reflection on their own teaching. To date, most studies that have investigated novice teachers' video-aided reflective practice have focused on examining novice teachers' levels of reflective writing rather than the cognitive…

  7. We get the algorithms of our ground truths: Designing referential databases in digital image processing

    Science.gov (United States)

    Jaton, Florian

    2017-01-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs. PMID:28950802

  8. We get the algorithms of our ground truths: Designing referential databases in digital image processing.

    Science.gov (United States)

    Jaton, Florian

    2017-12-01

    This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called 'ground truths' that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an 'axiomatic' perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a 'problem-oriented' perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs.

  9. Algorithms

    Indian Academy of Sciences (India)

    The general basis of the universality of computers lies in the notion of Universal Turing Machines proposed by A. M. Turing, which will be discussed in forthcoming articles. In general, a program is composed of objects familiar and convenient to the user. This process enables the design and construction of complex programs.

  10. Algorithms

    Indian Academy of Sciences (India)

    …concepts. … specifications (intentions). In the sequel, we shall illustrate the above process in an informal way. Our aim is to present the concepts assuming just a basic knowledge of program execution and of basic mathematical concepts. For understanding the basis of the method, let us take a look …

  11. Signal processing for 5G algorithms and implementations

    CERN Document Server

    Luo, Fa-Long

    2016-01-01

    A comprehensive and invaluable guide to 5G technology, implementation and practice in one single volume. For all things 5G, this book is a must-read. Signal processing techniques have played the most important role in wireless communications since the second generation of cellular systems. It is anticipated that new techniques employed in 5G wireless networks will not only improve peak service rates significantly, but also enhance capacity, coverage, reliability, low-latency, efficiency, flexibility, compatibility and convergence to meet the increasing demands imposed by applications such as big data, cloud services, machine-to-machine (M2M) and mission-critical communications. This book is a comprehensive and detailed guide to all signal processing techniques employed in 5G wireless networks. Uniquely organized into four categories (New Modulation and Coding, New Spatial Processing, New Spectrum Opportunities and New System-level Enabling Technologies), it covers everything from network architecture…

  12. Approximate Circuits in Low-Power Image and Video Processing: The Approximate Median Filter

    Directory of Open Access Journals (Sweden)

    L. Sekanina

    2017-09-01

    Full Text Available Low power image and video processing circuits are crucial in many applications of computer vision. Traditional techniques used to reduce power consumption in these applications have recently been accompanied by circuit approximation methods which exploit the fact that these applications are highly error resilient and, hence, the quality of image processing can be traded for power consumption. On the basis of a literature survey, we identified the components whose implementations are the most frequently approximated and the methods used for obtaining these approximations. One of the components is the median image filter. We propose, evaluate and compare two approximation strategies based on Cartesian genetic programming applied to approximate various common implementations of the median filter. For filters developed using these approximation strategies, trade-offs between the quality of filtering and power consumption are investigated. Under conditions of our experiments we conclude that better trade-offs are achieved when the image filter is evolved from scratch rather than a conventional filter is approximated.

  13. The algorithm of verification of welding process for plastic pipes

    Science.gov (United States)

    Rzasinski, R.

    2017-08-01

    The study analyzes the butt-welding process for PE pipes in terms of the proper selection of joint parameters, oriented toward components produced as a series of pipe types. Polymeric materials, commonly referred to as polymers or plastics, are synthetic materials produced from oil products by polyreactions of low-molecular-weight compounds called monomers. During these polyreactions the monomers combine to build a macromolecular material named with the prefix "poly" (polypropylene, polyethylene or polyurethane), forming solid particles on the order of 0.2 to 0.4 mm. Finished polymer products of virtually any shape and size are obtained by compression molding, injection molding, extrusion, laminating, centrifugal casting, etc. Only thermoplastics, which soften at elevated temperature and can thus be joined under pressure, can be welded. Depending on the source and method of supplying heat, welding processes include contact welding, radiant welding, friction welding, dielectric welding, and ultrasonic welding. The analysis here concerns contact welding.

  14. A metamodel based optimisation algorithm for metal forming processes

    NARCIS (Netherlands)

    Bonte, M.H.A.; van den Boogaard, Antonius H.; Huetink, Han; Banabic, Dorel

    2007-01-01

    Cost saving and product improvement have always been important goals in the metal forming industry. To achieve these goals, metal forming processes need to be optimised. During the last decades, simulation software based on the Finite Element Method (FEM) has significantly contributed to designing

  15. Signal-Processing Algorithm Development for the ACLAIM Sensor

    Science.gov (United States)

    vonLaven, Scott

    1995-01-01

    Methods for further minimizing the risk by making use of previous lidar observations were investigated. EOFs are likely to play an important role in these methods, and a procedure for extracting EOFs from data has been implemented, The new processing methods involving EOFs could range from extrapolation, as discussed, to more complicated statistical procedures for maintaining low unstart risk.

  16. Scheduling algorithms for automatic control systems for technological processes

    Science.gov (United States)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.

    2017-01-01

    The wide use of automatic process control systems, together with high-performance systems containing a number of computers (processors), creates opportunities for high-quality and fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and processing of big data arrays all require a high level of productivity and, at the same time, minimal time for data handling and delivery of results. To achieve the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied; a simple example of one such dispatching heuristic is sketched below. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some considerations on their use in developing software for automatic process control systems are made.
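    One classic dispatching heuristic of the kind surveyed here is longest-processing-time-first (LPT) list scheduling on identical machines; a minimal sketch (the job representation is an illustrative assumption):

    ```python
    import heapq

    def lpt_schedule(jobs, m):
        """Longest-Processing-Time-first list scheduling on m identical machines.
        jobs: list of (job_id, duration) pairs.
        Returns the per-machine assignments and the resulting makespan."""
        machines = [(0.0, i, []) for i in range(m)]        # (load, id, assigned jobs)
        heapq.heapify(machines)
        for job_id, dur in sorted(jobs, key=lambda j: -j[1]):
            load, mid, assigned = heapq.heappop(machines)  # least-loaded machine
            assigned.append(job_id)
            heapq.heappush(machines, (load + dur, mid, assigned))
        makespan = max(load for load, _, _ in machines)
        return machines, makespan
    ```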

  17. Algorithmic information theory mathematics of digital information processing

    CERN Document Server

    Seibt, Peter

    2007-01-01

    Treats the Mathematics of many important areas in digital information processing. This book covers, in a unified presentation, five topics: Data Compression, Cryptography, Sampling (Signal Theory), Error Control Codes, Data Reduction. It is useful for teachers, students and practitioners in Electronic Engineering, Computer Science and Mathematics.

  18. The software and algorithms for hyperspectral data processing

    Science.gov (United States)

    Shyrayeva, Anhelina; Martinov, Anton; Ivanov, Victor; Katkovsky, Leonid

    2017-04-01

    Hyperspectral remote sensing is widely used for collecting and processing information about the Earth's surface objects. Hyperspectral data are combined to form a three-dimensional (x, y, λ) data cube. The Department of Aerospace Research of the Institute of Applied Physical Problems of the Belarusian State University presents a general model of software for hyperspectral image data analysis and processing. The software runs in a Windows XP/7/8/8.1/10 environment on any personal computer. It has been written in C++ using the Qt framework and OpenGL for graphical data visualization. The software has a flexible structure that consists of a set of independent plugins; each plugin is compiled as a Qt plugin and packaged as a Windows dynamic-link library (DLL). Plugins can be categorized in terms of data reading types, data visualization (3D, 2D, 1D) and data processing. The software has various built-in functions for statistical and mathematical analysis and signal processing, such as moving-average smoothing, Savitzky-Golay smoothing, RGB correction, histogram transformation, and atmospheric correction. The software provides two of the authors' engineering techniques for the solution of the atmospheric correction problem: an iterative method for refining spectral albedo parameters using libRadtran, and an analytical least squares method. The main advantages of these methods are a high processing rate (several minutes for 1 GB of data) and a low relative error in albedo retrieval (less than 15%). The software also supports work with spectral libraries, region of interest (ROI) selection, and spectral analysis such as cluster-type image classification and automatic comparison of hypercube spectra, by a similarity criterion, with spectra from spectral libraries, and vice versa. The software deals with different kinds of spectral information in order to identify and distinguish spectrally unique materials. Also, the following advantages

  19. Representing Block-structured Process Models as Order Matrices: Basic Concepts, Formal Properties, Algorithms

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2009-01-01

    In various cases we need to transform a process model into a matrix representation for further analysis. In this paper, we introduce the notion of Order Matrix, which enables unique representation of block-structured process models. We present algorithms for transforming a block-structured process

  20. Design optimization of single mixed refrigerant LNG process using a hybrid modified coordinate descent algorithm

    Science.gov (United States)

    Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong

    2018-01-01

    Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm is proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm yields an improved result compared to existing methodologies for finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be employed for any mixed-refrigerant-based liquefaction process in the natural gas industry.
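    The HMCD details are specific to the paper, but the plain coordinate-descent skeleton it builds on can be sketched as follows (the grid-based line search and bounds handling are illustrative assumptions):

    ```python
    import numpy as np

    def coordinate_descent(f, x0, lo, hi, grid=25, sweeps=50, tol=1e-8):
        """Plain coordinate descent: optimize one decision variable at a time by
        a coarse line search, holding the others fixed. (The paper's HMCD adds
        hybrid modifications on top of this basic scheme.)"""
        x = np.array(x0, dtype=float)
        fx = f(x)
        for _ in range(sweeps):
            improved = False
            for i in range(len(x)):
                candidates = np.linspace(lo[i], hi[i], grid)
                trial = x.copy()
                for c in candidates:
                    trial[i] = c
                    ft = f(trial)
                    if ft < fx - tol:            # accept strictly improving moves
                        x[i], fx, improved = c, ft, True
            if not improved:                     # stop when a full sweep stalls
                break
        return x, fx
    ```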

  1. A priority-based heuristic algorithm (PBHA) for optimizing the integrated process planning and scheduling problem

    Directory of Open Access Journals (Sweden)

    Muhammad Farhan Ausaf

    2015-12-01

    Process planning and scheduling are two important components of a manufacturing setup, and it is important to integrate them to achieve better global optimality and improved system performance. Numerous algorithm-based approaches exist for finding optimal solutions to the integrated process planning and scheduling (IPPS) problem, most of which apply existing meta-heuristic algorithms. Although these approaches are effective, there is still room for improvement in solution quality and algorithm efficiency, especially for more complicated problems. Dispatching rules have been used successfully for complicated scheduling problems but have not been considered extensively for the IPPS problem. This approach incorporates dispatching rules with the concept of prioritizing jobs in an algorithm called the priority-based heuristic algorithm (PBHA). PBHA establishes job and machine priorities for selecting operations; priority assignment and a set of dispatching rules are used simultaneously to generate both the process plans and the schedules for all jobs and machines. The algorithm was tested on a series of benchmark problems and achieved superior results for most complex problems presented in the recent literature while using fewer computational resources.

  2. Application of Hybrid Genetic Algorithm Routine in Optimizing Food and Bioengineering Processes

    Directory of Open Access Journals (Sweden)

    Jaya Shankar Tumuluru

    2016-11-01

    Optimization is a crucial step in the analysis of experimental results. Deterministic methods converge only to local optima and require exponentially more time as dimensionality increases; stochastic algorithms can search the domain space efficiently, but their convergence is not guaranteed. This article demonstrates the novelty of the hybrid genetic algorithm (HGA), which combines stochastic and deterministic routines for improved optimization results. The new hybrid genetic algorithm is applied to the Ackley benchmark function as well as case studies in food, biofuel, and biotechnology processes. For each case study, the hybrid genetic algorithm found a better optimum candidate than reported by the sources: in food processing it improved the anthocyanin yield by 6.44%, in bio-oil production it gave a 5.06% higher yield, and in the enzyme production process it predicted a 0.39% higher xylanase yield. Hybridizing the genetic algorithm with a deterministic algorithm thus produced an improved optimum compared to statistical methods.
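
    A minimal sketch of the hybrid idea, assuming a simple genetic algorithm with truncation selection followed by a deterministic Nelder-Mead polish (the paper's exact operators are not specified in this record), applied to the Ackley benchmark mentioned above:

        import numpy as np
        from scipy.optimize import minimize

        def ackley(x):
            x = np.asarray(x)
            n = x.size
            return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                    - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

        rng = np.random.default_rng(0)
        pop = rng.uniform(-5, 5, size=(40, 2))       # random initial population

        for generation in range(100):
            fitness = np.apply_along_axis(ackley, 1, pop)
            parents = pop[np.argsort(fitness)[:20]]  # truncation selection
            # Uniform crossover followed by Gaussian mutation.
            mates = parents[rng.permutation(20)]
            mask = rng.random(parents.shape) < 0.5
            children = (np.where(mask, parents, mates)
                        + rng.normal(0, 0.1, parents.shape))
            pop = np.vstack([parents, children])

        best = pop[np.argmin(np.apply_along_axis(ackley, 1, pop))]
        # Deterministic refinement: polish the GA result with Nelder-Mead.
        result = minimize(ackley, best, method="Nelder-Mead")
        print(result.x, result.fun)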

  3. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing aircraft), for the commercial-sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS) and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely, and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track, and also consisted of a briefing containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; the data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  4. Joint Kalman–Haar Algorithm Applied to Signal Processing

    Directory of Open Access Journals (Sweden)

    Alejandro Viegener

    2012-03-01

    For the analysis of signals disturbed by noise, we propose a working methodology that seeks the best estimate by combining Kalman filtering with the characterization achieved by a multiresolution analysis (MRA) using wavelets. From the standpoint of Kalman filtering this combined procedure is quasi-optimal, but the modification allows the simultaneous implementation of a wavelet denoising scheme, which decreases the computational cost relative to applying both procedures separately. Our proposal is to process the signal in successive non-overlapping intervals, combining the computation of the optimal filter with an MRA using the Haar wavelet. The method takes advantage of the combined use of both tools (Kalman-Haar) and is free from edge problems related to signal segmentation.
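
    The two building blocks combined by the method can be sketched separately: a scalar Kalman filter and a one-level Haar decomposition with soft thresholding of the detail coefficients. The random-walk signal model and noise levels below are illustrative assumptions, not the authors' formulation.

        import numpy as np

        def kalman_smooth(z, q=1e-4, r=0.05):
            """Scalar random-walk Kalman filter over one signal segment."""
            x, p = z[0], 1.0
            out = np.empty_like(z)
            for k, zk in enumerate(z):
                p = p + q                      # predict
                g = p / (p + r)                # Kalman gain
                x = x + g * (zk - x)           # update
                p = (1.0 - g) * p
                out[k] = x
            return out

        def haar_denoise(z, thresh=0.1):
            """One-level Haar MRA with soft-thresholded detail coefficients."""
            a = (z[0::2] + z[1::2]) / np.sqrt(2.0)   # approximation
            d = (z[0::2] - z[1::2]) / np.sqrt(2.0)   # detail
            d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
            rec = np.empty_like(z)
            rec[0::2] = (a + d) / np.sqrt(2.0)
            rec[1::2] = (a - d) / np.sqrt(2.0)
            return rec

        t = np.linspace(0, 1, 256)               # even-length segment
        noisy = np.sin(2 * np.pi * 5 * t) + np.random.normal(0, 0.2, t.size)
        estimate = kalman_smooth(haar_denoise(noisy))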

  5. A Review on Sensor, Signal, and Information Processing Algorithms (PREPRINT)

    Science.gov (United States)

    2010-01-01

  6. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the turn of the millennium. Video analytics addresses the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  7. Despeckle filtering for ultrasound imaging and video II selected applications

    CERN Document Server

    Loizou, Christos P

    2015-01-01

    In ultrasound imaging and video, visual perception is hindered by multiplicative speckle noise that degrades image quality. Noise reduction is therefore essential for improving visual observation quality, or as a pre-processing step for further automated analysis such as image/video segmentation, texture analysis, and encoding in ultrasound imaging and video. The goal of the first book (book 1 of 2) was to introduce the problem of speckle in ultrasound images and video, as well as the theoretical background, algorithmic steps, and MATLAB™ code for the following group of despeckle filters:

  8. Parametric optimization of ultrasonic machining process using gravitational search and fireworks algorithms

    Directory of Open Access Journals (Sweden)

    Debkalpa Goswami

    2015-03-01

    Ultrasonic machining (USM) is a mechanical material removal process used to erode holes and cavities in hard or brittle workpieces by means of shaped tools, high-frequency mechanical motion, and an abrasive slurry. Unlike other non-traditional machining processes, such as laser beam and electrical discharge machining, the USM process does not thermally damage the workpiece or introduce significant levels of residual stress, which is important for the survival of materials in service. To achieve enhanced machining performance and better machined job characteristics, it is often necessary to determine the optimal control parameter settings of a USM process. Earlier mathematical approaches to parametric optimization of USM processes have mostly yielded near-optimal or sub-optimal solutions. In this paper, two almost unexplored non-conventional optimization techniques, the gravitational search algorithm (GSA) and the fireworks algorithm (FWA), are applied to parametric optimization of USM processes. The optimization performance of these two algorithms is compared with that of other popular population-based algorithms, and the effects of their algorithm parameters on the derived optimal solutions and computational speed are also investigated. It is observed that FWA provides the best optimal results for the considered USM processes.

  9. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.

  10. APPLYING OF COLLABORATIVE FILTERING ALGORITHM FOR PROCESSING OF MEDICAL DATA

    Directory of Open Access Journals (Sweden)

    Карина Владимировна МЕЛЬНИК

    2015-05-01

    The problem of improving the effectiveness of a medical facility in implementing a social project is considered. There are different approaches to solving this problem, some of which require additional funding, which is usually absent. It was therefore proposed to process and apply patients' data from medical records. The selection of a representative sample of patients was carried out using collaborative filtering. A review of collaborative filtering methods is performed, showing three main groups: methods that calculate various measures of similarity between objects, data mining techniques, and hybrid approaches. The article considers the Gower coefficient for calculating a similarity measure between patients' medical records, and a model for risk assessment of diseases based on collaborative filtering techniques is developed.
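
    A minimal sketch of the Gower coefficient for records with mixed numeric and categorical fields; the patient fields below are hypothetical examples, not taken from the study's medical records.

        import numpy as np

        def gower_similarity(a, b, numeric_ranges, is_numeric):
            """Gower similarity for mixed numeric/categorical records."""
            scores = []
            for x, y, rng, num in zip(a, b, numeric_ranges, is_numeric):
                if num:
                    scores.append(1.0 - abs(x - y) / rng)  # range-normalised
                else:
                    scores.append(1.0 if x == y else 0.0)  # simple matching
            return np.mean(scores)

        # Hypothetical patient records: (age, systolic BP, smoker, blood type).
        p1 = (54, 130, "yes", "A")
        p2 = (61, 142, "no", "A")
        ranges = (80.0, 120.0, None, None)   # ranges for numeric fields only
        numeric = (True, True, False, False)
        print(gower_similarity(p1, p2, ranges, numeric))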

  11. Study of gray image pseudo-color processing algorithms

    Science.gov (United States)

    Hu, Jinlong; Peng, Xianrong; Xu, Zhiyong

    Gray images contain abundant information, but when the intensity differences between adjacent pixels are small, the required information cannot be extracted by humans, since humans are more sensitive to color images than to gray images. If gray images are transformed to pseudo-color images, the details become more explicit and the target is recognized more easily. There are two approaches (in the frequency domain and in the spatial domain) to pseudo-color enhancement of gray images. The first is mainly filtering in the frequency domain; the second comprises the equal-density pseudo-color coding methods, which mainly include density segmentation coding, function transformation, and complementary pseudo-color coding. Moreover, there are many other methods of pseudo-color enhancement, such as pixel self-transformation based on the RGB tri-primaries, pseudo-color coding of phase-modulated images based on the RGB color model, pseudo-color coding of high gray-resolution images, and so on. However, the above methods are tailored to particular situations, and their transformations are based on the RGB color space. In order to improve the visual effect, this paper improves the method based on pixel self-transformation by moving from the RGB color space to the HSI color space. Compared with other methods, gray images in common formats can be processed, and many gray images can be transformed to 24-bit pseudo-color images. Experiments show that the processed images have abundant levels, consistent with human perception.
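
    A minimal pseudo-color sketch in Python with OpenCV: one pass uses a built-in colormap, the other maps intensity to hue by hand as an HSV stand-in for the HSI-based transformation proposed in this record. The file names are placeholders.

        import cv2
        import numpy as np

        # Load an 8-bit grayscale image (placeholder path).
        gray = cv2.imread("input_gray.png", cv2.IMREAD_GRAYSCALE)

        # Built-in intensity-to-color lookup table as a quick pseudo-color pass.
        pseudo_jet = cv2.applyColorMap(gray, cv2.COLORMAP_JET)

        # Hand-rolled variant via HSV: map intensity to hue (dark pixels to
        # blue, bright pixels to red; OpenCV hue range is 0-179), full
        # saturation and value.
        hue = ((255 - gray.astype(np.float32)) * 120.0 / 255.0).astype(np.uint8)
        hsv = cv2.merge([hue, np.full_like(gray, 255), np.full_like(gray, 255)])
        pseudo_hsv = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

        cv2.imwrite("pseudo_jet.png", pseudo_jet)
        cv2.imwrite("pseudo_hsv.png", pseudo_hsv)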

  12. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses the summed absolute difference error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer and 1000 for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data-splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the simplified unsymmetrical multi-hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.
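
    A CPU reference version of the full-search SAD block matching described above can be sketched in a few lines of NumPy; the GPU kernel itself, and the data splitting across cards, are not shown.

        import numpy as np

        def full_search(ref_block, frame, top, left, radius):
            """Exhaustive SAD search for ref_block in frame around (top, left)."""
            h, w = ref_block.shape
            best = (0, 0, np.inf)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = top + dy, left + dx
                    if (y < 0 or x < 0 or
                            y + h > frame.shape[0] or x + w > frame.shape[1]):
                        continue
                    sad = np.abs(frame[y:y + h, x:x + w].astype(int)
                                 - ref_block.astype(int)).sum()
                    if sad < best[2]:
                        best = (dy, dx, sad)
            return best   # displacement and its SAD score

        prev = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
        curr = np.roll(prev, (3, -2), axis=(0, 1))   # known shift of (3, -2)
        print(full_search(prev[100:116, 100:116], curr, 100, 100, radius=8))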

  14. Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation.

    Science.gov (United States)

    Cheng, Kai-Wen; Chen, Yie-Tarng; Fang, Wen-Hsien

    2015-12-01

    This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR), which is fully non-parametric, robust to noisy training data, and supports sparse features. While most research on anomaly detection has focused on local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To detect local and global anomalies simultaneously, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using GPR, based on which a novel inference method for computing the likelihood of an observed interaction is developed. These local likelihood scores are then integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, this is the first time GPR has been employed to model the relationship of nearby STIPs for anomaly detection. Experiments on four widely used datasets show that the new method outperforms the main state-of-the-art methods with a lower computational burden.
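
    A minimal sketch of Gaussian process regression with a noise-robust kernel, using scikit-learn on toy 1-D data; the paper's interaction templates, STIP features, and inference method are beyond this illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Toy training data: noisy samples of a smooth 1-D function.
        rng = np.random.default_rng(1)
        X = rng.uniform(0, 10, 40).reshape(-1, 1)
        y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)

        # RBF kernel plus a white-noise term makes the fit robust to noise.
        kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
        gpr = GaussianProcessRegressor(kernel=kernel).fit(X, y)

        # Predictive mean and standard deviation; a low likelihood under the
        # predictive distribution can be used to flag anomalous observations.
        X_test = np.linspace(0, 10, 5).reshape(-1, 1)
        mean, std = gpr.predict(X_test, return_std=True)
        print(np.column_stack([mean, std]))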

  15. Multisensor Processing Algorithms for Underwater Dipole Localization and Tracking Using MEMS Artificial Lateral-Line Sensors

    Directory of Open Access Journals (Sweden)

    Jones Douglas L

    2006-01-01

    An engineered artificial lateral-line system has been recently developed, consisting of a 16-element array of finely spaced MEMS hot-wire flow sensors. This represents a new class of underwater flow sensing instruments and necessitates the development of rapid, efficient, and robust signal processing algorithms. In this paper, we report on the development and implementation of a set of algorithms that assist in the localization and tracking of vibrational dipole sources underwater. Using these algorithms, accurate tracking of the trajectory of a moving dipole source has been demonstrated successfully.

  16. On the Wiener-Masani algorithm for finding the generating function of multivariate stochastic processes

    Science.gov (United States)

    Miamee, A. G.

    1988-01-01

    The algorithms developed by Wiener and Masani (1957 and 1958) and Masani (1960) for the characterization of a class of multivariate stationary stochastic processes are investigated analytically. The algorithms permit the determination of (1) the generating function, (2) the prediction-error matrix, and (3) an autoregressive representation of the linear least-squares predictor. A number of theorems and lemmas are proved, and it is shown that the range of validity of the algorithms can be extended significantly beyond that given by Wiener and Masani.

  17. IMPLEMENTATION OF IMAGE PROCESSING ALGORITHMS AND GLVQ TO TRACK AN OBJECT USING AR.DRONE CAMERA

    Directory of Open Access Journals (Sweden)

    Muhammad Nanda Kurniawan

    2014-08-01

    In this research, a Parrot AR.Drone unmanned aerial vehicle (UAV) was used to track an object from above. The system was developed using functions from the OpenCV library and the Robot Operating System (ROS). The techniques implemented in the system are an image processing algorithm (Centroid-Contour Distance, CCD), a feature extraction algorithm (Principal Component Analysis, PCA), and an artificial neural network algorithm (Generalized Learning Vector Quantization, GLVQ). The final result of this research is a program that enables the AR.Drone to track a moving object on the floor with a fast response time of under one second.

  18. Digital video image processing from dental operating microscope in endodontic treatment.

    Science.gov (United States)

    Suehara, Masataka; Nakagawa, Kan-Ichi; Aida, Natsuko; Ushikubo, Toshihiro; Morinaga, Kazuki

    2012-01-01

    Recently, optical microscopes have been used in endodontic treatment, as they offer advantages in terms of magnification, illumination, and documentation. Documentation is particularly important in presenting images to patients, and can take the form of both still images and motion video. Although high-quality still images can be obtained using a 35-mm film or CCD camera, the quality of still images produced by a video camera is significantly lower. The purpose of this study was to determine the potential of RegiStax in obtaining high-quality still images from a continuous video stream from an optical microscope. Video was captured continuously and sections with the highest luminosity chosen for frame alignment and stacking using the RegiStax program. The resulting stacked images were subjected to wavelet transformation. The results indicate that high-quality images with a large depth of field could be obtained using this method.
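
    The select-and-stack idea can be sketched as follows, assuming the frames are already aligned (RegiStax also performs frame alignment and wavelet sharpening, both omitted here); the file name is a placeholder.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("microscope_clip.avi")   # placeholder file
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        cap.release()

        # Score each frame by mean luminosity and keep the best 10%.
        scores = np.array([f.mean() for f in frames])
        keep = np.argsort(scores)[-max(1, len(frames) // 10):]

        # Stack (average) the selected frames to suppress sensor noise;
        # averaging N frames reduces uncorrelated noise by roughly sqrt(N).
        stack = np.mean([frames[i].astype(np.float32) for i in keep], axis=0)
        cv2.imwrite("stacked.png", stack.astype(np.uint8))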

  19. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    Science.gov (United States)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large images and videos. The system employs the rapid serial visual presentation (RSVP) EEG paradigm together with surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then using those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG-based detection of IOI in each chip. Using this method, we demonstrate classification of a series of image regions from video with an Az (area under the ROC curve) value of 1, indicating perfect classification, over a range of display frequencies and video speeds.
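
    A simplified stand-in for the motion surprise computation, assuming a plain mean-absolute frame difference per chip instead of a full surprise model; the chip size and threshold are illustrative.

        import numpy as np

        def label_chips(prev_frame, curr_frame, chip=64, thresh=8.0):
            """Label each chip 'moving' or 'static' from mean frame difference."""
            diff = np.abs(curr_frame.astype(np.float32)
                          - prev_frame.astype(np.float32))
            h, w = diff.shape
            labels = {}
            for top in range(0, h - chip + 1, chip):
                for left in range(0, w - chip + 1, chip):
                    score = diff[top:top + chip, left:left + chip].mean()
                    labels[(top, left)] = "moving" if score > thresh else "static"
            # The label drives the choice of static vs video RSVP per chip.
            return labels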

  20. Gaming to see: Action Video Gaming is associated with enhanced processing of masked stimuli

    OpenAIRE

    Carsten Pohl; Wilfried Kunde; Thomas Ganz; Annette Conzelmann; Paul Pauli; Andrea Kiesel

    2014-01-01

    Recent research has revealed that action video game players outperform non-players in a wide range of attentional, perceptual, and cognitive tasks. Here we tested whether expertise in action video games is related to differences in the potential of briefly presented stimuli to bias behaviour. In a response priming paradigm, participants classified four animal pictures functioning as targets as being smaller or larger than a reference frame. Before each target, one of the same four animal picture...

  1. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques

    Science.gov (United States)

    2018-01-09

    ARL-TR-8272, US Army Research Laboratory, January 2018.

  2. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open-source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose without requiring much knowledge of the physical configuration of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems, including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
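
    The core of pulse compression is matched filtering, shown here as a CPU/NumPy sketch using FFT-based fast convolution on a synthetic chirp echo; the CUDA kernels and radar parameters in the study are not reproduced.

        import numpy as np

        # Linear FM (chirp) pulse as the transmitted waveform (toy values).
        fs, duration, bandwidth = 1e6, 1e-4, 2e5
        t = np.arange(0, duration, 1 / fs)
        chirp = np.exp(1j * np.pi * (bandwidth / duration) * t ** 2)

        # Received signal: a delayed, attenuated echo buried in complex noise.
        rx = np.zeros(2048, dtype=complex)
        rx[500:500 + t.size] = 0.1 * chirp
        rx += 0.05 * (np.random.randn(2048) + 1j * np.random.randn(2048))

        # Pulse compression = matched filtering, as fast convolution via FFT.
        h = np.conj(chirp[::-1])               # matched filter for the chirp
        compressed = np.fft.ifft(np.fft.fft(rx) * np.fft.fft(h, rx.size))
        # Peak index = echo delay + pulse length - 1.
        print(np.argmax(np.abs(compressed)))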

  3. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    In this paper, we research, analyze, and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern approach to improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After designing the function and its algorithmic steps in CUDA, we progressively developed and implemented optimization solutions for the reduction function. To confirm, test, and evaluate the solutions' efficiency, we developed a custom-tailored benchmark suite. We analyzed the experimental results regarding: the comparison of execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.

  4. An algorithm for automated layout of process description maps drawn in SBGN.

    Science.gov (United States)

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specializes in process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay).
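
    A minimal force-directed (spring embedder) sketch of the kind CoSE generalizes: pairwise repulsion plus spring forces along edges. The SBGN-specific forces, compound-node handling, and tiling heuristics are omitted, and all constants are illustrative.

        import numpy as np

        def spring_layout(edges, n, iters=500, k=1.0, lr=0.05):
            """Minimal spring embedder: edge springs plus global repulsion."""
            rng = np.random.default_rng(42)
            pos = rng.uniform(-1, 1, (n, 2))
            for _ in range(iters):
                force = np.zeros_like(pos)
                # Pairwise inverse-distance repulsion between all node pairs.
                delta = pos[:, None, :] - pos[None, :, :]
                dist = np.linalg.norm(delta, axis=2) + 1e-9
                force += (k ** 2 * delta / dist[:, :, None] ** 2).sum(axis=1)
                # Spring attraction along edges toward the ideal length k.
                for i, j in edges:
                    d = pos[j] - pos[i]
                    f = (np.linalg.norm(d) - k) * d / (np.linalg.norm(d) + 1e-9)
                    force[i] += f
                    force[j] -= f
                pos += lr * force
            return pos

        print(spring_layout([(0, 1), (1, 2), (2, 0), (2, 3)], n=4))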

  5. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    Science.gov (United States)

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
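
    The flavor of the Gillespie approach can be sketched for SIS dynamics; for simplicity this version assumes a static contact network, whereas the paper's contribution is precisely the extension to temporal networks.

        import numpy as np

        def gillespie_sis(neighbors, beta, mu, seeds, t_max, rng):
            """Gillespie simulation of SIS dynamics on a static network."""
            infected = set(seeds)
            t, history = 0.0, [(0.0, len(infected))]
            while infected and t < t_max:
                # Events: recovery per infected node, infection per S-I edge.
                events, rates = [], []
                for i in infected:
                    events.append(("recover", i))
                    rates.append(mu)
                    for j in neighbors[i]:
                        if j not in infected:
                            events.append(("infect", j))
                            rates.append(beta)
                total = float(sum(rates))
                t += rng.exponential(1.0 / total)   # exponential waiting time
                ev, node = events[rng.choice(len(events),
                                             p=np.array(rates) / total)]
                if ev == "recover":
                    infected.discard(node)
                else:
                    infected.add(node)
                history.append((t, len(infected)))
            return history

        rng = np.random.default_rng(3)
        ring = {i: [(i - 1) % 50, (i + 1) % 50] for i in range(50)}  # ring
        print(gillespie_sis(ring, beta=0.8, mu=0.5, seeds=[0],
                            t_max=20.0, rng=rng)[-1])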

  6. Generic vs custom; analogue vs digital: on the implementation of an online EEG signal processing algorithm.

    Science.gov (United States)

    Casson, Alexander J; Rodriguez-Villegas, Esther

    2008-01-01

    This paper quantifies the performance difference between custom and generic hardware algorithm implementations, illustrating the challenges that are involved in Body Area Network signal processing implementations. The potential use of analogue signal processing to improve the power performance is also demonstrated.

  7. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS) studies. However, biomarker identification for specific diseases has been hindered by irreproducibility: the peak profile extracted from a dataset for biomarker identification depends on the data pre-processing algorithm, and no widely accepted agreement has yet been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE) peaks from peak profiles produced by three widely used average-spectrum-dependent pre-processing algorithms, based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification across algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles; the second is that the statistical power for identifying DE peaks in large peak profiles with many peaks may be low, due to the large scale of the tests and the small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles can be improved by the stratified false discovery rate (FDR) control approach, thereby increasing the reproducibility of DE peak detection. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should produce peaks sufficient for identifying useful and reproducible biomarkers.
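
    Standard Benjamini-Hochberg FDR control, which the stratified approach applies within each stratum of peaks, can be sketched as follows (the p-values are illustrative):

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Return a boolean mask of p-values rejected at FDR level q."""
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            # Largest k with p_(k) <= (k/m) * q; reject the k smallest.
            below = p[order] <= (np.arange(1, m + 1) / m) * q
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])
                reject[order[:k + 1]] = True
            return reject

        # Stratified use: run the procedure separately within each stratum.
        pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74, 0.9])
        print(benjamini_hochberg(pvals, q=0.05))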

  8. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features in a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are performed by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain through Altera's Avalon Streaming video protocol, and other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.

  9. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in point-localization problems. A method of constructing the corresponding hash matrix with a parallel algorithm is considered, and an algorithm for parallel geometric hashing using the "process pool" design pattern is proposed. The implementation uses the Python programming language and the NumPy package for manipulating multidimensional data. To implement the process pool, it is proposed to use the ProcessPoolExecutor class imported from the concurrent.futures module, which has been included in the Python distribution since version 3.2. All the solutions are presented in the paper by corresponding UML class diagrams. The designed GeomHash package includes the classes Data, Result, GeomHash, and Job, and the results of the developed program are presented in the corresponding graphs. The article also presents a theoretical justification for applying a process pool to the implementation of parallel algorithms: the condition t2 > (p/(p-1))·t1 for the appropriateness of a process pool is obtained, where t1 is the time to transmit a unit of data between processes and t2 is the time for one processor to process a unit of data.
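
    A minimal usage sketch of the ProcessPoolExecutor pattern named in this record; the chunked hashing function here is a hypothetical stand-in for the GeomHash computation.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np

        def hash_chunk(points, cell=0.25):
            """Quantize a chunk of points into integer grid cells (toy hash)."""
            return np.floor(np.asarray(points) / cell).astype(int)

        if __name__ == "__main__":   # guard required for process pools
            rng = np.random.default_rng(7)
            data = rng.uniform(0, 10, (100_000, 3))
            chunks = np.array_split(data, 8)        # one chunk per worker
            with ProcessPoolExecutor(max_workers=8) as pool:
                results = list(pool.map(hash_chunk, chunks))
            table = np.vstack(results)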

  10. Attitudes of older adults toward shooter video games: An initial study to select an acceptable game for training visual processing.

    Science.gov (United States)

    McKay, Sandra M; Maki, Brian E

    2010-01-01

    A computer-based 'Useful Field of View' (UFOV) training program has been shown to be effective in improving visual processing in older adults. Studies of young adults have shown that playing video games can have similar benefits; however, these studies involved realistic and violent 'first-person shooter' (FPS) games, and the willingness of older adults to play such games has not been established. OBJECTIVES: To determine the degree to which older adults would accept playing a realistic, violent FPS game compared to video games not involving realistic depictions of violence. METHODS: Sixteen older adults (ages 64-77) viewed and rated video-clip demonstrations of the UFOV program and three video-game genres (realistic-FPS, cartoon-FPS, fixed-shooter), and were then given an opportunity to try them out (30 minutes per game) and rate various features. RESULTS: The results supported the hypothesis that participants would be less willing to play the realistic-FPS game than the less violent alternatives (p's < 0.05). After viewing the video-clip demonstrations, 10 of 16 participants indicated they would be unwilling to try out the realistic-FPS game. Of the six who were willing, three did not enjoy the experience and were not interested in playing again. In contrast, all 12 subjects who were willing to try the cartoon-FPS game reported that they enjoyed it and would be willing to play again. A high proportion also tried and enjoyed the UFOV training (15/16) and the fixed-shooter game (12/15). DISCUSSION: A realistic, violent FPS video game is unlikely to be an appropriate choice for older adults; cartoon-FPS and fixed-shooter games are more viable options. Although most subjects also enjoyed UFOV training, a video-game approach has a number of potential advantages (for instance, 'addictive' properties, low cost, self-administration at home). We therefore conclude that non-violent cartoon-FPS and fixed-shooter video games warrant further investigation as an alternative to the UFOV program.

  11. OPTIMISATION OF OCCUPATIONAL RADIATION PROTECTION IN IMAGE-GUIDED INTERVENTIONS: EXPLORING VIDEO RECORDINGS AS A TOOL IN THE PROCESS.

    Science.gov (United States)

    Almén, Anja; Sandblom, Viktor; Rystedt, Hans; von Wrangel, Alexa; Ivarsson, Jonas; Båth, Magnus; Lundh, Charlotta

    2016-06-01

    The overall purpose of this work was to explore how video recordings can contribute to the process of optimising occupational radiation protection in image-guided interventions. Video-recorded material from two image-guided interventions was produced and used to investigate to what extent it is conceivable to observe and assess dose-affecting actions in video recordings. Using the recorded material, it was to some extent possible to connect the choice of imaging techniques to the medical events during the procedure and, to a lesser extent, to connect these technical and medical issues to the occupational exposure. It was possible to identify a relationship between the occupational exposure level to staff and the positioning and use of shielding. However, detailed values of the dose rates were not possible to observe on the recordings, and the change in occupational exposure level from adjustments of exposure settings was not possible to identify. In conclusion, the use of video recordings is a promising tool to identify dose-affecting instances, allowing for a deeper knowledge of the interdependency between the management of the medical procedure, the applied imaging technology, and the occupational exposure level. However, for full information about the dose-affecting actions, the equipment used and the recording settings have to be thoroughly planned.

  12. Choosing optimal rapid manufacturing process for thin-walled products using expert algorithm

    Directory of Open Access Journals (Sweden)

    Filip Gorski

    2010-10-01

    Choosing the right Rapid Prototyping technology is not easy, especially for companies inexperienced with that group of manufacturing techniques. This paper summarizes research focused on creating an algorithm for an expert system that helps to choose the optimal process and determine its parameters for the rapid manufacturing of thin-walled products. The research was based on trial manufacturing of different thin-walled items using various RP technologies. Products were categorized, and each category was defined by a set of requirements. Based on the research outcomes, the main algorithm was created; the next step was developing detailed algorithms for optimizing particular methods. Implementation of these algorithms brings substantial benefits for recipients, including cost reduction, shorter supply times, and improvements in information flow.

  13. Symblicit algorithms for optimal strategy synthesis in monotonic Markov decision processes

    Directory of Open Access Journals (Sweden)

    Aaron Bohy

    2014-07-01

    When treating Markov decision processes (MDPs) with large state spaces, using explicit representations quickly becomes infeasible. Recently, Wimmer et al. proposed a so-called symblicit algorithm for the synthesis of optimal strategies in MDPs in the quantitative setting of expected mean-payoff. This algorithm, based on the strategy iteration algorithm of Howard and Veinott, efficiently combines symbolic and explicit data structures and uses binary decision diagrams as the symbolic representation. The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs. We design efficient pseudo-antichain-based symblicit algorithms (with open-source implementations) for two quantitative settings: the expected mean-payoff and the stochastic shortest path. For two practical applications coming from automated planning and LTL synthesis, we report promising experimental results w.r.t. both run time and memory consumption.

  14. Tailoring a video-feedback intervention for sensitive discipline to parents with intellectual disabilities: a process evaluation.

    Science.gov (United States)

    Hodes, Marja W; Meppelder, H Marieke; Schuengel, Carlo; Kef, Sabina

    2014-01-01

    Parenting support programs for the general population may not be effective for parents with intellectual disabilities (ID). A video-based intervention program based on attachment and coercion theory (Video-feedback Intervention to promote Positive Parenting with additional focus on Sensitive Discipline; VIPP-SD) was tailored to parents with ID, and the implementation of the adapted program was evaluated by the home visitors conducting it. Home visitors (N = 17) of 36 families rated the intervention process during each session. Their evaluations showed a significant increase in positive ratings of parents' easiness to work with, amenability to influence, and openness, while cooperation remained stable. A case example illustrates this process, showing how video feedback facilitated changes in the perceptions and attributions of a mother with mild ID.

  15. [Research and realization of signal processing algorithms based on FPGA in digital ophthalmic ultrasonography imaging].

    Science.gov (United States)

    Fang, Simin; Zhou, Sheng; Wang, Xiaochun; Ye, Qingsheng; Tian, Ling; Ji, Jianjun; Wang, Yanqun

    2015-01-01

    To design and improve the signal processing algorithms of FPGA-based digital ophthalmic ultrasonography, three signal processing modules were implemented using the Verilog HDL hardware description language in Quartus II: a fully parallel distributed dynamic filter, digital quadrature demodulation, and logarithmic compression. Compared with the original system, the hardware cost is reduced, the image is clearer and contains more information from the deep eyeball, and the depth of detection increases from 5 cm to 6 cm. The new algorithms meet the design requirements and optimize the system, effectively improving the image quality of existing equipment.

  16. A hyperbolastic type-I diffusion process: Parameter estimation by means of the firefly algorithm.

    Science.gov (United States)

    Barrera, Antonio; Román-Román, Patricia; Torres-Ruiz, Francisco

    2018-01-01

    A stochastic diffusion process, whose mean function is a hyperbolastic curve of type I, is presented. The main characteristics of the process are studied and the problem of maximum likelihood estimation for the parameters of the process is considered. To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development.
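
    A minimal firefly algorithm sketch on a toy objective; in the study the objective would be the negative log-likelihood of the diffusion process over the bounded parameter space, and the algorithm constants below are illustrative defaults.

        import numpy as np

        def firefly(objective, bounds, n=25, iters=200,
                    beta0=1.0, gamma=1.0, alpha=0.2):
            """Minimal firefly algorithm: dim fireflies move toward bright ones."""
            rng = np.random.default_rng(0)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, (n, lo.size))
            light = np.apply_along_axis(objective, 1, x)
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if light[j] < light[i]:      # j is brighter (lower cost)
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += (beta * (x[j] - x[i])
                                     + alpha * rng.normal(size=lo.size))
                            x[i] = np.clip(x[i], lo, hi)
                            light[i] = objective(x[i])
                alpha *= 0.98                        # anneal the random walk
            best = np.argmin(light)
            return x[best], light[best]

        # Toy stand-in objective with known minimum at (2, -1).
        f = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
        print(firefly(f, bounds=[(-5, 5), (-5, 5)]))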

  17. Algorithm combination of deblurring and denoising on video frames using the method search of local features on image

    Directory of Open Access Journals (Sweden)

    Semenishchev Evgeny

    2017-01-01

    In this paper, we propose an approach that reduces errors in the form of noise and blur on video frames. To improve processing speed and enable parallelization, the approach is based on the search for local features in the image.

  18. Follow-the-leader algorithm for the payload inspection and processing system

    Science.gov (United States)

    Williams, Robert L., II

    1995-01-01

    This report summarizes the author's summer 1995 work at NASA Kennedy Space Center in the Advanced System Division. The assignment was path planning for the Payload Inspection and Processing System (PIPS). PIPS is an automated system, programmed off-line for inspection of Space Shuttle payloads after integration and prior to launch. PIPS features a hyper-redundant 18-dof serpentine truss manipulator capable of snake-like motions to avoid obstacles. The path planning problem was divided into two segments: (1) determining an obstacle-free trajectory for the inspection camera at the manipulator tip to follow; and (2) development of a follow-the-leader (FTL) algorithm which ensures whole-arm collision avoidance by forcing ensuing links to follow the same tip trajectory. The summer 1995 work focused on the FTL algorithm. This report summarizes development, implementation, testing, and graphical demonstration of the FTL algorithm for prototype PIPS hardware. The method and code were developed in a modular manner so the final PIPS hardware may use them with minimal changes. The FTL algorithm was implemented using MATLAB software and demonstrated with a high-fidelity IGRIP model. The author also supported implementation of the algorithm in C++ for hardware control. The FTL algorithm proved to be successful and robust in graphical simulation. The author intends to return to the project in summer 1996 to implement path planning for PIPS.

  19. Designing an Iterative Learning Control Algorithm Based on Process History using limited post process geometrical information

    DEFF Research Database (Denmark)

    Endelt, Benny Ørtoft; Volk, Wolfram

    2013-01-01

    The reaction speed may be insufficient compared to the production rate in an industrial application. We propose to design an iterative learning control (ILC) algorithm which can control and update the blank-holder force, as well as the distribution of the blank-holder force, based on limited post-process geometric data.

  20. Development of a video-based education and process change intervention to improve advance cardiopulmonary resuscitation decision-making.

    Science.gov (United States)

    Waldron, Nicholas; Johnson, Claire E; Saul, Peter; Waldron, Heidi; Chong, Jeffrey C; Hill, Anne-Marie; Hayes, Barbara

    2016-10-06

    Advance cardiopulmonary resuscitation (CPR) decision-making and escalation of care discussions are variable in routine clinical practice. We aimed to explore physician barriers to advance CPR decision-making in an inpatient hospital setting and develop a pragmatic intervention to support clinicians to undertake and document routine advance care planning discussions. Two focus groups, which involved eight consultants and ten junior doctors, were conducted following a review of the current literature. A subsequent iterative consensus process developed two intervention elements: (i) an updated 'Goals of Patient Care' (GOPC) form and process; (ii) an education video and resources for teaching advance CPR decision-making and communication. A multidisciplinary group of health professionals and policy-makers with experience in systems development, education and research provided critical feedback. Three key themes emerged from the focus groups and the literature, which identified a structure for the intervention: (i) knowing what to say; (ii) knowing how to say it; (iii) wanting to say it. The themes informed the development of a video to provide education about advance CPR decision-making framework, improving communication and contextualising relevant clinical issues. Critical feedback assisted in refining the video and further guided development and evolution of a medical GOPC approach to discussing and recording medical treatment and advance care plans. Through an iterative process of consultation and review, video-based education and an expanded GOPC form and approach were developed to address physician and systemic barriers to advance CPR decision-making and documentation. Implementation and evaluation across hospital settings is required to examine utility and determine effect on quality of care.

  1. Automatic inpainting scheme for video text detection and removal.

    Science.gov (United States)

    Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad

    2013-11-01

    We present a two-stage framework for automatic video text removal, which detects and removes embedded video text and fills in the remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects of each frame are analyzed to localize video texts. The detected video text regions are removed, and the video is then restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processing to satisfy visual consistency. The experimental results demonstrate the effectiveness of both the proposed video text detection approach and the video completion technique, and consequently of the entire automatic video text removal and restoration process.

  2. Action video games do not improve the speed of information processing in simple perceptual tasks

    NARCIS (Netherlands)

    van Ravenzwaaij, D.; Boekel, W.; Forstmann, B.U.; Ratcliff, R.; Wagenmakers, E.-J.

    2014-01-01

    Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying

  3. A Gaussian Process Based Online Change Detection Algorithm for Monitoring Periodic Time Series

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL]; Vatsavai, Raju [ORNL]

    2011-01-01

    Online time series change detection is a critical component of many monitoring systems, such as space and air-borne remote sensing instruments, cardiac monitors, and network traffic profilers, which continuously analyze observations recorded by sensors. Data collected by such sensors typically has a periodic (seasonal) component. Most existing time series change detection methods are not directly applicable to handle such data, either because they are not designed to handle periodic time series or because they cannot operate in an online mode. We propose an online change detection algorithm which can handle periodic time series. The algorithm uses a Gaussian process based non-parametric time series prediction model and monitors the difference between the predictions and actual observations within a statistically principled control chart framework to identify changes. A key challenge in using Gaussian process in an online mode is the need to solve a large system of equations involving the associated covariance matrix which grows with every time step. The proposed algorithm exploits the special structure of the covariance matrix and can analyze a time series of length T in O(T^2) time while maintaining a O(T) memory footprint, compared to O(T^4) time and O(T^2) memory requirement of standard matrix manipulation methods. We experimentally demonstrate the superiority of the proposed algorithm over several existing time series change detection algorithms on a set of synthetic and real time series. Finally, we illustrate the effectiveness of the proposed algorithm for identifying land use land cover changes using Normalized Difference Vegetation Index (NDVI) data collected for an agricultural region in Iowa state, USA. Our algorithm is able to detect different types of changes in a NDVI validation data set (with ~80% accuracy) which occur due to crop type changes as well as disruptive changes (e.g., natural disasters).
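
    A simplified sketch of the control-chart idea, assuming a running seasonal mean and variance as the predictor instead of the paper's Gaussian process model; the period, threshold, and synthetic NDVI-like series are illustrative.

        import numpy as np

        def detect_changes(series, period, z=3.0, warmup=3):
            """Flag points whose residual from the seasonal mean exceeds z sigma."""
            sums = np.zeros(period)
            sqs = np.zeros(period)
            counts = np.zeros(period)
            changes = []
            for t, obs in enumerate(series):
                s = t % period                   # position within the season
                if counts[s] >= warmup:
                    mean = sums[s] / counts[s]
                    var = sqs[s] / counts[s] - mean ** 2
                    if abs(obs - mean) > z * np.sqrt(max(var, 1e-12)):
                        changes.append(t)        # control-chart alarm
                sums[s] += obs
                sqs[s] += obs ** 2
                counts[s] += 1
            return changes

        t = np.arange(300)
        ndvi = np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.05, t.size)
        ndvi[200:] -= 1.0                        # abrupt land-cover-like change
        print(detect_changes(ndvi, period=12))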

  4. Algorithm development for automated outlier detection and background noise reduction during NIR spectroscopic data processing

    Science.gov (United States)

    Abookasis, David; Workman, Jerome J.

    2011-09-01

    This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on spectral cross-correlation and filtering combined with partial least squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on spectral correlation criteria we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) is applied, before use in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed; once this matrix has been built, it is used to predict the concentration of unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, and coefficient of determination were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low prediction error, high sensitivity and specificity, and a low rate of false negative (Type II error) samples.
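
    The outlier-screening step can be sketched with a simple correlation criterion against the mean spectrum; the 0.98 threshold and synthetic spectra are illustrative assumptions, not the authors' criteria.

        import numpy as np

        def screen_outliers(spectra, min_corr=0.98):
            """Drop spectra whose correlation with the mean spectrum is low."""
            mean_spec = spectra.mean(axis=0)
            corr = np.array([np.corrcoef(s, mean_spec)[0, 1] for s in spectra])
            keep = corr >= min_corr
            return spectra[keep], np.where(~keep)[0]   # cleaned set, outliers

        rng = np.random.default_rng(5)
        base = np.sin(np.linspace(0, 3 * np.pi, 200))
        spectra = base + rng.normal(0, 0.02, (50, 200))
        spectra[7] += np.linspace(0, 2, 200)   # inject one drifting outlier
        cleaned, outliers = screen_outliers(spectra)
        print(outliers)                        # expected: [7]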

  5. GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API

    Directory of Open Access Journals (Sweden)

    Dzhoshkun Ismail Shakir

    2017-10-01

    Full Text Available GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware – 60 frames per second (fps). GIFT-Grab builds on well-established highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition it is available for installation from the Python Package Index (PyPI) via the pip installation tool. Funding statement: This work was supported through an Innovative Engineering for Health award by the Wellcome Trust [WT101957], the Engineering and Physical Sciences Research Council (EPSRC) [NS/A000027/1] and a National Institute for Health Research Biomedical Research Centre UCLH/UCL High Impact Initiative. Sébastien Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278) and the MRC (MR/J01107X/1). Luis C. García-Peraza-Herrera is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1).
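
    GIFT-Grab's own classes are not reproduced here; as a rough Python illustration of the capture-process-encode pipeline that such an API wraps, the following OpenCV loop shows the same pattern (file names, codec, frame rate, and frame size are arbitrary examples).

      # Stand-in capture/process/encode loop using OpenCV, not GIFT-Grab's API.
      import cv2

      cap = cv2.VideoCapture("input.mp4")          # or a frame-grabber device index
      fourcc = cv2.VideoWriter_fourcc(*"mp4v")
      out = cv2.VideoWriter("output.mp4", fourcc, 60.0, (640, 480))

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          frame = cv2.resize(frame, (640, 480))    # per-frame processing step
          out.write(frame)

      cap.release()
      out.release()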

  6. Synchronous multi-microprocessor system for implementing digital signal processing algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Barnwell, T.P. III; Hodges, C.J.M.

    1982-01-01

    This paper discusses the details of a multi-microprocessor system design as a research facility for studying multiprocessor implementation of digital signal processing algorithms. The overall system, which consists of a control microprocessor, eight satellite microprocessors, a control minicomputer, and extensive distributed software, has proven to be an effective tool in the study of multiprocessor implementations. 5 references.

  7. ORNL ADCP POST-PROCESSING GUIDE AND MATLAB ALGORITHMS FOR MHK SITE FLOW AND TURBULENCE ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Gunawan, Budi [Oak Ridge National Laboratory (ORNL); Neary, Vincent S [ORNL

    2011-09-01

    Standard methods, along with guidance for post-processing the ADCP stationary measurements using MATLAB algorithms that were evaluated and tested by Oak Ridge National Laboratory (ORNL), are presented following an overview of the ADCP operating principles, deployment methods, error sources and recommended protocols for removing and replacing spurious data.

  8. Programs for algorithms of the adaptation of mathematical models of the technological processes of concentration

    Energy Technology Data Exchange (ETDEWEB)

    Aliev, E.M.; Iakubov, M.S.

    1981-01-01

    The methodology for constructing adaptive mathematical models of complex flotation processes, which are described by a system of nonlinear equations, is examined. A program implementing the adaptation algorithm is presented, together with the results of its practical application.

  9. Detection of wood failure by image processing method: influence of algorithm, adhesive and wood species

    Science.gov (United States)

    Lanying Lin; Sheng He; Feng Fu; Xiping Wang

    2015-01-01

    Wood failure percentage (WFP) is an important index for evaluating the bond strength of plywood. Currently, the method used for detecting WFP is visual inspection, which lacks efficiency. In order to improve it, image processing methods are applied to wood failure detection. The present study used thresholding and K-means clustering algorithms in wood failure detection...

  10. Design of an online video edge detection device for bottle caps based on FPGA

    OpenAIRE

    Donghui LIU; Lina TONG; Jiashuo WANG; Xiaoyun SUN; Xiaoying ZUO; Yakun DU; Zhenzhou WANG

    2015-01-01

    An online video edge detection device for bottle caps is designed and implemented using an OV7670 video module and an FPGA-based control unit. Programmed in Verilog, the device realizes menu-driven parameter setting on an external VGA display and performs Roberts edge detection on the real-time video image, which improves the speed of image processing. By improving the detection algorithm, noise is effectively suppressed and clear, coherent edge images are obtained. The desig...

  11. Perception Of Space, Empathy And Cognitive Processes: Design Of A Video Game For The Measurement Of Perspective Taking Skills

    Directory of Open Access Journals (Sweden)

    Pio Alfredo Di Tore

    2014-04-01

    Full Text Available Perspective-taking skills require the ability to manipulate spatial reference systems and are the basis of the empathetic process. Empathy, in its relation to space representation and the manipulation of spatial reference systems, is the subject of this work, whose aim is to design a video game that measures the player's perspective-taking skills. The idea of creating a video game on perspective taking is based on a classic Piagetian task, the three mountains problem, which has recently drawn the attention of the Italian educational research community. The current stage of the project has produced a video game, now in an alpha-testing release. The article discusses the software's theoretical framework (the spatial theory of empathy), describes the choices made at the design stage, and comments on the first results obtained during alpha testing.

  12. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used for motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
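
    The block-matching component can be stated concisely. Below is a generic full-search sum-of-absolute-differences matcher in NumPy, a textbook baseline rather than the paper's CS-specific variant; block and search sizes are illustrative.

      import numpy as np

      def block_match(ref, cur, block=8, search=4):
          """Full-search SAD block matching: one (dy, dx) vector per block."""
          ref = np.asarray(ref, dtype=float)   # float avoids uint8 wraparound
          cur = np.asarray(cur, dtype=float)
          h, w = cur.shape
          vectors = np.zeros((h // block, w // block, 2), dtype=int)
          for by in range(0, h - block + 1, block):
              for bx in range(0, w - block + 1, block):
                  patch = cur[by:by + block, bx:bx + block]
                  best, best_v = np.inf, (0, 0)
                  for dy in range(-search, search + 1):
                      for dx in range(-search, search + 1):
                          y, x = by + dy, bx + dx
                          if 0 <= y <= h - block and 0 <= x <= w - block:
                              sad = np.abs(ref[y:y + block, x:x + block] - patch).sum()
                              if sad < best:
                                  best, best_v = sad, (dy, dx)
                  vectors[by // block, bx // block] = best_v
          return vectors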

  13. Process optimization of rolling for zincked sheet technology using response surface methodology and genetic algorithm

    Science.gov (United States)

    Ji, Liang-Bo; Chen, Fang

    2017-07-01

    Numerical simulation and intelligent optimization technology were adopted for the rolling and extrusion of zincked sheet. Using response surface methodology (RSM), a genetic algorithm (GA) and data processing technology, an efficient optimization of process parameters for the rolling of zincked sheet was investigated. First, the influence of roller gap, rolling speed and friction factor on the reduction rate and plate shortening rate was analyzed. Then a predictive response surface model for the comprehensive quality index of the part was created using RSM, and simulated and predicted values were compared. Through the genetic algorithm, the optimal process parameters for rolling were solved and then verified, yielding the optimum rolling process parameters. The approach is feasible and effective.
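
    A hedged sketch of the RSM-plus-GA workflow: a quadratic response surface is fitted to sampled (roller gap, rolling speed, friction factor) points and then minimized. SciPy's differential_evolution stands in for the paper's genetic algorithm, and the mock quality index and bounds are purely illustrative.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(2)
      # mock design points: (roller gap, rolling speed, friction factor)
      X = rng.uniform([0.5, 0.1, 0.05], [2.0, 1.0, 0.3], size=(40, 3))
      y = (X[:, 0] - 1.2) ** 2 + (X[:, 1] - 0.6) ** 2 + X[:, 2]  # mock quality index

      # quadratic response surface fitted to the sampled runs
      surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

      # evolutionary minimization of the fitted surface (GA stand-in)
      result = differential_evolution(
          lambda p: surface.predict(p.reshape(1, -1))[0],
          bounds=[(0.5, 2.0), (0.1, 1.0), (0.05, 0.3)], seed=2)
      print("optimal (gap, speed, friction):", result.x)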

  14. Embedded algorithms within an FPGA-based system to process nonlinear time series data

    Science.gov (United States)

    Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.

    2008-03-01

    This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast-growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than manually coding in a hardware description language (HDL). The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better
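
    The Hilbert-transform feature extraction at the centerpiece of the design is easy to prototype in software before fixed-point hardware mapping; the SciPy sketch below, on a synthetic chirp, computes the instantaneous amplitude and frequency.

      import numpy as np
      from scipy.signal import hilbert

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      x = np.sin(2 * np.pi * (5 + 10 * t) * t)        # a simple chirp

      analytic = hilbert(x)                            # analytic signal
      amplitude = np.abs(analytic)                     # instantaneous amplitude
      phase = np.unwrap(np.angle(analytic))
      freq = np.diff(phase) * fs / (2 * np.pi)         # instantaneous frequency (Hz)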

  15. STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION

    Directory of Open Access Journals (Sweden)

    I. S. Rubina

    2015-01-01

    Full Text Available The paper deals with image interpolation methods and their applicability to the elimination of artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding steps. The main drawback of existing methods is their high computational complexity, unacceptable in video processing. Interpolation of signal samples for blocking-effect elimination at the output of transform coding is proposed as part of the study. It was necessary to develop methods that improve the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on segment borders through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. In the theoretical part of the research, methods of information theory (RD theory and data redundancy elimination), methods of pattern recognition and digital signal processing, and methods of probability theory are used. In the experimental part, the compression algorithms were implemented in software and compared with existing ones, namely a simple averaging algorithm and an adaptive central-sample interpolation algorithm. The algorithm based on adaptive selection of the interpolation kernel size increases the compression ratio by 30%, and its modified version by 35%, compared with existing algorithms, while improving the quality of the reconstructed video sequence by 3% compared to compression without interpolation. The findings will be

  16. Research on defogging technology of video image based on FPGA

    Science.gov (United States)

    Liu, Shuo; Piao, Yan

    2015-03-01

    Owing to scattering by atmospheric particles, video captured by outdoor surveillance systems has low contrast and brightness, which directly affects the application value of such systems. Traditional defogging technology has mostly been studied in software, as defogging algorithms for single frames with heavy computation and high time complexity. Defogging of video images based on a Digital Signal Processor (DSP) suffers from complex peripheral circuitry, cannot run in real time, and is hard to debug and upgrade. In this paper, based on an improved dark channel prior algorithm, we propose a video-image defogging technology based on a Field Programmable Gate Array (FPGA). Compared with traditional defogging methods, video images with high resolution can be processed in real time. Furthermore, the function modules of the system have been designed in a hardware description language. The results show that the FPGA-based defogging system can process video with a minimum resolution of 640×480 in real time. After defogging, the brightness and contrast of the video image are effectively improved. The defogging technology proposed in this paper therefore has a wide variety of applications, including aviation, forest fire prevention, national security and other important surveillance uses.
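
    For reference, a compact software rendering of the basic dark channel prior that such designs accelerate (this is the standard single-image formulation, not the paper's improved FPGA variant; patch size, omega, and t0 are the commonly used defaults):

      import numpy as np
      import cv2

      def dehaze(img, patch=15, omega=0.95, t0=0.1):
          img = img.astype(np.float64) / 255.0
          kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
          dark = cv2.erode(img.min(axis=2), kernel)            # dark channel
          # atmospheric light: mean color of the brightest 0.1% dark-channel pixels
          n = max(1, dark.size // 1000)
          idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
          A = img[idx].mean(axis=0)
          trans = 1 - omega * cv2.erode((img / A).min(axis=2), kernel)
          trans = np.maximum(trans, t0)                        # avoid division blow-up
          return np.clip((img - A) / trans[..., None] + A, 0, 1)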

  17. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms

    Science.gov (United States)

    Mighell, Kenneth John

    2011-11-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster processing the same image on a single core on the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds, a speedup factor of 50.3 relative to the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.
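
    The embarrassingly parallel pattern itself is simple to express; the mpi4py sketch below scatters image strips, processes them independently, and gathers the results. The strip decomposition and percentile-clipping "rejection" are placeholders, not L.A.COSMIC.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      image = np.random.rand(800, 800) if rank == 0 else None
      strip = comm.scatter(np.array_split(image, size) if rank == 0 else None, root=0)

      # each rank works on its own strip; no inter-rank communication is needed
      cleaned = np.clip(strip, 0, np.percentile(strip, 99.9))

      result = comm.gather(cleaned, root=0)
      if rank == 0:
          print(np.vstack(result).shape)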

  18. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  19. An improved algorithm for learning long-term dependency problems in adaptive processing of data structures.

    Science.gov (United States)

    Cho, Siu-Yeung; Chi, Zheru; Siu, Wan-Chi; Tsoi, Ah Chung

    2003-01-01

    Many researchers have explored the use of neural-network representations for the adaptive processing of data structures. One of the most popular learning formulations of data structure processing is backpropagation through structure (BPTS). The BPTS algorithm has been successfully applied to a number of learning tasks that involve structural patterns such as logo and natural scene classification. The main limitations of the BPTS algorithm are attributed to slow convergence speed and the long-term dependency problem for the adaptive processing of data structures. In this paper, an improved algorithm is proposed to solve these problems. The idea of this algorithm is to optimize the free learning parameters of the neural network in the node representation by using least-squares-based optimization methods in a layer-by-layer fashion. Not only can fast convergence speed be achieved, but the long-term dependency problem can also be overcome since the vanishing of gradient information is avoided when our approach is applied to very deep tree structures.
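
    The least-squares, layer-by-layer idea can be illustrated on a flat toy network: with the hidden weights held fixed, the output layer has a closed-form solution, so no gradient steps (and hence no vanishing gradients) are involved for that layer. A hypothetical NumPy sketch:

      import numpy as np

      rng = np.random.default_rng(5)
      X = rng.standard_normal((200, 10))
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

      W_hidden = rng.standard_normal((10, 30))           # fixed for this pass
      H = np.tanh(X @ W_hidden)                          # hidden activations
      w_out, *_ = np.linalg.lstsq(H, y, rcond=None)      # closed-form layer solve
      print("train MSE:", np.mean((H @ w_out - y) ** 2))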

  20. Comparison of Different Post-Processing Algorithms for Dynamic Susceptibility Contrast Perfusion Imaging of Cerebral Gliomas.

    Science.gov (United States)

    Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto

    2017-04-10

    The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to i) the relative CBV (rCBV) calculated, ii) the cutoff value for discriminating low- and high-grade gliomas, and iii) the diagnostic performance for differentiating these tumors. Following institutional review board approval, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor relative to normal white matter was calculated from ROI measurements. Differences in rCBV value were compared between algorithms for each tumor grade. Receiver operating characteristic analysis was conducted to evaluate the diagnostic performance of the different algorithms for differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low- (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar (range 0.85-0.87), and there were no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differed between software packages, suggesting that optimal software-specific cutoff values should be used for diagnosis of high-grade gliomas.
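
    The ROI step reduces to a simple ratio; a placeholder NumPy version (array names are hypothetical):

      import numpy as np

      def relative_cbv(cbv_map, tumor_mask, white_matter_mask):
          """rCBV: mean CBV in the tumor ROI over mean CBV in normal white matter."""
          return cbv_map[tumor_mask].mean() / cbv_map[white_matter_mask].mean()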

  1. An index-based algorithm for fast on-line query processing of latent semantic analysis.

    Directory of Open Access Journals (Sweden)

    Mingxi Zhang

    Full Text Available Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a keyword query. Although LSA yields promising similarity results, existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot efficiently respond to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA, towards efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called the partial similarity, which is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for fast on-line query processing. The given query is transformed into a pseudo-document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes corresponding to non-zero entries of the pseudo-document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments comparing against LSA demonstrate the efficiency and effectiveness of the proposed algorithm.
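
    A schematic of the partial-index idea, with illustrative data structures rather than the authors' implementation: partial similarities below θ are skipped at build time, and query scores are accumulated only over the query's non-zero terms.

      import numpy as np
      from collections import defaultdict

      def build_partial_index(doc_vectors, theta=0.01):
          """doc_vectors: (n_docs, n_terms) LSA-projected document matrix."""
          index = defaultdict(list)                 # term -> [(doc_id, partial_sim)]
          for doc, vec in enumerate(doc_vectors):
              for term in np.flatnonzero(np.abs(vec) >= theta):
                  index[term].append((doc, vec[term]))
          return index

      def query(index, q_vec, topk=5):
          scores = defaultdict(float)
          for term in np.flatnonzero(q_vec):        # only non-zero query entries
              for doc, partial in index.get(term, []):
                  scores[doc] += q_vec[term] * partial   # accumulate partial sims
          return sorted(scores.items(), key=lambda kv: -kv[1])[:topk]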

  2. Design and Analysis of Optimization Algorithms to Minimize Cryptographic Processing in BGP Security Protocols.

    Science.gov (United States)

    Sriram, Vinay K; Montgomery, Doug

    2017-07-01

    The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
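
    The Cache Common Segments idea amounts to memoizing per-segment verification results; a schematic sketch follows, where verify_segment is a placeholder for the actual cryptographic check rather than a real BGPSEC routine.

      import hashlib

      _cache = {}

      def verified(segment: bytes, verify_segment) -> bool:
          key = hashlib.sha256(segment).hexdigest()
          if key not in _cache:
              _cache[key] = verify_segment(segment)   # expensive crypto happens once
          return _cache[key]

      def update_valid(segments, verify_segment) -> bool:
          """An update is valid only if every signature segment verifies."""
          return all(verified(s, verify_segment) for s in segments)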

  3. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, increases and challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data addresses best security practices within cloud services, such as AWS.

  4. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    We present an incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.

  5. GRAPHICS PROCESSING UNITS: MORE THAN THE PATHWAY TO REALISTIC VIDEO-GAMES

    Directory of Open Access Journals (Sweden)

    CARLOS TRUJILLO

    2011-01-01

    Full Text Available The large video-game market has driven rapid progress in hardware and software aimed at achieving ever more realistic game environments. Among these developments are graphics processing units (GPUs), whose purpose is to relieve the central processing unit (CPU) of the elaborate computations that give video games their "life". To achieve this, GPUs are equipped with multiple processing cores operating in parallel, which makes them useful for tasks far more diverse than video-game development. This article presents a brief description of the features of the compute unified device architecture (CUDA™), a parallel computing architecture for GPUs. An application of this architecture to the numerical reconstruction of holograms is presented, for which a speedup of 11X is reported with respect to the performance achieved on a CPU.

  6. Efficient hybrid evolutionary algorithm for optimization of a strip coiling process

    Science.gov (United States)

    Pholdee, Nantiwat; Park, Won-Woong; Kim, Dong-Kyu; Im, Yong-Taek; Bureerat, Sujin; Kwon, Hyuck-Cheol; Chun, Myung-Sik

    2015-04-01

    This article proposes an efficient metaheuristic, based on the hybridization of teaching-learning-based optimization and differential evolution, to improve the flatness of a strip during a strip coiling process. Differential evolution operators were integrated into teaching-learning-based optimization, with a Latin hypercube sampling technique for generating the initial population. The objective function was introduced to reduce the axial inhomogeneity of the stress distribution and the maximum compressive stress, calculated by Love's elastic solution within the thin strip, which may cause an irregular surface profile of the strip during coiling. The hybrid optimizer and several well-established evolutionary algorithms (EAs) were used to solve the optimization problem. Comparative studies show that the proposed hybrid algorithm outperformed the other EAs in terms of convergence rate and consistency. The proposed hybrid approach proved powerful for process optimization, especially for large-scale design problems.
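
    For intuition, one hybrid iteration might look as follows: a TLBO "teacher phase" move followed by a DE/rand/1 mutation with greedy replacement, on a toy objective. Parameters are textbook values, not the paper's tuned settings.

      import numpy as np

      rng = np.random.default_rng(3)
      f = lambda x: np.sum(x ** 2, axis=-1)           # toy objective to minimize
      pop = rng.uniform(-5, 5, size=(20, 4))

      # TLBO teacher phase: pull members toward the best, away from the mean
      teacher = pop[np.argmin(f(pop))]
      TF = rng.integers(1, 3)                          # teaching factor in {1, 2}
      cand = pop + rng.random(pop.shape) * (teacher - TF * pop.mean(axis=0))
      pop = np.where(f(cand)[:, None] < f(pop)[:, None], cand, pop)

      # DE/rand/1 mutation with greedy replacement of the worst member
      F = 0.5
      i, j, k = rng.permutation(len(pop))[:3]
      mutant = pop[i] + F * (pop[j] - pop[k])
      worst = np.argmax(f(pop))
      if f(mutant) < f(pop[worst]):
          pop[worst] = mutant
      print("best objective after one hybrid step:", f(pop).min())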

  7. Heterogeneous reconfigurable processors for real-time baseband processing from algorithm to architecture

    CERN Document Server

    Zhang, Chenxin; Öwall, Viktor

    2016-01-01

    This book focuses on domain-specific heterogeneous reconfigurable architectures, demonstrating for readers a computing platform which is flexible enough to support multiple standards, multiple modes, and multiple algorithms. The content is multi-disciplinary, covering areas of wireless communication, computing architecture, and circuit design. The platform described provides real-time processing capability with reasonable implementation cost, achieving balanced trade-offs among flexibility, performance, and hardware costs. The authors discuss efficient design methods for wireless communication processing platforms, from both an algorithm and architecture design perspective. Coverage also includes computing platforms for different wireless technologies and standards, including MIMO, OFDM, Massive MIMO, DVB, WLAN, LTE/LTE-A, and 5G. •Discusses reconfigurable architectures, including hardware building blocks such as processing elements, memory sub-systems, Network-on-Chip (NoC), and dynamic hardware reconfigur...

  8. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoder, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are performed for effective, fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.

  9. CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly Parallel Image-Analysis Algorithms

    Science.gov (United States)

    Mighell, Kenneth John

    2010-10-01

    The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called crblaster, which does cosmic-ray rejection of CCD images using the embarrassingly parallel l.a.cosmic algorithm. crblaster is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. crblaster uses a two-dimensional image partitioning algorithm that partitions an input image into N rectangular subimages of nearly equal area; the subimages include sufficient additional pixels along common image partition edges such that the need for communication between computer processes is eliminated. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly parallel algorithms. The crblaster source code is freely available at the official application Web site at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800 × 800 pixel Hubble Space Telescope WFPC2 image takes 44 s with the IRAF script lacos_im.cl running on a single core of an Apple Mac Pro computer with two 2.8 GHz quad-core Intel Xeon processors. crblaster is 7.4 times faster when processing the same image on a single core on the same machine. Processing the same image with crblaster simultaneously on all eight cores of the same machine takes 0.875 s—which is a speedup factor of 50.3 times faster than the

  10. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    Science.gov (United States)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be NP-hard, with an uncertainty and complexity that cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, many obstacles to efficient optimization of the JSSP remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To solve this problem, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
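
    A toy ACO loop (without the paper's constraint-handling machinery) illustrates the pheromone mechanics on a simpler single-machine sequencing problem; processing times, colony size, and update rules are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      p_time = np.array([4.0, 2.0, 7.0, 3.0, 5.0])         # job processing times
      n = len(p_time)
      tau = np.ones((n, n))                                # pheromone trails

      def cost(seq):                                       # total completion time
          return np.cumsum(p_time[seq]).sum()

      best_seq, best_cost = None, np.inf
      for _ in range(50):                                  # iterations
          for _ in range(10):                              # ants
              seq = [int(rng.integers(n))]
              while len(seq) < n:
                  last = seq[-1]
                  cand = [j for j in range(n) if j not in seq]
                  w = tau[last, cand] / p_time[cand]       # pheromone x heuristic
                  seq.append(int(rng.choice(cand, p=w / w.sum())))
              c = cost(seq)
              if c < best_cost:
                  best_seq, best_cost = seq, c
          tau *= 0.9                                       # evaporation
          for a, b in zip(best_seq, best_seq[1:]):
              tau[a, b] += 1.0 / best_cost                 # reinforce best sequence
      print(best_seq, best_cost)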

  11. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming "the new black" in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well... This raises questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts...

  12. A novel video dataset for change detection benchmarking.

    Science.gov (United States)

    Goyette, Nil; Jodoin, Pierre-Marc; Porikli, Fatih; Konrad, Janusz; Ishwar, Prakash

    2014-11-01

    Change detection is one of the most commonly encountered low-level tasks in computer vision and video processing. A plethora of algorithms have been developed to date, yet no widely accepted, realistic, large-scale video data set exists for benchmarking different methods. Presented here is a unique change detection video data set consisting of nearly 90 000 frames in 31 video sequences representing six categories selected to cover a wide range of challenges in two modalities (color and thermal infrared). A distinguishing characteristic of this benchmark video data set is that each frame is meticulously annotated by hand for ground-truth foreground, background, and shadow area boundaries, an effort that goes well beyond a simple binary label denoting the presence of change. This enables objective and precise quantitative comparison and ranking of video-based change detection algorithms. This paper discusses various aspects of the new data set, the quantitative performance metrics used, and comparative results for over two dozen change detection algorithms. It draws important conclusions on solved and remaining issues in change detection, and describes future challenges for the scientific community. The data set, evaluation tools, and algorithm rankings are available to the public on a website and will be updated with feedback from academia and industry in the future.
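
    Such pixel-accurate ground truth is typically consumed by per-frame precision/recall scoring; a minimal NumPy version of the usual metrics is sketched below (the benchmark's own evaluation tools compute these and more).

      import numpy as np

      def change_metrics(pred, gt):
          """pred, gt: boolean masks (True = changed pixel)."""
          tp = np.logical_and(pred, gt).sum()
          fp = np.logical_and(pred, ~gt).sum()
          fn = np.logical_and(~pred, gt).sum()
          precision = tp / max(tp + fp, 1)
          recall = tp / max(tp + fn, 1)
          f1 = 2 * precision * recall / max(precision + recall, 1e-9)
          return precision, recall, f1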

  13. TURING MACHINE AS UNIVERSAL ALGORITHM EXECUTOR AND ITS APPLICATION IN THE PROCESS OF HIGH-SCHOOL STUDENTS` ADVANCED STUDY OF ALGORITHMIZATION AND PROGRAMMING FUNDAMENTALS

    Directory of Open Access Journals (Sweden)

    Oleksandr B. Yashchyk

    2016-05-01

    Full Text Available The article discusses the importance of studying the notion of algorithm and its formal specification using Turing machines. It identifies Turing's basic hypothesis of the theory of algorithms, reviews research by modern scientists devoted to this issue, and outlines the main principles of the Turing machine as an abstract mathematical model. The formation of components of information competencies, information culture, and the development of students' logical thinking through the inclusion of the topic "Study and Application of the Turing Machine as a Universal Algorithm Executor" in an Informatics course is analyzed.

  14. The application of the algorithm of the individualization of students’ physical education process

    Directory of Open Access Journals (Sweden)

    Barybina L.N.

    2014-11-01

    Full Text Available Purpose: to justify, theoretically and experimentally, the use of an algorithm for individualizing the physical education process in universities, taking into account the psychophysiological features of students. Material: the study involved 413 students. Indicators of physical fitness, functional status, and psychophysiological features were measured. Results: an algorithm for individualizing the students' physical education process was worked out. The structure of complex preparedness was defined, and models of the characteristics of students representing different sports specializations were developed. It was established that, for successful construction of the training process, the parameters of physical and functional training and physiological indicators must be combined into a single integral evaluation of the individual characteristics of students. Students in the experimental group improved their indicators of functional and psychophysiological capabilities and physical preparedness. Conclusions: applying the individualization algorithm to the physical education process enhances the functional capabilities of students.

  15. Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results

    Directory of Open Access Journals (Sweden)

    Wu Di

    2010-01-01

    Full Text Available The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has been proved feasible over the past decades. However, its detection performance degrades in heterogeneous environments due to rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationarity of real-world clutter environments. Based on KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to non-stationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results verify the virtues of this algorithm.

  16. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    Science.gov (United States)

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.

  17. Neural networks and differential evolution algorithm applied for modelling the depollution process of some gaseous streams.

    Science.gov (United States)

    Curteanu, Silvia; Suditu, Gabriel Dan; Buburuzan, Adela Marina; Dragoi, Elena Niculina

    2014-11-01

    The depollution of some gaseous streams containing n-hexane is studied by adsorption in a fixed bed column, under dynamic conditions, using granular activated carbon and two types of non-functionalized hypercross-linked polymeric resins. In order to model the process, a new neuro-evolutionary approach is proposed. It is a combination of a modified differential evolution (DE) with neural networks (NNs) and two local search algorithms, the global and local optimizers working together to determine the optimal NN model. The main elements that characterize the applied variant of DE are an opposition-based learning initialization, a simple self-adaptive procedure for the control parameters, and a modified mutation principle based on the fitness function as a criterion for reorganization. The results obtained prove that the proposed algorithm is able to determine a good model of the considered process, its performance being better than that of an available phenomenological model.

  18. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  19. Understanding how replication processes can maintain systems away from equilibrium using Algorithmic Information Theory.

    Science.gov (United States)

    Devine, Sean D

    2016-02-01

    Replication can be envisaged as a computational process that is able to generate and maintain order far-from-equilibrium. Replication processes can self-regulate, as the drive to replicate can counter degradation processes that impact on a system. The capability of replicated structures to access high quality energy and eject disorder allows Landauer's principle, in conjunction with Algorithmic Information Theory, to quantify the entropy requirements to maintain a system far-from-equilibrium. Using Landauer's principle, where destabilising processes, operating under the second law of thermodynamics, change the information content or the algorithmic entropy of a system by ΔH bits, replication processes can access order, eject disorder, and counter the change without outside interventions. Both diversity in replicated structures, and the coupling of different replicated systems, increase the ability of the system (or systems) to self-regulate in a changing environment as adaptation processes select those structures that use resources more efficiently. At the level of the structure, as selection processes minimise the information loss, the irreversibility is minimised. While each structure that emerges can be said to be more entropically efficient, as such replicating structures proliferate, the dissipation of the system as a whole is higher than would be the case for inert or simpler structures. While a detailed application to most real systems would be difficult, the approach may well be useful in understanding incremental changes to real systems and provide broad descriptions of system behaviour. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.

  20. The DWPF product composition control system at Savannah River: Statistical process control algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Postles, R.L.; Brown, K.G.

    1991-12-31

    The DWPF Process batch-blends aqueous radwaste (PHA) with solid radwaste (Sludge) in a waste receipt vessel (the SRAT). The resulting SRAT-Batch is transferred to the next process vessel (the SME) and there blended with ground glass (Frit) to produce a batch of feed slurry. The SME-Batch is passed to a subsequent hold tank (the MFT) which feeds a Melter continuously. The Melter produces a molten glass wasteform which is poured into stainless steel canisters for cooling and, ultimately, shipment to and storage in a geologic Repository. The Repository will require that the glass wasteform be resistant to leaching by any underground water that might contact it. In addition, there are processing constraints on Viscosity and Liquidus Temperature of the melt. The Product Composition Control System (PCCS) is the system intended to ensure that the melt will be Processible and that the glass wasteform will be Acceptable. Within the PCCS, the SPC Algorithm is the device which guides control of the DWPF process. The SPC Algorithm is needed to control the multivariate DWPF process in the face of uncertainties (variances and covariances) which arise from this process and its supply, sampling, modeling, and measurement systems.

  1. A novel speech processing algorithm based on harmonicity cues in cochlear implant

    Science.gov (United States)

    Wang, Jian; Chen, Yousheng; Zhang, Zongping; Chen, Yan; Zhang, Weifeng

    2017-08-01

    This paper proposes a novel speech processing algorithm for cochlear implants, which uses harmonicity cues to enhance tonal information for Mandarin Chinese speech recognition. The input speech was filtered by a 4-channel band-pass filter bank. The frequency ranges of the four bands were 300-621, 621-1285, 1285-2657, and 2657-5499 Hz. In each pass band, temporal envelope and periodicity cues (TEPCs) below 400 Hz were extracted by full-wave rectification and low-pass filtering. The TEPCs were modulated by a sinusoidal carrier whose frequency was the fundamental frequency (F0) or the harmonic of F0 closest to the center frequency of each band. Signals from all bands were combined to obtain the output speech. Mandarin tone, word, and sentence recognition in quiet listening conditions was tested for the widely used continuous interleaved sampling (CIS) strategy and the novel F0-harmonic algorithm. The F0-harmonic algorithm performed consistently better than the CIS strategy in Mandarin tone, word, and sentence recognition. In addition, sentence recognition rates were higher than word recognition rates, as a result of contextual information in the sentences. Moreover, tones 3 and 4 were recognized better than tones 1 and 2, owing to the more easily identified features of the former. In conclusion, the F0-harmonic algorithm enhances tonal information in cochlear implant speech processing through the use of harmonicity cues, thereby improving Mandarin tone, word, and sentence recognition. Further study will focus on testing the F0-harmonic algorithm in noisy listening conditions.
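
    The per-band signal path can be sketched as a vocoder channel: band-pass filtering, envelope extraction by rectification and 400 Hz low-pass filtering, then modulation onto the F0 harmonic nearest the band center. The SciPy sketch below assumes a fixed F0 of 200 Hz for brevity, whereas the actual algorithm would track F0 from the input.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 16000
      bands = [(300, 621), (621, 1285), (1285, 2657), (2657, 5499)]
      f0 = 200.0                                             # assumed fixed F0

      def f0_harmonic_band(x, lo, hi):
          bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
          lp = butter(4, 400, btype="lowpass", fs=fs, output="sos")
          env = sosfiltfilt(lp, np.abs(sosfiltfilt(bp, x)))  # TEPC below 400 Hz
          fc = f0 * max(1, round(((lo + hi) / 2) / f0))      # harmonic nearest center
          t = np.arange(len(x)) / fs
          return env * np.sin(2 * np.pi * fc * t)            # modulated carrier

      x = np.random.randn(fs)                                # placeholder input
      out = sum(f0_harmonic_band(x, lo, hi) for lo, hi in bands)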

  2. Developing image processing meta-algorithms with data mining of multiple metrics.

    Science.gov (United States)

    Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott

    2014-01-01

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.

  3. Convergence of Simar's Algorithm for Finding the Maximum Likelihood Estimate of a Compound Poisson Process

    OpenAIRE

    Bohning, Dankmar

    1982-01-01

    Simar (1976) suggested an iteration procedure for finding the maximum likelihood estimate of a compound Poisson process, but he could not show convergence. Here the more general case of maximizing a concave functional on the set of all probability measures is treated. As a generalization of Simar's procedure, an algorithm is given for solving this problem, including assumptions to ensure convergence to an optimum. Finally, it is shown that Simar's functional fulfills these assumptions.

  4. Facial biometrics of Yorubas of Nigeria using Akinlolu-Raji image-processing algorithm

    Directory of Open Access Journals (Sweden)

    Adelaja Abdulazeez Akinlolu

    2016-01-01

    Full Text Available Background: Forensic anthropology deals with the establishment of human identity using genetics, biometrics, and face recognition technology. This study aims to compute facial biometrics of Yorubas of Osun State of Nigeria using a novel Akinlolu-Raji image-processing algorithm. Materials and Methods: Three hundred Yorubas of Osun State (150 males and 150 females, aged 15–33 years) were selected as subjects, with informed consent and after being established as Yorubas through parents and grandparents. Height, body weight, and facial biometrics (evaluated on three-dimensional [3D] facial photographs) were measured for all subjects. The novel Akinlolu-Raji image-processing algorithm for forensic face recognition was developed using the modified row method of computer programming. Facial width, total face height, short forehead height, long forehead height, upper face height, nasal bridge length, nose height, morphological face height, and lower face height computed from readings of the Akinlolu-Raji image-processing algorithm were analyzed using a z-test (P ≤ 0.05) in the 2010 Microsoft Excel statistical software. Results: Statistical analyses of facial measurements showed nonsignificantly higher mean values (P > 0.05) in Yoruba males compared to females. Yoruba males and females have the leptoprosopic face type, based on the classification of face types from facial indices. Conclusions: The Akinlolu-Raji image-processing algorithm can be employed for computing anthropometric, forensic, diagnostic, or other measurements on 2D and 3D images, and data computed from its readings can be converted to actual or life sizes as obtained in 1D measurements. Furthermore, Yoruba males and females have the leptoprosopic face type.
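
    As an illustration of how such measurements are used, the facial index and the leptoprosopic classification can be computed directly; the 90.0 cutoff below follows one common anthropometric convention and the input values are hypothetical.

      def facial_index(morph_face_height_mm: float, face_width_mm: float) -> float:
          """Morphological facial index: 100 x face height / face width."""
          return 100.0 * morph_face_height_mm / face_width_mm

      def face_type(index: float) -> str:
          # 90.0 is one conventional lower bound for the leptoprosopic type
          return "leptoprosopic (long, narrow)" if index >= 90.0 else "other"

      print(face_type(facial_index(121.0, 130.0)))   # hypothetical values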

  5. New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes

    Energy Technology Data Exchange (ETDEWEB)

    Agrawal, Rakesh [Purdue Univ., West Lafayette, IN (United States)

    2013-11-21

    This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.

  6. New algorithms for processing time-series big EEG data within mobile health monitoring systems.

    Science.gov (United States)

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani; Harous, Saad; Navaz, Alramzana Nujum

    2017-10-01

    Recent advances in miniature biomedical sensors, mobile smartphones, wireless communications, and distributed computing technologies provide promising techniques for developing mobile health systems. Such systems are capable of reliably monitoring epileptic seizures, which are classified as a chronic disease. Three challenging issues arise in this context with regard to the transformation, compression, storage, and visualization of the big data that result from continuous recording of epileptic seizures using mobile devices. In this paper, we address these challenges by developing three new algorithms to process and analyze big electroencephalography data in a rigorous and efficient manner. The first algorithm transforms the standard European Data Format (EDF) into the standard JavaScript Object Notation (JSON) and compresses the transformed JSON data, to decrease the size and transfer time and to increase the network transfer rate. The second algorithm focuses on collecting and storing the compressed files generated by the transformation and compression algorithm; collection is performed on the fly after files are decompressed. The third algorithm provides relevant real-time interaction with the signal data by prospective users. It particularly features the following capabilities: visualization of single or multiple signal channels on a smartphone device, and querying of data segments. We tested and evaluated the effectiveness of our approach through a software architecture model implementing a mobile health system to monitor epileptic seizures. The experimental findings from 45 experiments are promising and efficiently satisfy the approach's objectives at the price of linearity. Moreover, the size of compressed JSON files and transfer times are reduced by 10% and 20%, respectively, while the average total time is remarkably reduced by 67% across all performed experiments. Our approach
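
    A toy version of the transform-and-compress step: a signal block is packed as JSON and gzip-compressed, and the size reduction is reported. (The paper starts from EDF recordings; here a synthetic array and the standard-library gzip module stand in.)

      import gzip, json
      import numpy as np

      signal = np.sin(np.linspace(0, 100, 50000)).round(4).tolist()
      payload = json.dumps({"channel": "EEG Fp1", "fs": 256, "samples": signal})

      compressed = gzip.compress(payload.encode("utf-8"))
      print(f"raw {len(payload)} B -> gzip {len(compressed)} B "
            f"({100 * (1 - len(compressed) / len(payload)):.0f}% smaller)")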

  7. An Algorithm for Modelling the Impact of the Judicial Conflict-Resolution Process on Construction Investment

    Directory of Open Access Journals (Sweden)

    Andrej Bugajev

    2018-01-01

    Full Text Available In this article, the modelling of the judicial conflict-resolution process is considered from a construction investor's point of view. Such modelling is important for improving risk management for construction investors, and it supports sustainable city development by informing the rules that regulate the construction process. This raises the problem of evaluating different decisions and selecting the optimal one, followed by extraction of the resulting distribution. First, an example of such a process is analysed and represented schematically. Then it is formalised as a graph, described in the form of a decision graph with cycles. We exploit some natural properties of the problem and provide an algorithm to convert this graph into a tree. We then propose an algorithm to evaluate profits for different scenarios, with time estimated by integration of an average-daily-costs function. Afterwards, the optimisation problem is solved and the optimal investor strategy is obtained; this allows one to extract the construction project profit distribution, which can be used for further analysis by standard risk (and other important information) evaluation techniques. The overall algorithm complexity is analysed, a computational experiment is performed and conclusions are formulated.

  8. A high precision position sensor design and its signal processing algorithm for a maglev train.

    Science.gov (United States)

    Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen

    2012-01-01

    High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. First, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault tolerance of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performance with a small computational load, making it suitable for real-time signal processing. The stability, convergence and frequency characteristics of the TD are analyzed thoroughly. The delay constant of the TD is derived and an effective time-delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor operates under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are demonstrated by experiments on a test train during a long-term test run.
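    The record gives no equations, so as a stand-in the sketch below implements the classic Han discrete-time tracking differentiator, which plays the same filtering-plus-differentiating role described above; the paper derives its own nonlinear-optimal-control-based variant.

```python
import math

def fhan(x1, x2, r, h):
    """Time-optimal control function of Han's classic discrete-time
    tracking differentiator (textbook form, shown for illustration;
    r bounds the acceleration, h is the sampling step)."""
    d = r * h
    d0 = d * h
    y = x1 + h * x2
    a0 = math.sqrt(d * d + 8.0 * r * abs(y))
    if abs(y) > d0:
        a = x2 + (a0 - d) / 2.0 * math.copysign(1.0, y)
    else:
        a = x2 + y / h
    if abs(a) > d:
        return -r * math.copysign(1.0, a)
    return -r * a / d

def track(v, h=1e-3, r=1e4):
    """Feed noisy samples v; return (filtered value, derivative) pairs."""
    x1 = x2 = 0.0
    out = []
    for vk in v:
        u = fhan(x1 - vk, x2, r, h)
        x1, x2 = x1 + h * x2, x2 + h * u
        out.append((x1, x2))
    return out
```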

  9. Efficient Coding of Shape and Transparency for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2007-01-01

    A novel scheme for coding gray-level alpha planes in object-based video is presented. Gray-level alpha planes convey the shape and the transparency information, which are required for smooth composition of video objects. The algorithm proposed is based on the segmentation of the alpha plane...... shape layer is processed by a novel video shape coder. In intra mode, the DSLSC binary image coder presented in is used. This is extended here with an intermode utilizing temporal redundancies in shape image sequences. Then the opaque layer is compressed by a newly designed scheme which models...

  10. Real time processing of neutron monitor data using the edge editor algorithm

    Directory of Open Access Journals (Sweden)

    Mavromichalaki Helen

    2012-09-01

    Full Text Available The nucleonic component of the secondary cosmic rays is measured by the worldwide network of neutron monitors (NMs). In most cases, an NM station publishes its measurements in real time so that they are available for instant use by the scientific community. Space weather centers and online applications such as the ground level enhancement (GLE) alert make use of the online data and are highly dependent on their quality. However, the primary data are in some cases distorted by unpredictable instrument variations. For this reason, real-time processing of a station's primary data is necessary. The general operational principle of the correction algorithms is comparison between the different channels of an NM, taking advantage of the fact that a station hosts a number of identical detectors. The Median editor, Median editor plus and Super editor are some of the correction algorithms being used, with satisfactory results. In this work an alternative algorithm is proposed and analyzed. The new algorithm uses a statistical approach to define the distribution of the measurements and introduces an error index, which is used to correct the measurements that deviate from this distribution.
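    In the spirit of the editor family described above, a minimal channel-correction step might compare each counter with the station median and replace outliers flagged by a robust error index; this sketch is ours, not the paper's exact distribution fit.

```python
import statistics

def correct_channels(counts, k=3.0):
    """Correct one time step of NM channel counts: channels deviating
    from the station median by more than k robust standard deviations
    are replaced by the median. A sketch of the median-editor idea."""
    med = statistics.median(counts)
    # robust spread estimate from the median absolute deviation
    mad = statistics.median(abs(c - med) for c in counts)
    sigma = 1.4826 * mad or 1.0
    return [med if abs(c - med) > k * sigma else c for c in counts]

print(correct_channels([102, 98, 100, 165, 101, 99]))  # 165 is replaced
```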

  11. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. Existing process scheduling algorithms of PicOS; a commercial tiny, low-footprint, real-time operating system; have their associated drawbacks. An efficient, alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness, on the average, and has been recommended for implementation in PicOS. Simulations were carried out and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results prove that Randomized algorithm is the best and most attractive for implementation in PicOS, since it is most fair and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
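    A minimal simulation of the compared non-preemptive policies, computing AWT and ATT; as a simplification of the paper's PicOS setting, all jobs are assumed ready at time zero.

```python
import random

def simulate(burst_times, policy="random", seed=42):
    """Non-preemptive scheduling simulation returning (AWT, ATT)."""
    rng = random.Random(seed)
    queue = list(burst_times)
    if policy == "random":
        rng.shuffle(queue)          # randomized selection policy
    elif policy == "sjf":
        queue.sort()                # shortest-job-first baseline
    t, waits, turnarounds = 0, [], []
    for burst in queue:
        waits.append(t)             # time spent waiting before service
        t += burst
        turnarounds.append(t)       # completion time since arrival (t=0)
    n = len(queue)
    return sum(waits) / n, sum(turnarounds) / n

print(simulate([8, 3, 12, 5], policy="random"))
print(simulate([8, 3, 12, 5], policy="sjf"))
```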

  12. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation.

    Science.gov (United States)

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier

    2013-06-06

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.
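    As a concrete example of one of the compared detectors, a minimal EWMA control chart over the pre-processed (e.g., weekly-differenced) counts could look like the sketch below; in practice the in-control mean and standard deviation would be estimated from historical baseline data.

```python
def ewma_alarms(series, lam=0.3, L=3.0, mu0=0.0, sigma=1.0):
    """Flag temporal aberrations with an EWMA control chart. `series`
    is the pre-processed count series; mu0 and sigma are the
    in-control mean and standard deviation."""
    limit = L * sigma * (lam / (2.0 - lam)) ** 0.5   # steady-state limit
    z, alarms = mu0, []
    for x in series:
        z = lam * x + (1.0 - lam) * z                # EWMA update
        alarms.append(abs(z - mu0) > limit)
    return alarms

print(ewma_alarms([0, 1, -1, 0, 4, 5, 6]))  # alarms once the shift persists
```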

  13. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    Science.gov (United States)

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), a digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics
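    A sketch of the per-frame identification steps listed above (background subtraction, edge detection, morphological close, small-object removal) using OpenCV; the thresholds are illustrative rather than DMV's defaults, and the frame-linking (nearest-neighbor) step is omitted.

```python
import cv2

def detect_droplets(frames, min_area=50):
    """Return per-frame droplet centroids via a DMV-style pipeline."""
    backsub = cv2.createBackgroundSubtractorMOG2()
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    results = []
    for frame in frames:
        fg = backsub.apply(frame)                   # background subtraction
        edges = cv2.Canny(fg, 50, 150)              # edge detection
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        drops = []
        for c in contours:
            if cv2.contourArea(c) < min_area:       # small-object removal
                continue
            m = cv2.moments(c)
            drops.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        results.append(drops)
    return results
```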

  14. Decision tree algorithm for detection of spatial processes in landscape transformation.

    Science.gov (United States)

    Bogaert, Jan; Ceulemans, Reinhart; Salvador-Van Eysenrode, David

    2004-01-01

    The conversion of landscapes by human activities results in widespread changes in landscape spatial structure. Regardless of the type of land conversion, there appears to be a limited number of common spatial configurations that result from such land transformation processes. Some of these configurations are considered optimal or more desirable than others. Based on pattern geometry, we define ten processes responsible for pattern change: aggregation, attrition, creation, deformation, dissection, enlargement, fragmentation, perforation, shift, and shrinkage. A novelty in this contribution is the inclusion of transformation processes causing expansion of the land cover of interest. Consequently, we propose a decision tree algorithm that enables detection of these processes, based on three parameters that have to be determined before and after the transformation of the landscape: area, perimeter length, and number of patches of the focal landscape class. As an example, the decision tree algorithm is applied to determine the transformation processes of three divergent land cover change scenarios: deciduous woodland degradation in Cadiz Township (Wisconsin, USA) 1831-1950, canopy gap formation in a terra firme rain forest at the Tiputini Biodiversity Station (Amazonian Ecuador) 1997-1998, and forest regrowth in Petersham Township (Massachusetts, USA) 1830-1985. The examples signal the importance of the temporal resolution of the data, since long-term pattern conversions can be subdivided in stadia in which particular pattern components are altered by specific transformation processes.
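    A condensed illustration of such a decision tree over the three parameters (area a, perimeter p, patch count n, each measured before and after the change); the published tree distinguishes all ten processes, while this sketch keeps only a few branches to show the mechanics, with an illustrative area-ratio threshold.

```python
def transformation_process(a0, p0, n0, a1, p1, n1):
    """Classify a pattern change from (area, perimeter, patches)
    before (0) and after (1). Simplified sketch, not the full tree."""
    if a1 > a0:
        return "creation" if n1 > n0 else "enlargement"
    if a1 < a0:
        if n1 > n0:
            # strong area loss with many new pieces vs. a few cuts
            return "fragmentation" if a1 / a0 < 0.5 else "dissection"
        if n1 < n0:
            return "attrition"
        return "perforation" if p1 > p0 else "shrinkage"
    return "deformation" if p1 != p0 else "shift"

print(transformation_process(100, 60, 3, 40, 90, 12))  # -> fragmentation
```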

  15. Process planning optimization on turning machine tool using a hybrid genetic algorithm with local search approach

    Directory of Open Access Journals (Sweden)

    Yuliang Su

    2015-04-01

    Full Text Available A turning machine tool is a new type of machine tool equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of such a machine increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also scheduled against multiple objectives, such as minimizing machining cost and maximizing utilization of the turning machine tool. To solve this problem, a hybrid genetic algorithm is proposed to generate optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and to help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on a data structure was developed to represent process plans digitally and form the solution space. In addition, a local search approach that optimizes the assignment of available turrets is added to incorporate scheduling into process planning. A real-world case is used to show that the proposed approach avoids infeasible solutions and effectively generates a globally optimal process plan.
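    A generic skeleton of such a hybrid loop, with the local-search refinement applied to each offspring; all operators are problem-specific callables (hypothetical here) into which the paper's encoding and turret-assignment optimization would plug.

```python
import random

def hybrid_ga(init_pop, fitness, local_search, crossover, mutate,
              generations=200, elite=2, pm=0.1):
    """Hybrid GA: evolve a population, refining each offspring with a
    local-search step. Higher fitness is assumed to be better."""
    pop = list(init_pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:elite]                      # elitism
        while len(nxt) < len(pop):
            a, b = random.sample(pop[: len(pop) // 2], 2)  # truncation selection
            child = crossover(a, b)
            if random.random() < pm:
                child = mutate(child)
            nxt.append(local_search(child))    # hybridization step
        pop = nxt
    return max(pop, key=fitness)
```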

  16. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges for video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size makes delivering 360° video at high quality and at scale a challenge. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, however, much of this bandwidth and of the computational power used to decode the video is wasted, because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewport regions and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual-buffer segment scheduling algorithm for viewport-adaptive streaming methods to reduce latency when switching between high-quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the viewport segment requested matches the latest user head orientation. A base-layer buffer stores all lower-quality segments, and a viewport buffer stores high-quality viewport segments corresponding to the viewer's most recent head orientation. The scheduling scheme determines viewport request times based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video

  17. Dealing with change in process choreographies: Design and implementation of propagation algorithms.

    Science.gov (United States)

    Fdhila, Walid; Indiono, Conrad; Rinderle-Ma, Stefanie; Reichert, Manfred

    2015-04-01

    Enabling process changes constitutes a major challenge for any process-aware information system. This not only holds for processes running within a single enterprise, but also for collaborative scenarios involving distributed and autonomous partners. In particular, if one partner adapts its private process, the change might affect the processes of the other partners as well. Accordingly, it might have to be propagated to concerned partners in a transitive way. A fundamental challenge in this context is to find ways of propagating the changes in a decentralized manner. Existing approaches are limited with respect to the change operations considered as well as their dependency on a particular process specification language. This paper presents a generic change propagation approach that is based on the Refined Process Structure Tree, i.e., the approach is independent of a specific process specification language. Further, it considers a comprehensive set of change patterns. For all these change patterns, it is shown that the provided change propagation algorithms preserve consistency and compatibility of the process choreography. Finally, a proof-of-concept prototype of a change propagation framework for process choreographies is presented. Overall, comprehensive change support in process choreographies will foster the implementation and operational support of agile collaborative process scenarios.

  18. A Sensor Fusion Algorithm for Filtering Pyrometer Measurement Noise in the Czochralski Crystallization Process

    Directory of Open Access Journals (Sweden)

    M. Komperød

    2011-01-01

    Full Text Available The Czochralski (CZ) crystallization process is used to produce monocrystalline silicon for solar cell wafers and electronics. Tight temperature control of the molten silicon is most important for achieving high crystal quality. SINTEF Materials and Chemistry operates a CZ process. During one CZ batch, two pyrometers were used for temperature measurement. The silicon pyrometer measures the temperature of the molten silicon; this pyrometer is assumed to be accurate, but has considerable high-frequency measurement noise. The graphite pyrometer measures the temperature of a graphite material and has little measurement noise. There is quite a good correlation between the two pyrometer measurements. This paper presents a sensor fusion algorithm that merges the two pyrometer signals to produce a temperature estimate with little measurement noise and significantly less phase lag than traditional lowpass-filtering of the silicon pyrometer. The algorithm consists of two sub-algorithms: (i) a dynamic model is used to estimate the silicon temperature based on the graphite pyrometer, and (ii) a lowpass filter and a highpass filter are designed as complementary filters. The complementary filters are used to lowpass-filter the silicon pyrometer, highpass-filter the dynamic model output, and merge these filtered signals. Hence, the lowpass filter attenuates noise from the silicon pyrometer, while the graphite pyrometer and the dynamic model estimate those frequency components of the silicon temperature that are lost when lowpass-filtering the silicon pyrometer. The algorithm works well within a limited temperature range. To handle a larger temperature range, more research must be done to understand the process's nonlinear dynamics and build this into the dynamic model.
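    A minimal first-order version of the complementary-filter idea: lowpass the noisy silicon pyrometer, highpass the model estimate driven by the graphite pyrometer, and sum the two. The dynamic model is reduced here to a precomputed estimate, a simplification of the paper's sub-algorithm (i).

```python
def fuse(silicon, graphite_estimate, alpha=0.05):
    """Complementary filtering of two temperature streams. `silicon`
    is the noisy but accurate pyrometer; `graphite_estimate` stands in
    for the dynamic-model output. Both first-order filters share the
    same pole (alpha), so lowpass + highpass sum to an all-pass."""
    lp_si = lp_gr = None
    fused = []
    for s, g in zip(silicon, graphite_estimate):
        lp_si = s if lp_si is None else lp_si + alpha * (s - lp_si)
        lp_gr = g if lp_gr is None else lp_gr + alpha * (g - lp_gr)
        fused.append(lp_si + (g - lp_gr))   # lowpass(si) + highpass(model)
    return fused
```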

  19. IDP++: signal and image processing algorithms in C++ version 4.1

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1996-11-01

    IDP++ (Image and Data Processing in C++) is a collection of signal and image processing algorithms written in C++. It is a compiled signal processing environment which supports four data types of up to four dimensions. It is developed within Lawrence Livermore National Laboratory's Image and Data Processing group as a partial replacement for View. IDP++ takes advantage of the latest, implemented and actually working, object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments where interpreted processing packages are less efficient. IDP++ exists for both SUNs and Silicon Graphics, using their most current compilers.

  20. Warpage improvement on wheel caster by optimizing the process parameters using genetic algorithm (GA)

    Science.gov (United States)

    Safuan, N. S.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.

    2017-09-01

    In the injection moulding process, defects are always encountered and affect the final product's shape and functionality. This study concerns minimizing warpage by optimizing the process parameters of an injection moulded part. Apart from eliminating product waste, this project also gives the best recommended parameter settings. This research studied five parameters. The optimization showed that warpage improved by 42.64%, from 0.6524 mm in the Autodesk Moldflow Insight (AMI) simulation result to 0.30879 mm with the Genetic Algorithm (GA).

  1. Discriminative Non-Linear Stationary Subspace Analysis for Video Classification.

    Science.gov (United States)

    Baktashmotlagh, Mahsa; Harandi, Mehrtash; Lovell, Brian C; Salzmann, Mathieu

    2014-12-01

    Low-dimensional representations are key to the success of many video classification algorithms. However, the commonly-used dimensionality reduction techniques fail to account for the fact that only part of the signal is shared across all the videos in one class. As a consequence, the resulting representations contain instance-specific information, which introduces noise in the classification process. In this paper, we introduce non-linear stationary subspace analysis: a method that overcomes this issue by explicitly separating the stationary parts of the video signal (i.e., the parts shared across all videos in one class), from its non-stationary parts (i.e., the parts specific to individual videos). Our method also encourages the new representation to be discriminative, thus accounting for the underlying classification problem. We demonstrate the effectiveness of our approach on dynamic texture recognition, scene classification and action recognition.

  2. Optimized Gillespie algorithms for the simulation of Markovian epidemic processes on large and heterogeneous networks

    Science.gov (United States)

    Cota, Wesley; Ferreira, Silvio C.

    2017-10-01

    Numerical simulation of continuous-time Markovian processes is an essential and widely applied tool in the investigation of epidemic spreading on complex networks. Due to the high heterogeneity of the connectivity structure through which the epidemic is transmitted, efficient and accurate implementations of generic epidemic processes are not trivial, and deviations from statistically exact prescriptions can lead to uncontrolled biases. Based on the Gillespie algorithm (GA), in which only steps that change the state are considered, we develop numerical recipes and describe their computer implementations for statistically exact and computationally efficient simulations of generic Markovian epidemic processes, aimed at highly heterogeneous and large networks. The central point of the recipes investigated here is the inclusion of phantom processes, which do not change the state but do count toward time increments. We compare the efficiencies for the susceptible-infected-susceptible, contact process and susceptible-infected-recovered models, which are particular cases of the generic model considered here. We numerically confirm that the simulation outcomes of the optimized algorithms are statistically indistinguishable from those of the original GA and can be several orders of magnitude more efficient.
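    A sketch of the phantom-process recipe for the SIS model: infection attempts directed at already-infected neighbors change nothing but still advance time, keeping the statistics exact. Bookkeeping is deliberately naive here; the paper's optimized implementations avoid the per-step list rebuilding.

```python
import random
from math import log

def sis_gillespie(neighbors, infected, lam=0.5, mu=1.0, t_max=100.0):
    """Statistically exact SIS simulation with phantom processes.
    `neighbors` maps node -> list of neighbors; `infected` is the
    initial infected set; lam and mu are infection/healing rates."""
    infected = set(infected)
    t = 0.0
    while infected and t < t_max:
        inf = list(infected)
        ne = sum(len(neighbors[i]) for i in inf)   # edges leaving infected nodes
        rate = mu * len(inf) + lam * ne
        t += -log(random.random()) / rate          # exponential waiting time
        if random.random() < mu * len(inf) / rate:
            infected.discard(random.choice(inf))   # healing event
        else:
            # pick an infected node proportionally to its degree
            src = random.choices(inf, weights=[len(neighbors[i]) for i in inf])[0]
            dst = random.choice(neighbors[src])
            if dst not in infected:                # otherwise: phantom process
                infected.add(dst)
    return t, infected
```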

  3. Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes

    Science.gov (United States)

    Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.

    1996-01-01

    The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.

  4. Multipass Turning Operation Process Optimization Using Hybrid Genetic Simulated Annealing Algorithm

    Directory of Open Access Journals (Sweden)

    Abdelouahhab Jabri

    2017-01-01

    Full Text Available For years, increasing attention has been placed on metal removal processes such as turning and milling operations, with researchers from different areas focusing on cutting-condition optimization. Cutting-condition optimization is a crucial step in Computer Aided Process Planning (CAPP); it aims to select optimal cutting parameters (such as cutting speed, feed rate, depth of cut, and number of passes), since these parameters affect production cost as well as production deadlines. This paper deals with multipass turning operation optimization using a proposed Hybrid Genetic Simulated Annealing Algorithm (HSAGA). The SA-based local search is embedded into the GA search mechanism in order to keep the GA from being trapped in local optima. The unit production cost is considered in this work as the objective function to minimize, under different practical and operational constraints. The Taguchi method is then used to calibrate the parameters of the proposed optimization approach. Finally, the results obtained by various optimization algorithms are compared with the obtained solution, and the proposed hybrid evolutionary optimization technique proves its effectiveness over the other algorithms.

  5. Improving Image Processing Systems by Using Software Simulated LRU Cache Algorithms

    Directory of Open Access Journals (Sweden)

    Cosmin CIORANU

    2012-01-01

    Full Text Available Today's scientific progress is closely related to data processing, a process implemented using algorithms; but in order to produce a result, algorithms need data, and data are generated by sensors, particularly satellite imagery or collaborative GIS platforms. Progress has made these image-capturing sensors more and more accurate, and therefore the generated data larger and larger. The problem is mostly related to the operating system's, and sometimes the software design's, inability to manage contiguous spaces of memory. In an ironic turn of events, such data sometimes cannot be held all at once in a computer system to be analyzed. A solution needed to be devised for a problem that looks easy at first but is complex in implementation. The answer has been around since the birth of computer science: a memory cache, which is at its origin simply a fast memory. We can adapt this concept in software programming by identifying the problem and coming up with an implementation. A data cache can be implemented in many ways; here we present one based on the LRU (least recently used) algorithm, designed mostly to handle three-dimensional arrays, called 3dCache, which is widely compatible with software packages that support external tools such as Matlab or a programming environment like C++.
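    The LRU idea itself fits in a few lines; a minimal tile cache in this spirit (class and parameter names are ours, not the 3dCache API):

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for array tiles: keep at most `capacity`
    tiles in RAM, evicting the least recently used one. `load` is a
    caller-supplied function that fetches a missing tile from disk."""
    def __init__(self, capacity, load):
        self.capacity, self.load = capacity, load
        self.tiles = OrderedDict()

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)       # mark as most recently used
            return self.tiles[key]
        tile = self.load(key)                 # cache miss: read from storage
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)    # evict least recently used
        return tile
```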

  6. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    Science.gov (United States)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross-correlation analysis of digitised grey-scale patterns is based on - at least - two images which are compared with each other. The comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surrounding of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits an additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm in some ways seems to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
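    The core operation, zero-mean normalized cross-correlation of a local intensity submatrix over a search window, can be sketched as follows (integer-pixel shifts only; the micro/nanoDAC implementations add subpixel refinement):

```python
import numpy as np

def ncc_displacement(ref, cur, center, half=15, search=10):
    """Track a (2*half+1)^2 subwindow around `center` from image `ref`
    to image `cur` by maximizing zero-mean normalized cross-correlation
    over integer shifts within +/- `search` pixels. Assumes the search
    window stays inside both images."""
    y, x = center
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    tpl -= tpl.mean()
    best, shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1].astype(float)
            win -= win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom > 0:
                score = (tpl * win).sum() / denom
                if score > best:
                    best, shift = score, (dy, dx)
    return shift, best
```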

  7. Casual Video Games as Training Tools for Attentional Processes in Everyday Life.

    Science.gov (United States)

    Stroud, Michael J; Whitbourne, Susan Krauss

    2015-11-01

    Three experiments examined the attentional components of the popular match-3 casual video game, Bejeweled Blitz (BJB). Attentionally demanding, BJB is highly popular among adults, particularly those in middle and later adulthood. In experiment 1, 54 older adults (Mage = 70.57) and 33 younger adults (Mage = 19.82) played 20 rounds of BJB, and completed online tasks measuring reaction time, simple visual search, and conjunction visual search. Prior experience significantly predicted BJB scores for younger adults, but for older adults, both prior experience and simple visual search task scores predicted BJB performance. Experiment 2 tested whether BJB practice alone would result in a carryover benefit to a visual search task in a sample of 58 young adults (Mage = 19.57) who completed 0, 10, or 30 rounds of BJB followed by a BJB-like visual search task with targets present or absent. Reaction times were significantly faster for participants who completed 30 but not 10 rounds of BJB compared with the search task only. This benefit was evident when targets were both present and absent, suggesting that playing BJB improves not only target detection, but also the ability to quit search effectively. Experiment 3 tested whether the attentional benefit in experiment 2 would apply to non-BJB stimuli. The results revealed a similar numerical but not significant trend. Taken together, the findings suggest there are benefits of casual video game playing to attention and relevant everyday skills, and that these games may have potential value as training tools.

  8. A digital filterbank hearing aid: predicting user preference and performance for two signal processing algorithms.

    Science.gov (United States)

    Lunner, T; Hellgren, J; Arlinger, S; Elberling, C

    1997-02-01

    In a series of experiments with a wearable binaural digital hearing aid, two hearing aid processing algorithms were compared. Both algorithms provided individual frequency shaping via a seven-band filterbank with compression limiting in the high-frequency channel. They differed in the processing of the low-frequency channel, using dynamic range compression for one (DynEar) and linear processing with compression limiting for the other (LinEar). In a pilot field test we found that LinEar/ DynEar preference based on use time could be predicted from auditory dynamic range data. For the subjects who preferred DynEar, the mean dynamic range was broader for low and mid frequencies and narrower for high frequencies, as compared with the LinEar preference subjects. These groupings were tested as predictors of user preference and performance in a main field test. The main study included 26 hearing aid users with symmetrical sensorineural losses. The algorithms were compared in a one-mo-long blind field test. A data logger function was included for objective recording of the total time each algorithm was used and how the volume controls were used. The preference was based on the time used for each algorithm and on subjective statements. Threshold signal-to-noise ratio (S/N-threshold) for speech was tested, and sound quality ratings were obtained through a questionnaire. We also tested the S/N-thresholds for the subjects' conventional (own) aids. The preference was correctly predicted by the dynamic range data on 12 out of 15 new cases. S/N-thresholds were lower for the preferred fittings compared with the nonpreferred fittings and with the subjects' own aids. In the questionnaire the preferred fittings were rated significantly higher in terms of overall impression and clearness. Because of the systematic way the DynEar-preference subjects adjusted the high-frequency DynEar gain, we speculate that upward spread of masking may have been a factor in preference and performance

  9. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms

    Science.gov (United States)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel

    2016-04-01

    Advances in graphics processing units' technology towards encompassing parallel architectures [1], comprised of thousands of cores and multiples of parallel threads, provide the hardware foundation for the rapid processing of various parallel applications in seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are established and decades of data are compiled together [2]. Yet many processes in seismic data analysis are performed on each seismic event independently, or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel for comparison, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering References [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and

  10. Evaluation of Lip Prints on Different Supports Using a Batch Image Processing Algorithm and Image Superimposition.

    Science.gov (United States)

    Herrera, Lara Maria; Fernandes, Clemente Maia da Silva; Serra, Mônica da Costa

    2018-01-01

    This study aimed to develop and to assess an algorithm to facilitate lip print visualization, and to digitally analyze lip prints on different supports, by superimposition. It also aimed to classify lip prints according to sex. A batch image processing algorithm was developed, which facilitated the identification and extraction of information about lip grooves. However, it performed better for lip print images with a uniform background. Paper and glass slab allowed more correct identifications than glass and the both sides of compact disks. There was no significant difference between the type of support and the amount of matching structures located in the middle area of the lower lip. There was no evidence of association between types of lip grooves and sex. Lip groove patterns of type III and type I were the most common for both sexes. The development of systems for lip print analysis is necessary, mainly concerning digital methods. © 2017 American Academy of Forensic Sciences.

  11. Investigation of super-resolution processing algorithm by target light-intensity search in digital holography

    Science.gov (United States)

    Neo, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Masuda, Nobuyuki; Ito, Tomoyoshi

    2017-04-01

    Digital holography is expected to be useful for the measurement of moving three-dimensional (3D) images. In this technique, a two-dimensional interference fringe produced by a 3D image is captured with an image sensor, and the 3D image is reconstructed on a computer. To obtain reconstructed 3D images with high spatial resolution, a high-performance image sensor is required, which increases the system cost. We propose an algorithm for super-resolution processing in digital holography that does not require a high-performance image sensor. The proposed algorithm, in which 3D images are treated as aggregations of object points, improves spatial resolution by performing a light-intensity search over the object points of the reconstructed image.

  12. Processing Sliding Mosaic Mode Data with Modified Full-Aperture Imaging Algorithm Integrating Scalloping Correction

    Directory of Open Access Journals (Sweden)

    Zhao Tuan

    2016-10-01

    Full Text Available In this study, we present a modified full-aperture imaging algorithm that includes scalloping correction and spike suppression for sliding-mosaic-mode Synthetic Aperture Radar (SAR). A novel feature is the correction, during the deramping preprocessing operation, of the azimuth beam-pattern weighting altered by radar antenna rotation in azimuth. The main idea of spike suppression is to substitute the zeros between bursts with linearly predicted data extrapolated from adjacent bursts, suppressing the spikes caused by multiburst processing. We also integrate scalloping correction for the sliding mode into this algorithm. Finally, experiments are performed using a C-band airborne SAR system with a maximum bandwidth of 200 MHz to validate the effectiveness of this approach.

  13. [A modified speech enhancement algorithm for electronic cochlear implant and its digital signal processing realization].

    Science.gov (United States)

    Wang, Yulin; Tian, Xuelong

    2014-08-01

    In order to improve speech quality and auditory perception with electronic cochlear implants under strong background noise, a speech enhancement system for the electronic cochlear implant front end was constructed. With digital signal processing (DSP) at its core, the system combines the DSP's multi-channel buffered serial port (McBSP) data transmission channel with the extended audio interface chip TLV320AIC10, realizing high-speed speech signal acquisition and output. Meanwhile, because the traditional speech enhancement method suffers from poor adaptability, slow convergence and large steady-state error, the versiera function and a de-correlation principle were used to improve the existing adaptive filtering algorithm, which effectively enhanced the quality of voice communication. Test results verified the stability of the system and the de-noising performance of the algorithm, and proved that they can provide clearer speech signals for deaf or tinnitus patients.
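    As a reference point for the improved algorithm, a standard normalized-LMS noise canceller is sketched below; the paper's versiera-based variable step size and de-correlation refinements are not reproduced.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=32, mu=0.5, eps=1e-6):
    """Normalized-LMS adaptive noise canceller: estimate the noise in
    `primary` from the correlated `reference` channel and subtract it,
    returning the enhanced signal."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]          # most recent sample first
        y = w @ x                                # noise estimate
        e = primary[n] - y                       # enhanced output sample
        w += mu * e * x / (eps + x @ x)          # NLMS weight update
        out[n] = e
    return out
```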

  14. Improved algorithm for processing grating-based phase contrast interferometry image sets

    Science.gov (United States)

    Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.

    2014-01-01

    Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters needed to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated by processing a 16-step grating data set with partial use of the original data, chosen without any restriction. We have also calculated the reduced chi-square for the fit, noted the effect of grating support structural elements upon the differential phase contrast image, and explored expanded basis set representations to mitigate that impact.
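    A vectorized textbook retrieval (simpler than the paper's fit, which also handles partial data and reports reduced chi-square) extracts all three images from the first two Fourier coefficients along the phase-step axis:

```python
import numpy as np

def retrieve(stack_sample, stack_flat):
    """Phase-stepping retrieval. Both stacks have shape (steps, H, W);
    the zeroth Fourier coefficient gives the mean intensity and the
    first gives the modulation amplitude and phase, from which
    transmission, differential phase and dark-field images follow."""
    def coeffs(stack):
        f = np.fft.fft(stack, axis=0)
        a0 = np.abs(f[0]) / stack.shape[0]          # mean intensity
        a1 = 2.0 * np.abs(f[1]) / stack.shape[0]    # modulation amplitude
        phi = np.angle(f[1])                        # modulation phase
        return a0, a1, phi
    s0, s1, sphi = coeffs(stack_sample)
    f0, f1, fphi = coeffs(stack_flat)
    transmission = s0 / f0
    dpc = np.angle(np.exp(1j * (sphi - fphi)))      # wrap to (-pi, pi]
    darkfield = (s1 / s0) / (f1 / f0)               # visibility reduction
    return transmission, dpc, darkfield
```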

  15. Integration of Teaching Processes and Learning Assessment in the Prefrontal Cortex during a Video Game Teaching-learning Task.

    Science.gov (United States)

    Takeuchi, Naoyuki; Mori, Takayuki; Suzukamo, Yoshimi; Izumi, Shin-Ichi

    2016-01-01

    Human teaching is a social interaction that supports reciprocal and dynamical feedback between the teacher and the student. The prefrontal cortex (PFC) is a region of particular interest due to its demonstrated role in social interaction. In the present study, we evaluated the PFC activity simultaneously in two individuals playing the role of a teacher and student in a video game teaching-learning task. For that, we used two wearable near-infrared spectroscopy (NIRS) devices in order to elucidate the neural mechanisms underlying cognitive interactions between teachers and students. Fifteen teacher-student pairs in total (N = 30) participated in this study. Each teacher was instructed to teach the video game to their student partner, without speaking. The PFC activity was simultaneously evaluated in both participants using a wearable 16-channel NIRS system during the video game teaching-learning task. Two sessions, each including a triplet of a 30-s teaching-learning task, were performed in order to evaluate changes in PFC activity after advancement of teaching-learning state. Changes in the teachers' left PFC activity between the first and second session positively correlated with those observed in students (r = 0.694, p = 0.004). Moreover, among teachers, multiple regression analysis revealed a correlation between the left PFC activity and the assessment gap between one's own teaching and the student's understanding (β = 0.649, p = 0.009). Activity in the left PFC changed synchronously in both teachers and students after advancement of the teaching-learning state. The left PFC of teachers may be involved in integrating information regarding one's own teaching process and the student's learning state. The present observations indicate that simultaneous recording and analysis of brain activity data during teacher-student interactions may be useful in the field of educational neuroscience.

  16. Hyperspectral processing in graphical processing units

    Science.gov (United States)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
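    For concreteness, the RX detector mentioned above reduces to a per-pixel Mahalanobis distance; the CPU reference below is exactly the computation that GPU ports parallelize across pixels.

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector for a hyperspectral cube of shape
    (H, W, bands): each pixel's score is its Mahalanobis distance
    from the scene mean spectrum."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    cov_inv = np.linalg.pinv(cov)         # pseudo-inverse for stability
    d = x - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
    return scores.reshape(h, w)
```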

  17. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  18. Algorithms for Processing and Analysis of Ocean Color Satellite Data for Coastal Case 2 Waters. Chapter 16

    Science.gov (United States)

    Stumpf, Richard P.; Arnone, Robert A.; Gould, Richard W., Jr.; Ransibrahmanakul, Varis; Tester, Patricia A.

    2003-01-01

    SeaWiFS has the ability to enhance our understanding of many oceanographic processes. However, its utility in the coastal zone has been limited by the lack of valid bio-optical algorithms and by the difficulty of determining accurate water reflectances, particularly in the blue bands (412-490 nm), which have a significant impact on the effectiveness of all bio-optical algorithms. We have made advances in three areas: algorithm development (Table 16.1), field data collection, and data applications.

  19. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video

    Directory of Open Access Journals (Sweden)

    Gil-beom Lee

    2017-03-01

    Full Text Available Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.

  20. Stochastic process variation in deep-submicron CMOS circuits and algorithms

    CERN Document Server

    Zjajo, Amir

    2014-01-01

    One of the most notable features of nanometer scale CMOS technology is the increasing magnitude of variability of the key device parameters affecting performance of integrated circuits. The growth of variability can be attributed to multiple factors, including the difficulty of manufacturing control, the emergence of new systematic variation-generating mechanisms, and most importantly, the increase in atomic-scale randomness, where device operation must be described as a stochastic process. In addition to wide-sense stationary stochastic device variability and temperature variation, existence of non-stationary stochastic electrical noise associated with fundamental processes in integrated-circuit devices represents an elementary limit on the performance of electronic circuits. In an attempt to address these issues, Stochastic Process Variation in Deep-Submicron CMOS: Circuits and Algorithms offers unique combination of mathematical treatment of random process variation, electrical noise and temperature and ne...

  1. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  2. Algorithm design

    CERN Document Server

    Kleinberg, Jon

    2006-01-01

    Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.

  3. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  4. Single-channel mixed signal blind source separation algorithm based on multiple ICA processing

    Science.gov (United States)

    Cheng, Xiefeng; Li, Ji

    2017-01-01

    Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. First, through empirical mode decomposition (EMD), the single-channel mixed signal is decomposed into multiple orthogonal signal components, which are processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the multiple independent sub-components with the single-channel mixed signal, the single-channel signal is expanded to multiple channels, which turns the under-determined blind source separation problem into a well-posed one. Further, an estimate of the source signal is obtained by ICA processing. Finally, if the separation effect is not ideal, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. Simulation results show that the algorithm has a good separation effect for single-channel mixed physiological signals.
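    A one-pass sketch of the described scheme, assuming the PyEMD package (pip name EMD-signal) for the decomposition and scikit-learn's FastICA for the separation; the paper's iterative re-separation step is omitted.

```python
import numpy as np
from PyEMD import EMD                       # assumed decomposition backend
from sklearn.decomposition import FastICA

def single_channel_bss(x, n_sources=2):
    """EMD expands the single channel into multiple components,
    turning the underdetermined problem into a well-posed one; FastICA
    then estimates the sources from the expanded multichannel signal."""
    imfs = EMD()(x)                         # (n_imfs, n_samples) components
    channels = np.vstack([x, imfs])         # expand to multichannel
    ica = FastICA(n_components=n_sources, random_state=0)
    sources = ica.fit_transform(channels.T).T   # (n_sources, n_samples)
    return sources
```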

  5. Differential search algorithm-based parametric optimization of electrochemical micromachining processes

    Directory of Open Access Journals (Sweden)

    Debkalpa Goswami

    2014-01-01

    Full Text Available Electrochemical micromachining (EMM) appears to be a very promising micromachining process, offering higher machining rate, better precision and control, reliability, flexibility, environmental acceptability, and the capability of machining a wide range of materials. It permits machining of chemically resistant materials, like titanium, copper alloys, superalloys and stainless steel, for use in biomedical, electronic, micro-electromechanical-system and nano-electromechanical-system applications. Therefore, the optimal use of an EMM process for achieving enhanced machining rate and improved profile accuracy demands careful selection of its machining parameters. Various optimization tools, primarily Derringer's desirability function approach, have been employed by past researchers for deriving the best parametric settings of EMM processes, which inherently lead to sub-optimal or near-optimal solutions. In this paper, an attempt is made to apply a relatively new optimization tool, the differential search algorithm (DSA), to parametric optimization of three EMM processes. A comparative study of optimization performance between DSA, genetic algorithm and the desirability function approach proves the wide acceptability of DSA as a global optimization tool.

  6. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining.

    Science.gov (United States)

    Salehi, Mojtaba; Bahreininejad, Ardeshir

    2011-08-01

    Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.

  7. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique is proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing a quality thin film in the fabrication process.

  8. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  9. A Wavelet-based Algorithm for Vehicle Flow Information Extraction

    OpenAIRE

    Ling-ling Li; Li-duan Liang; Lei Shi; Zhi Qiao

    2013-01-01

    This paper proposed an improved algorithm applied in a video intelligent traffic control system for vehicle detection. The accuracy of the original algorithm, which is based on comparing the contrast and luminance distortion of the present image with the background, degrades greatly in bad weather because of false detections caused by noise in the captured images. In this paper we chose a Daubechies wavelet as the mother wavelet to add a 2-dimensional wavelet process before the algorithm, just after the image i...
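
    The 2-D wavelet pre-processing step can be sketched with PyWavelets; the soft-threshold denoising below is a hedged stand-in for the paper's exact filtering, and the test frame is synthetic:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def daubechies_denoise(img, wavelet="db4", thresh_scale=3.0):
        """Sketch of a 2-D Daubechies pre-filter: soft-threshold the detail
        sub-bands to suppress weather/sensor noise before the contrast and
        luminance comparison with the background image."""
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
        sigma = np.median(np.abs(cD)) / 0.6745           # robust noise estimate
        t = thresh_scale * sigma
        cH, cV, cD = (pywt.threshold(c, t, mode="soft") for c in (cH, cV, cD))
        return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

    # hypothetical frame: a noisy intensity gradient
    rng = np.random.default_rng(1)
    frame = np.tile(np.linspace(0, 255, 128), (128, 1)) + rng.normal(0, 20, (128, 128))
    print(daubechies_denoise(frame).shape)
    ```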

  10. A digital filterbank hearing aid: three digital signal processing algorithms--user preference and performance.

    Science.gov (United States)

    Lunner, T; Hellgren, J; Arlinger, S; Elberling, C

    1997-10-01

    Three digital signal processing algorithms named RangeEar, DynEar, and LinEar were compared with regard to user preference and performance when a wearable digital filterbank hearing aid was used. All three algorithms provided individual frequency shaping via a seven-band filterbank. Compression was used in a low-frequency (LF) and a high-frequency (HF) channel. RangeEar and DynEar used wide dynamic range syllabic compression in the LF channel, whereas LinEar used compression limiting. In the HF channel, RangeEar used a slow acting automatic volume control, whereas DynEar and LinEar used compression limiting. The subjects had access to a manual volume control when using the LinEar or DynEar options. The study included 13 hearing aid users with symmetrical sensorineural losses. In a 1 mo long blind field test, the RangeEar algorithm was compared with the preferred algorithm from an earlier study, DynEar or LinEar. A data logger function was included for objective recording of the total time each algorithm was used and how the volume controls were used. The preference was based on the time used for each algorithm and from subjective statements. Threshold signal-to-noise ratio (S/N-threshold) for speech was tested, and sound quality ratings were obtained through a questionnaire. Of the 13 subjects, six preferred the RangeEar fitting and another four preferred the DynEar fitting. Two subjects preferred the LinEar fitting and one had equal preference for RangeEar and LinEar. The results from the questionnaire showed that the preferred fittings were rated higher concerning overall impression of sound quality and clearness, whereas the S/N for the speech test did not show any differences. Preferences, where stated, could be predicted from auditory dynamic range measurements in the LF and HF frequency ranges. The mean dynamic range was broader for low and narrower for high frequencies for those who preferred the RangeEar or DynEar fitting as compared with those who

  11. An Automated Processing Algorithm for Flat Areas Resulting from DEM Filling and Interpolation

    Directory of Open Access Journals (Sweden)

    Xingwei Liu

    2017-11-01

    Full Text Available Correction of digital elevation models (DEMs) for flat areas is a critical process for hydrological analyses and modeling, such as the determination of flow directions and accumulations, and the delineation of drainage networks and sub-basins. In this study, a new algorithm is proposed for flat correction/removal. It uses the puddle delineation (PD) program to identify depressions (including their centers and overflow/spilling thresholds), compute topographic characteristics, and further fill the depressions. Three different levels of elevation increments are used for flat correction. The first and second levels of increments create flows toward the thresholds and centers of the filled depressions or flats, while the third level of small random increments is introduced to cope with multiple threshold conditions. A set of artificial surfaces and two real-world landscapes were selected to test the new algorithm. The results showed that the proposed method was not limited by the shapes, the number of thresholds, or the surrounding topographic conditions of flat areas. Compared with the traditional methods, the new algorithm simplified the flat correction procedure and reduced the final elevation increments by 5.71–33.33%. This can be used to effectively remove/correct topographic flats and create flat-free DEMs.

  12. Accelerating the Gillespie Exact Stochastic Simulation Algorithm using hybrid parallel execution on graphics processing units.

    Directory of Open Access Journals (Sweden)

    Ivan Komarov

    Full Text Available The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentration of the reactant is too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of GSSA are prohibitively expensive to compute and perform parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data-structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
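
    For reference, a plain serial Gillespie SSA, the inner loop that the paper maps onto GPU warps, fits in a few lines; the dimerization model below is a hypothetical example:

    ```python
    import numpy as np

    def gillespie_ssa(x0, stoich, rates, t_end, seed=0):
        """Serial reference implementation of the exact Gillespie SSA."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        t, traj = 0.0, [(0.0, x.copy())]
        while t < t_end:
            a = np.array([r(x) for r in rates])   # propensity of each reaction
            a0 = a.sum()
            if a0 <= 0:                           # no reaction can fire anymore
                break
            t += rng.exponential(1.0 / a0)        # waiting time to next event
            j = rng.choice(len(rates), p=a / a0)  # which reaction fires
            x += stoich[j]
            traj.append((t, x.copy()))
        return traj

    # hypothetical dimerization system: A + A -> B (k1), B -> A + A (k2)
    stoich = np.array([[-2, 1], [2, -1]])
    rates = [lambda x: 1e-3 * x[0] * (x[0] - 1), lambda x: 0.1 * x[1]]
    traj = gillespie_ssa([1000, 0], stoich, rates, t_end=10.0)
    print(len(traj), traj[-1])
    ```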

  13. Tag Anti-collision Algorithm for RFID Systems with Minimum Overhead Information in the Identification Process

    Directory of Open Access Journals (Sweden)

    Usama S. Mohammed

    2011-04-01

    Full Text Available This paper describes a new tree-based anti-collision algorithm for Radio Frequency Identification (RFID) systems. The proposed technique is based on the fast parallel binary splitting (FPBS) technique. It follows a new identification path through the binary tree. The main advantage of the proposed protocol is the simple dialog between the reader and the tags. It needs only a one-bit tag response followed by a one-bit reader reply (a one-to-one bit dialog). The one-bit reader response represents the collision report (0: collision; 1: no collision) of the tags' one-bit message. The tag achieves self transmission control by dynamically updating its relative replying order according to the received collision report. The proposed algorithm minimizes the overhead of transmitted bits per tag identification. In the collision state, tags modify their next replying order at the next bit level. Computer simulations have shown that the collision recovery scheme is very fast and simple even with a successive reading process. Moreover, the proposed algorithm outperforms most recent techniques in most cases.

  14. Parameter Determination of Milling Process Using a Novel Teaching-Learning-Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Zhibo Zhai

    2015-01-01

    Full Text Available Cutting parameter optimization dramatically affects the production time, cost, profit rate, and the quality of the final products in milling operations. Aiming to select the optimum machining parameters in multitool milling operations such as corner milling, face milling, pocket milling, and slot milling, this paper presents a novel version of TLBO, TLBO with a dynamic assignment learning strategy (DATLBO), in which all the learners are divided into three categories based on their results in the “Learner Phase”: good learners, moderate learners, and poor ones. Good learners are self-motivated and try to learn by themselves; each moderate learner uses a probabilistic approach to select one of the good learners to learn from; each poor learner also uses a probabilistic approach to select several moderate learners to learn from. The CEC2005 contest benchmark problems are first used to illustrate the effectiveness of the proposed algorithm. Finally, the DATLBO algorithm is applied to a multitool milling process based on a maximum profit rate criterion with five practical technological constraints. The unit time, unit cost, and profit rate from the Handbook (HB), the Feasible Direction (FD) method, the Genetic Algorithm (GA) method, five other TLBO variants, and DATLBO are compared, illustrating that the proposed approach is more effective than HB, FD, GA, and the five other TLBO variants.
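
    A sketch of the baseline TLBO loop that DATLBO builds on (the good/moderate/poor grouping in the learner phase is noted in the abstract but not reproduced here); the milling objective is replaced by a toy function:

    ```python
    import numpy as np

    def tlbo(obj, lo, hi, n=30, iters=200, seed=0):
        """Basic teaching-learning-based optimization sketch (minimization)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n, lo.size))
        f = np.apply_along_axis(obj, 1, x)
        for _ in range(iters):
            # teacher phase: move everyone toward the best, away from the mean
            teacher = x[f.argmin()]
            tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
            xn = np.clip(x + rng.random(x.shape) * (teacher - tf * x.mean(0)), lo, hi)
            fn = np.apply_along_axis(obj, 1, xn)
            imp = fn < f
            x[imp], f[imp] = xn[imp], fn[imp]
            # learner phase: each learner interacts with a random partner
            for i in range(n):
                j = rng.integers(n)
                if j == i:
                    continue
                step = (x[i] - x[j]) if f[i] < f[j] else (x[j] - x[i])
                xi = np.clip(x[i] + rng.random(lo.size) * step, lo, hi)
                fi = obj(xi)
                if fi < f[i]:
                    x[i], f[i] = xi, fi
        return x[f.argmin()], f.min()

    # toy stand-in for a milling profit-rate objective (hypothetical)
    best, val = tlbo(lambda p: float(np.sum((p - 1.5) ** 2)), np.zeros(4), np.full(4, 5.0))
    print(best, val)
    ```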

  15. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
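
    The serial kernel that the paper accelerates can be sketched as below; the local-density rule for choosing the power parameter is an assumption standing in for the paper's exact formula:

    ```python
    import numpy as np

    def aidw(xy_known, z_known, xy_query, k=10):
        """Serial adaptive IDW sketch: the power alpha is chosen per query
        point from the spacing of its k nearest samples (denser neighbourhood
        -> smaller alpha, i.e. a smoother surface). The paper's contribution
        is mapping exactly this per-point loop onto GPU threads."""
        out = np.empty(len(xy_query))
        for q, p in enumerate(xy_query):
            d = np.linalg.norm(xy_known - p, axis=1)
            idx = np.argsort(d)[:k]
            # crude density measure: observed vs expected spacing (assumption)
            r_obs = d[idx].mean()
            r_exp = 0.5 / np.sqrt(len(xy_known) / 4.0)   # on the unit square
            mu = min(max(r_obs / r_exp, 0.0), 2.0)
            alpha = 1.0 + mu                             # alpha in [1, 3]
            w = 1.0 / (d[idx] ** alpha + 1e-12)
            out[q] = np.dot(w, z_known[idx]) / w.sum()
        return out

    rng = np.random.default_rng(2)
    pts = rng.random((500, 2))
    vals = np.sin(3 * pts[:, 0]) + np.cos(2 * pts[:, 1])
    print(aidw(pts, vals, rng.random((5, 2))))
    ```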

  16. A 3-D nonlinear recursive digital filter for video image processing

    Science.gov (United States)

    Bauer, P. H.; Qian, W.

    1991-01-01

    This paper introduces a recursive 3-D nonlinear digital filter, which is capable of performing noise suppression without degrading important image information such as edges in space or time. It also has the property of unnoticeable bandwidth reduction immediately after a scene change, which makes the filter an attractive preprocessor to many interframe compression algorithms. The filter consists of a nonlinear 2-D spatial subfilter and a 1-D temporal filter. In order to achieve the required computational speed and increase the flexibility of the filter, all of the linear shift-variant filter modules are of the IIR type.

  17. Automatic construction of image inspection algorithm by using image processing network programming

    Science.gov (United States)

    Yoshimura, Yuichiro; Aoki, Kimiya

    2017-03-01

    In this paper, we discuss a method for the automatic programming of inspection image processing. In the industrial field, automatic program generators or expert systems are expected to shorten the period required for developing a new appearance inspection system. So-called "image processing expert systems" have been studied for nearly 30 years. We are convinced of the need to adopt a new idea. Recently, a novel type of evolutionary algorithm, called genetic network programming (GNP), has been proposed. In this study, we use GNP as a method to create inspection image processing logic. GNP develops many directed graph structures and shows an excellent ability to formulate complex problems. We have converted this network program model to Image Processing Network Programming (IPNP). IPNP selects an appropriate image processing command based on characteristics of the input image data and the processing log, and generates visual inspection software with a series of image processing commands. It is verified by experiments that the proposed method is able to create inspection image processing programs. In the basic experiment with 200 test images, the success rate of detection of the target region was 93.5%.

  18. Thermomechanical processing optimization for 304 austenitic stainless steel using artificial neural network and genetic algorithm

    Science.gov (United States)

    Feng, Wen; Yang, Sen

    2016-12-01

    Thermomechanical processing has an important effect on the grain boundary character distribution, and obtaining the optimal thermomechanical processing parameters is the key to grain boundary engineering. In this study, a genetic algorithm (GA) based on an artificial neural network model was proposed to optimize the thermomechanical processing parameters. In this model, a back-propagation neural network (BPNN) was established to map the relationship between the thermomechanical processing parameters and the fraction of low-Σ CSL boundaries, and a GA integrated with the BPNN (BPNN/GA) was applied to optimize the thermomechanical processing parameters. The optimal thermomechanical processing parameters were validated by an experiment. Moreover, the microstructures and the intergranular corrosion resistance of the base material (BM) and of the material produced with the optimal thermomechanical processing parameters (termed the GBEM) were studied. Compared to the BM specimen, in the GBEM specimen the fraction of low-Σ CSL boundaries was increased from 56.8 to 77.9%, the random boundary network was interrupted by the low-Σ CSL boundaries, and the intergranular corrosion resistance was improved. The results indicated that the BPNN/GA model is an effective and reliable means of optimizing the thermomechanical processing parameters, resulting in improved intergranular corrosion resistance in 304 austenitic stainless steel.

  19. Stabilization and PID tuning algorithms for second-order unstable processes with time-delays.

    Science.gov (United States)

    Seer, Qiu Han; Nandong, Jobrun

    2017-03-01

    Open-loop unstable systems with time-delays are often encountered in process industry, which are often more difficult to control than stable processes. In this paper, the stabilization by PID controller of second-order unstable processes, which can be represented as second-order deadtime with an unstable pole (SODUP) and second-order deadtime with two unstable poles (SODTUP), is performed via the necessary and sufficient criteria of Routh-Hurwitz stability analysis. The stability analysis provides improved understanding on the existence of a stabilizing range of each PID parameter. Three simple PID tuning algorithms are proposed to provide desired closed-loop performance-robustness within the stable regions of controller parameters obtained via the stability analysis. The proposed PID controllers show improved performance over those derived via some existing methods. Copyright © 2017. Published by Elsevier Ltd.

  20. The Joint Polar Satellite System (JPSS) Program's Algorithm Change Process (ACP): Past, Present and Future

    Science.gov (United States)

    Griffin, Ashley

    2017-01-01

    The Joint Polar Satellite System (JPSS) Program Office is the supporting organization for the Suomi National Polar-orbiting Partnership (S-NPP) and JPSS-1 satellites. S-NPP carries the following sensors: VIIRS, CrIS, ATMS, OMPS, and CERES, instruments that ultimately produce over 25 data products covering the Earth's weather, oceans, and atmosphere. A team of scientists and engineers from all over the United States documents, monitors and fixes errors in operational software code or documentation through the algorithm change process (ACP), ensuring the success of the S-NPP and JPSS-1 missions by maintaining the quality and accuracy of the data products the scientific community relies on. This poster will outline the program's algorithm change process (ACP), identify the various users and scientific applications of our operational data products and highlight changes that have been made to the ACP to accommodate operating system upgrades to the JPSS program's Interface Data Processing Segment (IDPS), so that the program is ready for the transition to the 2017 JPSS-1 satellite mission and beyond.

  1. Characterization of arbitrary fiber taper profiles with optical microscopy and image processing algorithms

    Science.gov (United States)

    Farias, Heric D.; Sebem, Renan; Paterno, Aleksander S.

    2014-08-01

    This work reports results from the development of software to process the parameters involved in the characterization of fiber taper profiles, using optical microscopy, a high-definition camera and a high-precision translation stage as the moveable base on which the taper is positioned. In addition, image processing algorithms were customized to process the acquired images. With edge detection algorithms applied to the stitched image, one is able to characterize the taper radius curve that represents the taper profile, provided the camera has sufficient resolution. As a consequence, the proposed fiber taper characterization procedure is a first step towards high-resolution characterization of fiber taper diameters with arbitrary profiles, especially in this case, in which tapers are fabricated with the stepwise technique that allows the production of non-biconical profiles. The parameters of the stitched images depend on the microscope objective used and the length of the characterized tapers. A non-biconical arbitrary taper is measured as an example to illustrate the developed software and procedure.

  2. Processing of rock core microtomography images: Using seven different machine learning algorithms

    Science.gov (United States)

    Chauhan, Swarup; Rühaak, Wolfram; Khan, Faisal; Enzmann, Frieder; Mielke, Philipp; Kersten, Michael; Sass, Ingo

    2016-01-01

    The abilities of machine learning algorithms to process X-ray microtomographic rock images were determined. The study focused on the use of unsupervised, supervised, and ensemble clustering techniques to segment X-ray computed microtomography rock images and to estimate the pore spaces and pore size diameters in the rocks. The unsupervised k-means technique gave the fastest processing time and the supervised least squares support vector machine technique gave the slowest processing time. Multiphase assemblages of solid phases (minerals and finely grained minerals) and the pore phase were found on visual inspection of the images. In general, the accuracy in terms of porosity values and pore size distribution was found to be strongly affected by the feature vectors selected. The average relative porosity of 15.92±1.77% retrieved from all seven machine learning algorithms is in very good agreement with the experimental result of 17±2% obtained using a gas pycnometer. Of the supervised techniques, the least squares support vector machine technique is superior to the feed-forward artificial neural network because of its ability to identify a generalized pattern. Among the ensemble classification techniques, the boosting technique converged faster than the bagging technique. The k-means technique outperformed the fuzzy c-means and self-organized maps techniques in terms of accuracy and speed.

  3. OPTIMIZATION OF A PULTRUSION PROCESS USING FINITE DIFFERENCE AND PARTICLE SWARM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    L. S. Santos

    2015-06-01

    Full Text Available Pultrusion is one of several manufacturing processes for reinforced polymer composites. In this process fibers are continuously pulled through a resin bath and, after impregnation, the fiber-resin assembly is cured in a heated forming die. In order to obtain a polymeric composite with good properties (a high and uniform degree of cure) and a process with a minimum of wasted energy, an optimization procedure is necessary to calculate the optimal temperature profile. The present work suggests a new strategy to minimize the energy rate while taking into account the final quality of the product. For this purpose the particle swarm optimization (PSO) algorithm and the computer code DASSL were used to solve the differential-algebraic equation system that represents the mathematical model. The results of the optimization procedure were compared with results reported in the literature and showed that this strategy may be a good alternative for finding the best operational point and for testing other heating policies in order to improve the material quality and minimize the energy cost. In addition, the robustness and fast convergence of the algorithm encourage industrial implementation for the inference of the degree of cure and optimization.
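
    A generic PSO loop of the kind described can be sketched as follows; the pultrusion DAE model solved by DASSL is replaced with a toy temperature-profile objective, so all names are placeholders:

    ```python
    import numpy as np

    def pso(obj, lo, hi, n=30, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
        """Generic particle swarm optimization sketch (minimization)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n, lo.size))
        v = np.zeros_like(x)
        pbest, pf = x.copy(), np.apply_along_axis(obj, 1, x)
        g = pbest[pf.argmin()].copy()                 # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(obj, 1, x)
            upd = f < pf                              # update personal bests
            pbest[upd], pf[upd] = x[upd], f[upd]
            g = pbest[pf.argmin()].copy()
        return g, pf.min()

    # hypothetical 3-zone die temperature profile, target ~180 C in each zone
    best, val = pso(lambda T: float(np.sum((T - 180.0) ** 2)),
                    np.full(3, 100.0), np.full(3, 250.0))
    print(best, val)
    ```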

  4. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are simultaneously scattered using the binomial (Bernoulli) distribution. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
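
    The core idea, scattering all particles of a node at once by drawing binomial counts instead of walking particles one by one, can be sketched in one dimension (parameters are illustrative):

    ```python
    import numpy as np

    def grw_diffusion_1d(n_particles=10**6, n_cells=101, steps=500, p_move=0.5, seed=0):
        """Sketch of the generalized random walk: per step, draw how many
        particles leave each cell (binomial), then split the movers left/right,
        rather than updating particles individually."""
        rng = np.random.default_rng(seed)
        n = np.zeros(n_cells, dtype=np.int64)
        n[n_cells // 2] = n_particles              # point release in the middle
        for _ in range(steps):
            movers = rng.binomial(n, p_move)       # how many leave each cell
            left = rng.binomial(movers, 0.5)       # of those, how many go left
            right = movers - left
            n = n - movers
            n[:-1] += left[1:]                     # shift to left neighbours
            n[1:] += right[:-1]                    # shift to right neighbours
            # (movers crossing the domain edge are absorbed in this sketch)
        return n / n_particles                     # empirical concentration

    conc = grw_diffusion_1d()
    print(conc.argmax(), conc.max())               # peak stays near the centre
    ```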

  5. An oscillograms processing algorithm of a high power transformer on the basis of experimental data

    Science.gov (United States)

    Vasileva, O. V.; Budko, A. A.; Lavrinovich, A. V.

    2016-04-01

    The paper presents studies on the digital processing of oscillograms of power transformer operation, allowing the state of the windings to be determined for different types and degrees of damage. The study was carried out according to the authors' own methods, using Fourier analysis and a program developed on the basis of the following application software packages: MathCAD and LabVIEW. The efficiency of the algorithm is demonstrated using waveforms of non-defective and defective transformers obtained with the nanosecond pulse method.
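
    The Fourier-analysis step can be sketched with NumPy: compare single-sided amplitude spectra of a healthy and a defective winding response. Both waveforms below are synthetic stand-ins, not measured oscillograms:

    ```python
    import numpy as np

    def winding_spectrum(signal, fs):
        """Single-sided amplitude spectrum of a recorded response via the FFT."""
        n = len(signal)
        spec = np.abs(np.fft.rfft(signal * np.hanning(n))) * 2.0 / n
        return np.fft.rfftfreq(n, d=1.0 / fs), spec

    # hypothetical waveforms: a winding fault adds an extra resonance peak
    fs = 1e6
    t = np.arange(4096) / fs
    healthy = np.exp(-t * 2e3) * np.sin(2 * np.pi * 50e3 * t)
    faulty = healthy + 0.3 * np.exp(-t * 2e3) * np.sin(2 * np.pi * 120e3 * t)
    f, s_h = winding_spectrum(healthy, fs)
    _, s_f = winding_spectrum(faulty, fs)
    print(f"largest spectral deviation near {f[np.argmax(np.abs(s_f - s_h))] / 1e3:.1f} kHz")
    ```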

  6. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters of the Fused Deposition Modelling (FDM) technology in the case of an open printing solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in practice depend to a certain degree on the features of the selected printing equipment. This is the first step towards automation of the 3D printing process in FDM technology.

  7. Optimization of the Thermosetting Pultrusion Process by Using Hybrid and Mixed Integer Genetic Algorithms

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In this paper thermo-chemical simulation of the pultrusion process of a composite rod is first used as a validation case to ensure that the utilized numerical scheme is stable and converges to results given in the literature. Following this validation case, a cylindrical die block with heaters is added to the pultrusion domain of a composite part and thermal contact resistance (TCR) regions at the die-part interface are defined. Two optimization case studies are performed on this new configuration. In the first one, optimal die radius and TCR values are found by using a hybrid genetic algorithm based...

  8. Optimization and Simulation of Plastic Injection Process using Genetic Algorithm and Moldflow

    Science.gov (United States)

    Martowibowo, Sigit Yoewono; Kaswadi, Agung

    2017-03-01

    The use of plastic-based products is continuously increasing. The increasing demand for thinner products, lower production costs, yet higher product quality has triggered an increase in the number of research projects on plastic molding processes. An important branch of such research is focused on mold cooling systems. Conventional cooling systems are most widely used because they are easy to make using conventional machining processes. However, their non-uniform cooling is considered one of their weaknesses. Apart from the conventional systems, there are also conformal cooling systems that are designed for faster and more uniform plastic mold cooling. In this study, a conformal cooling system is applied to the production of a bowl-shaped product made of PP AZ564. Optimization is conducted on the machine setup parameters, namely the melting temperature, injection pressure, holding pressure and holding time. The genetic algorithm method and Moldflow were used to optimize the injection process parameters at a minimum cycle time. It is found that an optimum injection molding process could be obtained by setting the parameters to the following values: T_M = 180 °C, P_inj = 20 MPa, P_hold = 16 MPa and t_hold = 8 s, with a cycle time of 14.11 s. Experiments using the conformal cooling system yielded an average cycle time of 14.19 s. The studied conformal cooling system yielded a volumetric shrinkage of 5.61% and a wall shear stress of 0.17 MPa. The difference between the cycle times obtained through simulations and experiments using the conformal cooling system was insignificant (below 1%). Thus, combining process parameter optimization and simulation using the genetic algorithm method with Moldflow can be considered valid.

  9. Forensic analysis of video file formats

    National Research Council Canada - National Science Library

    Gloe, Thomas; Fischer, André; Kirchner, Matthias

    2014-01-01

    .... In combination, such characteristics can help to authenticate digital video files in forensic settings by distinguishing between original and post-processed videos, verifying the purported source...

  10. Anthropocentric Video Segmentation for Lecture Webcasts

    Directory of Open Access Journals (Sweden)

    Raul Rojas

    2008-03-01

    Full Text Available Many lecture recording and presentation systems transmit slides or chalkboard content along with a small video of the instructor. As a result, two areas of the screen are competing for the viewer's attention, causing the widely known split-attention effect. Face and body gestures, such as pointing, do not appear in the context of the slides or the board. To eliminate this problem, this article proposes to extract the lecturer from the video stream and paste his or her image onto the board or slide image. As a result, the lecturer acting in front of the board or slides becomes the center of attention. The entire lecture presentation becomes more human-centered. This article presents both an analysis of the underlying psychological problems and an explanation of signal processing techniques that are applied in a concrete system. The presented algorithm is able to extract and overlay the lecturer online and in real time at full video resolution.

  11. An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN

    Directory of Open Access Journals (Sweden)

    Mario G.C.A. Cimino

    2014-05-01

    Full Text Available Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS) methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach to BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN) standard.

  12. Video indexing using a high-performance and low-computation color-based opportunistic technique

    Science.gov (United States)

    Ahmed, Mohamed; Karmouch, Ahmed

    2002-02-01

    Video information, image processing, and computer vision techniques are developing rapidly because of the availability of acquisition, processing, and editing tools that use current hardware and software systems. However, problems still remain in conveying this video data to end users. Limiting factors are the resource capabilities in distributed architectures and the features of the users' terminals. The efficient use of image processing, video indexing, and analysis techniques can provide users with solutions or alternatives. We view the video stream as a sequence of correlated images whose structure contains temporal events, such as camera editing effects, and we present a new algorithm for achieving video segmentation, indexing, and key framing tasks. The algorithm is based on color histograms and uses a binary penetration technique. Although much has been done in this area, most work does not adequately consider the optimization of timing performance and processing storage. This is especially the case if the techniques are designed for use in run-time distributed environments. Our main contribution is to blend high performance and storage criteria with the need to achieve effective results. The algorithm exploits the temporal heuristic characteristics of the visual information within a video stream. It takes into consideration the issues of detecting false cuts and missing true cuts due to the movement of the camera, the optical flow of large objects, or both. We provide a discussion, together with results from experiments and from the implementation of our application, to show the merits of the new algorithm as compared to existing ones.
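
    The histogram-comparison core of such shot-cut detection can be sketched as follows; the binary penetration refinement that re-examines candidate cuts is omitted, and the two synthetic "shots" are crude stand-ins for real footage:

    ```python
    import numpy as np

    def detect_cuts(frames, bins=16, thresh=0.35):
        """Flag a candidate cut when the total-variation distance between
        consecutive normalised colour histograms exceeds a threshold."""
        cuts, prev = [], None
        for i, frame in enumerate(frames):         # frame: HxWx3 uint8 array
            h, _ = np.histogramdd(frame.reshape(-1, 3),
                                  bins=(bins,) * 3, range=((0, 256),) * 3)
            h = h.ravel() / h.sum()
            if prev is not None and 0.5 * np.abs(h - prev).sum() > thresh:
                cuts.append(i)
            prev = h
        return cuts

    rng = np.random.default_rng(3)
    a = rng.integers(0, 80, (8, 64, 64, 3), dtype=np.uint8)     # dark "shot"
    b = rng.integers(150, 255, (8, 64, 64, 3), dtype=np.uint8)  # bright "shot"
    print(detect_cuts(list(a) + list(b)))                       # expect a cut at index 8
    ```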

  13. Measurement and processing of signatures in the visible range using a calibrated video camera and the CAMDET software package

    Science.gov (United States)

    Sheffer, Dan

    1997-06-01

    A procedure for the calibration of a color video camera has been developed at EORD. The RGB values of standard samples, together with the spectral radiance values of the samples, are used to calculate a transformation matrix between the RGB and CIEXYZ color spaces. The transformation matrix is then used to calculate the XYZ color coordinates of distant objects imaged in the field. These, in turn, are used to calculate the CIELAB color coordinates of the objects. Good agreement between the calculated coordinates and those obtained from spectroradiometric data is achieved. Processing the RGB values of pixels in the digital image of a scene using the CAMDET software package, which was developed at EORD, results in `Painting Maps' in which the true apparent CIELAB color coordinates are used. The paper discusses the calibration procedure, its advantages and shortcomings, and suggests a definition for the visible signature of objects. The CAMDET software package is described and some examples are given.
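
    The matrix-fitting step can be sketched with a least-squares solve; the sample RGB/XYZ pairs below are invented placeholders, not EORD calibration data:

    ```python
    import numpy as np

    # Solve RGB -> XYZ as a 3x3 least-squares matrix from standard samples.
    rgb = np.array([[200, 30, 40], [40, 180, 50], [50, 60, 190],
                    [220, 210, 60], [120, 120, 120], [230, 230, 230]], float)
    xyz = np.array([[25.1, 14.0, 3.2], [14.9, 26.7, 7.1], [7.8, 5.9, 33.0],
                    [52.0, 56.3, 11.4], [20.2, 21.3, 23.1], [75.3, 79.2, 86.0]])
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)    # rgb @ M ~= xyz
    print("transformation matrix:\n", M.T)

    def rgb_to_lab(rgb_px, M, white=(95.047, 100.0, 108.883)):
        """Apply the fitted matrix, then standard XYZ -> CIELAB (D65 white)."""
        x, y, z = (np.asarray(rgb_px, float) @ M) / np.asarray(white)
        f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
        fx, fy, fz = f(x), f(y), f(z)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    print(rgb_to_lab([120, 120, 120], M))
    ```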

  14. Global data for ecology and epidemiology: a novel algorithm for temporal Fourier processing MODIS data.

    Science.gov (United States)

    Scharlemann, Jörn P W; Benz, David; Hay, Simon I; Purse, Bethan V; Tatem, Andrew J; Wint, G R William; Rogers, David J

    2008-01-09

    Remotely-sensed environmental data from earth-orbiting satellites are increasingly used to model the distribution and abundance of both plant and animal species, especially those of economic or conservation importance. Time series of data from the MODerate-resolution Imaging Spectroradiometer (MODIS) sensors on-board NASA's Terra and Aqua satellites offer the potential to capture environmental thermal and vegetation seasonality, through temporal Fourier analysis, more accurately than was previously possible using the NOAA Advanced Very High Resolution Radiometer (AVHRR) sensor data. MODIS data are composited over 8- or 16-day time intervals that pose unique problems for temporal Fourier analysis. Applying standard techniques to MODIS data can introduce errors of up to 30% in the estimation of the amplitudes and phases of the Fourier harmonics. We present a novel spline-based algorithm that overcomes the processing problems of composited MODIS data. The algorithm is tested on artificial data generated using randomly selected values of both amplitudes and phases, and provides an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that capture the seasonality in MODIS data for the period from 2001 to 2005. Global temporal Fourier processed images of 1 km MODIS data for Middle Infrared Reflectance, day- and night-time Land Surface Temperature (LST), Normalised Difference Vegetation Index (NDVI), and Enhanced Vegetation Index (EVI) are presented for ecological and epidemiological applications. The finer spatial and temporal resolution, combined with the greater geolocational and spectral accuracy of the MODIS instruments, compared with previous multi-temporal data sets, mean that these data may be used with greater confidence in species' distribution modelling.
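
    A hedged sketch of the spline idea: resample the irregular composites onto a daily grid with a periodic cubic spline, then read the annual harmonics off the FFT. The NDVI-like series is synthetic and SciPy is assumed available; the authors' exact spline formulation may differ:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def fourier_harmonics(t_days, values, n_harm=3, period=365.0):
        """Periodic spline resampling of composited data, then Fourier
        amplitudes/phases of the first few annual harmonics."""
        cs = CubicSpline(t_days, values, bc_type="periodic")
        daily = cs(np.arange(period))
        spec = np.fft.rfft(daily) / len(daily)
        amp = 2 * np.abs(spec[1:n_harm + 1])
        phase = np.angle(spec[1:n_harm + 1])
        return daily.mean(), amp, phase

    # hypothetical 16-day composites over one year (annual + semi-annual cycle)
    t = np.arange(0, 366, 16.0)
    t[-1] = 365.0
    ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * t / 365) + 0.05 * np.sin(4 * np.pi * t / 365)
    ndvi[-1] = ndvi[0]               # periodic boundary condition for the spline
    print(fourier_harmonics(t, ndvi))
    ```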

  15. Global data for ecology and epidemiology: a novel algorithm for temporal Fourier processing MODIS data.

    Directory of Open Access Journals (Sweden)

    Jörn P W Scharlemann

    2008-01-01

    Full Text Available Remotely-sensed environmental data from earth-orbiting satellites are increasingly used to model the distribution and abundance of both plant and animal species, especially those of economic or conservation importance. Time series of data from the MODerate-resolution Imaging Spectroradiometer (MODIS) sensors on-board NASA's Terra and Aqua satellites offer the potential to capture environmental thermal and vegetation seasonality, through temporal Fourier analysis, more accurately than was previously possible using the NOAA Advanced Very High Resolution Radiometer (AVHRR) sensor data. MODIS data are composited over 8- or 16-day time intervals that pose unique problems for temporal Fourier analysis. Applying standard techniques to MODIS data can introduce errors of up to 30% in the estimation of the amplitudes and phases of the Fourier harmonics. We present a novel spline-based algorithm that overcomes the processing problems of composited MODIS data. The algorithm is tested on artificial data generated using randomly selected values of both amplitudes and phases, and provides an accurate estimate of the input variables under all conditions. The algorithm was then applied to produce layers that capture the seasonality in MODIS data for the period from 2001 to 2005. Global temporal Fourier processed images of 1 km MODIS data for Middle Infrared Reflectance, day- and night-time Land Surface Temperature (LST), Normalised Difference Vegetation Index (NDVI), and Enhanced Vegetation Index (EVI) are presented for ecological and epidemiological applications. The finer spatial and temporal resolution, combined with the greater geolocational and spectral accuracy of the MODIS instruments, compared with previous multi-temporal data sets, mean that these data may be used with greater confidence in species' distribution modelling.

  16. Slow motion replay detection of tennis video based on color auto-correlogram

    Science.gov (United States)

    Zhang, Xiaoli; Zhi, Min

    2012-04-01

    In this paper, an effective slow motion replay detection method for tennis videos containing logo transitions is proposed. This method is based on the theory of the color auto-correlogram and proceeds in the following steps: First, detect the candidate logo transition areas from the video frame sequence. Second, generate the logo template. Then use the color auto-correlogram for similarity matching between video frames and the logo template in the candidate logo transition areas. Finally, select logo frames according to the matching results and locate the borders of the slow motion accurately by using the brightness change during the logo transition process. Experiments show that, unlike previous approaches, this method greatly improves the border-locating accuracy rate, and it can also be used for other sports videos that have logo transitions. In addition, as the algorithm only processes the central area of the video frames, its speed has been improved greatly.

  17. GPU-based video motion magnification

    Science.gov (United States)

    DomŻał, Mariusz; Jedrasiak, Karol; Sobel, Dawid; Ryt, Artur; Nawrat, Aleksander

    2016-06-01

    Video motion magnification (VMM) allows people to see otherwise invisible subtle changes in the surrounding world. VMM is also capable of hiding them with a modified version of the algorithm. It is possible to magnify motion related to the breathing of patients in hospital in order to observe it, or to extinguish it and extract other information from the stabilized image sequence, for example blood flow. In both cases we would like to perform the calculations in real time. Unfortunately, the VMM algorithm requires a great amount of computing power. In this article we suggest that the VMM algorithm can be parallelized (each thread processes one pixel) and, in order to prove that, we implemented the algorithm on a GPU using CUDA technology. The CPU is used only to grab, write and display frames and to schedule work for the GPU. Each GPU kernel performs spatial decomposition, reconstruction and motion amplification. In this work we presented an approach that achieves a significant speedup over existing methods and allows VMM to process video in real time. This solution can be used as preprocessing for other algorithms in more complex systems, or can find application wherever real-time motion magnification would be useful. It is worth mentioning that the implementation runs on most modern desktops and laptops compatible with CUDA technology.
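
    The temporal band-pass-and-amplify core of Eulerian-style motion magnification, which the paper parallelizes one pixel per CUDA thread, can be sketched per pixel in NumPy; the spatial pyramid and per-band gains are omitted, and the clip is synthetic:

    ```python
    import numpy as np

    def magnify_motion(frames, fs, f_lo=0.8, f_hi=3.0, alpha=20.0):
        """Temporally band-pass each pixel's intensity (e.g. around a breathing
        rate), scale the band-passed signal by alpha and add it back."""
        t = np.asarray(frames, float)              # shape (n_frames, H, W)
        spec = np.fft.rfft(t, axis=0)
        freqs = np.fft.rfftfreq(t.shape[0], d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        spec[~band] = 0.0                          # ideal temporal band-pass
        filtered = np.fft.irfft(spec, n=t.shape[0], axis=0)
        return np.clip(t + alpha * filtered, 0, 255)

    # hypothetical clip: a faint 1.2 Hz brightness oscillation at 30 fps
    fs, n = 30.0, 150
    clip = np.full((n, 8, 8), 128.0)
    clip += 0.5 * np.sin(2 * np.pi * 1.2 * np.arange(n) / fs)[:, None, None]
    out = magnify_motion(clip, fs)
    print(np.ptp(out[:, 0, 0]))    # peak-to-peak grows from ~1 to ~(1 + alpha)
    ```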

  18. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  19. Fault Detection of Roller-Bearings Using Signal Processing and Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Dae-Ho Kwak

    2013-12-01

    Full Text Available This study presents fault detection of roller bearings through signal processing and optimization techniques. After the occurrence of scratch-type defects on the inner race of bearings, variations of kurtosis values are investigated in terms of two different data processing techniques: minimum entropy deconvolution (MED) and the Teager-Kaiser Energy Operator (TKEO). MED and the TKEO are employed to qualitatively enhance the discrimination of defect-induced repeating peaks in bearing vibration data with measurement noise. Considering the execution sequence of MED and the TKEO, the study found that the kurtosis sensitivity towards a defect on bearings could be greatly improved. Also, the vibration signal from both healthy and damaged bearings is decomposed into multiple intrinsic mode functions (IMFs) through empirical mode decomposition (EMD). The weight vectors of the IMFs become design variables for a genetic algorithm (GA). The weights of each IMF can be optimized through the genetic algorithm to enhance the sensitivity of kurtosis to damaged-bearing signals. Experimental results show that the EMD-GA approach successfully improved the resolution of detectability between a roller bearing with a defect and an intact system.
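
    The TKEO itself is a three-sample operator and its effect on kurtosis is easy to demonstrate; the burst train below is a hypothetical stand-in for a real inner-race defect signature:

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def teager_kaiser(x):
        """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
        It sharpens the short energy bursts that a local bearing defect excites."""
        x = np.asarray(x, float)
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    rng = np.random.default_rng(4)
    fs = 20_000
    t = np.arange(fs) / fs                      # one second of signal
    signal = 0.5 * np.sin(2 * np.pi * 30 * t) + rng.normal(0, 0.3, t.size)
    # hypothetical defect: short decaying bursts repeating at ~90 Hz
    for start in np.arange(0.0, 1.0, 1 / 90):
        i = int(start * fs)
        n = min(200, t.size - i)
        signal[i:i + n] += 1.5 * np.exp(-np.arange(n) / 30) \
                           * np.sin(2 * np.pi * 3_000 * np.arange(n) / fs)

    print("kurtosis raw :", kurtosis(signal))
    print("kurtosis TKEO:", kurtosis(teager_kaiser(signal)))   # markedly higher
    ```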

  20. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    Science.gov (United States)

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
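
    The CNR figure of merit used to score each candidate MUSICA parameter set can be computed directly from phantom regions of interest; the image below is a synthetic stand-in for the chest phantom:

    ```python
    import numpy as np

    def cnr(img, roi, bg):
        """Contrast-to-noise ratio between a detail ROI and its local
        background; roi/bg are (row_slice, col_slice) pairs into the image."""
        detail, background = img[roi], img[bg]
        return abs(detail.mean() - background.mean()) / background.std()

    # hypothetical processed phantom: a low-contrast square on a noisy field
    rng = np.random.default_rng(5)
    img = rng.normal(1000.0, 25.0, (256, 256))
    img[100:140, 100:140] += 30.0                 # embedded test detail
    roi = (slice(100, 140), slice(100, 140))
    bg = (slice(160, 200), slice(100, 140))
    print(f"CNR = {cnr(img, roi, bg):.2f}")
    ```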

  1. ACTION OF UNIFORM SEARCH ALGORITHM WHEN SELECTING LANGUAGE UNITS IN THE PROCESS OF SPEECH

    Directory of Open Access Journals (Sweden)

    Ирина Михайловна Некипелова

    2013-05-01

    Full Text Available The article investigates the action of a uniform search algorithm when a human selects language units for speech production. The process is connected with a speech optimization phenomenon, which makes it possible to shorten the time spent thinking about what one wants to say and to achieve maximum precision in expressing thoughts. The uniform search algorithm works at the conscious and subconscious levels, and it favours the formation of automatism in the production and perception of speech. The realization of a human's cognitive potential in the process of communication starts up a complicated mechanism of self-organization and self-regulation of language. In turn, this results in the optimization of the language system, serving not only a human's self-actualization but also the realization of communication in society. The method of problem-oriented search is used to research the optimization mechanisms that are distinctive to speech production and the stabilization of language. DOI: http://dx.doi.org/10.12731/2218-7405-2013-4-50

  2. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    Science.gov (United States)

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on a thorough analysis of the cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and a selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. The data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain a deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate were recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer time of spheronization. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates the search for the optimal process conditions that are necessary to achieve ideal spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
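
    A sketch of the tree-regression step with scikit-learn; the variables, their ranges and the generating relation are all assumptions for illustration, not the study's data:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, export_text

    # Fit a regression tree mapping formulation/process variables to pellet
    # aspect ratio on synthetic data, then read off the decision rules.
    rng = np.random.default_rng(6)
    n = 224
    X = np.column_stack([
        rng.uniform(500, 1500, n),   # spheronization speed [rpm] (assumed)
        rng.uniform(2, 10, n),       # spheronization time [min] (assumed)
        rng.integers(1, 5, n),       # number of die holes, coded (assumed)
        rng.uniform(20, 50, n),      # extrudate water content [%] (assumed)
    ])
    # toy ground truth: faster/longer spheronization -> aspect ratio closer to 1
    aspect = 1.05 + 0.4 * np.exp(-X[:, 0] / 800) + 0.2 * np.exp(-X[:, 1] / 4) \
             - 0.01 * X[:, 2] + rng.normal(0, 0.02, n)

    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=15).fit(X, aspect)
    print(export_text(tree, feature_names=["speed", "time", "holes", "water"]))
    ```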

  3. WEDM process variables investigation for HSLA by response surface methodology and genetic algorithm

    Directory of Open Access Journals (Sweden)

    Neeraj Sharma

    2015-06-01

    Full Text Available Wire electric discharge machining (WEDM) is a non-traditional thermo-electric spark-erosion manufacturing process. Applications of WEDM are found in the aerospace and die manufacturing industries, where precise dimensions are the prime objective, and the process is applied to difficult-to-machine materials. Brass wire was used as the electrode and high strength low alloy (HSLA) steel as the work-piece during experimentation. The present research deals with the effect of process parameters on the overcut while machining HSLA steel on WEDM. A mathematical model was developed with the help of Response Surface Methodology (RSM). This model was further processed with the help of a Genetic Algorithm (GA) to find the optimum machining parameters. The percentage error between the predicted and experimental values lies in the range of ±10%, which indicates that the developed model can be utilized to predict overcut values. The experimental plan was executed according to a central composite design. The optimal setting of process parameters for minimum overcut is: pulse on-time 117 μs; pulse off-time 50 μs; spark gap voltage 49 V; peak current 180 A; and wire tension 6 g; at this optimal setting the overcut is 9.9922 μm.

  4. Electrophysiological Study of Algorithmically Processed Metric/Rhythmic Variations in Language and Music

    Directory of Open Access Journals (Sweden)

    Richard Kronland-Martinet

    2007-12-01

    Full Text Available This work is the result of an interdisciplinary collaboration between scientists from the fields of audio signal processing, phonetics and cognitive neuroscience aiming at studying the perception of modifications in meter, rhythm, semantics and harmony in language and music. A special time-stretching algorithm was developed to work with natural speech. In the language part, French sentences ending with tri-syllabic congruous or incongruous words, metrically modified or not, were made. In the music part, short melodies made of triplets, rhythmically and/or harmonically modified, were built. These stimuli were presented to a group of listeners that were asked to focus their attention either on meter/rhythm or semantics/harmony and to judge whether or not the sentences/melodies were acceptable. Language ERP analyses indicate that semantically incongruous words are processed independently of the subject's attention thus arguing for automatic semantic processing. In addition, metric incongruities seem to influence semantic processing. Music ERP analyses show that rhythmic incongruities are processed independently of attention, revealing automatic processing of rhythm in music.

  5. Electrophysiological Study of Algorithmically Processed Metric/Rhythmic Variations in Language and Music

    Directory of Open Access Journals (Sweden)

    Magne Cyrille

    2007-01-01

    Full Text Available This work is the result of an interdisciplinary collaboration between scientists from the fields of audio signal processing, phonetics and cognitive neuroscience aiming at studying the perception of modifications in meter, rhythm, semantics and harmony in language and music. A special time-stretching algorithm was developed to work with natural speech. In the language part, French sentences ending with tri-syllabic congruous or incongruous words, metrically modified or not, were made. In the music part, short melodies made of triplets, rhythmically and/or harmonically modified, were built. These stimuli were presented to a group of listeners that were asked to focus their attention either on meter/rhythm or semantics/harmony and to judge whether or not the sentences/melodies were acceptable. Language ERP analyses indicate that semantically incongruous words are processed independently of the subject's attention thus arguing for automatic semantic processing. In addition, metric incongruities seem to influence semantic processing. Music ERP analyses show that rhythmic incongruities are processed independently of attention, revealing automatic processing of rhythm in music.

  6. Finding and Improving the Key-Frames of Long Video Sequences for Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2010-01-01

    Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences without employing some selection and enhancement algorithms. On the other hand, processing all the frames of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism to summarize the input video sequence to a set of key-frames and then apply an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system, which employs a linear associator face recognizer to evaluate the enhanced results, has been tested on real surveillance video sequences and the experimental results...

  7. FORMATION OF SENIOR PUPILS ALGORITHMIC CULTURE IN THE PROCESS OF SOLVING COMPUTATIONAL PROBLEMS USING SOFTWARE TOOLS: RESULTS OF THE STUDY

    Directory of Open Access Journals (Sweden)

    Liudmyla V. Osipa

    2013-06-01

    Full Text Available The article introduces a new practical solution to the urgent problem of forming the algorithmic culture of senior pupils in the process of solving computational problems using software tools. Teaching conditions for forming the algorithmic culture of high school students in the process of solving computational problems with the use of software tools were identified and theoretically grounded, and a training program was developed as the elective course "Solving computational problems using software tools", whose introduction is necessary for implementing the identified teaching conditions.

  8. A Novel Pixon-Based Image Segmentation Process Using Fuzzy Filtering and Fuzzy C-mean Algorithm

    DEFF Research Database (Denmark)

    Nadernejad, E; Barari, Amin

    2011-01-01

    Image segmentation, which is an important stage of many image processing algorithms, is the process of partitioning an image into nonintersecting regions, such that each region is homogeneous and the union of no two adjacent regions is homogeneous. This paper presents a novel pixon-based algorithm for image segmentation. The key idea is to create a pixon model by combining fuzzy filtering as a kernel function and a fuzzy c-means clustering algorithm for image segmentation. Use of fuzzy filters reduces noise and slightly smoothes the image. Use of the proposed pixon model prevented image over-segmentation...

  9. A Novel Pixon-Based Image Segmentation Process Using Fuzzy Filtering and Fuzzy C-mean Algorithm

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Barari, Amin

    2011-01-01

    Image segmentation, which is an important stage of many image processing algorithms, is the process of partitioning an image into nonintersecting regions, such that each region is homogeneous and the union of no two adjacent regions is homogeneous. This paper presents a novel pixon-based algorithm for image segmentation. The key idea is to create a pixon model by combining fuzzy filtering as a kernel function and a fuzzy c-means clustering algorithm for image segmentation. Use of fuzzy filters reduces noise and slightly smoothes the image. Use of the proposed pixon model prevented image over-segmentation...

  10. Computational thermodynamics, Gaussian processes and genetic algorithms: combined tools to design new alloys

    Science.gov (United States)

    Tancret, F.

    2013-06-01

    A new alloy design procedure is proposed, combining in a single computational tool several modelling and predictive techniques that have already been used and assessed in the field of materials science and alloy design: a genetic algorithm is used to optimize the alloy composition for target properties and performance on the basis of the prediction of mechanical properties (estimated by Gaussian process regression of data on existing alloys) and of microstructural constitution, stability and processability (evaluated by computational thermodynamics). These tools are integrated in a unique Matlab programme. An example is given in the case of the design of a new nickel-base superalloy for future power plant applications (such as the ultra-supercritical (USC) coal-fired plant, or the high-temperature gas-cooled nuclear reactor (HTGCR or HTGR), where the selection criteria include cost, oxidation and creep resistance around 750 °C, long-term stability at service temperature, forgeability, weldability, etc.
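
    The surrogate-plus-search half of such a loop can be sketched with scikit-learn's Gaussian process regression; the compositions, the property model, and the random search (standing in for the paper's genetic algorithm and thermodynamic checks) are all placeholders:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Regress a GP on (composition -> property) data for "existing alloys",
    # then search for compositions that maximize the predicted property.
    rng = np.random.default_rng(7)
    X = rng.uniform(0, 0.3, (60, 3))                 # e.g. Cr, Co, Mo fractions (assumed)
    y = 500 + 800 * X[:, 0] - 1200 * X[:, 0] ** 2 \
        + 300 * X[:, 1] + rng.normal(0, 10, 60)      # toy creep-strength proxy

    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1.0),
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 0.3, (5000, 3))            # random search over compositions
    mu, sd = gp.predict(cand, return_std=True)
    best = cand[np.argmax(mu - sd)]                  # penalize model uncertainty
    print("suggested composition:", best.round(3))
    ```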

  11. Progressive Line Processing of Kernel RX Anomaly Detection Algorithm for Hyperspectral Imagery.

    Science.gov (United States)

    Zhao, Chunhui; Deng, Weiwei; Yan, Yiming; Yao, Xifeng

    2017-08-07

    The Kernel-RX detector (KRXD) has attracted widespread interest in hyperspectral image processing with the utilization of nonlinear information. However, the kernelization of hyperspectral data leads to poor execution efficiency in KRXD. This paper presents an approach to the progressive line processing of KRXD (PLP-KRXD) that can perform KRXD line by line (the main data acquisition pattern). Parallel causal sliding windows are defined to ensure the causality of PLP-KRXD. Then, with the employment of the Woodbury matrix identity and the matrix inversion lemma, PLP-KRXD has the capacity to recursively update the kernel matrices, thereby avoiding a great many repetitive calculations of complex matrices, and greatly reducing the algorithm's complexity. To substantiate the usefulness and effectiveness of PLP-KRXD, three groups of hyperspectral datasets are used to conduct experiments.

  12. Structural assessment of aerospace components using image processing algorithms and Finite Element models

    DEFF Research Database (Denmark)

    Stamatelos, Dimitrios; Kappatos, Vassilios

    2017-01-01

    Purpose – This paper presents the development of an advanced structural assessment approach for aerospace components (metallic and composite). This work focuses on developing an automatic image processing methodology based on Non Destructive Testing (NDT) data and numerical models, for predicting ... of the damaged plates showed that a strength reduction occurs compared with reference plates. This is a promising methodology for the structural assessment of aerospace components, since conclusions regarding their functionality can be drawn. Research limitations/implications – The investigated structural ... in a tough economic era. Originality/value – As far as is known, this is the first time that an aerospace structural assessment combines image processing algorithms and FE models.

  13. Waste reduction algorithm used as the case study of simulated bitumen production process

    Directory of Open Access Journals (Sweden)

    Savić Marina A.

    2011-01-01

    Full Text Available The waste reduction algorithm (WAR) is a tool that helps process engineers assess environmental impact. The WAR algorithm is a methodology for determining the potential environmental impact (PEI) of a chemical process. In particular, the bitumen production process was analyzed in three stages: (a) the atmospheric distillation unit, (b) the vacuum distillation unit, and (c) the bitumen production unit. The study was developed for a middle-sized oil refinery with a capacity of 5,000,000 tonnes of crude oil per year. The results highlight the most vulnerable aspects of environmental pollution arising during the manufacturing of bitumen. The overall rates of PEI leaving the system (Iout, PEI/h) are: (a) 2.14·10^5, (b) 7.17·10^4 and (c) 2.36·10^3, respectively. The overall rates of PEI generated within the system (Igen, PEI/h) are: (a) 7.75·10^4, (b) -4.31·10^4 and (c) -4.32·10^2, respectively. The atmospheric distillation unit has the highest overall rate of PEI, while the bitumen production unit has the lowest. Comparison of the Iout and Igen values for the atmospheric distillation unit shows that the overall rate of PEI generated in the system is 36.21% of the overall rate of PEI leaving the system. For the vacuum distillation and bitumen production units, the overall rate of PEI generated in the system has negative values, i.e. the overall rate of PEI leaving the system is reduced by 60.11% (in the vacuum distillation unit) and by 18.30% (in the bitumen production unit). Analysis of the obtained results for the overall rate of PEI, expressed per weight of product, confirms these conclusions.
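
    As a consistency check on the quoted percentages: 7.75·10^4 / 2.14·10^5 ≈ 0.3621 (the 36.21% for the atmospheric unit), 4.31·10^4 / 7.17·10^4 ≈ 0.6011 (the 60.11% reduction in the vacuum unit), and 4.32·10^2 / 2.36·10^3 ≈ 0.1830 (the 18.30% reduction in the bitumen unit).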

  14. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians in more effectively reviewing the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces a similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state of the art using content consistency, index consistency and content-index consistency with respect to the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated on content consistency, 24 of 30 videos evaluated on index consistency and all videos evaluated on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Reduced complexity MPEG2 video post-processing for HD display

    DEFF Research Database (Denmark)

    Virk, Kamran; Li, Huiying; Forchhammer, Søren

    2008-01-01

    This paper presents MPEG-2 decoder post-processing for high definition (HD) flat panel displays. The focus is to design efficient post-processing to reduce blocking and ringing artifacts. Standard deblocking modules are improved to obtain a significant load reduction through a new DCT based ...

  16. Framework for Processing Videos in the Presence of Spatially Varying Motion Blur

    Science.gov (United States)

    2014-04-18

    ... image processing and computer vision, due to its applicability to a vast range of areas such as tracking, surveillance, object recognition, and inpainting (e.g., region filling and object removal by exemplar-based image inpainting).

  17. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ebenezer

    2008-10-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive, in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), and image enhancement factor (IEF) and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.
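
    A compact sketch of the decision-based scheme, assuming salt-and-pepper-style corruption (extreme-valued pixels) as the detection rule; the window growth, thresholds and mean fallback follow the description above, not the published code:

        import numpy as np

        def decision_based_filter(img, lo=0, hi=255):
            """Replace only pixels flagged as corrupted; grow the window
            from 3x3 to 5x5, falling back to the window mean when no
            uncorrupted neighbours are found (excessive noise)."""
            out = img.astype(float).copy()
            pad = np.pad(img.astype(float), 2, mode='reflect')
            H, W = img.shape
            for y in range(H):
                for x in range(W):
                    if lo < img[y, x] < hi:      # judged uncorrupted: keep
                        continue
                    for r in (1, 2):             # 3x3, then 5x5 window
                        win = pad[y+2-r:y+3+r, x+2-r:x+3+r]
                        good = win[(win > lo) & (win < hi)]
                        if good.size:            # median of clean neighbours
                            out[y, x] = np.median(good)
                            break
                    else:                        # switch to mean filtering
                        out[y, x] = win.mean()
            return out.astype(img.dtype)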

  18. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    Manikandan S

    2008-01-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive, in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), and image enhancement factor (IEF) and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.

  19. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    Science.gov (United States)

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    Through reorganizing the execution order and optimizing the data structure, we proposed an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture, implemented with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we proposed serial optimization methods, including a multiresolution multiwindow scheme for motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation outperforms the serial program with a speedup of 20× and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR ranges from 0.14 dB to 0.77 dB at the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computation power of the GPU. However, the performance of the control-intensive parts (CAVLC) is closely related to the memory bandwidth, which gives an insight for new architecture designs. PMID:24757432

  20. Obscene Video Recognition Using Fuzzy SVM and New Sets of Features

    Directory of Open Access Journals (Sweden)

    Alireza Behrad

    2013-02-01

    Full Text Available In this paper, a novel approach for identifying normal and obscene videos is proposed. In order to classify different episodes of a video independently and discard the need to process all frames, key frames are first extracted and skin regions are detected for groups of video frames starting with key frames. In the second step, three different feature sets are extracted for each episode of video: (1) structural features based on single-frame information, (2) features based on the spatiotemporal volume, and (3) motion-based features. The PCA-LDA method is then applied to reduce the size of the structural features and select the more distinctive ones. In the final step, we use a fuzzy or Weighted Support Vector Machine (WSVM) classifier to identify video episodes. We also employ a multilayer Kohonen network as an initial clustering algorithm to improve discrimination of the extracted features into the two classes of videos. Features based on motion and periodicity characteristics increase the efficiency of the proposed algorithm for videos with bad illumination and skin colour variation. The proposed method is evaluated using 1100 videos in different environmental and illumination conditions. The experimental results show a correct recognition rate of 94.2% for the proposed algorithm.

  1. Automated chart review utilizing natural language processing algorithm for asthma predictive index.

    Science.gov (United States)

    Kaur, Harsheen; Sohn, Sunghwan; Wi, Chung-Il; Ryu, Euijung; Park, Miguel A; Bachman, Kay; Kita, Hirohito; Croghan, Ivana; Castro-Rodriguez, Jose A; Voge, Gretchen A; Liu, Hongfang; Juhn, Young J

    2018-02-13

    Thus far, no algorithms have been developed to automatically extract patients who meet Asthma Predictive Index (API) criteria from electronic health records (EHRs). Our objective was to develop and validate a natural language processing (NLP) algorithm to identify patients who meet API criteria. This is a cross-sectional study nested in a birth cohort study in Olmsted County, MN. Asthma status ascertained by manual chart review based on API criteria served as the gold standard. NLP-API was developed on a training cohort (n = 87) and validated on a test cohort (n = 427). Criterion validity was measured by sensitivity, specificity, positive predictive value and negative predictive value of the NLP algorithm against manual chart review for asthma status. Construct validity was determined by associations of asthma status defined by NLP-API with known risk factors for asthma. Among the eligible 427 subjects of the test cohort, 48% were male and 74% were White. Median age was 5.3 years (interquartile range 3.6-6.8). 35 (8%) had a history of asthma by NLP-API vs. 36 (8%) by abstractor, with 31 identified by both approaches. NLP-API predicted asthma status with sensitivity 86%, specificity 98%, positive predictive value 88%, and negative predictive value 98%. Asthma status by both NLP and manual chart review was significantly associated with known asthma risk factors, such as history of allergic rhinitis, eczema, family history of asthma, and maternal history of smoking during pregnancy, and the effect sizes were similar between the two reviews (4.4 vs. 4.2, respectively). NLP-API was able to ascertain asthma status in children by mining EHRs and has the potential to enhance asthma care and research through population management and large-scale studies when identifying children who meet API criteria.
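
    The reported operating characteristics follow directly from the counts in the abstract (31 true positives, 35 NLP-positive, 36 abstractor-positive, n = 427):

        # 2x2 table reconstructed from the abstract's counts
        n, nlp_pos, gold_pos, both = 427, 35, 36, 31
        tp = both
        fp = nlp_pos - both            # 4
        fn = gold_pos - both           # 5
        tn = n - tp - fp - fn          # 387
        sensitivity = tp / (tp + fn)   # 31/36   = 0.861
        specificity = tn / (tn + fp)   # 387/391 = 0.990
        ppv = tp / (tp + fp)           # 31/35   = 0.886
        npv = tn / (tn + fn)           # 387/392 = 0.987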

  2. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. By far the most informative analog and digital video reference available, it includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and streaming video (video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  3. Digital Video Editing

    Science.gov (United States)

    McConnell, Terry

    2004-01-01

    Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The process of digital video editing is described, along with the required cables, storage considerations, and computer system and software.

  4. Wind speed and direction measurement based on arc ultrasonic sensor array signal processing algorithm.

    Science.gov (United States)

    Li, Xinbo; Sun, Haixin; Gao, Wei; Shi, Yaowu; Liu, Guojun; Wu, Yue

    2016-11-01

    This article investigates a method of measuring wind speed and wind direction based on an arc ultrasonic sensor array combined with an array signal processing algorithm. In the proposed method, a new arc ultrasonic array structure is introduced and the array manifold is first derived. On this basis, the measurement of wind speed and wind direction is analyzed and discussed by means of the basic idea of the classic MUSIC (Multiple Signal Classification) algorithm, achieving measurement of the full 360° wind direction with a resolution of 1° and of 0-60 m/s wind speed with a resolution of 0.1 m/s. The implementation of the proposed method is elaborated through theoretical derivation and corresponding discussion. In addition, simulation experiments are presented to show the feasibility of the proposed method. The theoretical analysis and simulation results indicate that the proposed method has superior anti-noise performance and improves wind measurement accuracy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
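
    A generic narrowband MUSIC pseudospectrum in Python/NumPy, sketching the direction-finding idea; steering() is a hypothetical stand-in for the paper's arc-array manifold:

        import numpy as np

        def music_spectrum(X, steering, n_sources, grid):
            """X: (sensors, snapshots) data; steering(theta) -> (sensors,)
            manifold vector; grid: candidate angles in radians."""
            R = X @ X.conj().T / X.shape[1]        # sample covariance
            w, V = np.linalg.eigh(R)               # ascending eigenvalues
            En = V[:, :X.shape[0] - n_sources]     # noise subspace
            P = np.empty(len(grid))
            for i, th in enumerate(grid):
                a = steering(th)
                P[i] = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)
            return P                               # peaks near true angles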

  5. DNA genetic artificial fish swarm constant modulus blind equalization algorithm and its application in medical image processing.

    Science.gov (United States)

    Guo, Y C; Wang, H; Zhang, B L

    2015-10-02

    This study proposes the DNA genetic artificial fish swarm constant modulus blind equalization algorithm (DNA-G-AFS-CMBEA) to overcome the local convergence of the CMBEA. The proposed algorithm fuses the fast convergence of the AFS algorithm with the global search capability of the DNA-G algorithm to optimize the position vector of the artificial fish; the resulting global optimal position vector is then used as the initial weight vector of the CMBEA. Application of this improved method to medical image processing demonstrates that the proposed algorithm outperforms the CMBEA and the AFS-CMBEA in removing noise from a medical image and improving the peak signal-to-noise ratio.
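
    The underlying constant-modulus tap update, which the swarm search only initializes, is standard; a minimal sketch in which the step size, modulus constant and tap length are illustrative:

        import numpy as np

        def cma_equalize(x, w0, mu=1e-3, R=1.0):
            """Stochastic-gradient CMA; w0 would come from the DNA-G-AFS
            search rather than a random start."""
            w = w0.astype(complex).copy()
            L = len(w)
            for n in range(L, len(x)):
                u = x[n-L:n][::-1]               # regressor, newest first
                y = w.conj() @ u                 # equalizer output
                e = y * (np.abs(y) ** 2 - R)     # constant-modulus error
                w -= mu * e.conj() * u           # gradient step
            return w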

  6. Multi-Objective Optimization of Squeeze Casting Process using Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Patel G.C.M.

    2016-09-01

    Full Text Available The near-net-shape manufacturing ability of the squeeze casting process requires setting the process variable combinations at their optimal levels to obtain both aesthetic appearance and internal soundness of the cast parts. Aesthetic appearance and internal soundness concern surface roughness and tensile strength, which can readily put the part in service without costly secondary manufacturing processes (like polishing, shot blasting, plating, heat treatment, etc.). It is difficult to determine the levels of the process variables (that is, pressure duration, squeeze pressure, pouring temperature and die temperature) for extreme values of the responses (that is, surface roughness, yield strength and ultimate tensile strength) due to conflicting requirements. In the present manuscript, three population-based search and optimization methods, namely the genetic algorithm (GA), particle swarm optimization (PSO) and multi-objective particle swarm optimization based on crowding distance (MOPSO-CD), have been used to optimize multiple outputs simultaneously. Further, a validation test has been conducted for the optimal casting conditions suggested by GA, PSO and MOPSO-CD. The results showed that PSO outperformed GA with regard to computation time.
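
    A plain global-best PSO minimiser of the kind compared above; the objective f stands in for a fitted response model (e.g. surface roughness as a function of the four casting variables) and is a hypothetical placeholder:

        import numpy as np

        def pso(f, dim, lb, ub, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            X = rng.uniform(lb, ub, (n, dim))     # particle positions
            V = np.zeros((n, dim))
            pbest = X.copy()
            pval = np.apply_along_axis(f, 1, X)
            g = pbest[pval.argmin()].copy()       # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                V = w*V + c1*r1*(pbest - X) + c2*r2*(g - X)
                X = np.clip(X + V, lb, ub)
                val = np.apply_along_axis(f, 1, X)
                better = val < pval
                pbest[better], pval[better] = X[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g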

  7. Hypergraph+: An Improved Hypergraph-Based Task-Scheduling Algorithm for Massive Spatial Data Processing on Master-Slave Platforms

    Directory of Open Access Journals (Sweden)

    Bo Cheng

    2016-08-01

    Full Text Available Spatial data processing often involves massive datasets, and the task/data scheduling efficiency of these applications has an impact on overall processing performance. Among existing scheduling strategies, hypergraph-based algorithms capture the data sharing pattern in a global way and significantly reduce total communication volume. On heterogeneous processing platforms, however, a single hypergraph partitioning for later scheduling may not be optimal. Moreover, these scheduling algorithms neglect the overlap between task execution and data transfer that could further decrease execution time. In order to address these problems, an extended hypergraph-based task-scheduling algorithm, named Hypergraph+, is proposed for massive spatial data processing. Hypergraph+ improves upon current hypergraph scheduling algorithms in two ways: (1) it takes platform heterogeneity into consideration, offering a metric function to evaluate partitioning quality in order to derive the best task/file schedule; and (2) it can maximize the overlap between communication and computation. The GridSim toolkit was used to evaluate Hypergraph+ in an IDW spatial interpolation application on heterogeneous master-slave platforms. Experiments illustrate that the proposed Hypergraph+ algorithm achieves on average a 43% smaller makespan than the original hypergraph scheduling algorithm while still preserving high scheduling efficiency.

  8. Wind Energy Potential Assessment and Forecasting Research Based on the Data Pre-Processing Technique and Swarm Intelligent Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Zhilong Wang

    2016-11-01

    Full Text Available Accurate quantification and characterization of wind energy potential assessment and forecasting is significant for optimal wind farm design, evaluation and scheduling. However, wind energy potential assessment and forecasting remain difficult and challenging research topics at present. Traditional wind energy assessment and forecasting models usually ignore the problems of data pre-processing and parameter optimization, which leads to low accuracy. Therefore, this paper aims to assess the potential of wind energy and forecast the wind speed at four locations in China based on a data pre-processing technique and swarm intelligent optimization algorithms. In the assessment stage, the cuckoo search (CS) algorithm, ant colony (AC) algorithm, firefly algorithm (FA) and genetic algorithm (GA) are used to estimate the two unknown parameters of the Weibull distribution. Then, the wind energy potential assessment results obtained with three data pre-processing approaches are compared to identify the best approach, which is used to process the original wind speed time series. In the forecasting stage, taking the pre-processed wind speed time series as input, the CS and AC optimization algorithms are adopted to optimize three neural networks, namely, the Elman neural network, the back-propagation neural network, and the wavelet neural network. The comparison results demonstrate that the proposed wind energy assessment and speed forecasting techniques produce promising assessments and predictions and perform better than the single assessment and forecasting components.
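
    As a baseline for the assessment stage, the two Weibull parameters can also be estimated by ordinary maximum likelihood; a sketch with SciPy (the swarm-based estimators above replace this step, and the sample here is synthetic):

        from math import gamma
        from scipy import stats

        v = stats.weibull_min.rvs(c=2.0, scale=6.0, size=5000,
                                  random_state=0)      # synthetic wind speeds
        k, loc, c = stats.weibull_min.fit(v, floc=0)   # shape k, scale c
        mean_speed = c * gamma(1 + 1/k)                # E[v] = c*Gamma(1+1/k)
        # mean power density ~ 0.5*rho*E[v^3], with E[v^3] = c**3*gamma(1+3/k)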

  9. Bayesian Maximum Entropy Based Algorithm for Digital X-ray Mammogram Processing

    Directory of Open Access Journals (Sweden)

    Radu Mutihac

    2009-06-01

    Full Text Available The basics of Bayesian statistics in inverse problems using the maximum entropy principle are summarized in connection with the restoration of positive, additive images from various types of data, such as X-ray digital mammograms. An efficient iterative algorithm for image restoration from large data sets, based on the conjugate gradient method and Lagrange multipliers in nonlinear optimization of a specific potential function, was developed. The point spread function of the imaging system was determined by numerical simulations of inhomogeneous breast-like tissue with microcalcification inclusions of various opacities. The processed digital and digitized mammograms proved superior to their raw counterparts in terms of contrast, resolution, noise, and visibility of details.

  10. Image processing algorithms for automated analysis of GMR data from inspection of multilayer structures

    Science.gov (United States)

    Karpenko, Oleksii; Safdernejad, Seyed; Dib, Gerges; Udpa, Lalita; Udpa, Satish; Tamburrino, Antonello

    2015-03-01

    Eddy current (EC) probes with Giant Magnetoresistive (GMR) sensors have recently emerged as a promising tool for rapid scanning of multilayer aircraft panels, helping detect cracks under fastener heads. However, analysis of GMR data is challenging due to the complexity of the sensed magnetic fields. Further, probes that induce unidirectional currents are insensitive to cracks parallel to the current flow. In this paper, signal processing algorithms are developed for mixing data from two orthogonal EC-GMR scans in order to generate pseudo-rotating electromagnetic field images of fasteners with bottom-layer cracks. Finite element simulations demonstrate that the normal component of the numerically computed rotating field has uniform sensitivity to cracks emanating in all radial directions. The concept of pseudo-rotating field imaging is experimentally validated with the help of the MAUS bilateral GMR array (Big-MR) designed by Boeing.
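
    In the usual rotating-field construction, which is one plausible reading of the mixing step (the paper's exact mixing formula is not given here), the two orthogonal scan images B_x and B_y are combined as $B(\theta) = B_x\cos\theta + B_y\sin\theta$ and $\theta$ is swept through a full revolution, so that a crack of any radial orientation is interrogated by a roughly perpendicular current flow at some $\theta$.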

  11. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

    Full Text Available This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focused on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate similarly coloured boundaries from the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) could then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and wooden and concrete floors, but had difficulty separating colours on multicoloured floor types such as patterned carpets.
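
    A short OpenCV sketch of floor-plane colour segmentation supplemented with Canny edges; the HSV bounds are illustrative values to be sampled from the actual floor, not figures from the paper:

        import cv2
        import numpy as np

        def traversable_map(bgr, floor_lo, floor_hi):
            """White = floor-coloured and edge-free, black = obstacle."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            floor = cv2.inRange(hsv, floor_lo, floor_hi)   # colour mask
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)               # boundary cues
            edges = cv2.dilate(edges, np.ones((5, 5), np.uint8))
            return cv2.bitwise_and(floor, cv2.bitwise_not(edges))

        # free = traversable_map(frame, np.array([0, 0, 80]),
        #                        np.array([40, 60, 255]))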

  12. Algorithms and programming tools for image processing on the MPP, introduction. Thesis

    Science.gov (United States)

    1985-01-01

    The programming tools and parallel algorithms created for the Massively Parallel Processor (MPP) located at the NASA Goddard Space Center are discussed. A user-friendly environment for high level language parallel algorithm development was developed. The issues involved in implementing certain algorithms on the MPP were researched. The expected results were compared with the actual results.

  13. Expanding Learning and Teaching Processes in an ESL/Civics ABE Classroom Using an Interactive Video Lesson Plan in the U.S. Southwest: An Action Research Study

    Science.gov (United States)

    Cajar-Bravo, Aristides

    2010-01-01

    This study is an action research project that analyzed the ways in which ESL students improve their language learning processes by using as a teaching tool a media literacy video and Civics Education for social skills; it was presented to two groups of 12 students who were attending an ESL/Civics Education Intermediate-Advanced class in an ABE…

  14. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm

    NARCIS (Netherlands)

    Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K.

    2011-01-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm
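
    The center-of-mass estimate is simple to state: for a mono-exponential decay the mean photon arrival time equals the lifetime, so a histogram version (ignoring the finite-window correction that hardware implementations apply) is:

        import numpy as np

        def cmm_lifetime(counts, bin_width):
            """Centre-of-mass lifetime from a photon arrival histogram."""
            t = (np.arange(len(counts)) + 0.5) * bin_width  # bin centres
            return (t * counts).sum() / counts.sum()        # ~ tau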

  15. A Neuromorphic System for Video Object Recognition

    Directory of Open Access Journals (Sweden)

    Deepak Khosla

    2014-11-01

    Full Text Available Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by recent findings in computational neuroscience on feed-forward object detection and classification pipelines for processing and extracting relevant information from visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and combines retinal processing, form-based and motion-based object detection, and object classification based on convolutional neural nets. Our system was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the NEOVISION2 program on a variety of urban area video datasets collected from both stationary and moving platforms. The datasets are challenging as they include a large number of targets in cluttered scenes with varying illumination and occlusion conditions. The NEOVUS system was also mapped to commercially available off-the-shelf hardware. The dynamic power requirement for the system, which includes a 5.6-Mpixel retinal camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.4 nanoJoules (nJ) per bit of incoming video. In a systematic evaluation of five different teams by DARPA on three aerial datasets, the NEOVUS demonstrated the best performance, with the highest recognition accuracy and at least three orders of magnitude lower energy consumption than two independent state-of-the-art computer vision systems. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition towards enabling practical low-power and mobile video processing applications.

  16. CYCLOPS: A mobile robotic platform for testing and validating image processing and autonomous navigation algorithms in support of artificial vision prostheses.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2009-12-01

    While artificial vision prostheses are quickly becoming a reality, actual testing time with visual prosthesis carriers is at a premium. Moreover, it is helpful to have a more realistic functional approximation of a blind subject. Instead of a normal subject with a healthy retina looking at a low-resolution (pixelated) image on a computer monitor or head-mounted display, a more realistic approximation is achieved by employing a subject-independent mobile robotic platform that uses a pixelated view as its sole visual input for navigation purposes. We introduce CYCLOPS: an AWD, remotely controllable, mobile robotic platform that serves as a testbed for real-time image processing and autonomous navigation systems for the purpose of enhancing the visual experience afforded to visual prosthesis carriers. Complete with wireless Internet connectivity and a fully articulated digital camera with wireless video link, CYCLOPS supports both interactive tele-commanding via joystick and autonomous self-commanding. Due to its onboard computing capabilities and extended battery life, CYCLOPS can perform complex and numerically intensive calculations, such as image processing and autonomous navigation algorithms, in addition to interfacing with additional sensors. Its Internet connectivity renders CYCLOPS a worldwide accessible testbed for researchers in the field of artificial vision systems. CYCLOPS enables subject-independent evaluation and validation of image processing and autonomous navigation systems with respect to their utility and efficiency in supporting and enhancing visual prostheses, while potentially reducing to a necessary minimum the need for valuable testing time with actual visual prosthesis carriers.

  17. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    Science.gov (United States)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  18. Video processing of remote sensor data applied to uranium exploration in Wyoming. [Roll-front U deposits

    Energy Technology Data Exchange (ETDEWEB)

    Levinson, R.A.; Marrs, R.W.; Crockell, F.

    1979-06-30

    LANDSAT satellite imagery and aerial photography can be used to map areas of altered sandstone associated with roll-front uranium deposits. Image data must be enhanced so that alteration spectral contrasts can be seen, and video image processing is a fast, low-cost, and efficient tool. For LANDSAT data, the 7/4 band ratio produces the best enhancement of altered sandstone. The 6/4 ratio is most effective for color infrared aerial photography. Geochemical and mineralogical associations occur in the unaltered, altered, and ore zones of a roll front. Samples from Pumpkin Buttes show that iron is the primary coloring agent that makes alteration visually detectable. Eh and pH changes associated with passage of a roll front cause oxidation of magnetite and pyrite to hematite, goethite, and limonite in the host sandstone, thereby producing the alteration. Statistical analyses show that the detectability of geochemical and color zonation in host sands is weakened by soil-forming processes. Alteration can only be mapped in areas of thin soil cover and moderate to sparse vegetative cover.
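
    The ratio enhancement itself is a per-pixel operation; a NumPy sketch (the contrast stretch for display and the band names are assumptions, not the original video hardware):

        import numpy as np

        def band_ratio(num_band, den_band, eps=1e-6):
            """e.g. LANDSAT band 7 / band 4, stretched to 8 bits."""
            r = num_band.astype(float) / (den_band.astype(float) + eps)
            r = (r - r.min()) / (r.max() - r.min() + eps)   # stretch to 0-1
            return (255 * r).astype(np.uint8)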

  19. Target detection and tracking in infrared video

    Science.gov (United States)

    Deng, Zhihui; Zhu, Jihong

    2017-07-01

    In this paper, we propose a method for target detection and tracking in infrared video. The target is defined by its location and extent in a single frame. In the initialization process, we use an adaptive threshold to segment the target, then extract the fern feature and normalize it as a template. The detector uses a random forest and ferns to detect the target in the infrared video. The random forest and ferns are built from random combinations of 2-bit binary patterns, which are robust to infrared targets with blurred and unknown contours. The tracker uses a gray-value-weighted mean-shift algorithm to track the infrared target, which is always brighter than the background, and can track the deformed target efficiently and quickly. When the target disappears, the detector redetects the target in the incoming infrared images. Finally, we verify the algorithm on a real-time infrared target detection and tracking platform. The results show that our algorithm performs better than TLD in terms of recall and runtime on infrared video.

  20. An Analysis of OpenACC Programming Model: Image Processing Algorithms as a Case Study

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2014-06-01

    Full Text Available Graphics processing units and similar accelerators have been intensively used in general-purpose computations for several years. In the last decade, GPU architecture and organization changed dramatically to support an ever-increasing demand for computing power. Along with changes in hardware, novel programming models have been proposed, such as NVIDIA's Compute Unified Device Architecture (CUDA) and the Khronos Group's Open Computing Language (OpenCL). Although numerous commercial and scientific applications have been developed using these two models, they still pose a significant challenge for less experienced users. There are users from various scientific and engineering communities who would like to speed up their applications without the need to deeply understand a low-level programming model and the underlying hardware. In 2011, the OpenACC programming model was launched. Much like OpenMP for multicore processors, OpenACC is a high-level, directive-based programming model for manycore processors like GPUs. This paper presents an analysis of the OpenACC programming model and its applicability in typical domains like image processing. Three simple image processing algorithms have been implemented for execution on the GPU with OpenACC. The results were compared with their sequential counterparts and are briefly discussed.