WorldWideScience

Sample records for infrared algorithm development

  1. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of its false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm is well suited to real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is expandable to any pair of detection algorithms that have different false alarm sources.
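
    A minimal sketch of the core fusion idea, keeping only detections confirmed by two algorithms whose false alarms arise from independent sources, might look like this (the detection maps below are toy inputs, not actual AAGD/LoPSF outputs; names are illustrative):

```python
import numpy as np

def fuse_detections(map_a, map_b):
    """AND-fuse two binary detection maps. If the two detectors have
    independent false-alarm sources (e.g. clutter vs. detector noise),
    a false alarm raised by one is unlikely to be confirmed by the
    other, so the intersection suppresses most false alarms while
    keeping targets that both detectors see."""
    return np.logical_and(map_a.astype(bool), map_b.astype(bool))

# Detector A flags the target plus a clutter-induced false alarm;
# detector B flags the same target plus a noise-induced false alarm.
det_a = np.array([[0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]])
det_b = np.array([[0, 1, 0],
                  [0, 0, 0],
                  [0, 0, 1]])
fused = fuse_detections(det_a, det_b)  # only the shared pixel survives
```
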

  2. Texture orientation-based algorithm for detecting infrared maritime targets.

    Science.gov (United States)

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the inter-subband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  3. Detection algorithm of infrared small target based on improved SUSAN operator

    Science.gov (United States)

    Liu, Xingmiao; Wang, Shicheng; Zhao, Jing

    2010-10-01

    Methods for detecting small moving targets in infrared image sequences that contain moving nuisance objects and background noise are analyzed in this paper, and a novel infrared small target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for infrared small target detection: one larger than the small target and one equal to it in size. First, the algorithm uses the large template to calculate the USAN of each pixel in the image and detects the small targets, image edges, and isolated noise pixels. Then it uses the second template to calculate the USAN of the pixels detected in the first step, with the principles of the SUSAN algorithm modified according to the characteristics of small targets, so that only small targets are detected and the algorithm is insensitive to image edge pixels and isolated noise pixels. The interference of image edges and isolated noise points is thus removed, and the candidate target points can be identified. Finally, the target is detected by exploiting the continuity and consistency of target movement. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.
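
    The USAN value at the heart of the method, the number of template pixels whose brightness is close to that of the nucleus, can be sketched as follows (a square window stands in for SUSAN's circular mask; names are illustrative):

```python
import numpy as np

def usan_count(img, r, c, radius, t):
    """Count pixels in a square template centred on the nucleus (r, c)
    whose brightness is within t of the nucleus value. A very small
    USAN indicates an isolated bright point (a small-target candidate),
    while an edge pixel keeps a much larger USAN."""
    r0, r1 = max(0, r - radius), min(img.shape[0], r + radius + 1)
    c0, c1 = max(0, c - radius), min(img.shape[1], c + radius + 1)
    window = img[r0:r1, c0:c1].astype(float)
    return int(np.sum(np.abs(window - float(img[r, c])) <= t))

target_img = np.zeros((9, 9))
target_img[4, 4] = 100.0                 # single bright target pixel
edge_img = np.zeros((9, 9))
edge_img[:, 5:] = 100.0                  # vertical intensity edge

usan_target = usan_count(target_img, 4, 4, 3, t=10)  # only the nucleus itself
usan_edge = usan_count(edge_img, 4, 6, 3, t=10)      # many similar neighbours
```

Thresholding the USAN therefore separates point-like targets from edges, which is the property the dual-template scheme exploits.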

  4. An improved contrast enhancement algorithm for infrared images based on adaptive double plateaus histogram equalization

    Science.gov (United States)

    Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang

    2018-05-01

    Infrared thermal images reflect the thermal-radiation distribution of a scene, but their contrast is usually low. Hence, it is generally necessary to enhance the contrast of infrared images in advance to facilitate subsequent recognition and analysis. Based on adaptive double-plateau histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. Experiments on actual infrared images show that, compared to three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
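
    A minimal sketch of plateau-limited equalization, without the paper's feedback adaptation of the thresholds (which are fixed inputs here):

```python
import numpy as np

def double_plateau_equalize(img, t_up, t_low, bins=256):
    """Double-plateau histogram equalization sketch: bins above t_up are
    clipped down, limiting over-enhancement of large uniform backgrounds;
    non-empty bins below t_low are raised, protecting small targets and
    fine detail. The paper additionally adapts t_up and t_low from the
    histogram's normalized coefficient of variation."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    mod = hist.astype(float)
    mod[mod > t_up] = t_up                     # upper plateau: clip
    mod[(mod > 0) & (mod < t_low)] = t_low     # lower plateau: raise
    cdf = np.cumsum(mod)
    lut = np.round(cdf / cdf[-1] * (bins - 1)).astype(np.uint8)
    return lut[img.astype(np.uint8)]

# Low-contrast frame: a uniform background with one brighter pixel.
frame = np.full((64, 64), 100, dtype=np.uint8)
frame[0, 0] = 110
enhanced = double_plateau_equalize(frame, t_up=100, t_low=10)
```

After equalization the occupied gray levels are spread over a wider range than the original 100..110 span.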

  5. Novel search algorithms for a mid-infrared spectral library of cotton contaminants.

    Science.gov (United States)

    Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A

    2008-06-01

    During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify
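
    Some of the standard metrics and the voting idea can be sketched as follows (a much simplified stand-in for the paper's frequency/rank/category voting schemes; the spectra and names are illustrative):

```python
import numpy as np

def dot_similarity(a, b):
    """Normalized dot product (cosine similarity) between two spectra."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def corr_similarity(a, b):
    """Pearson correlation: cosine similarity after mean-centering."""
    return dot_similarity(a - a.mean(), b - b.mean())

def abs_diff_score(a, b):
    """Sum of absolute differences (lower means a better match)."""
    return float(np.sum(np.abs(a - b)))

def vote(query, library, metrics):
    """Each metric nominates its best-matching library entry; the entry
    nominated most often wins."""
    tally = {}
    for fn, higher_is_better in metrics:
        scores = [fn(query, s) for s in library]
        pick = int(np.argmax(scores)) if higher_is_better else int(np.argmin(scores))
        tally[pick] = tally.get(pick, 0) + 1
    return max(tally, key=tally.get)

library = [np.array([1.0, 0.0, 0.0, 1.0]),   # e.g. a "stem" spectrum
           np.array([0.0, 1.0, 1.0, 0.0])]   # e.g. a "leaf" spectrum
query = library[1] + 0.05                     # noisy leaf-like spectrum
metrics = [(dot_similarity, True),
           (corr_similarity, True),
           (abs_diff_score, False)]
best = vote(query, library, metrics)          # index of the winning entry
```
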

  6. An ATR architecture for algorithm development and testing

    Science.gov (United States)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  7. Design and algorithm research of high precision airborne infrared touch screen

    Science.gov (United States)

    Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan

    2016-10-01

    Infrared touch screens suffer from low precision, touch jitter, and a sharp drop in precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, and the impeded state as 1. Then oblique scanning is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by arithmetic averaging. The extended-axis positioning algorithm maintains high precision even when individual infrared tubes fail, with only a slight loss of accuracy. Experimental results show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. It is concluded that the extended-axis algorithm offers high precision, little impact from the failure of individual infrared tubes, and ease of use.

  8. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat posed by infrared (IR) detection is greater than that posed by signals such as radar or sonar, because an object detected by a passive IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal by adjusting the surface temperature of the object has been conducted. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from an object and the background around it. The proposed method uses the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background, by setting the inverse distance weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study reduces the contrast radiant intensity between the object and background by up to 32% compared to the previous method, in which the CRI is determined as the simple signal difference between the object and the background.
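
    A minimal particle swarm sketch of the stealth-temperature search (plain PSO on a toy one-dimensional objective; the paper's repulsive PSO variant adds a repulsion term between particles to escape local optima, and its real objective is the inverse distance weighted CRI, not this simplified radiance contrast):

```python
import numpy as np

def pso_min(f, lo, hi, n=20, iters=60, seed=0):
    """Minimal particle swarm search for the scalar that minimizes f.
    Velocities combine inertia, attraction to each particle's personal
    best (pbest) and to the global best (g)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g

# Toy contrast objective: a T^4 (Stefan-Boltzmann-like) radiance matched
# to a 300 K background; the stealth optimum is the background temperature.
cri = lambda t: (t ** 4 - 300.0 ** 4) ** 2
t_stealth = pso_min(cri, 250.0, 350.0)
```
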

  9. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting restoration into the optimization of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation is presented in detail. Analysis indicates that the structure of this algorithm is easily parallelized; therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for real-time infrared remote sensing systems. The massive computation over image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm, and a quantitative analysis of restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter restores the varying background image reasonably and that its performance meets the requirements of real-time image processing.
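
    A TV restoration step can be sketched as iterative gradient descent (a smoothed-TV, L2-fidelity stand-in, not the paper's exact TV-L1 formulation); because each pixel update depends only on its neighbours, the loop body parallelizes naturally across cores or image tiles:

```python
import numpy as np

def tv_restore(f, lam=0.1, tau=0.1, iters=100, eps=1e-6):
    """Gradient descent on TV(u) + lam/2 * ||u - f||^2 (smoothed total
    variation). The divergence of the normalized gradient field drives
    smoothing inside flat regions while preserving strong edges."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u                  # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag                      # normalized gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))              # descent step
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 32))   # pure-noise stand-in for a frame
restored = tv_restore(noisy)             # fluctuations are strongly damped
```

In a multicore implementation each thread would apply the same update to its own tile, exchanging one-pixel halos per iteration.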

  10. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently: each has advantages and disadvantages compared with the others, and the advantages of different fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and the difference-features among images, an index vector based on feature similarity is proposed to define the degree of complementarity and synergy; this index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the differing degrees of the various features and the infrared intensity images are used as initial weights for nonnegative matrix factorization (NMF), which avoids the randomness of the NMF initialization parameters. Finally, the fused images of the different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation indices of the fused images obtained using the proposed method are better than those obtained using traditional methods, and that the proposed method retains the advantages of the individual fusion algorithms.

  11. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    Science.gov (United States)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    The Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite, launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and the Cross-track Infrared Sounder (CrIS), both on board S-NPP. These instruments will also fly on JPSS (Joint Polar Satellite System), scheduled for launch in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward-modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  12. Applications of machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet stars

    Science.gov (United States)

    Morello, Giuseppe; Morris, P. W.; Van Dyk, S. D.; Marston, A. P.; Mauerhan, J. C.

    2018-01-01

    We have investigated and applied machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet (WR) candidates. Objects taken from the Spitzer Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) catalogue of the infrared objects in the Galactic plane can be classified into different stellar populations based on the colours inferred from their broad-band photometric magnitudes [J, H and Ks from 2 Micron All Sky Survey (2MASS), and the four Spitzer/IRAC bands]. The algorithms tested in this pilot study are variants of the k-nearest neighbours approach, which is ideal for exploratory studies of classification problems where interrelations between variables and classes are complicated. The aims of this study are (1) to provide an automated tool to select reliable WR candidates and potentially other classes of objects, (2) to measure the efficiency of infrared colour selection at performing these tasks and (3) to lay the groundwork for statistically inferring the total number of WR stars in our Galaxy. We report the performance results obtained over a set of known objects and selected candidates for which we have carried out follow-up spectroscopic observations, and confirm the discovery of four new WR stars.
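
    The k-nearest-neighbours vote at the core of the approach can be sketched as follows (toy two-dimensional colours; the actual features would be colours formed from the 2MASS J, H, Ks and Spitzer/IRAC magnitudes):

```python
import numpy as np

def knn_classify(query, colours, labels, k=3):
    """Majority vote among the k nearest neighbours in colour space."""
    dist = np.linalg.norm(colours - query, axis=1)
    nearest = np.argsort(dist)[:k]
    vals, counts = np.unique(labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]

# Toy training set: two clusters in a 2-D colour space; labels are
# illustrative stand-ins for spectroscopically confirmed classes.
colours = np.array([[1.0, 1.2], [1.1, 1.3], [0.9, 1.1],
                    [0.1, 0.2], [0.2, 0.1], [0.0, 0.2]])
labels = np.array(["WR", "WR", "WR", "other", "other", "other"])
pred = knn_classify(np.array([1.0, 1.15]), colours, labels)
```
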

  13. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    Science.gov (United States)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. First, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately; this paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are transplanted to a DSP platform. Next, we use optimal thresholding to segment the image, transforming the gray images into binary images to provide a better basis for detection across the image sequence. Finally, using the relevance of moving objects between frames and mathematical morphology processing, we eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
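
    The combined frame-difference / background-subtraction step with a running-average background update can be sketched as follows (pure NumPy; the paper's OpenCV-based pipeline adds optimal thresholding and morphological cleanup):

```python
import numpy as np

def detect_moving(frames, alpha=0.05, t_diff=20.0, t_bg=20.0):
    """For each new frame, flag pixels that changed since the previous
    frame (frame difference) AND differ from the background model
    (background subtraction); update the background only at non-target
    pixels with a running average."""
    bg = frames[0].astype(float)
    prev = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        fd = np.abs(f - prev) > t_diff
        bs = np.abs(f - bg) > t_bg
        mask = fd & bs                                         # complementary cues
        bg = np.where(mask, bg, (1 - alpha) * bg + alpha * f)  # adaptive update
        prev = f
        masks.append(mask)
    return masks

# A bright dot moving one pixel per frame over a static background.
frames = []
for col in (2, 3, 4):
    f = np.zeros((8, 8))
    f[2, col] = 255.0
    frames.append(f)
masks = detect_moving(frames)   # masks[1] flags the dot's newest position
```
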

  14. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series

    Science.gov (United States)

    Min, Min; Wu, Chunqiang; Li, Chuan; Liu, Hui; Xu, Na; Wu, Xiao; Chen, Lin; Wang, Fu; Sun, Fenglin; Qin, Danyu; Wang, Xi; Li, Bo; Zheng, Zhaojun; Cao, Guangzhen; Dong, Lixin

    2017-08-01

    Fengyun-4A (FY-4A), the first of the Chinese next-generation geostationary meteorological satellites, launched in 2016, offers several advances over FY-2: more spectral bands, faster imaging, and infrared hyperspectral measurements. To support the major objective of developing prototypes of FY-4 science algorithms, two science product algorithm testbeds, for imagers and sounders, have been developed by the scientists in the FY-4 Algorithm Working Group (AWG). Both testbeds, written in FORTRAN and C for Linux or UNIX systems, have been tested successfully using Intel/g compilers. Some important FY-4 science products, including cloud mask, cloud properties, and temperature profiles, have been retrieved successfully using a proxy imager, the Himawari-8/Advanced Himawari Imager (AHI), and sounder data obtained from the Atmospheric InfraRed Sounder, demonstrating their robustness. In addition, in early 2016 the FY-4 AWG developed, based on the imager testbed, a near real-time processing system for Himawari-8/AHI data for use by Chinese weather forecasters. Consequently, robust and flexible science product algorithm testbeds have provided essential and productive tools for popularizing FY-4 data and developing substantial improvements in FY-4 products.

  15. Developing Wide-Field Spatio-Spectral Interferometry for Far-Infrared Space Applications

    Science.gov (United States)

    Leisawitz, David; Bolcar, Matthew R.; Lyon, Richard G.; Maher, Stephen F.; Memarsadeghi, Nargess; Rinehart, Stephen A.; Sinukoff, Evan J.

    2012-01-01

    Interferometry is an affordable way to bring the benefits of high resolution to space far-IR astrophysics. We summarize an ongoing effort to develop and learn the practical limitations of an interferometric technique that will enable the acquisition of high-resolution far-IR integral field spectroscopic data with a single instrument in a future space-based interferometer. This technique was central to the Space Infrared Interferometric Telescope (SPIRIT) and Submillimeter Probe of the Evolution of Cosmic Structure (SPECS) space mission design concepts, and it will first be used on the Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII). Our experimental approach combines data from a laboratory optical interferometer (the Wide-field Imaging Interferometry Testbed, WIIT), computational optical system modeling, and spatio-spectral synthesis algorithm development. We summarize recent experimental results and future plans.

  16. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  17. Algorithm for removing scalp signals from functional near-infrared spectroscopy signals in real time using multidistance optodes.

    Science.gov (United States)

    Kiguchi, Masashi; Funane, Tsukasa

    2014-11-01

    A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependencies on the source-detector distance. These signals were separated using this characteristic. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were independently changed. The algorithm for measurement of oxygenated and deoxygenated hemoglobins using two wavelengths was explicitly obtained. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neuro-feedback systems.
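
    The separation idea, subtracting a scaled short-separation (scalp-dominated) channel from the long-separation channel, can be sketched on synthetic traces as follows (the paper derives the scaling factor from the source-detector distance dependence, whereas here it is simply given):

```python
import numpy as np

def remove_scalp(long_ch, short_ch, k):
    """Estimate the deep (brain) signal by subtracting the scaled
    short-separation channel, dominated by scalp blood flow, from the
    long-separation channel, which sees both layers."""
    return long_ch - k * short_ch

t = np.linspace(0.0, 10.0, 500)
brain = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # slow hemodynamic response
scalp = 0.3 * np.sin(2 * np.pi * 1.0 * t)   # cardiac-band interference
long_ch = brain + scalp                      # deep + shallow contributions
short_ch = scalp                             # (mostly) shallow only
cleaned = remove_scalp(long_ch, short_ch, k=1.0)   # recovers the brain trace
```

Because the operation is a single weighted subtraction per sample, it is cheap enough for the real-time uses the paper targets.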

  18. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, whereas nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made substantial progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: both for a fundamental understanding of the processes induced by the CNT photoresponse and for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and the top parylene packaging blocked environmental humidity. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precision linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, we employ compressive sensing algorithms for light-field sampling, 3-D imaging, and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of video streams, is extracted and

  19. Retinex enhancement of infrared images.

    Science.gov (United States)

    Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili

    2008-01-01

    With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has prevented applications, and one of the essential problems is the low contrast of the imaged object. In this paper, image enhancement based on Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, single-scale Retinex (SSR), multi-scale Retinex (MSR), and multi-scale Retinex with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that algorithms based on Retinex theory can enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
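
    Single-scale Retinex, the simplest of the variants listed, can be sketched as the log of the image minus the log of its smoothed surround (a box surround is used below for self-containment; classical SSR uses a Gaussian surround, and MSR averages several scales):

```python
import numpy as np

def box_surround(img, r):
    """Crude box blur via shifted copies (wrap-around at the borders)."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

def ssr(img, r=3, eps=1.0):
    """Single-scale Retinex: reflectance estimated by dividing out the
    slowly varying illumination, i.e. log(I) - log(surround(I))."""
    img = img.astype(float) + eps          # avoid log(0)
    return np.log(img) - np.log(box_surround(img, r))

flat = np.full((16, 16), 50.0)
out_flat = ssr(flat)            # uniform scene: response is zero everywhere

spot = np.full((16, 16), 10.0)
spot[8, 8] = 200.0
out_spot = ssr(spot)            # a local hot spot gets a positive response
```
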

  20. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of food stuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development, for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential use of GA method to improve speed and accuracy of fruit quality prediction.

  1. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    Science.gov (United States)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated, and the GMMs are trained on these moment features. In the second step, the moment feature of each ship image is fed to the trained GMMs for recognition. Because of the scale, rotation and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
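    The moment-feature extraction in step one rests on Hu's moment invariants. A minimal numpy computation of the first two invariants is shown below; in practice `cv2.HuMoments` and a GMM library such as scikit-learn's `GaussianMixture` would be used, and the silhouette here is made up:

```python
import numpy as np

def hu_first_two(image):
    """First two Hu moment invariants from normalized central moments."""
    image = image.astype(float)
    ys, xs = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    cx, cy = (xs * image).sum() / m00, (ys * image).sum() / m00

    def eta(p, q):  # normalized central moment (scale-invariant)
        mu = ((xs - cx) ** p * (ys - cy) ** q * image).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

# A binary "ship" silhouette and a translated copy yield identical invariants.
ship = np.zeros((64, 64))
ship[20:26, 10:40] = 1.0          # hull
ship[14:20, 22:28] = 1.0          # superstructure
shifted = np.roll(np.roll(ship, 9, axis=0), 12, axis=1)
print(hu_first_two(ship), hu_first_two(shifted))
```

    Translation invariance is exact here because central moments are taken about the centroid; scale and rotation invariance hold up to discretization error.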

  2. HOW MANY HIPPOS (HOMHIP): ALGORITHM FOR AUTOMATIC COUNTS OF ANIMALS WITH INFRA-RED THERMAL IMAGERY FROM UAV

    Directory of Open Access Journals (Sweden)

    S. Lhoest

    2015-08-01

    Full Text Available The common hippopotamus (Hippopotamus amphibius L.) is among the animal species endangered by multiple human pressures. Monitoring of the species for conservation is therefore essential, and the development of census protocols must be pursued. UAV technology is considered one of the new perspectives for wildlife surveys. Indeed, the technique has many advantages, but its main drawback is the huge amount of data generated, which must be handled. This study aims at developing an algorithm for the automatic counting of hippos by exploiting thermal infrared aerial images acquired from a UAV. This attempt is the first known for the automatic detection of this species. Images taken at several flight heights, ranging from 38 to 155 meters above ground level, can be used as inputs to the algorithm. A graphical user interface has been created in order to facilitate the use of the application. Three categories of animals have been defined according to their position in the water. The mean error of the automatic counts compared with manual delineations is +2.3%, which shows that the estimation is unbiased. These results show great promise for the use of the algorithm in population monitoring, after some technical improvements and the elaboration of statistically robust inventory protocols.
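    The core counting step, thresholding warm pixels and counting connected regions, can be sketched as follows on a synthetic scene; the real algorithm additionally handles the three water-position categories and the range of flight heights:

```python
import numpy as np
from scipy.ndimage import label

def count_warm_bodies(thermal, threshold):
    """Count connected warm regions in a thermal image (a toy stand-in
    for the paper's detector)."""
    mask = thermal > threshold
    _, n_regions = label(mask)
    return n_regions

# Synthetic 100 x 100 "river" scene at ~20 units with three warmer blobs.
scene = np.full((100, 100), 20.0)
for r, c in [(20, 20), (50, 70), (80, 30)]:
    scene[r:r + 5, c:c + 8] = 32.0
print(count_warm_bodies(scene, threshold=28.0))  # → 3
```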

  3. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    Science.gov (United States)

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, John L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM)-the Thermal Infrared Sensor-which may not be available during all OLI collections. This illustrates a need for cloud-cover assessment (CCA) for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data-one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  4. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Science.gov (United States)

    Zhang, Jin-Yu; Meng, Xiang-Bing; Xu, Wei; Zhang, Wei; Zhang, Yong

    2014-01-01

    This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with the differential evolution algorithm. The fitting-compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the fitting-compression performance under long time series and with the improved model was investigated; and the algorithm was validated by compressing and reconstructing a practical thermal image sequence. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method. PMID:24696649

  5. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Directory of Open Access Journals (Sweden)

    Jin-Yu Zhang

    2014-01-01

    Full Text Available This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with the differential evolution algorithm. The fitting-compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the fitting-compression performance under long time series and with the improved model was investigated; and the algorithm was validated by compressing and reconstructing a practical thermal image sequence. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.
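    The idea, fitting a double exponential decay to each pixel's time series with differential evolution and storing only the fitted parameters, can be sketched with SciPy. The model form, parameter values and bounds below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Assumed double-exponential cooling model: T(t) = a*exp(-b*t) + c*exp(-d*t)
def model(params, t):
    a, b, c, d = params
    return a * np.exp(-b * t) + c * np.exp(-d * t)

t = np.linspace(0.0, 5.0, 100)
observed = model([1.5, 2.0, 0.8, 0.3], t)   # noiseless pixel time series

def cost(params):
    return np.mean((model(params, t) - observed) ** 2)

# Differential evolution searches the bounded parameter space globally.
result = differential_evolution(
    cost,
    bounds=[(0, 3), (0.5, 5), (0, 3), (0.01, 1)],
    seed=1, tol=1e-10, maxiter=300,
)
print(result.x, cost(result.x))
```

    Storing the four fitted parameters instead of the 100 samples gives a 25:1 compression for this pixel; reconstruction is simply evaluating the model.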

  6. Comparison Spatial Pattern of Land Surface Temperature with Mono Window Algorithm and Split Window Algorithm: A Case Study in South Tangerang, Indonesia

    Science.gov (United States)

    Bunai, Tasya; Rokhmatuloh; Wibowo, Adi

    2018-05-01

    In this paper, two methods of retrieving the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono-window algorithm developed by Qin et al.; the second is the split-window algorithm of Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate retrieval algorithm by calculating the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature produced by both algorithms; the more accurate algorithm, by reference to the RMSE, is the split-window algorithm, with an RMSE of 7.69 °C.
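    For orientation, a generic quadratic split-window form looks like the following; the coefficients and input values are illustrative placeholders only, not the values calibrated by Rozenstein et al.:

```python
import numpy as np

# Generic quadratic split-window form:
#   LST = T10 + c1*dT + c2*dT^2 + c0 + (c3 + c4*w)*(1 - eps) + (c5 + c6*w)*d_eps
# with dT the band 10/11 brightness-temperature difference, eps the mean
# emissivity, d_eps the emissivity difference and w the water vapor content.
def split_window_lst(t10, t11, eps, d_eps, w, c):
    dt = t10 - t11
    return (t10 + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * w) * (1.0 - eps) + (c[5] + c[6] * w) * d_eps)

# Example: brightness temperatures in kelvin, band-mean emissivity 0.98.
t10 = np.array([[300.2, 301.0], [299.5, 302.3]])
t11 = np.array([[298.9, 299.6], [298.4, 300.8]])
c = (-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40)  # illustrative
lst = split_window_lst(t10, t11, eps=0.98, d_eps=0.002, w=2.0, c=c)
print(lst)
```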

  7. Infrared small target detection technology based on OpenCV

    Science.gov (United States)

    Liu, Lei; Huang, Zhijian

    2013-09-01

    Accurate and fast detection of dim infrared (IR) targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fusion of background estimation and frame differencing, and background construction by neighborhood averaging. On the foundation of this work, an infrared target detection software platform developed with OpenCV and MFC is introduced. Three kinds of tracking algorithms are integrated in this software, and its framework and functions are described to explain the software clearly. Finally, experiments are performed on real-life IR images. The complete algorithm implementation process and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results prove that the proposed method has satisfactory detection effectiveness and robustness, as well as high detection efficiency, and can be used for real-time detection.
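    The two-frame difference method at the head of that list is the simplest of these detectors; a minimal numpy sketch is shown below (on the described platform, OpenCV's `cv2.absdiff` followed by thresholding would do the same):

```python
import numpy as np

def two_frame_difference(prev, curr, threshold=20):
    """Flag pixels whose intensity changed by more than `threshold`
    between consecutive frames (the simplest moving-target detector)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > threshold

# A small bright target moves two pixels to the right between frames.
frame1 = np.zeros((32, 32), dtype=np.uint8)
frame2 = np.zeros((32, 32), dtype=np.uint8)
frame1[10:12, 10:12] = 200
frame2[10:12, 12:14] = 200
mask = two_frame_difference(frame1, frame2)
print(mask.sum())  # old and new positions both flagged: 8 pixels
```

    The three-frame variant intersects two such masks to suppress the "ghost" at the target's old position.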

  8. An Improved Mono-Window Algorithm for Land Surface Temperature Retrieval from Landsat 8 Thermal Infrared Sensor Data

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2015-04-01

    Full Text Available The successful launch of the Landsat 8 satellite with two thermal infrared bands on February 11, 2013, for continuous Earth observation provided another opportunity for remote sensing of land surface temperature (LST). However, calibration notices issued by the United States Geological Survey (USGS) indicated that data from the Landsat 8 Thermal Infrared Sensor (TIRS) Band 11 have large uncertainty and suggested using TIRS Band 10 data as a single spectral band for LST estimation. In this study, we present an improved mono-window (IMW) algorithm for LST retrieval from Landsat 8 TIRS Band 10 data. Three essential parameters (ground emissivity, atmospheric transmittance and effective mean atmospheric temperature) are required for the IMW algorithm to retrieve LST. A new method was proposed to estimate the effective mean atmospheric temperature from local meteorological data; the other two essential parameters could both be estimated through the so-called land cover approach. Sensitivity analysis conducted for the IMW algorithm revealed that possible error in estimating the required atmospheric water vapor content has the most significant impact on the probable LST estimation error. Under moderate errors in both water vapor content and ground emissivity, the algorithm had an accuracy of ~1.4 K for LST retrieval. Validation of the IMW algorithm using simulated datasets for various situations indicated that the difference between the retrieved and simulated LSTs was 0.67 K on average, with an RMSE of 0.43 K. Comparison of our IMW algorithm with the single-channel (SC) algorithm for three main atmosphere profiles indicated that the average error and RMSE of the IMW algorithm were −0.05 K and 0.84 K, respectively, less than the −2.86 K and 1.05 K of the SC algorithm. Application of the IMW algorithm to Nanjing and its vicinity in east China resulted in a reasonable LST estimation for the region. Spatial

  9. Implementation of intensity ratio change and line-of-sight rate change algorithms for imaging infrared trackers

    Science.gov (United States)

    Viau, C. R.

    2012-06-01

    The use of the intensity change and line-of-sight (LOS) change concepts has previously been documented in the open literature as techniques used by non-imaging infrared (IR) seekers to reject expendable IR countermeasures (IRCM). The purpose of this project was to implement IR counter-countermeasure (IRCCM) algorithms based on target intensity and kinematic behavior for a generic imaging IR (IIR) seeker model, with the underlying goal of obtaining a better understanding of how expendable IRCM can be used to defeat the latest generation of seekers. The report describes the Intensity Ratio Change (IRC) and LOS Rate Change (LRC) discrimination techniques. The algorithms and the seeker model are implemented in a physics-based simulation product called Tactical Engagement Simulation Software (TESS™). TESS is developed in the MATLAB®/Simulink® environment and is a suite of RF/IR missile software simulators used to evaluate and analyze the effectiveness of countermeasures against various classes of guided threats. The investigation evaluates the algorithms and tests their robustness by presenting the results of batch simulation runs of surface-to-air missile (SAM) and air-to-air missile (AAM) IIR seekers engaging a non-maneuvering target platform equipped with expendable IRCM as self-protection. The report discusses how varying critical parameters such as track memory time, ratio thresholds and hold time can influence the outcome of an engagement.

  10. Near infrared system coupled chemometric algorithms for enumeration of total fungi count in cocoa beans neat solution.

    Science.gov (United States)

    Kutsanedzie, Felix Y H; Chen, Quansheng; Hassan, Md Mehedi; Yang, Mingxiu; Sun, Hao; Rahman, Md Hafizur

    2018-02-01

    Total fungi count (TFC) is a quality indicator of cocoa beans that, when unmonitored, leads to quality and safety problems. Fourier transform near infrared spectroscopy (FT-NIRS) combined with chemometric algorithms such as partial least squares (PLS), synergy interval PLS (Si-PLS), synergy interval genetic algorithm PLS (Si-GAPLS), ant colony optimization PLS (ACO-PLS) and competitive adaptive reweighted sampling PLS (CARS-PLS) was employed to predict TFC in cocoa bean neat solution. Model results were evaluated using the correlation coefficients of prediction (Rp) and calibration (Rc), the root mean square error of prediction (RMSEP), and the ratio of the sample standard deviation to the RMSEP (RPD). The developed models yielded 0.951 ≤ Rp ≤ 0.975 and 3.15 ≤ RPD ≤ 4.32. The models' prediction stability improved in the order of PLS

  11. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.

    Science.gov (United States)

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-11

    Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA), as well as the lack of association between the selected variables and the analyte. The proposed method, an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, is applied to the quantitative prediction of alcohol concentration in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can overcome the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which can effectively reduce the number of variables and improve the prediction accuracy.

  12. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor

    Directory of Open Access Journals (Sweden)

    Fangfang Qu

    2016-01-01

    Full Text Available Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA), as well as the lack of association between the selected variables and the analyte. The proposed method, an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, is applied to the quantitative prediction of alcohol concentration in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can overcome the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which can effectively reduce the number of variables and improve the prediction accuracy.

  13. Development and application of a far infrared laser

    International Nuclear Information System (INIS)

    Nakayama, Kazuya; Okajima, Shigeki; Kawahata, Kazuo

    2011-01-01

    There is a 40-year history of applying infrared lasers as interference, polarization and scattering light sources in fusion plasma diagnostics, and they remain important light sources for ITER plasma diagnostics. In the present review, the authors recall the history of infrared laser development, especially of cw infrared lasers. In addition, the state-of-the-art technology for infrared lasers and infrared components, and its applications to plasma diagnostics, are discussed. (J.P.N.)

  14. Greenhouse gas profiling by infrared-laser and microwave occultation: retrieval algorithm and demonstration results from end-to-end simulations

    Directory of Open Access Journals (Sweden)

    V. Proschek

    2011-10-01

    Full Text Available Measuring greenhouse gas (GHG) profiles with global coverage and high accuracy and vertical resolution in the upper troposphere and lower stratosphere (UTLS) is key for improved monitoring of GHG concentrations in the free atmosphere. In this respect, a new satellite mission concept that adds an infrared-laser part to the already well-studied microwave occultation technique exploits the joint propagation of infrared-laser and microwave signals between Low Earth Orbit (LEO) satellites. This synergetic combination, referred to as the LEO-LEO microwave and infrared-laser occultation (LMIO) method, enables the retrieval of thermodynamic profiles (pressure, temperature, humidity) and accurate altitude levels from the microwave signals, and of GHG profiles from the simultaneously measured infrared-laser signals. However, due to the novelty of the LMIO method, a retrieval algorithm for GHG profiling was not yet available. Here we introduce such an algorithm for retrieving GHGs from LEO-LEO infrared-laser occultation (LIO) data, applied as a second step after retrieving thermodynamic profiles from LEO-LEO microwave occultation (LMO) data. We thoroughly describe the LIO retrieval algorithm and unveil the synergy with the LMO-retrieved pressure, temperature, and altitude information. We furthermore demonstrate the effective independence of the GHG retrieval results from background (a priori) information by discussing demonstration results from LMIO end-to-end simulations for a representative set of GHG profiles, including carbon dioxide (CO2), water vapor (H2O), methane (CH4), and ozone (O3). The GHGs except for ozone are well retrieved throughout the UTLS, while ozone is well retrieved from about 10 km to 15 km upwards, since the ozone layer resides in the lower stratosphere. The GHG retrieval errors are generally smaller than 1% to 3% r.m.s., at a vertical resolution of about 1 km. The retrieved profiles also appear unbiased, which points

  15. Development of Yellow Sand Image Products Using Infrared Brightness Temperature Difference Method

    Science.gov (United States)

    Ha, J.; Kim, J.; Kwak, M.; Ha, K.

    2007-12-01

    A technique for the detection of airborne yellow sand dust using meteorological satellites has been developed from various bands, from ultraviolet to infrared channels. Among them, infrared (IR) channels have the advantage of detecting aerosols over highly reflective surfaces as well as during nighttime. It had been suggested to use the brightness temperature difference (BTD) between 11 and 12 μm. We have found that this technique depends strongly on surface temperature, emissivity, and zenith angle, which change the appropriate BTD threshold. In order to overcome these problems, we constructed a background brightness temperature threshold of BTD, and an aerosol index (AI) was then determined by subtracting the background threshold from the BTD of the scene of interest. Along with this, we utilized the high temporal coverage of a geostationary satellite, MTSAT, to improve the reliability of the determined AI signal. The products have been evaluated by comparing the forecasted wind field with the movement field of the AI. The statistical score test illustrates that this newly developed algorithm produces promising results for detecting mineral dust, reducing the errors with respect to the current BTD method.
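    The AI computation described, scene BTD minus the background BTD threshold, reduces to a per-pixel subtraction; the brightness temperatures below are synthetic numbers for illustration only:

```python
import numpy as np

def aerosol_index(t11, t12, background_btd):
    # AI = scene BTD minus background BTD threshold, as in the record above;
    # dust depresses the 11-12 um BTD, so dusty pixels go negative.
    return (t11 - t12) - background_btd

t11 = np.array([285.0, 283.5])     # [clear pixel, dusty pixel], kelvin
t12 = np.array([283.8, 284.0])
background_btd = 1.2               # composite threshold (a scalar here;
                                   # per-pixel in the actual algorithm)
ai = aerosol_index(t11, t12, background_btd)
print(ai)  # clear pixel ≈ 0, dusty pixel clearly negative
```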

  16. Infrared sensing based sensitive skin

    Institute of Scientific and Technical Information of China (English)

    CAO Zheng-cai; FU Yi-li; WANG Shu-guo; JIN Bao

    2006-01-01

    The developed robotic sensitive skin is a modularized, flexible, miniature array of infrared sensors with data-processing capabilities, which can be used to cover the body of a robot. Relying on the infrared sensors and the peripheral processing circuitry, the sensitive skin provides robots with real-time existence and distance information about obstacles within the sensory areas. The methodology of designing the sensitive skin and the algorithm for fusing the mass of IR data are presented. Experimental results show that a multi-joint robot with this sensitive skin can work autonomously in an unknown environment.

  17. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate lower-order polynomial interferences, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline-correction constraints into the PLS weight selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of moisture is found to be 0.53% w/w (range 7-19%). The sugar content is predicted with an RMSECV of 2.04% w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background

    Science.gov (United States)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-05-01

    With the further development of research on infrared radiation under sea-sky backgrounds and the requirement of spinning projectile attitude measurement, the sea-sky infrared radiation field is used to determine the spinning projectile's attitude angles in place of inertial sensors. Firstly, the generation mechanism of sea-sky infrared radiation is analysed, and a mathematical model of sea-sky infrared radiation is derived for the long-wave (LW) infrared 8-14 μm band by calculating the sea surface and sky infrared radiation. Secondly, according to the movement characteristics of the spinning projectile, an attitude measurement model for infrared sensors on the projectile's three axes is established, and the feasibility of the model is analysed by simulation. Finally, a projectile attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. The results of semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angles is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.

  19. Development of Quantum Devices and Algorithms for Radiation Detection and Radiation Signal Processing

    International Nuclear Information System (INIS)

    El Tokhy, M.E.S.M.E.S.

    2012-01-01

    The main functions of a spectroscopy system are signal detection, filtering and amplification, pileup detection and recovery, dead time correction, amplitude analysis and energy spectrum analysis. Safeguards isotopic measurements require the best spectrometer systems, with excellent resolution, stability, efficiency and throughput. However, the resolution and throughput, which depend mainly on the detector, the amplifier and the analog-to-digital converter (ADC), can still be improved. These modules have been in continuous development and improvement. For this reason we are interested in both the development of quantum detectors and efficient algorithms for digital processing of the measurements. Therefore, the main objective of this thesis is concentrated on both (1) studying the behavior of quantum dot (QD) devices under gamma radiation and (2) developing efficient algorithms for handling problems of gamma-ray spectroscopy. For gamma radiation detection, a detailed study of nanotechnology QD sources and infrared photodetectors (QDIPs) for gamma radiation detection is introduced. There are two different types of quantum scintillator detectors, which dominate the area of ionizing radiation measurements: QD scintillator detectors and QDIP scintillator detectors. By comparison with traditional systems, quantum systems have less mass, require less volume, and consume less power. These factors are increasing the need for efficient detectors for gamma-ray applications such as gamma-ray spectroscopy. The potential of nanocomposite materials based on semiconductor quantum dots for radiation detection via scintillation has been demonstrated in the literature. Therefore, this thesis presents a theoretical analysis of the characteristics of QD sources and infrared photodetectors (QDIPs). A model of QD sources under incident gamma radiation is developed. A novel methodology is introduced to characterize the effect of gamma radiation on QD devices. The rate

  20. Femtosecond infrared spectroscopy: study, development and applications

    International Nuclear Information System (INIS)

    Bonvalet, Adeline

    1997-01-01

    This work has been devoted to the development and applications of a new technique of infrared (5-20 μm) spectroscopy allowing a temporal resolution of 100 fs. The technique relies on a source of ultrashort infrared pulses obtained by frequency mixing in a nonlinear material. In particular, the optical rectification of 12-fs visible pulses in gallium arsenide has allowed us to obtain 40-fs infrared pulses with a spectrum extending from 5 μm up to 15 μm. Spectral resolution has been achieved by Fourier transform spectroscopy, using a novel device we have called diffracting FTIR. These developments allow the study of inter-subband transitions in quantum-well structures. The inter-subband relaxation time has been measured by a pump-probe experiment, in which the sample was excited with a visible pulse and the variations of inter-subband absorption were probed with an infrared pulse. In addition, we have developed a method of coherent emission spectroscopy allowing us to monitor the electric field emitted by coherent charge oscillations in quantum wells. The decay of the oscillations due to the loss of coherence between excited levels yields a direct measurement of the dephasing time between these levels. Other applications include biological macromolecules such as the reaction centers of photosynthetic bacteria. We have shown that we are able to monitor variations of infrared absorption of about 10^-4 optical density with a temporal resolution of 100 fs. This constitutes a relevant tool for studying the role of molecular vibrations during the primary steps of biological processes. (author) [fr

  1. Visible-infrared micro-spectrometer based on a preaggregated silver nanoparticle monolayer film and an infrared sensor card

    Science.gov (United States)

    Yang, Tao; Peng, Jing-xiao; Ho, Ho-pui; Song, Chun-yuan; Huang, Xiao-li; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei

    2018-01-01

    By using a preaggregated silver nanoparticle monolayer film and an infrared sensor card, we demonstrate a miniature spectrometer design that covers a broad wavelength range from the visible to the infrared with high spectral resolution. The spectral content of an incident probe beam is reconstructed by solving a matrix equation with a smoothing simulated annealing algorithm. The proposed spectrometer offers significant advantages over current instruments based on Fourier transforms and grating dispersion in terms of size, resolution, spectral range, cost and reliability. The spectrometer contains three components, used for dispersion, frequency conversion and detection. Disordered silver nanoparticles in the dispersion component reduce the fabrication complexity. An infrared sensor card in the conversion component broadens the operational spectral range of the system into the visible and infrared bands. Since the CCD used in the detection component provides a very large number of intensity measurements, one can reconstruct the final spectrum with high resolution. As an additional feature of our algorithm for solving the matrix equation, which is suitable for reconstructing both broadband and narrowband signals, we have adopted a smoothing step based on simulated annealing. This improves the accuracy of the spectral reconstruction.
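    The reconstruction step, solving the matrix equation for the unknown spectrum, can be illustrated with a smoothness-regularized least-squares solve; here a Tikhonov second-difference penalty stands in for the paper's simulated-annealing smoothing, and the response matrix is random rather than measured:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy reconstruction of a spectrum x from detector readings y = A @ x,
# where A plays the role of the (calibrated) nanoparticle-film response.
n_channels, n_pixels = 50, 200
A = rng.random((n_pixels, n_channels))

wavelengths = np.linspace(0, 1, n_channels)
x_true = np.exp(-((wavelengths - 0.4) / 0.08) ** 2)   # smooth spectral peak
y = A @ x_true

# Augment the system with a weighted second-difference operator so the
# solver prefers smooth spectra (Tikhonov smoothing).
D = np.diff(np.eye(n_channels), n=2, axis=0)
lam = 1e-4
lhs = np.vstack([A, np.sqrt(lam) * D])
rhs = np.concatenate([y, np.zeros(D.shape[0])])
x_rec, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
print(np.max(np.abs(x_rec - x_true)))
```

    With many more CCD pixels than spectral channels, as the paper exploits, the system is overdetermined and the reconstruction is well conditioned.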

  2. Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia

    Science.gov (United States)

    Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve

    2014-01-01

    A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). The Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD that discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and cloud contamination. Therefore, the current algorithm shows improvements in detecting dust-loaded regions over land during daytime. Finally, the confidence index of the current dust algorithm is given for 10 × 10 pixel blocks of the MODIS observations. From January to June 2006, the results of the current algorithm agree within 64 to 81% with those found using the fine mode fraction (FMF) and aerosol index (AI) from MODIS and the Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over non-polluted land also ranges from 60 to 67%, avoiding errors due to anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.

  3. Near-infrared spectroscopy determined cerebral oxygenation with eliminated skin blood flow in young males

    DEFF Research Database (Denmark)

    Hirasawa, Ai; Kaneko, Takahito; Tanaka, Naoki

    2016-01-01

    We estimated cerebral oxygenation during handgrip exercise and a cognitive task using an algorithm that eliminates the influence of skin blood flow (SkBF) on the near-infrared spectroscopy (NIRS) signal. The algorithm involves a subtraction method to develop a correction factor for each subject. ...

  4. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
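    The wavelet-based fusion described, merging low-frequency content and selecting high-frequency detail between the two sensors, can be sketched with a hand-rolled one-level Haar transform. The fusion rule below (mean approximation, max-magnitude details) is one common choice; the report's exact rule is not specified in this abstract:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def haar_idwt2(a, h, v, d):
    """Exact inverse of haar_dwt2."""
    img = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a + h - v - d
    img[1::2, 0::2] = a - h + v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def fuse(img1, img2):
    """Average the approximations; keep the larger-magnitude detail coefficient."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    a = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(c1[1:], c2[1:])]
    return haar_idwt2(a, *details)

rng = np.random.default_rng(0)
img1, img2 = rng.random((16, 16)), rng.random((16, 16))
fused = fuse(img1, img2)
print(fused.shape)
```

    In practice a multi-level transform (e.g. via PyWavelets) is used so that spatial detail from the panchromatic image and spectral content from the multispectral image are preserved together, which is the property the report credits to the wavelet approach.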

  5. Stack emission monitoring using non-dispersive infrared spectroscopy with an optimized nonlinear absorption cross interference correction algorithm

    Directory of Open Access Journals (Sweden)

    Y. W. Sun

    2013-08-01

    Full Text Available In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) instruments to monitor stack emissions in situ. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption. Thus, interference coefficients among different gases can be modeled by the interference functions whether the gases exhibit linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for the validation of the proposed algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross-interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by about a factor of 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well in both the linear and nonlinear ranges.
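A minimal version of the polynomial interference-function idea, on synthetic absorbances: fit a third-order polynomial mapping the interfering gas's absorbance to its contribution on the CO2 channel, then subtract it. The coefficients and the `correct_co2` helper are illustrative; the actual instrument calibration is more involved.

```python
import numpy as np

# Synthetic interference of CO on the CO2 channel, modeled as a cubic.
co_abs = np.linspace(0.0, 1.0, 20)               # CO absorbance levels
true_interf = 0.05 * co_abs + 0.02 * co_abs**3   # synthetic interference signal
coeffs = np.polyfit(co_abs, true_interf, 3)      # fitted interference function

def correct_co2(raw_co2, co_abs_now):
    """Remove the fitted CO cross-interference from the CO2 channel."""
    return raw_co2 - np.polyval(coeffs, co_abs_now)
```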

  6. Research on infrared small-target tracking technology under complex background

    Science.gov (United States)

    Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao

    2012-10-01

    In this paper, some basic principles and implementation flow charts of a series of target tracking algorithms are described. On this foundation, moving-target tracking software based on OpenCV is developed on the MFC platform. Tracking algorithms integrated in this software include the Kalman filter tracking method and the Camshift tracking method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the tracking algorithms are evaluated from both subjective and objective aspects. This work is significant for the application of infrared target tracking technology.
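The Kalman tracking method named in the record can be sketched for a single target coordinate with a constant-velocity model; the process and measurement noise parameters `q` and `r` below are illustrative choices, not values from the paper.

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D position measurements."""
    x = np.array([measurements[0], 0.0])      # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity transition
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                   # innovation covariance
        K = P @ H.T / S                       # Kalman gain
        x = x + (K * (z - H @ x)).ravel()     # update with measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0]))
    return estimates

est = kalman_track([float(i) for i in range(10)])   # target moving at 1 px/frame
```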

  7. Noninvasive Biosensor Algorithms for Continuous Metabolic Rate Determination--SMS01302

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the final year of the project. During 2012 we completed the development of an algorithm for calculating VO2 during cycling using data from the Near Infrared...

  8. Selection of discriminant mid-infrared wavenumbers by combining a naïve Bayesian classifier and a genetic algorithm: Application to the evaluation of lignocellulosic biomass biodegradation.

    Science.gov (United States)

    Rammal, Abbas; Perrin, Eric; Vrabie, Valeriu; Assaf, Rabih; Fenniri, Hassan

    2017-07-01

    Infrared spectroscopy provides useful information on the molecular compositions of biological systems related to molecular vibrations, overtones, and combinations of fundamental vibrations. Mid-infrared (MIR) spectroscopy is sensitive to organic and mineral components and has attracted growing interest in the development of biomarkers related to intrinsic characteristics of lignocellulose biomass. However, not all spectral information is valuable for biomarker construction or for applying analysis methods such as classification. Better processing and interpretation can be achieved by identifying discriminating wavenumbers. The selection of wavenumbers has been addressed through several variable- or feature-selection methods. Some of them have not been adapted for use in large data sets or are difficult to tune, and others require additional information, such as concentrations. This paper proposes a new approach by combining a naïve Bayesian classifier with a genetic algorithm to identify discriminating spectral wavenumbers. The genetic algorithm uses a linear combination of an a posteriori probability and the Bayes error rate as the fitness function for optimization. Such a function allows the improvement of both the compactness and the separation of classes. This approach was tested to classify a small set of maize roots in soil according to their biodegradation process based on their MIR spectra. The results show that this optimization method allows better discrimination of the biodegradation process, compared with using the information of the entire MIR spectrum, the use of the spectral information at wavenumbers selected by a genetic algorithm based on a classical validity index, or the use of the spectral information selected by combining a genetic algorithm with other methods, such as Linear Discriminant Analysis. The proposed method selects wavenumbers that correspond to principal vibrations of chemical functional groups of compounds that undergo degradation.
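The GA-driven wavenumber selection can be caricatured as follows. The fitness function below is a deliberate stand-in for the record's combined a-posteriori-probability / Bayes-error criterion: here the informative wavenumbers are known in advance, so the sketch only demonstrates the select-and-mutate machinery, with all sizes and seeds chosen arbitrarily.

```python
import random

random.seed(0)
N_WAVENUMBERS, SUBSET = 50, 5
INFORMATIVE = {3, 17, 25, 31, 44}   # "ground truth" for this toy fitness

def fitness(subset):
    """Stand-in fitness: how many informative wavenumbers were picked."""
    return len(set(subset) & INFORMATIVE)

def mutate(subset):
    """Replace one randomly chosen wavenumber with a random one."""
    s = list(subset)
    s[random.randrange(SUBSET)] = random.randrange(N_WAVENUMBERS)
    return s

population = [[random.randrange(N_WAVENUMBERS) for _ in range(SUBSET)]
              for _ in range(30)]
for _ in range(200):                # elitist selection + mutation only
    population.sort(key=fitness, reverse=True)
    population = population[:15] + [mutate(p) for p in population[:15]]
best = max(population, key=fitness)
```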

  9. Forward looking anomaly detection via fusion of infrared and color imagery

    Science.gov (United States)

    Stone, K.; Keller, J. M.; Popescu, M.; Havens, T. C.; Ho, K. C.

    2010-04-01

    This paper develops algorithms for the detection of interesting and abnormal objects in color and infrared imagery taken from cameras mounted on a moving vehicle, observing a fixed scene. The primary purpose of detection is to cue a human-in-the-loop detection system. Algorithms for direct detection and change detection are investigated, as well as fusion of the two. Both methods use temporal information to reduce the number of false alarms. The direct detection algorithm uses image self-similarity computed between local neighborhoods to determine interesting, or unique, parts of an image. Neighborhood similarity is computed using Euclidean distance in CIELAB color space for the color imagery, and Euclidean distance between grey levels in the infrared imagery. The change detection algorithm uses the affine scale-invariant feature transform (ASIFT) to transform multiple background frames into the current image space. Each transformed image is then compared to the current image, and the multiple outputs are fused to produce a single difference image. Changes in lighting and contrast between the background run and the current run are adjusted for in both color and infrared imagery. Frame-to-frame motion is modeled using a perspective transformation, the parameters of which are computed using scale-invariant feature transform (SIFT) keypoint correspondences. This information is used to perform temporal accumulation of single frame detections for both the direct detection and change detection algorithms. Performance of the proposed algorithms is evaluated on multiple lanes from a data collection at a US Army test site.
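The self-similarity cue for direct detection can be sketched per pixel as the distance to the most similar other neighborhood in the frame: a pixel whose neighborhood has no close match anywhere else is "unique", hence interesting. The patch size, the plain grey-level distance and the synthetic scene are illustrative simplifications (the paper uses CIELAB distances for color imagery and temporal accumulation on top).

```python
import numpy as np

def uniqueness_map(img, patch=3):
    """Score each interior pixel by its distance to the closest other patch."""
    r = patch // 2
    h, w = img.shape
    coords = [(i, j) for i in range(r, h - r) for j in range(r, w - r)]
    patches = np.array([img[i - r:i + r + 1, j - r:j + r + 1].ravel()
                        for i, j in coords])
    score = np.zeros(len(coords))
    for k, p in enumerate(patches):
        d = np.linalg.norm(patches - p, axis=1)
        d[k] = np.inf                 # ignore the trivial self-match
        score[k] = d.min()            # distance to the best other match
    return coords, score

img = np.zeros((8, 8))
img[4, 4] = 10.0                      # a single anomalous bright pixel
coords, score = uniqueness_map(img)
```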

  10. Development and Testing of Infrared Water Current Meter | Ezenne ...

    African Journals Online (AJOL)

    Continuous monitoring of the river flow is essential for assessing water availability. River flow velocity is crucial to simulate discharge hydrographs of water in the hydrological system.This study developed a digital water current meter with infrared. The infrared current meter was tested using Ebonyi River at Obollo-Etiti and ...

  11. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    Science.gov (United States)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the used methods is given, besides an overview of the resulting data sets.

  12. A consensus successive projections algorithm--multiple linear regression method for analyzing near infrared spectra.

    Science.gov (United States)

    Liu, Ke; Chen, Xiaojing; Li, Limin; Chen, Huiling; Ruan, Xiukai; Liu, Wenbin

    2015-02-09

    The successive projections algorithm (SPA) is widely used to select variables for multiple linear regression (MLR) modeling. However, SPA used only once may not obtain all the useful information of the full spectra, because the number of selected variables cannot exceed the number of calibration samples in the SPA algorithm. Therefore, the SPA-MLR method risks the loss of useful information. To make full use of the useful information in the spectra, a new method named "consensus SPA-MLR" (C-SPA-MLR) is proposed herein. This method combines the consensus strategy with the SPA-MLR method. In the C-SPA-MLR method, SPA-MLR is used to construct member models with different subsets of variables, which are selected from the remaining variables iteratively. A consensus prediction is obtained by combining the predictions of the member models. The proposed method is evaluated by analyzing the near infrared (NIR) spectra of corn and diesel. The C-SPA-MLR method showed better prediction performance than the SPA-MLR and full-spectrum PLS methods. Moreover, these results could serve as a reference for combining the consensus strategy with other variable selection methods when analyzing NIR spectra and other spectroscopic techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
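The consensus step itself reduces to combining member-model predictions, for example by simple averaging; the member predictions below are made-up placeholders standing in for SPA-MLR sub-models built on different variable subsets.

```python
# Three member models (each trained on a different variable subset) predict
# the same three samples; the consensus prediction averages them column-wise.
member_predictions = [
    [10.2, 20.1, 29.8],   # member model 1 (variable subset A)
    [ 9.9, 19.8, 30.3],   # member model 2 (variable subset B)
    [10.1, 20.4, 30.1],   # member model 3 (variable subset C)
]

def consensus(preds):
    n = len(preds)
    return [sum(col) / n for col in zip(*preds)]

final = consensus(member_predictions)
```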

  13. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system separately, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system of infrared camera and visible CCD camera uses a common large-aperture reflection collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, and the image quality of the large-field-of-view collimator and the test accuracy are also improved. Its performance matches that of foreign counterparts at a much lower cost, so it has good market prospects.
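The multiple-frame averaging step is simple enough to verify numerically: averaging N frames of a static target suppresses zero-mean random noise by roughly sqrt(N). The frame count, size and noise level below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.full((32, 32), 100.0)                           # static target
frames = truth + rng.normal(0.0, 2.0, size=(64, 32, 32))   # 64 noisy frames
averaged = frames.mean(axis=0)                             # multiple-frame average

single_noise = (frames[0] - truth).std()   # noise of one raw frame
avg_noise = (averaged - truth).std()       # residual noise after averaging
```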

  14. Dual stacked partial least squares for analysis of near-infrared spectra

    Energy Technology Data Exchange (ETDEWEB)

    Bi, Yiming [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Xie, Qiong, E-mail: yimbi@163.com [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie [Institute of Automation, Chinese Academy of Sciences, 100190 Beijing (China); Zhao, Yuhui [School of Economics and Business, Northeastern University at Qinhuangdao, 066000 Qinhuangdao City (China); Li, Changwen [Food Research Institute of Tianjin Tasly Group, 300410 Tianjin (China)

    2013-08-20

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced so that only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as mid-infrared and Raman spectral data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm combines two stacking steps with Partial Least Squares (PLS), and is termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model provides more robust prediction results and can be considered an alternative choice for quantitative analytical applications.
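The stacking idea can be sketched (well short of the full DSPLS) by fitting one sub-model per spectral sub-interval and weighting sub-models by their calibration fit. PLS is replaced here by ordinary least squares purely for brevity, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 12))                 # 40 spectra, 12 "wavelengths"
beta = np.zeros(12)
beta[2], beta[7] = 1.5, -0.8                  # only two informative wavelengths
y = X @ beta + rng.normal(0.0, 0.05, 40)      # property to calibrate

# One least-squares sub-model per sub-interval of the spectrum.
intervals = [slice(0, 4), slice(4, 8), slice(8, 12)]
models, errors = [], []
for sl in intervals:
    coef, *_ = np.linalg.lstsq(X[:, sl], y, rcond=None)
    models.append((sl, coef))
    errors.append(np.mean((X[:, sl] @ coef - y) ** 2))

# Weight sub-models by inverse calibration error (better fit -> more weight).
weights = 1.0 / np.array(errors)
weights /= weights.sum()

def predict(x):
    """Stacked prediction: weighted sum of sub-model predictions."""
    return sum(w * (x[sl] @ coef) for w, (sl, coef) in zip(weights, models))
```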

  15. Dual stacked partial least squares for analysis of near-infrared spectra

    International Nuclear Information System (INIS)

    Bi, Yiming; Xie, Qiong; Peng, Silong; Tang, Liang; Hu, Yong; Tan, Jie; Zhao, Yuhui; Li, Changwen

    2013-01-01

    Graphical abstract: -- Highlights: •Dual stacking steps are used for multivariate calibration of near-infrared spectra. •A selective weighting strategy is introduced so that only a subset of all available sub-models is used for model fusion. •Using two public near-infrared datasets, the proposed method achieved competitive results. •The method can be widely applied in many fields, such as mid-infrared and Raman spectral data. -- Abstract: A new ensemble learning algorithm is presented for quantitative analysis of near-infrared spectra. The algorithm combines two stacking steps with Partial Least Squares (PLS), and is termed the Dual Stacked Partial Least Squares (DSPLS) algorithm. First, several sub-models were generated from the whole calibration set. The inner-stack step was implemented on sub-intervals of the spectrum. Then the outer-stack step was used to combine these sub-models. Several combination rules for the outer-stack step were analyzed for the proposed DSPLS algorithm. In addition, a novel selective weighting rule was introduced to select a subset of all available sub-models. Experiments on two public near-infrared datasets demonstrate that the proposed DSPLS with the selective weighting rule provided superior prediction performance and outperformed the conventional PLS algorithm. Compared with a single model, the new ensemble model provides more robust prediction results and can be considered an alternative choice for quantitative analytical applications.

  16. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a strong, simple block cipher that encrypts data in blocks of 64 bits. The key and S-box generation process in this algorithm requires considerable time and memory space, which makes the algorithm inconvenient for use in smart cards or in applications that require changing the secret key frequently. In this paper, a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified for use with the Blowfish algorithm. Test results show that the generation process requires relatively little time and a reasonably low amount of memory; this enhances the algorithm and opens the possibility for different usages.
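A toy sketch of keystream-driven S-box generation. The xorshift generator below is a stand-in for the SSS keystream (it is NOT the cipher from the record and is not cryptographically secure); it only illustrates the idea of expanding a secret key into deterministic, key-dependent 32-bit S-box words instead of running the full Blowfish setup.

```python
def keystream(seed):
    """Toy 32-bit xorshift keystream (illustrative stand-in, NOT secure)."""
    state = seed & 0xFFFFFFFF or 1
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def generate_sbox(key, size=256):
    """Fill a Blowfish-style S-box with key-dependent 32-bit words."""
    ks = keystream(key)
    return [next(ks) for _ in range(size)]

sbox = generate_sbox(0xDEADBEEF)
```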

  17. Infrared Astronomy Professional Development for K-12 Educators: WISE Telescope

    Science.gov (United States)

    Borders, Kareen; Mendez, B. M.

    2010-01-01

    K-12 educators need effective and relevant astronomy professional development. WISE Telescope (Wide-Field Infrared Survey Explorer) and Spitzer Space Telescope education programs provided an immersive teacher professional development workshop at Arecibo Observatory in Puerto Rico during the summer of 2009. As many common misconceptions involve scale and distance, teachers worked with Moon/Earth scale, solar system scale, and the distance of objects in the universe. Teachers built and used basic telescopes, learned about the history of telescopes, explored ground- and satellite-based telescopes, and explored and worked on models of the WISE Telescope. An in-depth explanation of the WISE and Spitzer telescopes gave participants background knowledge for infrared astronomy observations. We taught the electromagnetic spectrum through interactive stations. The stations included an overview via lecture and PowerPoint, the use of ultraviolet beads to determine ultraviolet exposure, the study of WISE lenticulars and diagramming of infrared data, listening to light by using speakers hooked up to photoreceptor cells, looking at visible light through diffraction glasses and diagramming the data, protocols for using astronomy-based research in the classroom, and infrared thermometers to compare environmental conditions around the observatory. An overview of LIDAR physics was followed up by a simulated LIDAR mapping of the topography of Mars. We will outline specific steps for K-12 infrared astronomy professional development, provide data demonstrating the impact of the above professional development on educator understanding and classroom use, and detail future plans for additional K-12 professional development. Funding was provided by WISE Telescope, Spitzer Space Telescope, Starbucks, Arecibo Observatory, the American Institute of Aeronautics and Astronautics, and the Washington Space Grant Consortium.

  18. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    When a colour image from one colour reproduction medium (e.g. nature, a monitor) needs to be reproduced on another (e.g. on a monitor or in print) and these media have different colour ranges (gamuts), it is necessary to have a method for mapping between them. If such a gamut mapping algorithm can be used under a wide range of conditions, it can also be incorporated in an automated colour reproduction system and considered to be in some sense universal. In terms of preliminary work, a colour reproduction system was implemented, for which a new printer characterisation model (including grey-scale correction) was developed. Methods were also developed for calculating gamut boundary descriptors and for calculating gamut boundaries along given lines from them. The gamut mapping solution proposed in this thesis is a gamut compression algorithm developed with the aim of being accurate and universally applicable. It was arrived at by way of an evolutionary gamut mapping development strategy for the purposes of which five test images were reproduced between a CRT and printed media obtained using an inkjet printer. Initially, a number of previously published algorithms were chosen and psychophysically evaluated whereby an important characteristic of this evaluation was that it also considered the performance of algorithms for individual colour regions within the test images used. New algorithms were then developed on their basis, subsequently evaluated and this process was repeated once more. In this series of experiments the new GCUSP algorithm, which consists of a chroma-dependent lightness compression followed by a compression towards the lightness of the reproduction cusp on the lightness axis, gave the most accurate and stable performance overall. The results of these experiments were also useful for improving the understanding of some gamut mapping factors - in particular gamut difference. In addition to looking at accuracy, the pleasantness of reproductions obtained

  19. DIDACTIC TOOLS FOR THE STUDENTS’ ALGORITHMIC THINKING DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    T. P. Pushkaryeva

    2017-01-01

    Full Text Available Introduction. Modern engineers must possess a high potential of cognitive abilities, in particular, algorithmic thinking (AT). In this regard, the training of future experts (university graduates of technical specialities) has to provide knowledge of the principles and ways of designing various algorithms, and the abilities to analyze them and to choose the most optimal variants for implementing engineering activity. For the full formation of AT skills it is necessary to consider all channels of psychological perception and cogitative processing of educational information: visual, auditory, and kinesthetic. The aim of the present research is the theoretical basis of the design, development and use of resources for the successful development of AT during training in programming. Methodology and research methods. The methodology of the research involves the basic theses of cognitive psychology and the information approach to organizing the educational process. The research used the following methods: analysis; modeling of cognitive processes; design of training tools that take into account the mentality and peculiarities of information perception; and diagnostics of the efficiency of the didactic tools. Results. A three-level model for training future engineers in programming, aimed at the development of AT skills, was developed. The model includes three components: aesthetic, simulative, and conceptual. Stages of mastering a new discipline are allocated. It is proved that for the development of AT skills when training in programming it is necessary to use kinesthetic tools at the stage of mental algorithmic map formation, and algorithmic animation and algorithmic mental maps at the stage of algorithmic model and conceptual image formation. Kinesthetic tools for the development of students’ AT skills when training in algorithmization and programming were designed. The use of kinesthetic training simulators in the educational process provides effective development of an algorithmic style of

  20. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm.

    Science.gov (United States)

    Yang, Dan; Xu, Bin; Rao, Kaiyou; Sheng, Weihua

    2018-01-24

    Indoor occupants' positions are significant for smart home service systems, which usually consist of robot service(s), appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans' position in indoor environments based on passive infrared (PIR) sensors using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map reflecting the visiting habits of the occupants is established through integral training with indoor environments and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization.
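A minimal A-star search on a 4-connected grid, with per-cell costs standing in for the accessibility map (lower cost for frequently visited areas, negative values for inaccessible cells); the grid and its costs are illustrative, not taken from the paper.

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a 4-connected grid; cells with negative cost are walls."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] >= 0:
                ng = g + grid[nr][nc]
                heapq.heappush(open_set,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[1, 1, 1],
        [-1, -1, 1],   # -1 marks inaccessible cells
        [1, 1, 1]]
path = a_star(grid, (0, 0), (2, 0))   # must route around the blocked row
```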

  1. Passive Infrared (PIR)-Based Indoor Position Tracking for Smart Homes Using Accessibility Maps and A-Star Algorithm

    Directory of Open Access Journals (Sweden)

    Dan Yang

    2018-01-01

    Full Text Available Indoor occupants’ positions are significant for smart home service systems, which usually consist of robot service(s), appliance control and other intelligent applications. In this paper, an innovative localization method is proposed for tracking humans’ position in indoor environments based on passive infrared (PIR) sensors using an accessibility map and an A-star algorithm, aiming at providing intelligent services. First, the accessibility map reflecting the visiting habits of the occupants is established through integral training with indoor environments and other prior knowledge. Then the PIR sensors, whose placement depends on the training results in the accessibility map, provide rough location information. For more precise positioning, the A-star algorithm is used to refine the localization, fused with the accessibility map and the PIR sensor data. Experiments were conducted in a mock apartment testbed. The ground truth data was obtained from an Opti-track system. The results demonstrate that the proposed method is able to track persons in a smart home environment and provide a solution for home robot localization.

  2. Pattern recognition applied to infrared images for early alerts in fog

    Science.gov (United States)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog conditions cause severe car accidents in western countries because of the poor visibility they induce. Fog formation and intensity are still very difficult for weather services to predict. Infrared cameras can detect and identify objects in fog when visibility is too low for the human eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. On the other hand, pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema has been developed to study the benefit of infrared images obtained in a fog tunnel during its natural dissipation. Pattern recognition algorithms have been applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangle for an alert, …). Road signs were detected earlier in infrared images than in visible-spectrum images, early enough to trigger useful alerts for Advanced Driver Assistance Systems.

  3. Critical function monitoring system algorithm development

    International Nuclear Information System (INIS)

    Harmon, D.L.

    1984-01-01

    Accurate critical function status information is a key to operator decision-making during events threatening nuclear power plant safety. The Critical Function Monitoring System provides continuous critical function status monitoring by use of algorithms which mathematically represent the processes by which an operating staff would determine critical function status. This paper discusses in detail the systematic design methodology employed to develop adequate Critical Function Monitoring System algorithms

  4. Joint de-blurring and nonuniformity correction method for infrared microscopy imaging

    Science.gov (United States)

    Jara, Anselmo; Torres, Sergio; Machuca, Guillermo; Ramírez, Wagner; Gutiérrez, Pablo A.; Viafora, Laura A.; Godoy, Sebastián E.; Vera, Esteban

    2018-05-01

    In this work, we present a new technique to simultaneously reduce two major degradation artifacts found in mid-wavelength infrared microscopy imagery, namely the inherent focal-plane array nonuniformity noise and the scene defocus presented due to the point spread function of the infrared microscope. We correct both nuisances using a novel, recursive method that combines the constant range nonuniformity correction algorithm with a frame-by-frame deconvolution approach. The ability of the method to jointly compensate for both nonuniformity noise and blur is demonstrated using two different real mid-wavelength infrared microscopic video sequences, which were captured from two microscopic living organisms using a Janos-Sofradir mid-wavelength infrared microscopy setup. The performance of the proposed method is assessed on real and simulated infrared data by computing the root mean-square error and the roughness-laplacian pattern index, which was specifically developed for the present work.
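The constant-range nonuniformity correction component can be sketched on synthetic data: if every detector pixel eventually sees the same scene range, per-pixel minima and maxima over the sequence yield each pixel's gain and offset relative to a common target range. The simulated gains, offsets and scene are arbitrary, and the deconvolution half of the joint method is omitted here.

```python
import numpy as np

rng = np.random.default_rng(7)
gain = rng.uniform(0.8, 1.2, size=(16, 16))        # fixed-pattern gain
offset = rng.uniform(-5.0, 5.0, size=(16, 16))     # fixed-pattern offset
scene = rng.uniform(0.0, 100.0, size=(500, 1, 1))  # flat frames spanning the range
frames = gain * scene + offset                     # observed nonuniform sequence

# Constant-range assumption: per-pixel min/max correspond to the same
# global scene extremes, so rescaling each pixel removes gain and offset.
lo = frames.min(axis=0)
hi = frames.max(axis=0)
corrected = (frames - lo) / (hi - lo) * 100.0      # map every pixel to [0, 100]
```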

  5. Development of non-destructive testing system of shoes for infrared rays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Yeol; Park, Chang Sun; Oh, Ki Jang; Ma, Sang Dong; Kim, Bong Jae [Chosun Univesity, Kwangju (Korea, Republic of); Yang, Dong Jo [Research Institute of Industrial Science and Technology, Pohang (Korea, Republic of)

    2001-05-15

    Diagnosis and measurement using infrared thermo-images have not previously been available for this application. A quick diagnosis and thermal analysis become possible when such a system is introduced to the inspection of each part. In this study, the Thermo-vision 900 infrared camera from AGEMA was used for the investigation. An infrared camera detects only the infrared radiation emitted from an object in order to illustrate its temperature distribution. Infrared diagnosis systems can be applied to various fields; in a dedicated total inspection system for special shoes, defect discrimination can be automated and mechanized, which makes the development and composition of the total shoe inspection system more effective. This study introduces a method for the nondestructive total inspection of special shoes, and the performance of the proposed method is demonstrated through thermo-images.

  6. Dispersive infrared spectroscopy measurements of atmospheric CO2 using a Fabry–Pérot interferometer sensor

    International Nuclear Information System (INIS)

    Chan, K.L.; Ning, Z.; Westerdahl, D.; Wong, K.C.; Sun, Y.W.; Hartl, A.; Wenig, M.O.

    2014-01-01

    In this paper, we present the first dispersive infrared spectroscopic (DIRS) measurement of atmospheric carbon dioxide (CO2) using a new scanning Fabry–Pérot interferometer (FPI) sensor. The sensor measures optical spectra in the mid-infrared (3900 nm to 5220 nm) wavelength range with a full width at half maximum (FWHM) spectral resolution of 78.8 nm at the CO2 absorption band (∼ 4280 nm) and a sampling resolution of 20 nm. The CO2 concentration is determined from the measured optical absorption spectra by fitting them to the CO2 reference spectrum. Interference from other major absorbers in the same wavelength range, e.g., carbon monoxide (CO) and water vapor (H2O), is removed by including their reference spectra in the fit as well. Detailed descriptions of the instrumental setup, the retrieval procedure, a modeling study for error analysis, and laboratory validation using standard gas concentrations are presented. An iterative algorithm to account for the non-linear response of the fit function to the absorption cross sections due to the broad instrument function was developed and tested. A modeling study of the retrieval algorithm showed that errors due to instrument noise can be considerably reduced by using the dispersive spectral information in the retrieval. The mean measurement error of the prototype DIRS CO2 measurement for 1-minute averaged data is about ± 2.5 ppmv, and down to ± 0.8 ppmv for 10-minute averaged data. A field test of atmospheric CO2 measurements was carried out at an urban site in Hong Kong for a month and compared to a commercial non-dispersive infrared (NDIR) CO2 analyzer. 10-minute averaged data shows good agreement between the DIRS and NDIR measurements, with a Pearson correlation coefficient (R) of 0.99. This new method offers an alternative approach to atmospheric CO2 measurement featuring high accuracy, correction of non-linear absorption and interference of water vapor. - Highlights: • Dispersive infrared
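The linear core of such a retrieval is a least-squares fit of the measured spectrum against reference spectra of the absorbers. The Gaussian "reference spectra" below are synthetic stand-ins on an arbitrary wavelength grid, and the record's iterative non-linearity correction is omitted.

```python
import numpy as np

wl = np.linspace(3900.0, 5220.0, 200)                  # wavelength grid, nm
ref_co2 = np.exp(-0.5 * ((wl - 4280.0) / 60.0) ** 2)   # synthetic CO2 reference
ref_co  = np.exp(-0.5 * ((wl - 4650.0) / 50.0) ** 2)   # synthetic CO reference
ref_h2o = np.exp(-0.5 * ((wl - 5100.0) / 80.0) ** 2)   # synthetic H2O reference
A = np.column_stack([ref_co2, ref_co, ref_h2o])

# Simulated measurement: known mixture of the three absorbers plus noise.
true_coeffs = np.array([0.8, 0.3, 0.1])
measured = A @ true_coeffs + 0.001 * np.random.default_rng(0).normal(size=wl.size)

# Retrieval: solve for the absorber coefficients by least squares.
retrieved, *_ = np.linalg.lstsq(A, measured, rcond=None)
```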

  7. Comparison of image deconvolution algorithms on simulated and laboratory infrared images

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, D. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
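    Of the methods compared above, the (accelerated) Lucy-Richardson scheme is the easiest to sketch. Below is a minimal 1-D Richardson-Lucy iteration in NumPy — not the paper's implementation, and without the acceleration step; the PSF and point-source scene are invented for illustration.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100, eps=1e-12):
    """Minimal 1-D Richardson-Lucy deconvolution (multiplicative update)."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)          # data / model
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a point source with a normalized 5-tap PSF, then recover it
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(21)
truth[10] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    For noiseless data the iteration re-concentrates the blurred flux back onto the point source; with noisy data the iteration count acts as the regularizer.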

  8. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    Full Text Available The purpose of this paper is to report on experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It describes an algorithm that is the result of two years of research and has had more than a year of practical verification. When testing electronic components, the moment the contract is concluded is the point at which the greatest managerial mistakes can be made: at this stage it is difficult to obtain a realistic estimate of the time limit and the wage fund for the future work. Creating an estimating model is one way to solve this problem, and the article presents an algorithm for building such models, illustrated by the development of an analytical model for estimating the amount of work. The paper lists the algorithm's stages and explains their meaning in terms of the participants' goals. Implementing the algorithm has made it possible to develop these models twice as fast while fulfilling management's requirements, and the resulting models have produced a significant economic effect. A new set of tasks was identified for further theoretical study.

  9. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
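    The matching idea can be sketched in a few lines of NumPy: compare the sample spectrum with each candidate reference spectrum on an analyte-free spectral window, pick the closest by summed squared difference, and subtract it. The spectra, window, and distance metric below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def correct_background(sample, references, window):
    """Pick the reference spectrum closest to `sample` on `window`
    (point-to-point squared-difference match) and subtract it."""
    seg = sample[window]
    dists = [np.sum((ref[window] - seg) ** 2) for ref in references]
    best = int(np.argmin(dists))
    return sample - references[best], best

n = 200                                   # spectral points
refs = [np.sin(np.linspace(0, 3, n)) * (0.5 + 0.1 * k) for k in range(10)]
analyte = np.zeros(n)
analyte[80:90] = 0.3                      # analyte absorption band
sample = refs[4] + analyte                # eluent background + analyte
window = np.r_[0:60, 120:200]             # analyte-free comparison region
corrected, idx = correct_background(sample, refs, window)
```

    Because the comparison window excludes the analyte band, the correct eluent spectrum wins the match and the subtraction leaves only the analyte signal.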

  10. Development of infrared spectroscopy techniques for environmental monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sandsten, Jonas

    2000-08-01

    Infrared spectroscopy techniques have long been utilized to identify and quantify species of interest to us. Many of the elementary molecules in the atmosphere interact with infrared radiation through their ability to absorb and emit energy in vibrational and rotational transitions. A large variety of methods are available for monitoring molecules and aerosol particles, either by collecting samples or by remote sensing. The objective of the work presented in this thesis was to develop infrared spectroscopic techniques that further enhance the amount of useful information obtained from spectral data. A new method for visualization and quantification of gas flows based on gas-correlation techniques was developed, and real-time imaging of gas leaks and of incomplete or erratic flare combustion of ethene was demonstrated. The method relies on the thermal background as a radiation source, and the gas can be visualized in absorption or in emission depending on the temperature difference. Diode laser spectroscopy was utilized to monitor three molecular species at the same time and over the same path: two near-infrared diode laser beams were combined in a periodically poled lithium niobate crystal, and by difference-frequency generation a third beam was created, enabling simultaneous monitoring of oxygen, water vapor and methane. Models of aerosol particle cross sections were used to simulate the diffraction pattern of light scattered by fibers, spherical particles and real particles, such as pollen, in a new aerosol particle sensing prototype. The instrument, using a coupled-cavity diode laser, was designed with a ray-tracing program, and the final prototype was employed for single aerosol particle sizing and identification.

  11. Development of Educational Support System for Algorithm using Flowchart

    Science.gov (United States)

    Ohchi, Masashi; Aoki, Noriyuki; Furukawa, Tatsuya; Takayama, Kanta

    Information technology is now indispensable for business and industrial development. However, the shortage of software developers has become a social problem. To solve this problem, it is necessary to develop and deploy an environment for learning algorithms and programming languages. In this paper, we describe an algorithm study support system for programmers based on flowcharts. Since the proposed system uses a Graphical User Interface (GUI), it becomes easy for a programmer to understand the algorithm in a program.

  12. Small-target leak detection for a closed vessel via infrared image sequences

    Science.gov (United States)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

    This paper focuses on a leak diagnosis and localization method based on infrared image sequences. It addresses the high probability of false warnings and the negative effect of marginal information in leak detection. An experimental model is established for leak diagnosis and localization on infrared image sequences. Differential background prediction based on a kernel regression method is presented to eliminate the negative effect of marginal information from the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false leak-point warnings. A synthesized leak diagnosis and localization algorithm is then proposed based on infrared image sequences. Experimental results show the effectiveness and potential of the developed techniques.
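    The kernel-regression background prediction and layered-voting pipeline filter are specific to the paper, but a heavily simplified stand-in — a mean/sigma background model plus temporal vote counting — still illustrates how voting suppresses one-frame false alarms. Frame sizes, thresholds, and vote counts below are arbitrary.

```python
import numpy as np

def leak_mask(frames, bg_frames=10, k=3.0, min_votes=4):
    """Build a per-pixel background model from the first `bg_frames`
    frames; declare a leak only where the k-sigma threshold is exceeded
    in at least `min_votes` of the remaining frames (temporal voting)."""
    bg = frames[:bg_frames].mean(axis=0)
    sigma = frames[:bg_frames].std(axis=0) + 1e-3
    hits = np.abs(frames[bg_frames:] - bg) > k * sigma
    return hits.sum(axis=0) >= min_votes

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.1, size=(16, 20, 20))   # 16 synthetic IR frames
frames[10:, 5, 5] += 1.0      # persistent leak plume at pixel (5, 5)
frames[12, 2, 2] += 1.0       # one-frame glint: should be voted out
mask = leak_mask(frames)
```

    The persistent anomaly survives the vote; the single-frame glint does not.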

  13. The ship-borne infrared searching and tracking system based on the inertial platform

    Science.gov (United States)

    Li, Yan; Zhang, Haibo

    2011-08-01

    In modern electronic warfare, when the radar system is jammed or forced into a half-silent state, guidance precision drops badly, so equipment that depends on electronic guidance may fail to strike incoming targets accurately. Electro-optical devices are needed to make up for this shortcoming. However, when interference occurs during radar guidance, and especially when the electro-optical equipment is disturbed by roll, pitch and yaw rotation, the target can remain outside the field of view of the electro-optical devices for a long time. The infrared electro-optical equipment then cannot exploit its advantages, and the weapon-control system cannot "reverse-slave" a missile against the incoming target. A conventional ship-borne infrared system is therefore unable to track an incoming target quickly, and its electro-optical countermeasure capability declines heavily. Here we provide a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is working well in our XX infrared electro-optical searching and tracking system. The algorithm consists of two main steps. First, the manual mode switches to auto-search when the guidance deviation exceeds the current scene during radar guidance. Second, when the contrast of the target in the search scene satisfies the image pick-up threshold, the speed computed with a constant-acceleration (CA) model least-squares method is fed back to the speed loop, and the infrared information is then combined to close the tracking loop of the infrared electro-optical system. The algorithm was verified experimentally: with the algorithm, the target capture distance is 22.3 kilometers under large guidance deviation, whereas without it the capture distance declines to 12 kilometers. The algorithm improves the infrared electro-optical countermeasure capability and reduces the target capturing
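    The "CA model Least Square Method" step — estimating target rate from sampled positions under a constant-acceleration model — can be sketched as an ordinary least-squares polynomial fit; the sample data below are synthetic.

```python
import numpy as np

def ca_velocity(times, positions):
    """Least-squares fit of a constant-acceleration (CA) model
    p(t) = p0 + v*t + 0.5*a*t**2; returns the velocity estimate v
    (the quantity fed back to the tracker's speed loop)."""
    A = np.column_stack([np.ones_like(times), times, 0.5 * times**2])
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coeffs[1]

t = np.linspace(0.0, 1.0, 11)                  # 11 samples over 1 s
p = 2.0 + 3.0 * t + 0.5 * 1.5 * t**2           # true v = 3.0, a = 1.5
v = ca_velocity(t, p)
```

    With noisy angle or position measurements the same fit averages the noise down while remaining unbiased under the CA motion assumption.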

  14. Signal filtering algorithm for depth-selective diffuse optical topography

    International Nuclear Information System (INIS)

    Fujii, M; Nakayama, K

    2009-01-01

    A compact filtered backprojection algorithm that suppresses the undesirable effects of skin circulation for near-infrared diffuse optical topography is proposed. Our approach centers around a depth-selective filtering algorithm that uses an inverse problem technique and extracts target signals from observation data contaminated by noise from a shallow region. The filtering algorithm is reduced to a compact matrix and is therefore easily incorporated into a real-time system. To demonstrate the validity of this method, we developed a demonstration prototype for depth-selective diffuse optical topography and performed both computer simulations and phantom experiments. The results show that the proposed method significantly suppresses the noise from the shallow region with a minimal degradation of the target signal.
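    The "compact matrix" idea — collapsing the depth-selective inverse filter into a single matrix applied to each observation vector — can be sketched with a toy two-layer sensitivity model and a Tikhonov-regularized pseudoinverse. The sensitivity values and regularization weight are invented; the paper's actual filter is derived from its own forward model.

```python
import numpy as np

# Toy sensitivity matrix: rows = source-detector channels, columns =
# [shallow (skin) layer, deep (target) layer] absorption changes.
J = np.array([[1.0, 0.2],
              [0.9, 0.5],
              [0.7, 0.9]])
lam = 1e-3

# Tikhonov-regularized pseudoinverse; keeping only the deep-layer row
# yields a compact filter applied per time sample in real time.
F = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T)[1]

x_true = np.array([0.8, 0.3])   # large shallow "noise", small deep target
y = J @ x_true                  # simulated channel observations
deep_estimate = F @ y           # one matrix-vector product per sample
```

    The deep-layer estimate is recovered despite the much larger shallow-layer signal, which is the essence of the depth-selective suppression described above.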

  15. Development of infrared heating technology for tomato peeling

    Science.gov (United States)

    The commercial lye and steam peeling methods used in tomato processing industry are water- and energy-intensive and have a negative impact on the environment. To develop alternative peeling methods, we conducted comprehensive studies of using infrared (IR) heating for tomato peeling. The three major...

  16. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). 4. Lightning jump algorithm configurations were developed (2σ, 3σ, Threshold 10 and Threshold 8). 5. The algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2σ algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59% and an HSS of 75%.
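    A minimal sketch of the 2σ configuration: flag a jump when the flash-rate tendency (DFRDT) exceeds the mean plus twice the standard deviation of the recent tendencies. The 5-sample history and the 10 flashes-per-period activation threshold are assumptions drawn from the general lightning-jump literature, not details given above.

```python
import numpy as np

def two_sigma_jumps(flash_rates, dt=2.0, history=5, activation=10.0):
    """Flag a lightning jump when DFRDT (flash-rate tendency) exceeds
    mu + 2*sigma of the preceding `history` tendencies, provided the
    storm is active enough (flash rate >= `activation`)."""
    dfrdt = np.diff(flash_rates) / dt
    jumps = []
    for i in range(history, len(dfrdt)):
        mu = dfrdt[i - history:i].mean()
        sigma = dfrdt[i - history:i].std()
        if dfrdt[i] > mu + 2.0 * sigma and flash_rates[i + 1] >= activation:
            jumps.append(i + 1)
    return jumps

rates = [8, 9, 8, 10, 9, 10, 9, 30, 31, 32]   # flashes per 2-min period
jumps = two_sigma_jumps(rates)
```

    The steady 8-10 flash background never trips the threshold; the sudden rise to 30 does.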

  17. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images is of great significance for military and medical applications, but nighttime video images are of such poor quality that the target and background cannot be recognized. We therefore enhance nighttime video by fusing infrared and visible video images. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is deduced from the improved SIFT algorithm and used to rapidly register the heterologous images, while the αβ-weighted algorithm can be applied to any scene. In the video fusion system, the transfer matrix registers every frame and the αβ-weighted method then fuses every frame, meeting the real-time requirement of video. The fused video not only retains the clear target information of the infrared video but also retains the detail and color information of the visible video, and it plays back smoothly.
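    Registration aside, the αβ-weighted fusion step amounts to a per-pixel convex combination of the registered frames. A minimal sketch (the weights are illustrative; how the paper selects α and β is not specified above):

```python
import numpy as np

def alpha_beta_fuse(ir, vis, alpha=0.6):
    """Pixel-wise weighted fusion of registered IR and visible frames:
    fused = alpha*ir + beta*vis with beta = 1 - alpha."""
    beta = 1.0 - alpha
    fused = alpha * ir.astype(float) + beta * vis.astype(float)
    return np.clip(fused, 0, 255).astype(np.uint8)

ir = np.full((4, 4), 200, dtype=np.uint8)    # bright IR target region
vis = np.full((4, 4), 50, dtype=np.uint8)    # dim visible detail
fused = alpha_beta_fuse(ir, vis, alpha=0.6)
```

    Applying the same weighting frame by frame (after warping each visible frame with the SIFT-derived transfer matrix) gives the real-time video fusion described above.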

  18. Development of a Novel Locomotion Algorithm for Snake Robot

    International Nuclear Information System (INIS)

    Khan, Raisuddin; Billah, Md Masum; Watanabe, Mitsuru; Shafie, A A

    2013-01-01

    A novel algorithm for snake robot locomotion is developed and analyzed in this paper. Serpentine locomotion is one of the renowned gaits for snake robots in disaster recovery missions for negotiating narrow spaces. Several gaits for snake navigation, such as concertina or rectilinear, may be suitable for narrow spaces, but they are highly inefficient if the same gait is also used in open spaces, where the reduced friction makes snake movement difficult. A novel locomotion algorithm is proposed based on a modification of the multi-link snake robot; the modifications include alterations to the snake segments as well as elements that mimic the scales on the underside of a snake's body. Using the developed locomotion algorithm, the snake robot is able to navigate in narrow spaces. The developed algorithm surmounts the limitations of other gaits in narrow-space navigation.

  19. HIGH-EFFICIENCY INFRARED RECEIVER

    Directory of Open Access Journals (Sweden)

    A. K. Esman

    2016-01-01

    Full Text Available Recent research and development show promising uses of high-performance solid-state receivers of electromagnetic radiation based on low-barrier Schottky diodes. The design of receivers based on delta-doped low-barrier Schottky diodes with beam leads and no bias is developing especially actively, because for uncooled microwave receivers these diodes have virtually no competition. The purpose of this work is to improve the main parameters and characteristics that determine the practical relevance of mid-infrared receivers operating at room temperature by modifying the configuration of the diode electrodes and optimizing the distance between them. The proposed original design of an integrated mid-infrared receiver based on low-barrier Schottky diodes with beam leads allows its main parameters and characteristics to be tuned effectively. The electromagnetic characteristics of the proposed receiver were simulated with the HFSS software package, whose underlying finite element method computes the behavior of electromagnetic fields on an arbitrary geometry with predetermined material properties. The simulations show that when the inner parts of the electrodes of the low-barrier Schottky diode are given a concentric elliptical convex-concave shape, the reflection losses can be reduced to -57.75 dB and the standing wave ratio to 1.003, while the directivity increases up to 23 at a wavelength of 6.09 μm. In this case, the rounding radii of the inner parts of the anode and cathode electrodes are 212 nm and 318 nm, respectively, and the gap between them is 106 nm. These parameters will improve the efficiency of promising infrared optical and electronic equipment intended for various purposes in the mid-infrared wavelength range.
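    The quoted reflection loss and standing wave ratio are mutually consistent via the standard relation SWR = (1 + |Γ|)/(1 − |Γ|) with |Γ| = 10^(−RL/20), which is easy to verify:

```python
import math

def swr_from_return_loss(rl_db):
    """Standing wave ratio from the reflection (return) loss in dB."""
    gamma = 10.0 ** (-abs(rl_db) / 20.0)   # |reflection coefficient|
    return (1.0 + gamma) / (1.0 - gamma)

swr = swr_from_return_loss(57.75)          # the -57.75 dB quoted above
```

    A -57.75 dB reflection corresponds to |Γ| ≈ 0.0013, i.e. an SWR of about 1.003, matching the figure reported in the abstract.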

  20. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in the task of gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional ViBe algorithm, we propose a fast foreground model and optimize the results by combining a connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
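    For readers unfamiliar with ViBe, a single-pixel toy version of its sample-based test (classify against stored samples, then conservatively and randomly update) looks like this. The parameters follow commonly cited ViBe defaults (20 samples, matching radius 20, 2 required matches), which are assumptions here, not values from the paper.

```python
import random

class VibePixel:
    """Simplified single-pixel ViBe background model: a value is
    background if it matches at least `min_matches` of the N stored
    samples within `radius`; background pixels update the model with
    probability 1/subsample (conservative, memoryless update)."""
    def __init__(self, value, n=20, radius=20, min_matches=2):
        self.samples = [value] * n
        self.radius, self.min_matches = radius, min_matches

    def classify_and_update(self, value, subsample=16):
        matches = sum(abs(value - s) <= self.radius for s in self.samples)
        is_background = matches >= self.min_matches
        if is_background and random.randrange(subsample) == 0:
            self.samples[random.randrange(len(self.samples))] = value
        return is_background

random.seed(1)
px = VibePixel(100)
is_bg_near = px.classify_and_update(105)   # close to stored samples
is_bg_far = px.classify_and_update(180)    # large deviation, e.g. gas plume
```

    A full detector runs one such model per pixel; the speed optimizations described above target exactly this per-pixel workload.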

  1. Diagnosing basal cell carcinoma in vivo by near-infrared Raman spectroscopy: a Principal Components Analysis discrimination algorithm

    Science.gov (United States)

    Silveira, Landulfo, Jr.; Silveira, Fabrício L.; Bodanese, Benito; Pacheco, Marcos Tadeu T.; Zângaro, Renato A.

    2012-02-01

    This work demonstrated the discrimination between basal cell carcinoma (BCC) and normal human skin in vivo using near-infrared Raman spectroscopy. Spectra were obtained from the suspected lesion prior to resection surgery. After tissue removal, biopsy fragments were submitted for histopathology. Spectra were also obtained from the adjacent, clinically normal skin. Raman spectra were measured using an 830 nm Raman spectrometer with a fiber Raman probe. Comparing the mean spectra of BCC with those of normal skin revealed important differences in the 800-1000 cm-1 and 1250-1350 cm-1 regions (C-C and amide III vibrations, respectively, from lipids and proteins). A discrimination algorithm based on Principal Components Analysis and Mahalanobis distance (PCA/MD) could discriminate the spectra of the two tissues with high sensitivity and specificity.
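    The PCA/MD scheme can be sketched generically: project all spectra onto the principal components of the pooled training data, then assign each unknown spectrum to the class with the smaller Mahalanobis distance in PC space. The synthetic "spectra" below stand in for real Raman data; the number of retained components is an arbitrary choice.

```python
import numpy as np

def pca_mahalanobis_classify(train_a, train_b, spectra, n_pc=3):
    """PCA/MD discrimination: PCA on pooled training spectra, then
    nearest-class assignment by Mahalanobis distance in PC space."""
    pooled = np.vstack([train_a, train_b])
    mean = pooled.mean(axis=0)
    _, _, vt = np.linalg.svd(pooled - mean, full_matrices=False)
    pcs = vt[:n_pc].T
    sa, sb = (train_a - mean) @ pcs, (train_b - mean) @ pcs
    s = (spectra - mean) @ pcs

    def mdist2(x, ref):
        cov = np.cov(ref, rowvar=False) + 1e-9 * np.eye(ref.shape[1])
        d = x - ref.mean(axis=0)
        return np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)

    return np.where(mdist2(s, sa) < mdist2(s, sb), "A", "B")

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.1, (60, 10))
shift = np.r_[1.0, np.zeros(9)]              # class difference on one band
train_a, train_b = noise[:30] + shift, noise[30:] - shift
test = rng.normal(0.0, 0.1, (2, 10)) + np.vstack([shift, -shift])
labels = pca_mahalanobis_classify(train_a, train_b, test)
```

    Sensitivity and specificity are then read off a confusion matrix of such assignments against histopathology.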

  2. Model based development of engine control algorithms

    NARCIS (Netherlands)

    Dekker, H.J.; Sturm, W.L.

    1996-01-01

    Model-based development of engine control systems has several advantages. Development time and costs are strongly reduced because much of the development and optimization work is carried out by simulating both the engine and the control system. After optimizing the control algorithm it can be executed

  3. Development of Infrared Lip Movement Sensor for Spoken Word Recognition

    Directory of Open Access Journals (Sweden)

    Takahiro Yoshida

    2007-12-01

    Full Text Available The lip movement of a speaker is very informative for many applications of speech signal processing, such as multi-modal speech recognition and password authentication without a speech signal. However, collecting multi-modal speech information requires a video camera, a large amount of memory, a video interface, and a high-speed processor to extract lip movement in real time. Such a system tends to be expensive and large, which is one reason preventing the wider use of multi-modal speech processing. In this study, we have developed a simple infrared lip movement sensor mounted on a headset, making it possible to acquire lip movement with a PDA, mobile phone, or notebook PC. The sensor consists of an infrared LED and an infrared phototransistor, and measures lip movement from the light reflected from the mouth region. In experiments, we achieved a 66% word recognition rate using lip movement features alone. This result shows that the developed sensor can be utilized as a tool for multi-modal speech processing when combined with a microphone mounted on the headset.

  4. B&W PWR advanced control system algorithm development

    International Nuclear Information System (INIS)

    Winks, R.W.; Wilson, T.L.; Amick, M.

    1992-01-01

    This paper discusses algorithm development of an Advanced Control System for the B&W Pressurized Water Reactor (PWR) nuclear power plant. The paper summarizes the history of the project, describes the operation of the algorithm, and presents transient results from a simulation of the plant and control system. The history discusses the steps in the development process and the roles played by the utility owners, B&W Nuclear Service Company (BWNS), Oak Ridge National Laboratory (ORNL), and the Foxboro Company. The algorithm description is a brief overview of the features of the control system. The transient results show operation of the algorithm in a normal power maneuvering mode and in a moderately large upset following a feedwater pump trip.

  5. Infrared tomography for diagnostic imaging of port wine stain blood vessels

    Energy Technology Data Exchange (ETDEWEB)

    Goodman, D. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The objective of this work is the development of Infrared Tomography (IRT) for detecting and characterizing subsurface chromophores in human skin. Characterization of cutaneous chromophores is crucial for advances in the laser treatment of pigmented lesions (e.g., port wine stain birthmarks and tattoos). Infrared tomography uses a fast infrared focal plane array (IR-FPA) to detect temperature rises induced in a substrate by pulsed radiation. A pulsed laser is used to produce transient heating of an object; the temperature rise, due to optical absorption of the pulsed laser light, creates an increase in infrared emission which is measured by the IR-FPA. Although the application of IRT to imaging subsurface cracks due to metal fatigue is a topic of great interest in the aircraft industry, its application to imaging subsurface chromophores in biological materials is novel. We present an image recovery method, based on a constrained conjugate gradient algorithm, that has obtained the first high-quality images of port wine stain blood vessels.

  6. Development of LabVIEW Program for Lock-In Infrared Thermography

    International Nuclear Information System (INIS)

    Min, Tae Hoon; Na, Hyung Chul; Kim, Noh Yu

    2011-01-01

    A LabVIEW program has been developed, together with a simple infrared thermography (IRT) system, to efficiently control the lock-in conditions of the system. The IR imaging software was designed to operate both the infrared camera and a halogen lamp, synchronizing them with a periodic sine signal based on thyristor (SCR) circuits. The LabVIEW software provides users with screen-menu functions to change the period and energy of the heat source, operate the camera to acquire images, and monitor the state of the system on the computer screen. In experiments, lock-in IR images of a specimen with artificial hole defects were obtained with the developed IRT system and compared with optical images.
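    The lock-in principle behind such a system — correlating each pixel's time signal with sine/cosine references at the modulation frequency to extract amplitude and phase — is sketched below in NumPy rather than LabVIEW; the sampling rate, modulation frequency, and signal values are arbitrary.

```python
import numpy as np

def lockin_demodulate(signal, fs, f_mod):
    """Digital lock-in: correlate `signal` with quadrature references at
    the modulation frequency; returns (amplitude, phase) of that line."""
    t = np.arange(len(signal)) / fs
    i = 2.0 * np.mean(signal * np.sin(2.0 * np.pi * f_mod * t))
    q = 2.0 * np.mean(signal * np.cos(2.0 * np.pi * f_mod * t))
    return np.hypot(i, q), np.arctan2(q, i)

fs, f_mod = 100.0, 1.0                     # Hz; 10 whole periods below
t = np.arange(0, 10, 1.0 / fs)
pixel = 2.0 + 0.5 * np.sin(2.0 * np.pi * f_mod * t + 0.3)  # DC + response
amp, phase = lockin_demodulate(pixel, fs, f_mod)
```

    In lock-in thermography the per-pixel phase image is the quantity most sensitive to subsurface defects, since it is largely immune to surface emissivity variations.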

  7. Fully Automated Lipid Pool Detection Using Near Infrared Spectroscopy

    Directory of Open Access Journals (Sweden)

    Elżbieta Pociask

    2016-01-01

    Full Text Available Background. Detecting and identifying vulnerable plaque, which is prone to rupture, is still a challenge for cardiologists. Such lipid core-containing plaque is not identifiable by everyday angiography, triggering the need for a new tool: NIRS-IVUS can visualize plaque characterization in terms of its chemical and morphologic characteristics, and this new tool has led to the development of new methods of interpreting the newly obtained data. In this study, an algorithm for fully automated lipid pool detection on NIRS images is proposed. Method. The designed algorithm is divided into four stages: preprocessing (image enhancement), segmentation of artifacts, detection of lipid areas, and calculation of the Lipid Core Burden Index (LCBI). Results. A total of 31 NIRS chemograms were analyzed by two methods. The metrics total LCBI, maximal LCBI in 4 mm blocks, and maximal LCBI in 2 mm blocks were calculated to compare the presented algorithm with a commercially available system. Both intraclass correlation (ICC) and Bland-Altman plots showed good agreement and correlation between the methods. Conclusions. The proposed algorithm performs fully automated lipid pool detection on near-infrared spectroscopy images. It is a tool developed for offline data analysis, which could easily be augmented with newer functions for future projects.
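    The final stage has a standard definition: LCBI is the fraction of valid chemogram pixels whose lipid probability exceeds a threshold (0.6 in the commercial system), scaled to 0-1000, and block LCBI is the maximum of that score over a sliding axial window. A sketch under an assumed pullback pixel spacing:

```python
import numpy as np

def lcbi(prob, valid, threshold=0.6):
    """Lipid Core Burden Index: fraction of valid chemogram pixels with
    lipid probability above `threshold`, scaled to the 0-1000 range."""
    v = valid & ~np.isnan(prob)
    return 1000.0 * np.count_nonzero(prob[v] > threshold) / np.count_nonzero(v)

def max_lcbi(prob, valid, mm_per_row=0.1, window_mm=4.0):
    """Maximal LCBI over a sliding axial window (e.g. 4 mm blocks)."""
    w = int(round(window_mm / mm_per_row))
    return max(lcbi(prob[i:i + w], valid[i:i + w])
               for i in range(prob.shape[0] - w + 1))

prob = np.zeros((100, 8))     # rows = pullback positions, cols = angles
prob[10:30] = 1.0             # a 2 mm lipid-rich arc (rows of 0.1 mm)
valid = np.ones_like(prob, dtype=bool)
total = lcbi(prob, valid)
block_max = max_lcbi(prob, valid, mm_per_row=0.1, window_mm=4.0)
```

    A 2 mm lipid pool in a 10 mm pullback gives a total LCBI of 200 but a maximal 4 mm-block LCBI of 500, which is why the block metric is the one used for focal lesions.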

  8. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In fields which require finding the most appropriate value, optimization has become a vital approach for devising effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity, but more advanced optimization problems eventually required more effective techniques. At this point, Computer Science took an important role in providing software-based techniques that improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions, enabling readers to form an idea of the potential of intelligent optimization techniques. Two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems from the source book Thomas' Calculus, 11th Edition, and the obtained results have been compared with classical optimization solutions.

  9. Infrared small target detection with kernel Fukunaga Koontz transform

    Science.gov (United States)

    Liu, Rui-ming; Liu, Er-qi; Yang, Jie; Zhang, Tian-hao; Wang, Fang-lin

    2007-09-01

    The Fukunaga-Koontz transform (FKT) was proposed many years ago and can be used to solve two-pattern classification problems successfully. However, few researchers have explicitly extended FKT to a kernel FKT (KFKT). In this paper, we first complete this task. Then a method based on KFKT is developed to detect infrared small targets. KFKT is a supervised learning algorithm, so the construction of the training sets is very important. For automatic target detection, synthetic target images and real background images are used to train KFKT. Because KFKT can represent the higher-order statistical properties of images, we expect better detection performance from KFKT than from FKT. Well-devised experiments verify that KFKT outperforms FKT in detecting infrared small targets.
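    For reference, the plain (linear) FKT works by whitening the sum of the two class correlation matrices; in the whitened space the two classes share eigenvectors, and an eigenvalue near 1 for one class implies an eigenvalue near 0 for the other, giving discriminative directions. The kernel extension replaces these inner products with kernel evaluations. A NumPy sketch of the linear case on toy data:

```python
import numpy as np

def fkt_basis(x1, x2):
    """Fukunaga-Koontz transform: whiten R1 + R2, then eigendecompose
    class 1 in the whitened space. Returned eigenvalues near 1 indicate
    directions dominated by class 1, near 0 by class 2."""
    r1 = x1.T @ x1 / len(x1)
    r2 = x2.T @ x2 / len(x2)
    d, p = np.linalg.eigh(r1 + r2)
    keep = d > 1e-10                       # drop null directions
    w = p[:, keep] / np.sqrt(d[keep])      # whitening transform
    vals, vecs = np.linalg.eigh(w.T @ r1 @ w)
    return vals, w @ vecs

rng = np.random.default_rng(0)
x1 = rng.normal(size=(500, 2)) * np.array([2.0, 0.1])  # class 1: axis 0
x2 = rng.normal(size=(500, 2)) * np.array([0.1, 2.0])  # class 2: axis 1
vals, basis = fkt_basis(x1, x2)
```

    Target/background discrimination then scores a test patch by its energy in each class's dominant directions.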

  10. Development of infrared Echelle spectrograph and mid-infrared heterodyne spectrometer on a small telescope at Haleakala, Hawaii for planetary observation

    Science.gov (United States)

    Sakanoi, Takeshi; Kasaba, Yasumasa; Kagitani, Masato; Nakagawa, Hiromu; Kuhn, Jeff; Okano, Shoichi

    2014-08-01

    We report the development of an infrared Echelle spectrograph covering 1 - 4 micron and a mid-infrared heterodyne spectrometer around 10 micron, installed on the 60-cm telescope at the summit of Haleakala, Hawaii (alt. 3000 m). Continuous measurement of planetary atmospheres, such as the Jovian infrared aurora and the volcanoes on the Jovian satellite Io, is essential to understand their temporal and spatial variations, and a compact, easy-to-use high-resolution infrared spectrometer provides a good opportunity to investigate these objects continuously. We are developing an Echelle spectrograph called ESPRIT (Echelle Spectrograph for Planetary Research In Tohoku university). The main targets of ESPRIT are the Jovian H3+ fundamental line at 3.9 micron and the H2 nu=1 line at 2.1 micron. A 256x256 pixel CRC463 InSb array is used, and an appropriate Echelle grating was selected to optimize the 3.9 micron and 2.1 micron bands for Jovian infrared auroral observations. The pixel scale corresponds to the atmospheric seeing (0.3 arcsec/pixel). The spectrograph is characterized by a long-slit field of view of ~50 arcsec with a spectral resolution of over 20,000. In addition, we recently developed a heterodyne spectrometer called MILAHI on the 60 cm telescope. MILAHI is characterized by very high resolving power (more than 1,500,000) covering 7 - 13 micron. Its sensitivity is 2400 K at 9.6 micron with an MCT photodiode detector with a bandwidth of 3000 MHz. ESPRIT and MILAHI are planned to be installed on the 60 cm telescope in 2014.

  11. Design of a temperature control system using incremental PID algorithm for a special homemade shortwave infrared spatial remote sensor based on FPGA

    Science.gov (United States)

    Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting

    2010-11-01

    An imaging spectrometer on a spatial remote sensing satellite requires the shortwave band from 2.1 μm to 3 μm, one of the most important bands in remote sensing. We designed the infrared sub-system of the imaging spectrometer using a homemade 640x1 InGaAs shortwave infrared sensor operating in a focal plane array (FPA) system, which requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 degrees Celsius. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark current noise, and adopts an incremental PID algorithm to generate a PWM wave that controls the temperature of the sensor. The FPGA design is composed of four modules, all coded in VHDL and implemented in an APA300 FPGA device. Experiments show that the intelligent temperature control system succeeds in controlling the temperature of the sensor.
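    The incremental (velocity-form) PID used to drive a PWM duty cycle outputs a control increment rather than an absolute value, which composes naturally with an accumulating duty register and avoids integral windup at the clamp. A Python sketch with an invented first-order thermal plant (the gains and plant model are illustrative, not the paper's):

```python
class IncrementalPID:
    """Velocity-form PID: du_k = Kp*(e_k - e_{k-1}) + Ki*e_k
    + Kd*(e_k - 2*e_{k-1} + e_{k-2})."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0            # previous two errors

    def step(self, e):
        du = (self.kp * (e - self.e1) + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du

pid = IncrementalPID(kp=1.0, ki=0.2, kd=0.05)
setpoint, temp, duty = -15.0, 20.0, 0.0
for _ in range(300):
    # accumulate the increment into the PWM duty cycle, clamped to 0-100%
    duty = min(max(duty + pid.step(temp - setpoint), 0.0), 100.0)
    temp += 0.2 * ((20.0 - 0.5 * duty) - temp)   # toy cooler: more duty, colder
```

    Because only the increment is computed each cycle, the FPGA implementation needs just the last two errors in registers, with no explicit integral accumulator.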

  12. Dispersive infrared spectroscopy measurements of atmospheric CO{sub 2} using a Fabry–Pérot interferometer sensor

    Energy Technology Data Exchange (ETDEWEB)

    Chan, K.L. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Ning, Z., E-mail: zhining@cityu.edu.hk [School of Energy and Environment, City University of Hong Kong (Hong Kong); Guy Carpenter Climate Change Centre, City University of Hong Kong (Hong Kong); Westerdahl, D. [Ability R and D Energy Research Centre, City University of Hong Kong (Hong Kong); Wong, K.C. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Sun, Y.W. [Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei (China); Hartl, A. [School of Energy and Environment, City University of Hong Kong (Hong Kong); Wenig, M.O. [Meteorological Institute, Ludwig-Maximilians-Universität Munich (Germany)

    2014-02-01

    In this paper, we present the first dispersive infrared spectroscopic (DIRS) measurement of atmospheric carbon dioxide (CO{sub 2}) using a new scanning Fabry–Pérot interferometer (FPI) sensor. The sensor measures the optical spectra in the mid-infrared (3900 nm to 5220 nm) wavelength range with a full width at half maximum (FWHM) spectral resolution of 78.8 nm at the CO{sub 2} absorption band (∼ 4280 nm) and a sampling resolution of 20 nm. The CO{sub 2} concentration is determined from the measured optical absorption spectra by fitting them to the CO{sub 2} reference spectrum. Interference from other major absorbers in the same wavelength range, e.g., carbon monoxide (CO) and water vapor (H{sub 2}O), was removed by including their reference spectra in the fit as well. Detailed descriptions of the instrumental setup, the retrieval procedure, a modeling study for error analysis, and laboratory validation using standard gas concentrations are presented. An iterative algorithm was developed and tested to account for the non-linear response of the fit function to the absorption cross sections due to the broad instrument function. A modeling study of the retrieval algorithm showed that errors due to instrument noise can be considerably reduced by using the dispersive spectral information in the retrieval. The mean measurement error of the prototype DIRS CO{sub 2} measurement is about ± 2.5 ppmv for 1 minute averaged data, and down to ± 0.8 ppmv for 10 minute averaged data. A field test of atmospheric CO{sub 2} measurements was carried out at an urban site in Hong Kong for a month and compared to a commercial non-dispersive infrared (NDIR) CO{sub 2} analyzer. The 10 minute averaged data show good agreement between the DIRS and NDIR measurements, with a Pearson correlation coefficient (R) of 0.99. This new method offers an alternative approach to atmospheric CO{sub 2} measurement featuring high accuracy and correction of non-linear absorption and interference of water vapor.
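    The core of such a retrieval — fitting the measured optical depth to reference cross sections of CO2 and the interfering absorbers plus a baseline — reduces, in its linear part, to a single least-squares solve. A toy NumPy sketch with invented Gaussian "cross sections" (the paper's iterative correction for the instrument-function non-linearity is omitted):

```python
import numpy as np

rng = np.random.default_rng(7)
grid = np.arange(120)                                  # spectral pixels
xs_co2 = np.exp(-0.5 * ((grid - 40) / 6.0) ** 2)       # toy CO2 band
xs_h2o = np.exp(-0.5 * ((grid - 80) / 10.0) ** 2)      # toy H2O band
baseline = np.vander(np.linspace(-1, 1, 120), 2)       # linear baseline terms

# Synthetic optical depth: 400 ppm-equivalent CO2, some H2O, slope, noise
od = (400 * 1e-3 * xs_co2 + 1.5 * 2e-2 * xs_h2o
      + baseline @ np.array([0.05, 0.01]) + rng.normal(0, 1e-3, 120))

# One linear least-squares fit of all absorbers plus the baseline: the
# interferents are "taken out" simply by fitting them simultaneously.
A = np.column_stack([1e-3 * xs_co2, 2e-2 * xs_h2o, baseline])
coeffs, *_ = np.linalg.lstsq(A, od, rcond=None)
co2_ppm, h2o_scale = coeffs[0], coeffs[1]
```

    Fitting all spectral points at once is what makes the dispersive retrieval less noise-sensitive than a single-band (NDIR-style) ratio.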

  13. Added Value of Far-Infrared Radiometry for Ice Cloud Remote Sensing

    Science.gov (United States)

    Libois, Q.; Blanchet, J. P.; Ivanescu, L.; S Pelletier, L.; Laurence, C.

    2017-12-01

    Several cloud retrieval algorithms based on satellite observations in the infrared have been developed in recent decades. However, most of these observations only cover the mid-infrared (MIR), leaving the far-infrared (FIR) largely unexplored. Advances in detector technology, though, now make it possible to consider spaceborne remote sensing in the FIR. Here we show that adding a few FIR channels with realistic radiometric performances to existing spaceborne narrowband radiometers would significantly improve their ability to retrieve ice cloud radiative properties. For clouds encountered in the polar regions and the upper troposphere, where the atmosphere above clouds is sufficiently transparent in the FIR, using FIR channels would reduce by more than 50% the uncertainties on retrieved values of optical thickness, effective particle diameter, and cloud top altitude. This would in effect extend the range of applicability of current infrared retrieval methods to the polar regions and to clouds with large optical thickness, where MIR algorithms perform poorly. The high performance of solar-reflection-based algorithms would thus be reached in nighttime conditions. Using FIR observations is a promising avenue for studying ice cloud microphysics and precipitation processes, which is highly relevant for cirrus clouds and convective towers, and for investigating the water cycle in the driest regions of the atmosphere.

  14. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    Science.gov (United States)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are commonly divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not require; SBNUC algorithms have therefore become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction. Moreover, their complicated calculation processes and large storage consumption make hardware implementation difficult, especially on Field Programmable Gate Array (FPGA) platforms. The THP and GM algorithm proposed in this paper can eliminate non-uniformity without causing these defects. Its hardware implementation, based solely on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe non-uniformity and ripple non-uniformity.
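
    The temporal high-pass idea underlying such an algorithm, tracking each pixel's slowly varying component with a running mean so that fixed-pattern offsets cancel, can be sketched in a few lines. This is a generic temporal high-pass NUC illustration; the paper's grayscale-mapping refinement is omitted, and the time constant is arbitrary:

    ```python
    import numpy as np

    def thp_nuc(frames, time_constant=8.0):
        """Temporal high-pass non-uniformity correction sketch.

        Each pixel's slowly varying component (scene level plus fixed-pattern
        offset) is tracked with a running exponential mean; subtracting it
        removes per-pixel offsets that are constant in time.
        """
        alpha = 1.0 / time_constant
        lowpass = frames[0].astype(float)          # per-pixel running mean
        corrected = []
        for f in frames:
            f = f.astype(float)
            corrected.append(f - lowpass)          # high-pass output
            lowpass += alpha * (f - lowpass)       # update running mean
        return corrected

    # Demo: a static uniform scene with per-pixel offsets; after a scene
    # level change the fixed pattern is driven back toward zero.
    rng = np.random.default_rng(1)
    offsets = rng.normal(0, 5.0, (4, 4))               # fixed-pattern offsets
    frames = [100.0 + offsets] * 5 + [120.0 + offsets] * 45
    out = thp_nuc(frames)
    residual = float(np.abs(out[-1]).max())
    ```

    The single per-pixel state variable is what makes this family of algorithms attractive for low-resource FPGA implementation.
    
    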

  15. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    Science.gov (United States)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm, and describe the design of a DSP-based NUC development platform for IRFPAs. The DSP hardware platform has low power consumption, with a 32-bit fixed-point DSP (TMS320DM643) as the kernel processor. The dependability and expansibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration parameter update is set at a lower task priority than video input and output in DSP/BIOS, so that updating the calibration parameters does not affect the video streams. The work flow of the system and the strategy for real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.
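
    The classic neural-network SBNUC scheme (in the style of Scribner's retina-like processing, which this family of algorithms builds on) adapts a per-pixel gain and offset by an LMS update toward a spatially smoothed "desired" image. A minimal sketch with synthetic data; the learning rate, frame count, and 4-neighbour target are illustrative choices, not the paper's parameters:

    ```python
    import numpy as np

    def nn_nuc(frames, lr=0.05):
        """Neural-network SBNUC sketch (LMS update).

        Each pixel has a gain g and offset o; the 'desired' value of a pixel
        is the local spatial average of the corrected frame, and g, o are
        adapted by gradient descent on the squared error against that target.
        """
        g = np.ones_like(frames[0], dtype=float)
        o = np.zeros_like(frames[0], dtype=float)
        for x in frames:
            x = x.astype(float)
            y = g * x + o                                      # corrected frame
            d = (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
                 np.roll(y, 1, 1) + np.roll(y, -1, 1)) / 4.0   # desired (smooth)
            e = y - d                                          # error signal
            g -= lr * e * x                                    # LMS gain update
            o -= lr * e                                        # LMS offset update
        return g, o

    # Demo: a detector with per-pixel gain/offset FPN viewing a flickering
    # uniform scene; after adaptation the corrected frame is nearly flat.
    rng = np.random.default_rng(2)
    a = 1.0 + 0.05 * rng.standard_normal((8, 8))   # true per-pixel gains
    b = 0.1 * rng.standard_normal((8, 8))          # true per-pixel offsets
    frames = [a * level + b for level in rng.uniform(0.5, 1.5, 600)]
    g, o = nn_nuc(frames)
    raw = a * 1.0 + b                              # uncorrected uniform scene
    corrected = g * raw + o                        # FPN largely removed
    ```

    On a DSP, the same update runs in fixed-point arithmetic, which is why scheduling it below the video tasks (as in the paper) keeps the streams unaffected.
    
    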

  16. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    Science.gov (United States)

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and transferring the gradients of the corresponding visible image to the result. Plain gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by adding intensity from the visible image to balance the intensity between the infrared and visible images. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that, in both qualitative and quantitative tests, the new algorithm achieves better fusion results with high computational efficiency than gradient transfer and most state-of-the-art methods.
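
    The structure of such an objective, an l1 intensity-fidelity term toward the infrared image plus an l1 gradient-fidelity term toward the visible image, can be made concrete by evaluating it on candidate images. This sketch only evaluates the (anisotropic, forward-difference) objective; it does not implement the paper's ADMM solver, and the weight `lam` is arbitrary:

    ```python
    import numpy as np

    def grad(img):
        """Forward differences in x and y (last row/column padded to zero)."""
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def fusion_objective(x, ir, vis, lam=4.0):
        """l1 fidelity of intensity to the infrared image plus l1 fidelity
        of gradients to the visible image (the gradient-transfer term)."""
        gx, gy = grad(x)
        vx, vy = grad(vis)
        data = np.abs(x - ir).sum()
        grad_term = np.abs(gx - vx).sum() + np.abs(gy - vy).sum()
        return data + lam * grad_term

    # Demo: a flat "hot" infrared image and a textured visible image. A fused
    # candidate carrying the IR intensity and the visible gradients scores
    # lower than either input alone.
    ir = np.full((8, 8), 100.0)                  # flat infrared intensity
    vis = np.tile([0.0, 1.0], (8, 4))            # striped visible texture
    candidate = ir + (vis - vis.mean())          # IR level + visible detail
    obj_cand = fusion_objective(candidate, ir, vis)
    obj_ir = fusion_objective(ir, ir, vis)
    obj_vis = fusion_objective(vis, ir, vis)
    ```

    Minimizing this non-smooth objective is what motivates the variable splitting and augmented-Lagrangian machinery described in the abstract.
    
    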

  17. Design of High-Precision Infrared Multi-Touch Screen Based on the EFM32

    Directory of Open Access Journals (Sweden)

    Zhong XIAOLING

    2014-07-01

    Full Text Available Due to the low accuracy of traditional infrared multi-touch screens, it is difficult to ascertain the touch point. We put forward a design scheme for a high-precision infrared multi-touch screen based on an EFM32 processor with an ARM Cortex-M3 core. A tracking scanning area algorithm, applied after the first full scan at power-up, greatly improves scanning efficiency and response speed. Based on differences in infrared characteristics, we put forward a data-fitting algorithm: using the relationship between the covered area and the sampling value for curve fitting, we obtain the characteristic curve of the infrared sampling value and establish a table of sampling-value differences, finally determining the precise location of the touch point. Practice has shown that the accuracy of this infrared touch screen can reach 0.5 mm. The design uses a standard USB port for connection to a PC and can also be widely used in various terminals.
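
    The basic interpolation step that makes sub-beam-pitch precision possible can be illustrated with a weighted centroid of beam attenuation on each axis. This is a generic sketch, not the paper's fitting-table algorithm; the beam pitch and sample values are made up:

    ```python
    import numpy as np

    def locate_touch(x_samples, y_samples, baseline_x, baseline_y, pitch_mm=2.5):
        """Estimate a touch point from infrared beam attenuation.

        Sampling values drop where a finger covers the light path; a weighted
        centroid of the attenuation on each axis interpolates the touch
        position to a fraction of the beam pitch.
        """
        ax = np.clip(baseline_x - x_samples, 0, None).astype(float)
        ay = np.clip(baseline_y - y_samples, 0, None).astype(float)
        x_mm = (ax @ np.arange(ax.size)) / ax.sum() * pitch_mm
        y_mm = (ay @ np.arange(ay.size)) / ay.sum() * pitch_mm
        return x_mm, y_mm

    # Demo: a touch equally covering x-beams 3 and 4 and centred on y-beam 2.
    baseline = np.full(16, 100.0)
    x_samples = baseline.copy(); x_samples[[3, 4]] = 40.0
    y_samples = baseline.copy(); y_samples[2] = 40.0
    x_mm, y_mm = locate_touch(x_samples, y_samples, baseline, baseline)
    ```
    
    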

  18. Development of the infrared instrument for gas detection

    Science.gov (United States)

    Chen, Ching-Wei; Chen, Chia-Ray

    2017-08-01

    MWIR (Mid-Wave Infrared) spectroscopy shows large potential in the current IR devices market due to its multiple applications, such as gas detection, chemical analysis, industrial monitoring, and combustion and flame characterization. It opens this technique to fields of application such as industrial monitoring and control, agriculture, and environmental monitoring. However, a major barrier, the lack of affordable key elements such as MWIR light sources and low-cost uncooled detectors, has held it back from widespread use. In this paper, an uncooled MWIR detector combined with an image enhancement technique is reported. This investigation shows good results in a gas leakage detection test. It also verifies the functions of the self-developed MWIR lens and optics. Good agreement between theoretical design and experiment provides lessons learned for potential applications in infrared satellite technology. A brief discussion is also presented in this paper.

  19. Infrared image enhancement with learned features

    Science.gov (United States)

    Fan, Zunlin; Bi, Duyan; Ding, Wenshan

    2017-11-01

    Due to variations in the imaging environment and the limitations of infrared imaging sensors, infrared images usually have some drawbacks: low contrast, few details, and indistinct edges. Hence, to promote the application of infrared imaging technology, it is essential to improve the quality of infrared images. To enhance image details and edges adaptively, we propose an infrared image enhancement method under a new image enhancement scheme. On the one hand, on the assumption that high-quality images exhibit more evident structural singularities than low-quality images, we propose an image enhancement scheme that depends on the extraction of structure features. On the other hand, unlike current image enhancement algorithms based on deep learning networks that try to train and build end-to-end mappings for improving image quality, we analyze the significance of the first layer of a Stacked Sparse Denoising Auto-encoder and propose a novel feature extraction for the proposed image enhancement scheme. Experimental results show that the novel feature extraction is free from edge artifacts such as blocking artifacts, "gradient reversal", and pseudo contours. Compared with other enhancement methods, the proposed method achieves the best performance in infrared image enhancement.

  20. Development of Nanostructured Antireflection Coatings for Infrared and Electro-Optical Systems

    Directory of Open Access Journals (Sweden)

    Gopal G. Pethuraja

    2017-07-01

    Full Text Available Electro-optic infrared technologies and systems operating from ultraviolet (UV to long-wave infrared (LWIR spectra are being developed for a variety of defense and commercial systems applications. Loss of a significant portion of the incident signal due to reflection limits the performance of electro-optic infrared (IR sensing systems. A critical technology being developed to overcome this limitation and enhance the performance of sensing systems is advanced antireflection (AR coatings. Magnolia is actively involved in the development and advancement of nanostructured AR coatings for a wide variety of defense and commercial applications. Ultrahigh AR performance has been demonstrated for UV to LWIR spectral bands on various substrates. The AR coatings enhance the optical transmission through optical components and devices by significantly minimizing reflection losses, a substantial improvement over conventional thin-film AR coating technologies. Nanostructured AR coatings have been fabricated using a nanomanufacturable self-assembly process on substrates that are transparent for a given spectrum of interest ranging from UV to LWIR. The nanostructured multilayer structures have been designed, developed and optimized for various optoelectronic applications. The optical properties of optical components and sensor substrates coated with AR structures have been measured and the process parameters fine-tuned to achieve a predicted high level of performance. In this paper, we review our latest work on high quality nanostructure-based AR coatings, including recent efforts on the development of nanostructured AR coatings on IR substrates.

  1. Infrared analysis of urinary calculi by a single reflection accessory and a neural network interpretation algorithm

    NARCIS (Netherlands)

    Volmer, M; de Vries, JCM; Goldschmidt, HMJ

    Background: Preparation of KBr tablets, used for Fourier transform infrared (FT-IR) analysis of urinary calculus composition, is time-consuming and often hampered by pellet breakage. We developed a new FT-IR method for urinary calculus analysis. This method makes use of a Golden Gate Single

  2. Development and Application of a Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane

    2007-01-01

    This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
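
    The sensor data qualification step demonstrated on the Power Distribution Unit test-bed can be illustrated with the simplest checks such algorithms perform: range limits, rate-of-change limits, and a flat-line (stuck sensor) test. This is a generic sketch, not the PHALT System's actual algorithms; the limits are hypothetical:

    ```python
    def qualify_sensor(samples, lo, hi, max_step):
        """Minimal sensor-data qualification sketch.

        Flags each sample that is out of range or changed implausibly fast,
        and separately reports a flat-lined (stuck) signal.
        """
        flags = []
        for i, v in enumerate(samples):
            fail = not (lo <= v <= hi)                       # range check
            if i > 0 and abs(v - samples[i - 1]) > max_step:  # rate check
                fail = True
            flags.append(fail)
        stuck = len(samples) > 5 and len(set(samples)) == 1   # flat-line check
        return flags, stuck

    # Demo: one out-of-range spike, which also trips the rate check on the
    # following sample; and a separate stuck signal.
    flags, stuck = qualify_sensor([1.0, 2.0, 50.0, 3.0], lo=0.0, hi=10.0, max_step=5.0)
    flags2, stuck2 = qualify_sensor([5.0] * 10, lo=0.0, hi=10.0, max_step=5.0)
    ```

    In a hardware-in-the-loop setup like PHALT, the same checks run against live sensor streams, and the diagnosis logic isolates which sensor failed.
    
    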

  3. Battery algorithm verification and development using hardware-in-the-loop testing

    Science.gov (United States)

    He, Yongsheng; Liu, Wei; Koch, Brain J.

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO4) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs.

  4. Battery algorithm verification and development using hardware-in-the-loop testing

    Energy Technology Data Exchange (ETDEWEB)

    He, Yongsheng [General Motors Global Research and Development, 30500 Mound Road, MC 480-106-252, Warren, MI 48090 (United States); Liu, Wei; Koch, Brain J. [General Motors Global Vehicle Engineering, Warren, MI 48090 (United States)

    2010-05-01

    Battery algorithms play a vital role in hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), extended-range electric vehicles (EREVs), and electric vehicles (EVs). The energy management of hybrid and electric propulsion systems needs to rely on accurate information on the state of the battery in order to determine the optimal electric drive without abusing the battery. In this study, a cell-level hardware-in-the-loop (HIL) system is used to verify and develop state of charge (SOC) and power capability predictions of embedded battery algorithms for various vehicle applications. Two different batteries were selected as representative examples to illustrate the battery algorithm verification and development procedure. One is a lithium-ion battery with a conventional metal oxide cathode, which is a power battery for HEV applications. The other is a lithium-ion battery with an iron phosphate (LiFePO{sub 4}) cathode, which is an energy battery for applications in PHEVs, EREVs, and EVs. The battery cell HIL testing provided valuable data and critical guidance to evaluate the accuracy of the developed battery algorithms, to accelerate battery algorithm future development and improvement, and to reduce hybrid/electric vehicle system development time and costs. (author)
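
    The baseline against which embedded SOC algorithms are usually judged in such HIL testing is coulomb counting: integrating measured current over time and normalizing by capacity. A minimal sketch; the function name and sign convention (positive current = discharge) are illustrative, and real embedded algorithms add open-circuit-voltage-based correction on top:

    ```python
    def soc_coulomb_count(soc0, currents_a, dt_s, capacity_ah):
        """Coulomb-counting state-of-charge sketch.

        Integrates current (A, positive = discharge) over fixed time steps
        and normalizes by the cell capacity (Ah) to update SOC.
        """
        soc = soc0
        trace = []
        for i in currents_a:
            soc -= i * dt_s / 3600.0 / capacity_ah  # Ah drawn / capacity
            trace.append(soc)
        return trace

    # Demo: a 10 Ah cell discharged at a constant 5 A for one hour loses
    # half its charge.
    trace = soc_coulomb_count(soc0=1.0, currents_a=[5.0] * 3600,
                              dt_s=1.0, capacity_ah=10.0)
    ```

    Cell-level HIL testing exercises exactly this kind of pipeline against measured cell behaviour, exposing where pure integration drifts and correction terms are needed.
    
    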

  5. Fourier Transform Infrared Spectroscopy (FT-IR) and Simple Algorithm Analysis for Rapid and Non-Destructive Assessment of Developmental Cotton Fibers.

    Science.gov (United States)

    Liu, Yongliang; Kim, Hee-Jin

    2017-06-22

    With cotton fiber growth or maturation, the cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis of the fiber's cellulose components. As one approach, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies, including multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensities or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CIIR, and the integrated intensity of the 895 cm-1 band exhibited strong and linear relationships with cellulose content. The results demonstrate the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, for assessing cotton fiber cellulose content, maturity, and crystallinity in a manner that is rapid, routine, and non-destructive.
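
    A "simple algorithm analysis" of this kind boils down to integrating band intensities from the spectrum. A minimal sketch of an integrated band intensity (the 895 cm-1 band is from the abstract; the synthetic spectrum, window width, and rectangle-rule integration are illustrative):

    ```python
    import numpy as np

    def band_intensity(wavenumbers, absorbance, center, half_width=10.0):
        """Integrated intensity of a band over center +/- half_width cm-1
        (rectangle rule on an evenly spaced wavenumber grid)."""
        m = (wavenumbers >= center - half_width) & (wavenumbers <= center + half_width)
        dw = wavenumbers[1] - wavenumbers[0]     # grid spacing, cm-1
        return float(absorbance[m].sum() * dw)

    # Demo: two synthetic spectra with the 895 cm-1 band at different
    # strengths; the integrated intensity scales with band amplitude.
    wn = np.arange(700.0, 1100.0, 1.0)
    band = np.exp(-0.5 * ((wn - 895.0) / 5.0) ** 2)
    s1 = 1.0 * band                               # less mature fiber
    s2 = 2.0 * band                               # more mature fiber
    b1 = band_intensity(wn, s1, 895.0)
    b2 = band_intensity(wn, s2, 895.0)
    ```

    Indices such as the R value or CIIR are ratios of such band intensities, which is what makes the assessment rapid and non-destructive.
    
    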

  6. Development of radio frequency interference detection algorithms for passive microwave remote sensing

    Science.gov (United States)

    Misra, Sidharth

    Radio Frequency Interference (RFI) signals are man-made sources that are increasingly plaguing passive microwave remote sensing measurements. RFI is of insidious nature, with some signals low power enough to go undetected but large enough to impact science measurements and their results. With the launch of the European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite in November 2009 and the upcoming launches of the new NASA sea-surface salinity measuring Aquarius mission in June 2011 and soil-moisture measuring Soil Moisture Active Passive (SMAP) mission around 2015, active steps are being taken to detect and mitigate RFI at L-band. An RFI detection algorithm was designed for the Aquarius mission. The algorithm performance was analyzed using kurtosis based RFI ground-truth. The algorithm has been developed with several adjustable location-dependent parameters to control the detection statistics (false-alarm rate and probability of detection). The kurtosis statistical detection algorithm has been compared with the Aquarius pulse detection method. The comparative study determines the feasibility of the kurtosis detector for the SMAP radiometer, as a primary RFI detection algorithm in terms of detectability and data bandwidth. The kurtosis algorithm has superior detection capabilities for low duty-cycle radar like pulses, which are more prevalent according to analysis of field campaign data. Most RFI algorithms developed have generally been optimized for performance with individual pulsed-sinusoidal RFI sources. A new RFI detection model is developed that takes into account multiple RFI sources within an antenna footprint. The performance of the kurtosis detection algorithm under such central-limit conditions is evaluated. The SMOS mission has a unique hardware system, and conventional RFI detection techniques cannot be applied. Instead, an RFI detection algorithm for SMOS is developed and applied in the angular domain. This algorithm compares
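
    The principle behind kurtosis-based detection is that purely thermal (Gaussian) noise has a kurtosis of exactly 3, while pulsed-sinusoidal RFI pulls it away from 3. A minimal sketch with synthetic samples; the threshold value and RFI waveform are illustrative, not mission parameters:

    ```python
    import numpy as np

    def kurtosis_rfi_flag(samples, threshold=0.3):
        """Kurtosis RFI detector sketch.

        Computes the sample kurtosis m4 / m2^2 and flags the block if it
        deviates from the Gaussian value of 3 by more than the threshold.
        """
        x = np.asarray(samples, dtype=float)
        x = x - x.mean()
        k = (x ** 4).mean() / (x ** 2).mean() ** 2
        return abs(k - 3.0) > threshold, k

    # Demo: clean thermal noise passes; a 1% duty-cycle sinusoidal pulse
    # (a radar-like interferer) drives the kurtosis well above 3.
    rng = np.random.default_rng(4)
    clean = rng.standard_normal(100000)
    rfi = clean.copy()
    rfi[:1000] += 5.0 * np.sin(0.3 * np.arange(1000))   # low duty-cycle pulse
    flag_clean, k_clean = kurtosis_rfi_flag(clean)
    flag_rfi, k_rfi = kurtosis_rfi_flag(rfi)
    ```

    Low duty-cycle pulses are exactly the case where this statistic is most sensitive, consistent with the comparative findings summarized in the abstract.
    
    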

  7. Development of on-line sorting system for detection of infected seed potatoes using visible near-infrared transmittance spectral technique

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Yong; Cho, Byoung Kwan [Dept. of Biosystems Engineering, Chungnam National University, Daejeon (Korea, Republic of); Mo, Chang Yeun [Rural Development Administration, National Institute of Agricultural Engineering, Jeonju (Korea, Republic of); Kang, Jun Soon [Dept. of Horticultural Bioscience, Pusan National University, Pusan (Korea, Republic of)

    2015-02-15

    In this study, an online seed potato sorting system using a visible and near-infrared (400-1100 nm) transmittance spectral technique and a statistical model was evaluated for the nondestructive discrimination of infected and sound seed potatoes. Seed potatoes artificially infected with Pectobacterium atrosepticum, which is known to cause a soil-borne disease, were prepared for the experiments. After acquiring transmittance spectra from sound and infected seed potatoes, a determination algorithm for detecting infected seed potatoes was developed using the partial least squares discriminant analysis method. The coefficient of determination (R{sup 2}{sub p}) of the prediction model was 0.943, and the classification accuracy was above 99% (n = 80) for discriminating diseased seed potatoes from sound ones. This online sorting system shows good potential as a technique for detecting agricultural products that are infected or contaminated by pathogens.
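
    The shape of such a spectral discriminant analysis can be sketched with a linear model that regresses class labels on spectra and thresholds the prediction. Note the hedge: ordinary least squares stands in here for PLS-DA (which additionally projects onto latent variables), and all spectra below are synthetic:

    ```python
    import numpy as np

    def fit_discriminant(spectra, labels):
        """Linear discriminant sketch (OLS standing in for PLS-DA): regress
        class labels (0 = sound, 1 = infected) on the spectra."""
        X = np.column_stack([np.ones(len(spectra)), spectra])
        w, *_ = np.linalg.lstsq(X, labels, rcond=None)
        return w

    def predict(w, spectra):
        """Threshold the regression output at 0.5 to assign a class."""
        X = np.column_stack([np.ones(len(spectra)), spectra])
        return (X @ w > 0.5).astype(int)

    # Demo: synthetic transmittance spectra; infection adds an extra
    # absorption feature around mid-band.
    rng = np.random.default_rng(3)
    ch = np.arange(20)
    base = np.exp(-0.5 * ((ch - 10) / 6.0) ** 2)      # sound-tuber spectrum
    bump = np.exp(-0.5 * ((ch - 10) / 2.0) ** 2)      # infection feature

    def make(n, infected):
        extra = bump if infected else 0.0
        return base + extra + 0.1 * rng.standard_normal((n, 20))

    train_X = np.vstack([make(40, False), make(40, True)])
    train_y = np.array([0] * 40 + [1] * 40)
    w = fit_discriminant(train_X, train_y)
    test_X = np.vstack([make(10, False), make(10, True)])
    test_y = np.array([0] * 10 + [1] * 10)
    accuracy = float((predict(w, test_X) == test_y).mean())
    ```
    
    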

  8. Estimating the Acquisition Price of Enshi Yulu Young Tea Shoots Using Near-Infrared Spectroscopy by the Back Propagation Artificial Neural Network Model in Conjunction with Backward Interval Partial Least Squares Algorithm

    Science.gov (United States)

    Wang, Sh.-P.; Gong, Z.-M.; Su, X.-Zh.; Liao, J.-Zh.

    2017-09-01

    Near-infrared spectroscopy and the back propagation artificial neural network model, in conjunction with the backward interval partial least squares algorithm, were used to estimate the purchasing price of Enshi yulu young tea shoots. The near-infrared spectral regions most relevant to the tea shoot price model (5700.5-5935.8, 7613.6-7848.9, 8091.8-8327.1, 8331-8566.2, 9287.5-9522.5, and 9526.6-9761.9 cm-1) were selected using the backward interval partial least squares algorithm. The first five principal components, which explained 99.96% of the variability in the selected spectral data, were then used to calibrate the back propagation artificial neural network model of the tea shoot purchasing price. The performance of this model (coefficient of determination for prediction 0.9724; root-mean-square error of prediction 4.727) was superior to those of the back propagation artificial neural network model alone (coefficient of determination for prediction 0.8653; root-mean-square error of prediction 5.125) and the backward interval partial least squares model (coefficient of determination for prediction 0.5932; root-mean-square error of prediction 25.125). The acquisition price model combining the backward interval partial least squares and back propagation artificial neural network algorithms can evaluate the price of Enshi yulu tea shoots accurately, quickly, and objectively.

  9. Introductory survey for wireless infrared communications

    Directory of Open Access Journals (Sweden)

    Munsif Ali Jatoi

    2014-08-01

    Full Text Available Wireless infrared communication can be defined as the propagation of light waves in free space using infrared radiation, whose wavelengths lie just beyond the visible range of 400-700 nm. This corresponds to frequencies of hundreds of terahertz, which is high enough for higher-data-rate applications. Wireless infrared is applied to higher-data-rate applications such as wireless computing, wireless video, and wireless multimedia communication. Introduced by Gfeller, this field has grown with different link configurations, improved transmitter efficiency, increased receiver responsivity, and various multiple-access techniques for improved quality. Errors are caused by background light, which degrades overall system performance. Error correction techniques are used to remove the errors caused during transmission. This study provides a brief account of the field theory used for error correction in wireless infrared systems. Results are produced in terms of bit error rate and signal-to-noise ratio for various bit lengths to show the ability of the encoding and decoding algorithms.
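
    The bit-error-rate versus signal-to-noise-ratio relationship mentioned in the abstract has a closed form for the simplest modulation used in such links. A sketch for ideal on-off keying in additive Gaussian noise (a textbook formula, not the paper's coded-system results), with BER = Q(sqrt(SNR)) and Q(x) = 0.5 erfc(x / sqrt(2)):

    ```python
    import math

    def ook_ber(snr_db):
        """Bit error rate of ideal on-off keying in Gaussian noise:
        BER = Q(sqrt(SNR)), with SNR given in dB."""
        snr = 10 ** (snr_db / 10.0)                      # dB -> linear
        return 0.5 * math.erfc(math.sqrt(snr / 2.0))     # Q(sqrt(SNR))
    ```

    Error-correction coding then trades bandwidth for a steeper BER-vs-SNR curve than this uncoded baseline.
    
    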

  10. Generating Global Leaf Area Index from Landsat: Algorithm Formulation and Demonstration

    Science.gov (United States)

    Ganguly, Sangram; Nemani, Ramakrishna R.; Zhang, Gong; Hashimoto, Hirofumi; Milesi, Cristina; Michaelis, Andrew; Wang, Weile; Votava, Petr; Samanta, Arindam; Melton, Forrest

    2012-01-01

    This paper summarizes the implementation of a physically based algorithm for the retrieval of vegetation green Leaf Area Index (LAI) from Landsat surface reflectance data. The algorithm is based on the canopy spectral invariants theory and provides a computationally efficient way of parameterizing the Bidirectional Reflectance Factor (BRF) as a function of spatial resolution and wavelength. LAI retrievals from the application of this algorithm to aggregated Landsat surface reflectances are consistent with those of MODIS for homogeneous sites represented by different herbaceous and forest cover types. Example results illustrating the physics and performance of the algorithm suggest three key factors that influence the LAI retrieval process: 1) the atmospheric correction procedures used to estimate surface reflectances; 2) the proximity of the Landsat-observed surface reflectance to the corresponding reflectances characterized by the model simulation; and 3) the quality of the input land cover type in accurately delineating pure vegetated components as opposed to mixed pixels. Accounting for these factors, a pilot implementation of the LAI retrieval algorithm was demonstrated for the state of California utilizing the Global Land Survey (GLS) 2005 Landsat data archive. In a separate exercise, the performance of the LAI algorithm over California was evaluated by using the short-wave infrared band in addition to the red and near-infrared bands. Results show that the algorithm, when ingesting the short-wave infrared band, can delineate open canopies with understory effects and may provide useful information compared to a more traditional two-band retrieval. Future research will involve implementation of this algorithm at continental scales, and a validation exercise will evaluate the accuracy of the 30-m LAI products at several field sites.

  11. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, HistFactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm is presented, which is based on information geometry. The algorithm uses information about the difference between various probability distributions. We map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It is also shown how this algorithm can be used for interpolating other forms of probability distributions.
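
    The baseline that any new morphing algorithm is compared against is the standard piecewise-linear "vertical" interpolation between the nominal and ±1 σ templates. A minimal sketch of that baseline (not the information-geometry algorithm of the talk); the bin contents are made up:

    ```python
    import numpy as np

    def vertical_morph(nominal, up, down, alpha):
        """Piecewise-linear vertical morphing sketch.

        Interpolates each histogram bin between the nominal template and the
        +1 sigma (up) or -1 sigma (down) template, according to the nuisance
        parameter alpha expressed in units of sigma.
        """
        nominal, up, down = (np.asarray(a, dtype=float) for a in (nominal, up, down))
        if alpha >= 0:
            return nominal + alpha * (up - nominal)
        return nominal + alpha * (nominal - down)

    # Demo templates: three bins with asymmetric up/down variations.
    nominal = np.array([10.0, 20.0, 30.0])
    up = np.array([12.0, 25.0, 31.0])
    down = np.array([9.0, 18.0, 28.0])
    morphed_half = vertical_morph(nominal, up, down, 0.5)
    ```

    The kink at alpha = 0 and poor behaviour beyond ±1 σ are exactly the weaknesses that motivate more sophisticated interpolation schemes.
    
    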

  12. Infrared Spectroscopic Imaging: The Next Generation

    Science.gov (United States)

    Bhargava, Rohit

    2013-01-01

    Infrared (IR) spectroscopic imaging seemingly matured as a technology in the mid-2000s, with commercially successful instrumentation and reports in numerous applications. Recent developments, however, have transformed our understanding of the recorded data, provided capability for new instrumentation, and greatly enhanced the ability to extract more useful information in less time. These developments are summarized here in three broad areas—data recording, interpretation of recorded data, and information extraction—and their critical review is employed to project emerging trends. Overall, the convergence of selected components from hardware, theory, algorithms, and applications is one trend. Instead of similar, general-purpose instrumentation, another trend is likely to be diverse and application-targeted designs of instrumentation driven by emerging component technologies. The recent renaissance in both fundamental science and instrumentation will likely spur investigations at the confluence of conventional spectroscopic analyses and optical physics for improved data interpretation. While chemometrics has dominated data processing, a trend will likely lie in the development of signal processing algorithms to optimally extract spectral and spatial information prior to conventional chemometric analyses. Finally, the sum of these recent advances is likely to provide unprecedented capability in measurement and scientific insight, which will present new opportunities for the applied spectroscopist. PMID:23031693

  13. Far-infrared contraband-detection-system development for personnel-search applications

    International Nuclear Information System (INIS)

    Schellenbaum, R.L.

    1982-09-01

    Experiments have been conducted toward the development of an active near-millimeter-wave (far-infrared) personnel search system for the detection of contraband. These experiments employed a microwave hybrid-tee interferometer/radiometer scanning system and quasi-optical techniques at 3.3-mm wavelength to illuminate and detect the reflection from target objects against a human body background. Clothing and other common concealing materials are transparent at this wavelength. Retroreflector arrays, in conjunction with a Gunn diode radiation source, were investigated to provide all-angle illumination and detection of specular reflections from unaligned and irregular-shaped objects. Results indicate that, under highly controlled search conditions, metal objects greater than or equal to 25 cm{sup 2} can be detected in an enclosure lined with retroreflectors. Further development is required to produce a practical personnel search system. The investigation and feasibility of alternate far-infrared search techniques are presented. 23 figures, 2 tables

  14. Algorithm development for Maxwell's equations for computational electromagnetism

    Science.gov (United States)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.
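
    The central-difference, staggered-grid discretization that such schemes share can be illustrated with the simplest case: a 1-D explicit Yee-scheme FDTD update for Maxwell's equations in vacuum. Note the hedge: the paper's algorithm is implicit (alternating direction implicit) on curvilinear finite volumes, while this sketch is the explicit leapfrog variant, and the grid size, source, and Courant number are arbitrary:

    ```python
    import numpy as np

    def fdtd_1d(steps=200, n=200, src=50):
        """1-D Yee-scheme FDTD sketch for Maxwell's equations in vacuum.

        Ez and Hy live on staggered grid points and are advanced in a
        leapfrog manner with central differences (Courant number 0.5).
        The untouched endpoints act as perfectly conducting walls.
        """
        ez = np.zeros(n)
        hy = np.zeros(n)
        for t in range(steps):
            hy[:-1] += 0.5 * (ez[1:] - ez[:-1])           # H update from curl E
            ez[1:] += 0.5 * (hy[1:] - hy[:-1])            # E update from curl H
            ez[src] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source
        return ez

    field = fdtd_1d()
    ```

    An ADI scheme replaces the explicit updates with direction-split implicit solves, removing the Courant stability limit at the cost of solving tridiagonal systems each half-step.
    
    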

  15. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each static LUT file in such a way that the correct set of LUTs required by each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  16. Enhancement and evaluation of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    Science.gov (United States)

    Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.

    2017-12-01

    We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.

  17. Reducing a congestion with introduce the greedy algorithm on traffic light control

    Science.gov (United States)

    Catur Siswipraptini, Puji; Hendro Martono, Wisnu; Hartanti, Dian

    2018-03-01

    Vehicle density causes congestion at every junction in the city of Jakarta because the traffic light timing is static or manually set; consequently, the queue length at each junction is unpredictable. This research aimed to design a sensor-based traffic system that detects vehicle queue length in order to optimize the duration of the green light. Infrared sensors placed along each intersection approach detect the queue length; a greedy algorithm then extends the green-light duration for the approach that needs it, and the resulting traffic light control program is stored on an Arduino Mega 2560 microcontroller. The developed system, implementing the greedy algorithm with infrared sensor input, extends the duration of the green light for a long vehicle queue and shortens the green-light duration at intersection approaches whose queues are not dense. A scale model (a simple simulator reproducing the actual situation at an intersection) was then built and tested. The infrared sensors on the scale model are placed 10 cm apart on each intersection approach and serve as queue detectors. Test results on the scale model show that longer queues receive longer green-light times, which addresses the problem of long vehicle queues. The greedy algorithm adds 2 seconds of green time on approaches whose queues reach at least the third sensor level and shortens the green time at other approaches whose queue levels are below level three.
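    As a concrete illustration of the greedy rule sketched in the abstract, the following toy function gives the approach with the longest queue (when it reaches a minimum sensor level) extra green time. The 2-second bonus and three-level threshold follow the numbers quoted above, while the function name, lane names, and base duration are invented for the example:

```python
def greedy_green_times(queue_levels, base_s=10, bonus_s=2, threshold=3):
    """Toy greedy green-light allocation: the approach lane with the
    longest queue (at least `threshold` sensor levels deep) gets
    `bonus_s` extra seconds of green; other lanes keep the base
    duration. Illustrative sketch, not the paper's controller code."""
    times = {lane: base_s for lane in queue_levels}
    busiest = max(queue_levels, key=queue_levels.get)
    if queue_levels[busiest] >= threshold:
        times[busiest] = base_s + bonus_s
    return times

print(greedy_green_times({"north": 4, "south": 1, "east": 2, "west": 0}))
```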

  18. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

    The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: \\fastkt, \\siscone, \\midpoint, and \\itcone. We present several studies comparing the performance of these algorithms using QCD dijet and \\ttbar Monte Carlo samples. We specifically observe that the \\siscone algorithm performs equal to or better than the \\midpoint algorithm in all presented studies and propose that \\siscone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared- and collinear-safety to all orders of perturbative QCD. We furthermore encourage the use of the \\fastkt algorithm, which is found to perform as well as any other algorithm under study, features dramatically reduc...

  19. Overview of benefits, challenges, and requirements of wheeled-vehicle mounted infrared sensors

    Science.gov (United States)

    Miller, John Lester; Clayton, Paul; Olsson, Stefan F.

    2013-06-01

    Requirements for vehicle mounted infrared sensors, especially as imagers evolve to high definition (HD) format will be detailed and analyzed. Lessons learned from integrations of infrared sensors on armored vehicles, unarmored military vehicles and commercial automobiles will be discussed. Comparisons between sensors for driving and those for situation awareness, targeting and other functions will be presented. Conclusions will be drawn regarding future applications and installations. New business requirements for more advanced digital image processing algorithms in the sensor system will be discussed. Examples of these are smarter contrast/brightness adjustments algorithms, detail enhancement, intelligent blending (IR-Vis) modes, and augmented reality.

  20. Development and validation of an algorithm for laser application in wound treatment

    Directory of Open Access Journals (Sweden)

    Diequison Rite da Cunha

    2017-12-01

    Full Text Available ABSTRACT Objective: To develop and validate an algorithm for laser wound therapy. Method: Methodological study and literature review. For the development of the algorithm, a review of the Health Sciences databases covering the past ten years was performed. The algorithm evaluation was performed by 24 participants: nurses, physiotherapists, and physicians. For data analysis, the Cronbach’s alpha coefficient and the chi-square test for independence were used. The level of significance of the statistical test was established at 5% (p<0.05). Results: The professionals’ responses regarding the ease of reading the algorithm indicated: 41.70%, great; 41.70%, good; 16.70%, regular. With regard to the algorithm being sufficient for supporting decisions related to wound evaluation and wound cleaning, 87.5% said yes to both questions. Regarding the participants’ opinion that the algorithm contained enough information to support their decision regarding the choice of laser parameters, 91.7% said yes. The questionnaire presented reliability by the Cronbach’s alpha coefficient test (α = 0.962). Conclusion: The developed and validated algorithm showed reliability for evaluation, wound cleaning, and use of laser therapy in wounds.

  1. GARLIC — A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    International Nuclear Information System (INIS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-01-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code — GARLIC — is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus. - Highlights: • High resolution infrared-microwave radiative transfer model. • Discussion of algorithmic and computational aspects. • Jacobians by automatic/algorithmic differentiation. • Performance evaluation by intercomparisons, verification, validation
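    The quadrature of the Beer integral mentioned above admits a compact sketch: the monochromatic transmission along an inhomogeneous path is T = exp(-∫ k(s) ds), with the path integral evaluated here by the trapezoidal rule, one of several quadrature choices. This is an illustration under simplified assumptions, not GARLIC's actual implementation:

```python
import numpy as np

def transmission(abs_coef, s):
    """Beer's law along a path: T = exp(-tau), where the optical depth
    tau = integral of k(s) ds is evaluated by trapezoidal quadrature."""
    tau = np.sum((abs_coef[1:] + abs_coef[:-1]) * np.diff(s)) / 2.0
    return np.exp(-tau)

s = np.linspace(0.0, 10.0, 101)     # path coordinate (e.g. km)
k = 0.2 * np.exp(-s / 5.0)          # a decaying absorption-coefficient profile
T = transmission(k, s)
```

For a constant absorber the trapezoidal rule is exact, which gives a convenient sanity check: k = 0.1 over a path of length 10 yields T = exp(-1).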

  2. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    Science.gov (United States)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.
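    The spectral regression coefficient (SRC) at the heart of the retrieval is the slope relating surface reflectance in the blue band to that in the shortwave infrared band. A toy least-squares fit through the origin, standing in for MAIAC's actual time-series retrieval (data and names invented), looks like:

```python
import numpy as np

def spectral_regression_coefficient(refl_blue, refl_swir):
    """Least-squares slope (through the origin) relating surface
    reflectance in the blue (0.47 um) band to the shortwave infrared
    (2.1 um) band -- the SRC that MAIAC retrieves per pixel from a
    time series. Toy stand-in for that retrieval."""
    refl_blue = np.asarray(refl_blue, float)
    refl_swir = np.asarray(refl_swir, float)
    # slope through the origin: b = sum(x*y) / sum(x*x)
    return float(refl_swir @ refl_blue / (refl_swir @ refl_swir))

swir = np.array([0.10, 0.20, 0.30, 0.40])
blue = 0.5 * swir                   # synthetic surface with SRC = 0.5
src = spectral_regression_coefficient(blue, swir)
```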

  3. [Near infrared spectroscopy study on water content in turbine oil].

    Science.gov (United States)

    Chen, Bin; Liu, Ge; Zhang, Xian-Ming

    2013-11-01

    Near infrared (NIR) spectroscopy combined with the successive projections algorithm (SPA) was investigated for determination of the water content in turbine oil. Fifty-seven turbine oil samples with water contents of 0-0.156% were scanned by NIR spectroscopy. Different pretreatment methods, including the original spectra, first derivative spectra, and the Savitzky-Golay (SG) differential polynomial least squares fitting algorithm, were compared, and SPA was applied for the extraction of effective wavelengths; the correlation coefficient (R) and root mean square error (RMSE) were used as the model evaluation indices. The results indicated that the original spectra were best pretreated by first derivative + SG pretreatment, and the selected effective wavelengths were then used as the inputs of a least squares support vector machine (LS-SVM). A total of 16 variables selected by SPA were employed to construct the combined SPA and least squares support vector machine (SPA-LS-SVM) model. The correlation coefficient of the model was 0.9759 and the root mean square error of the validation set was 2.6558 x 10(-3). It is thus feasible to determine the water content in oil using near infrared spectroscopy and SPA-LS-SVM, and an excellent prediction precision was obtained. This study supplied a new and alternative approach to the further application of near infrared spectroscopy in on-line monitoring of contamination such as water content in oil.
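    The successive projections algorithm used above for wavelength selection admits a compact sketch: starting from one column of the spectra matrix, repeatedly project the remaining columns onto the orthogonal complement of the last selected one and keep the column with the largest residual norm. This minimal version is illustrative only; full SPA implementations also scan candidate starting columns and validate the selected subset against a regression model:

```python
import numpy as np

def spa_select(X, n_select, start=0):
    """Successive projections algorithm (SPA) sketch: greedily pick the
    wavelength (column of X) with the largest norm after projecting out
    the already-selected columns, which favors minimally collinear
    variables."""
    X = np.asarray(X, float)
    selected = [start]
    P = X.copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0          # never re-pick a chosen column
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(0)
spectra = rng.standard_normal((57, 20))   # 57 samples x 20 wavelengths (toy)
bands = spa_select(spectra, 5)
```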

  4. Crosstalk effect and its mitigation in Aqua MODIS middle wave infrared bands

    Science.gov (United States)

    Sun, Junqiang; Madhavan, Sriharsha; Wang, Menghua

    2017-09-01

    The MODerate-resolution Imaging Spectroradiometer (MODIS) is one of the primary instruments in the National Aeronautics and Space Administration (NASA) Earth Observing System (EOS). The first MODIS instrument was launched in December 1999 on-board the Terra spacecraft. A follow-on MODIS was launched into an afternoon orbit in 2002 and is aboard the Aqua spacecraft. The two MODIS instruments are very similar; each has 36 bands, among which bands 20 to 25 are Middle Wave Infrared (MWIR) bands covering a wavelength range from approximately 3.750 μm to 4.515 μm. Severe contamination was found in these bands early in the mission, but the effect had not been characterized and mitigated at the time. The crosstalk effect induces strong striping in the Earth View (EV) images and causes significant retrieval errors in the EV Brightness Temperature (BT) in these bands. An algorithm using a linear approximation derived from on-orbit lunar observations has been developed to correct the crosstalk effect and successfully applied to mitigate the effect in both Terra and Aqua MODIS Long Wave Infrared (LWIR) Photovoltaic (PV) bands. In this paper, the crosstalk effect in the Aqua MWIR bands is investigated and characterized by deriving the crosstalk coefficients from the scheduled Aqua MODIS lunar observations for the MWIR bands. It is shown that there is strong crosstalk contamination among the five MWIR bands, and that these bands also suffer significant crosstalk contamination from the Short Wave Infrared (SWIR) bands. The crosstalk correction algorithm previously developed is applied to correct the crosstalk effect in these bands. It is demonstrated that the crosstalk correction successfully reduces the striping in the EV images and improves the accuracy of the EV BT in the five bands, as was done similarly for the LWIR PV bands. The crosstalk correction algorithm should thus be applied to improve both the image quality and radiometric accuracy of the Aqua MODIS MWIR bands Level 1B (L1B) products.
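    The linear crosstalk correction described above has a simple algebraic core: each receiving band's measured signal is reduced by a weighted sum of the sending bands' signals. A toy version follows; the coefficient matrix below is invented for illustration, whereas the real coefficients are derived from lunar observations:

```python
import numpy as np

def correct_crosstalk(dn, C):
    """Linear crosstalk correction: dn_corrected = dn - C @ dn, where
    C[i, j] is the fraction of sending band j's signal leaking into
    receiving band i. Toy sketch of the linear-approximation idea."""
    return dn - C @ dn

dn = np.array([100.0, 80.0, 60.0])       # measured DNs for three bands
C = np.array([[0.00, 0.02, 0.01],        # band i receives from band j
              [0.01, 0.00, 0.02],
              [0.00, 0.01, 0.00]])
clean = correct_crosstalk(dn, C)
```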

  5. Development of hybrid artificial intelligent based handover decision algorithm

    Directory of Open Access Journals (Sweden)

    A.M. Aibinu

    2017-04-01

    Full Text Available The possibility of seamless handover remains a mirage despite the plethora of existing handover algorithms. The underlying factor responsible for this has been traced to the handover decision module in the handover process. Hence, in this paper, a novel hybrid artificial-intelligence-based handover decision algorithm has been developed. The developed model is a hybrid of an Artificial Neural Network (ANN)-based prediction model and Fuzzy Logic. On accessing the network, the Received Signal Strength (RSS) was acquired over a period of time to form time series data. The data were then fed to the newly proposed k-step-ahead ANN-based RSS prediction system for estimation of the prediction model coefficients. The synaptic weights and adaptive coefficients of the trained ANN were then used to compute the k-step-ahead ANN-based RSS prediction model coefficients. The predicted RSS value was later codified as Fuzzy sets and, in conjunction with other measured network parameters, fed into the Fuzzy logic controller in order to finalize the handover decision process. The performance of the newly developed k-step-ahead ANN-based RSS prediction algorithm was evaluated using simulated and real data acquired from available mobile communication networks. Results obtained in both cases show that the proposed algorithm is capable of predicting the RSS value ahead to within about ±0.0002 dB. The cascaded effect of the complete handover decision module was also evaluated. Results obtained show that the newly proposed hybrid approach was able to reduce the ping-pong effect associated with other handover techniques.
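    The k-step-ahead time-series setup described above can be illustrated with a linear autoregressive model standing in for the paper's ANN predictor; the window length, step size, and RSS values below are invented for the example:

```python
import numpy as np

def fit_predict_kstep(rss, order=3, k=1):
    """k-step-ahead RSS prediction sketch: fit a linear autoregressive
    model by least squares on sliding windows of the RSS history, then
    forecast k steps past the end of the series. A stand-in for the
    paper's ANN, shown only to illustrate the windowed setup."""
    rss = np.asarray(rss, float)
    X = np.array([rss[i:i + order] for i in range(len(rss) - order - k + 1)])
    y = rss[order + k - 1:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(rss[-order:] @ coef)

history = np.array([-70.0, -70.5, -71.0, -71.5, -72.0, -72.5])  # dBm, toy
pred = fit_predict_kstep(history, order=3, k=1)
```

On this linearly drifting toy series the forecast continues the trend to -73.0 dBm.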

  6. Tensor Fukunaga-Koontz transform for small target detection in infrared images

    Science.gov (United States)

    Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli

    2016-09-01

    Infrared small target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have caught much attention from researchers. However, these classic methods must reshape images into vectors of high dimensionality, and vectorizing breaks the natural structure and correlations in the image data. Tensor-based image representation treats images as matrices and can retain the natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is one such classification algorithm, but as a vector method it carries the disadvantages of all vector algorithms. In this paper, we first extended the Fukunaga-Koontz transform into its tensor version, the tensor Fukunaga-Koontz transform. Then we designed a method based on the tensor Fukunaga-Koontz transform for detecting targets and used it to detect small targets in infrared images. The experimental results, compared through signal-to-clutter ratio, signal-to-clutter gain and background suppression factor, have validated the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.
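    For reference, the classical (vector) Fukunaga-Koontz transform that the paper generalizes can be sketched as follows: whiten the summed class scatter matrices, then eigendecompose one whitened class scatter. In that space the two classes share eigenvectors and their eigenvalues pair up as λ and 1-λ, so the most discriminative features for one class are the least discriminative for the other. The data below are synthetic; the paper's tensor extension is not shown:

```python
import numpy as np

def fukunaga_koontz_basis(A, B):
    """Classical Fukunaga-Koontz transform sketch: whiten S1 + S2, then
    eigendecompose the whitened class-1 scatter. In the returned basis
    the class-2 eigenvalues are 1 minus the class-1 eigenvalues."""
    S1 = A.T @ A / len(A)
    S2 = B.T @ B / len(B)
    vals, vecs = np.linalg.eigh(S1 + S2)
    W = vecs / np.sqrt(vals)               # whitening transform for S1 + S2
    lam, U = np.linalg.eigh(W.T @ S1 @ W)  # eigenpairs of whitened class 1
    return lam, W @ U

rng = np.random.default_rng(1)
targets = rng.standard_normal((100, 5)) * [3, 1, 1, 1, 1]  # class 1 (toy)
clutter = rng.standard_normal((100, 5)) * [1, 1, 1, 1, 3]  # class 2 (toy)
lam, basis = fukunaga_koontz_basis(targets, clutter)
```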

  7. Photographic infrared spectroscopy and near infrared photometry of Be stars

    International Nuclear Information System (INIS)

    Swings, J.P.

    1976-01-01

    Two topics are tackled in this presentation: spectroscopy and photometry. The following definitions are chosen: photographic infrared spectroscopy (wavelengths Hα<=lambda<1.2 μ); near infrared photometry (wavebands: 1.6 μ<=lambda<=20 μ). Near infrared spectroscopy and photometry of classical and peculiar Be stars are discussed and some future developments in the field are outlined. (Auth.)

  8. Development of secondary cell wall in cotton fibers as examined with Fourier transform-infrared spectroscopy

    Science.gov (United States)

    Our presentation will focus on continuing efforts to examine secondary cell wall development in cotton fibers using infrared Spectroscopy. Cotton fibers harvested at 18, 20, 24, 28, 32, 36 and 40 days after flowering were examined using attenuated total reflection Fourier transform-infrared (ATR FT-...

  9. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.
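    The dilution-compensation idea, with carbon dioxide as the tracer, reduces to averaging the sensor signals and rescaling the alcohol reading by the ratio of a nominal end-expiratory CO2 concentration to the measured one. The sketch below is assumption-laden: the ~4.2% CO2 constant, the function name, and the signal model are illustrative, not the paper's calibration:

```python
import numpy as np

def estimate_brac(alcohol_signal, co2_signal, co2_alveolar=4.2):
    """Dilution-compensated breath alcohol estimate: average both sensor
    signals (signal averaging reduces random error), then scale the
    alcohol reading by nominal end-expiratory CO2 over measured CO2."""
    alc = np.mean(alcohol_signal)
    co2 = np.mean(co2_signal)
    return alc * co2_alveolar / co2

# a breath diluted 2x: both gases arrive at half their undiluted level
brac = estimate_brac([0.105, 0.095, 0.100], [2.2, 2.0, 2.1])
```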

  10. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    Science.gov (United States)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm" is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.

  11. Development of Infrared Phase Closure Capability in the Infrared-Optical Telescope Array (IOTA)

    Science.gov (United States)

    Traub, Wesley A.

    2002-01-01

    We completed all major fabrication and testing for the third telescope and phase-closure operation at the Infrared-Optical Telescope Array (IOTA) during this period. In particular we successfully tested the phase-closure operation, using a laboratory light source illuminating the full delay-line optical paths, and using an integrated-optic beam combiner coupled to our Picnic-detector camera. This demonstration is an important and near-final milestone achievement. As of this writing, however, several tasks yet remain, owing to development snags and weather, so the final proof of success, phase-closure observation of a star, is now expected to occur in early 2002, soon after this report has been submitted.

  12. Cultural Artifact Detection in Long Wave Infrared Imagery.

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Dylan Zachary [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Craven, Julia M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ramon, Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    Detection of cultural artifacts from airborne remotely sensed data is an important task in the context of on-site inspections. Airborne artifact detection can reduce the size of the search area the ground based inspection team must visit, thereby improving the efficiency of the inspection process. This report details two algorithms for detection of cultural artifacts in aerial long wave infrared imagery. The first algorithm creates an explicit model for cultural artifacts, and finds data that fits the model. The second algorithm creates a model of the background and finds data that does not fit the model. Both algorithms are applied to orthomosaic imagery generated as part of the MSFE13 data collection campaign under the spectral technology evaluation project.

  13. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    Science.gov (United States)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  14. Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach

    Science.gov (United States)

    Koeppen, W. C.; Pilger, E.; Wright, R.

    2011-07-01

    We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.
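    The hybrid scheme described above can be caricatured in a few lines: flag a pixel if a fixed-threshold point operation fires (standing in for MODVOLC) or if its z-score against per-pixel reference statistics built from the image time series is large (the RST-style component). The thresholds and data below are illustrative, not the operational values:

```python
import numpy as np

def detect_anomalies(scene, ref_mean, ref_std, point_thresh=330.0, z_thresh=4.0):
    """Hybrid thermal-anomaly detection sketch: a pixel is flagged by a
    simple point operation (brightness temperature above a fixed
    threshold) OR by a large z-score relative to per-pixel reference
    statistics derived from a time series of images."""
    z = (scene - ref_mean) / ref_std
    return (scene > point_thresh) | (z > z_thresh)

ref_mean = np.full((4, 4), 290.0)       # K, per-pixel reference mean
ref_std = np.full((4, 4), 2.0)          # K, per-pixel reference std
scene = ref_mean.copy()
scene[1, 1] = 340.0                     # hot enough for the point test
scene[2, 3] = 300.0                     # subtle, caught only by the z-score
mask = detect_anomalies(scene, ref_mean, ref_std)
```

The subtle pixel illustrates the abstract's point: the time-series component detects low-temperature anomalies that a fixed threshold alone would miss.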

  15. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, complicating detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module; once a candidate is confirmed, the target coordinates are determined and the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  16. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Full Text Available Night vision systems are getting more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms have low performance on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the nonvehicle pixels are removed by visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm was tested on around 6000 images and achieves a detection rate of 92.3% and a processing rate of 25 Hz, which is better than existing methods.

  17. An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP

    Science.gov (United States)

    Moncet, J. L.

    2015-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion.
While OE algorithms typically implement regularization by using background estimates from
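The optimal-estimation update alluded to in the abstract can be sketched as a generic Rodgers-type step in which the background covariance regularizes the inversion (a minimal illustration with toy Jacobian and covariances, not the AER implementation):

```python
import numpy as np

# Toy linear forward model y = K x with a Gaussian background (prior)
K = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.3, 0.1]])            # Jacobian: 3 channels, 2 state elements
Sa = np.diag([1.0, 1.0])              # background-error covariance
Se = np.diag([0.01, 0.01, 0.01])      # measurement-noise covariance

xa = np.zeros(2)                      # background (a priori) state
x_true = np.array([1.0, -0.5])
y = K @ x_true                        # noise-free synthetic measurement

# One OE update: the background term Sa^-1 regularizes the inversion
Se_inv = np.linalg.inv(Se)
gain = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
x_hat = xa + gain @ (y - K @ xa)      # close to x_true, shrunk toward xa
```

With a well-constrained measurement, the retrieved state is close to the truth, pulled slightly toward the background estimate.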

  18. Development of the algorithm for obtaining 3-dimensional information using the structured light

    Energy Technology Data Exchange (ETDEWEB)

Shin, Dong Uk; Lee, Jae Hyub; Kim, Chung Soo [Korea University of Technology and Education, Cheonan (Korea)]

    1998-03-01

The utilization of robots in atomic power plants or nuclear-related facilities has grown rapidly. In order to perform preassigned jobs using a robot in nuclear-related facilities, advanced technology for extracting 3D information of objects is essential. We have studied an algorithm to extract 3D information of objects using laser slit light and a camera, and developed the following hardware system and algorithms. (1) We designed and fabricated the hardware system, which consists of a laser light and two cameras and can be easily installed on the robot. (2) In order to reduce the occlusion problem when measuring 3D information using laser slit light and a camera, we studied a system with laser slit light and two cameras and developed an algorithm to synthesize the 3D information obtained from the two cameras. (3) For easy use of the obtained 3D information, we expressed it in a digital distance image format and developed an algorithm to interpolate the 3D information of points which are not obtained. (4) In order to simplify calibration of the camera's parameters, we designed and fabricated an LED plate, and developed an algorithm for detecting the center position of each LED automatically. We confirm the efficiency of the developed algorithms and hardware system through experimental results. 16 refs., 26 figs., 1 tab. (Author)
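The interpolation step in item (3) can be illustrated with a minimal sketch that fills unmeasured pixels of a digital distance image by averaging valid neighbours (a simple diffusion-style scheme assumed for illustration, not the authors' algorithm):

```python
import numpy as np

def fill_holes(depth, invalid=0.0, max_iters=10):
    """Fill invalid pixels of a digital distance image by iteratively
    averaging the valid 4-neighbours (simple diffusion inpainting)."""
    d = depth.astype(float).copy()
    for _ in range(max_iters):
        holes = d == invalid
        if not holes.any():
            break
        p = np.pad(d, 1, mode="edge")
        neigh = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                          p[1:-1, :-2], p[1:-1, 2:]])
        valid = neigh != invalid
        counts = valid.sum(axis=0)
        sums = np.where(valid, neigh, 0.0).sum(axis=0)
        fill = np.divide(sums, counts,
                         out=np.zeros_like(sums), where=counts > 0)
        d[holes & (counts > 0)] = fill[holes & (counts > 0)]
    return d

# A 3x3 distance image with one unmeasured (0) pixel in the middle
img = np.array([[1.0, 1.0, 1.0],
                [1.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])
filled = fill_holes(img)
```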

  19. Human body region enhancement method based on Kinect infrared imaging

    Science.gov (United States)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is used to enhance the human body region. Firstly, for the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Secondly, to enhance the human body region further, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, Laplacian Pyramid decomposition is adopted to enhance the contour-improved human body region. Meanwhile, the background area without the human body region is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method can effectively enhance the human body region of such infrared images.
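The detail-boosting idea behind the Laplacian-pyramid stage can be sketched in a simplified single-level form (numpy only; the `blur`/`enhance` helpers and the gain value are illustrative assumptions, not the paper's OCTM/Level Set pipeline):

```python
import numpy as np

def blur(img):
    """Separable 3-tap Gaussian-like blur with edge replication."""
    k = (0.25, 0.5, 0.25)
    p = np.pad(img, 1, mode="edge")
    tmp = k[0] * p[:-2, 1:-1] + k[1] * p[1:-1, 1:-1] + k[2] * p[2:, 1:-1]
    p = np.pad(tmp, 1, mode="edge")
    return k[0] * p[1:-1, :-2] + k[1] * p[1:-1, 1:-1] + k[2] * p[1:-1, 2:]

def enhance(img, gain=1.5):
    """One pyramid level: amplify the Laplacian (detail) layer,
    out = base + gain * (img - base)."""
    base = blur(img)
    return base + gain * (img - base)

# A step edge: amplifying the detail layer overshoots on both sides,
# which is what sharpens the perceived contour
step = np.zeros((8, 8))
step[:, 4:] = 1.0
sharpened = enhance(step)
```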

  20. A Developed Artificial Bee Colony Algorithm Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Ye Jin

    2018-04-01

Full Text Available The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative expression; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and avoid getting trapped in local optima by introducing a new selection mechanism, replacing the onlooker bees’ search formula, and changing the scout bees’ updating formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.
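A minimal sketch of an ABC loop with a cloud-model-style onlooker phase, under the assumption that the cloud model is realized as a normal cloud (the perturbation width is itself drawn from a normal distribution); all parameter values are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def abc_cloud(f, dim=2, n_food=10, iters=200, limit=20, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        # Employed bees: classic ABC neighbour search
        for i in range(n_food):
            k = int(rng.integers(n_food - 1))
            k += k >= i                      # partner index != i
            j = rng.integers(dim)
            cand = foods[i].copy()
            cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        # Onlooker bees, cloud-model style: search around the best source,
        # with the perturbation width itself drawn from a normal
        # distribution (Ex = best, En = entropy, He = hyper-entropy),
        # like normal-cloud "drops"; fitness-proportional selection is
        # omitted for brevity
        best = foods[fit.argmin()].copy()
        En, He = 0.5, 0.05
        for i in range(n_food):
            En_i = abs(rng.normal(En, He))
            cand = np.clip(rng.normal(best, En_i), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
        # Scout bees: abandon exhausted food sources
        worn = trials > limit
        if worn.any():
            foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
            fit[worn] = [f(x) for x in foods[worn]]
            trials[worn] = 0
    i_best = fit.argmin()
    return foods[i_best], float(fit[i_best])

best_x, best_f = abc_cloud(sphere)
```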

  1. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  2. Search for 'Little Higgs' and reconstruction algorithms developments in Atlas

    International Nuclear Information System (INIS)

    Rousseau, D.

    2007-05-01

This document summarizes developments of the framework and reconstruction algorithms for the ATLAS detector at the LHC. A library of reconstruction algorithms has been developed in an increasingly complex environment. The reconstruction software, originally designed on an optimistic Monte Carlo simulation, has been confronted with a more detailed 'as-built' simulation. The 'Little Higgs' is an effective theory which can be taken at face value, or as an opportunity to study heavy resonances. In several cases, these resonances can be detected in original channels like tZ, ZH or WH. (author)

  3. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  4. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  5. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
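The API idea, wrapping each data-mining algorithm behind a uniform train/apply interface so it can be swapped into a post-processing workflow, can be sketched with a toy clustering example (the class and method names are illustrative, not RAVEN's actual interfaces):

```python
import numpy as np

class KMeansAPI:
    """Toy illustration of a post-processing API wrapping a data-mining
    algorithm behind a uniform train/apply interface (class and method
    names are illustrative, not RAVEN's actual interfaces)."""

    def __init__(self, n_clusters=2, iters=50, seed=0):
        self.k, self.iters = n_clusters, iters
        self.rng = np.random.default_rng(seed)
        self.centers = None

    def train(self, data):
        # Initialize centers from random distinct samples, then iterate
        # the usual assign/update steps of k-means
        idx = self.rng.choice(len(data), self.k, replace=False)
        self.centers = data[idx].astype(float)
        for _ in range(self.iters):
            labels = self.apply(data)
            for j in range(self.k):
                pts = data[labels == j]
                if len(pts):
                    self.centers[j] = pts.mean(axis=0)
        return self

    def apply(self, data):
        # Assign each sample (e.g. one sampled scenario) to its
        # nearest cluster center
        d = np.linalg.norm(data[:, None, :] - self.centers[None], axis=2)
        return d.argmin(axis=1)

# Scenario outcomes from two regimes of a sampled parameter study
data = np.vstack([np.zeros((20, 2)), 5.0 * np.ones((20, 2))])
labels = KMeansAPI(n_clusters=2).train(data).apply(data)
```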

  6. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  7. Estimating the marine signal in the near infrared for atmospheric correction of satellite ocean-color imagery over turbid waters

    Science.gov (United States)

    Bourdet, Alice; Frouin, Robert J.

    2014-11-01

The classic atmospheric correction algorithm, routinely applied to second-generation ocean-color sensors such as SeaWiFS, MODIS, and MERIS, consists of (i) estimating the aerosol reflectance in the red and near infrared (NIR) where the ocean is considered black (i.e., totally absorbing), and (ii) extrapolating the estimated aerosol reflectance to shorter wavelengths. The marine reflectance is then retrieved by subtraction. Variants and improvements have been made over the years to deal with non-null reflectance in the red and near infrared, a general situation in estuaries and the coastal zone, but the solutions proposed so far still suffer some limitations, due to uncertainties in marine reflectance modeling in the near infrared or the difficulty of extrapolating the aerosol signal to the blue when using observations in the shortwave infrared (SWIR), a spectral range far from the ocean-color wavelengths. To estimate the marine signal (i.e., the product of marine reflectance and atmospheric transmittance) in the near infrared, the proposed approach is to decompose the aerosol reflectance in the near infrared to shortwave infrared into principal components. Since aerosol scattering is smooth spectrally, a few components are generally sufficient to represent the perturbing signal, i.e., the aerosol reflectance in the near infrared can be determined from measurements in the shortwave infrared where the ocean is black. This gives access to the marine signal in the near infrared, which can then be used in the classic atmospheric correction algorithm. The methodology is evaluated theoretically from simulations of the top-of-atmosphere reflectance for a wide range of geophysical conditions and angular geometries and applied to actual MODIS imagery acquired over the Gulf of Mexico. The number of discarded pixels is reduced by over 80% using the PC modeling to determine the marine signal in the near infrared prior to applying the classic atmospheric correction algorithm.
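The principal-component step can be sketched as follows: fit PCs to smooth aerosol-like spectra spanning the NIR and SWIR, solve for the component amplitudes from SWIR-only observations, and reconstruct the NIR part. All band centres and spectra below are illustrative assumptions; for clarity the test spectrum is taken to lie exactly in the PC span, whereas real spectra are only approximately so:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical band centres (nm): two NIR and three SWIR bands
bands = np.array([748.0, 869.0, 1240.0, 1640.0, 2130.0])
nir, swir = slice(0, 2), slice(2, 5)

# Training set of smooth aerosol-like spectra (power laws in wavelength)
angstrom = rng.uniform(0.2, 2.0, 200)
amp = rng.uniform(0.01, 0.1, 200)
train = amp[:, None] * (bands[None, :] / 869.0) ** (-angstrom[:, None])

# Principal components of the full NIR+SWIR spectra
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:2]            # smooth spectra: a few components suffice

# "Observed" spectrum lying in the PC span; measure SWIR only,
# then reconstruct the NIR aerosol reflectance from the fitted scores
truth = mean + np.array([0.02, -0.01]) @ pcs
scores, *_ = np.linalg.lstsq(pcs[:, swir].T, truth[swir] - mean[swir],
                             rcond=None)
recon_nir = mean[nir] + scores @ pcs[:, nir]
```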

  8. Edge enhancement and noise suppression for infrared image based on feature analysis

    Science.gov (United States)

    Jiang, Meng

    2018-06-01

Infrared images often suffer from background noise, blurred edges, few details, and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To realize this, we propose a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform, one of the multi-scale geometric analysis (MGA) tools. Secondly, after analyzing the shortcomings of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are constructed to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.
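The interscale-correlation idea, that edges correlate across scales while noise does not, can be sketched with a crude two-scale decomposition standing in for the shearlet transform (the `blur` helper, the noise level, and the threshold are illustrative assumptions):

```python
import numpy as np

def blur(img, passes):
    """Repeated 5-point smoothing as a crude scale-space
    (a stand-in for the shearlet decomposition)."""
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        out = (p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:] + 4.0 * out) / 8.0
    return out

def interscale_mask(img, thresh):
    """Keep detail coefficients whose product across two scales is
    large: real edges correlate across scales, noise does not."""
    d1 = img - blur(img, 1)            # fine-scale detail
    d2 = blur(img, 1) - blur(img, 3)   # coarser-scale detail
    return d1 * d2 > thresh

# Vertical step edge plus weak noise
rng = np.random.default_rng(2)
img = np.zeros((16, 16))
img[:, 8:] = 1.0
noisy = img + 0.02 * rng.normal(size=img.shape)
mask = interscale_mask(noisy, thresh=1e-3)
```

The mask fires along the step edge but stays mostly off in the flat noisy region, which is the behaviour a correlation-improved threshold exploits.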

  9. Advances in research and development homojunction and quantum-well infrared detectors

    CERN Document Server

    Francombe, Maurice H

    1995-01-01

Physics of Thin Films is one of the longest-running continuing series in thin film science, consisting of twenty volumes since 1963. The series contains quality studies of the properties of various thin-film materials and systems. In order to reflect the development of today's science and to cover all modern aspects of thin films, the series, starting with Volume 20, has moved beyond the basic physics of thin films. It now addresses the most important aspects of both inorganic and organic thin films, in both their theoretical and technological aspects. Therefore, in order to reflect the modern technology-oriented problems, the title has been slightly modified from Physics of Thin Films to Thin Films. Key Features: * Discusses the latest research about structure, physics, and infrared photoemissive behavior of heavily doped silicon homojunctions and Ge and GaAs-based alloy junctions * Reviews the current status of SiGe/Si quantum wells for infrared detection * Discusses key developments in the gro...

  10. History of infrared detectors

    Science.gov (United States)

    Rogalski, A.

    2012-09-01

This paper overviews the history of infrared detector materials, starting with Herschel's experiment with a thermometer on February 11th, 1800. Infrared detectors are in general used to detect, image, and measure patterns of the thermal heat radiation which all objects emit. At the beginning, their development was connected with thermal detectors, such as thermocouples and bolometers, which are still used today and which are generally sensitive to all infrared wavelengths and operate at room temperature. The second kind of detectors, called photon detectors, was mainly developed during the 20th century to improve sensitivity and response time. These detectors have been extensively developed since the 1940s. Lead sulphide (PbS) was the first practical IR detector, with sensitivity to infrared wavelengths up to ˜3 μm. After World War II, infrared detector technology development was, and continues to be, primarily driven by military applications. The discovery of the variable band gap HgCdTe ternary alloy by Lawson and co-workers in 1959 opened a new area in IR detector technology and has provided an unprecedented degree of freedom in infrared detector design. Many of these advances were transferred to IR astronomy from Departments of Defence research. Later on, civilian applications of infrared technology were frequently called "dual-use technology applications." One should point out the growing utilisation of IR technologies in the civilian sphere based on the use of new materials and technologies, as well as the noticeable price decrease in these high-cost technologies. In the last four decades, different types of detectors have been combined with electronic readouts to make detector focal plane arrays (FPAs). Development in FPA technology has revolutionized infrared imaging. Progress in integrated circuit design and fabrication techniques has resulted in continued rapid growth in the size and performance of these solid state arrays.

  11. On-line monitoring of extraction process of Flos Lonicerae Japonicae using near infrared spectroscopy combined with synergy interval PLS and genetic algorithm

    Science.gov (United States)

    Yang, Yue; Wang, Lei; Wu, Yongjiang; Liu, Xuesong; Bi, Yuan; Xiao, Wei; Chen, Yong

    2017-07-01

There is a growing need for effective on-line process monitoring during the manufacture of traditional Chinese medicine to ensure quality consistency. In this study, the potential of the near infrared (NIR) spectroscopy technique to monitor the extraction process of Flos Lonicerae Japonicae was investigated. A new algorithm of synergy interval PLS with genetic algorithm (Si-GA-PLS) was proposed for modeling. Four different PLS models, namely Full-PLS, Si-PLS, GA-PLS, and Si-GA-PLS, were established, and their performances in predicting two quality parameters (viz. total acid and soluble solid contents) were compared. In conclusion, the Si-GA-PLS model gave the best results due to the combination of the advantages of Si-PLS and GA. For Si-GA-PLS, the determination coefficient (Rp2) and root-mean-square error of prediction (RMSEP) were 0.9561 and 147.6544 μg/ml for total acid, and 0.9062 and 0.1078% for soluble solid contents, respectively. The overall results demonstrated that the NIR spectroscopy technique combined with Si-GA-PLS calibration is a reliable and non-destructive alternative method for on-line monitoring of the extraction process of TCM on the production scale.
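The interval-selection idea behind Si-GA-PLS can be sketched with a tiny genetic algorithm choosing spectral intervals for a regularized regression (ridge stands in for PLS to keep the sketch self-contained; the synthetic data, interval layout, and GA settings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "spectra": 100 samples x 60 wavelengths; only intervals
# 2 and 5 (of six 10-channel intervals) carry information about y
X = rng.normal(size=(100, 60))
coef = np.zeros(60)
coef[20:30], coef[50:60] = 1.0, -0.5
y = X @ coef + 0.1 * rng.normal(size=100)

n_int = 6
edges = np.arange(0, 61, 10)

def rmse_cv(mask):
    """Hold-out error of ridge regression on the selected intervals."""
    cols = np.concatenate([np.arange(edges[i], edges[i + 1])
                           for i in range(n_int) if mask[i]]
                          or [np.array([], dtype=int)])
    if cols.size == 0:
        return np.inf
    tr, te = slice(0, 60), slice(60, 100)
    A = X[tr][:, cols]
    w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(cols.size), A.T @ y[tr])
    pred = X[te][:, cols] @ w
    return float(np.sqrt(np.mean((pred - y[te]) ** 2)))

# Tiny genetic algorithm over binary interval-selection masks
pop = rng.integers(0, 2, (12, n_int)).astype(bool)
for _ in range(30):
    fit = np.array([rmse_cv(m) for m in pop])
    pop = pop[fit.argsort()]                 # elitist: keep best half
    for i in range(6, 12):                   # breed replacements
        a, b = pop[rng.integers(6)], pop[rng.integers(6)]
        cut = rng.integers(1, n_int)
        child = np.concatenate([a[:cut], b[cut:]])
        child[rng.integers(n_int)] ^= True   # mutation
        pop[i] = child
best = pop[np.argmin([rmse_cv(m) for m in pop])]
```

The GA converges on a mask that includes the two informative intervals, since dropping either one leaves a large unexplained signal.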

  12. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    Science.gov (United States)

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56–0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement, and the previously published HALT-C model had worse discrimination than the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
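The c-statistic used to compare the models is the probability that a randomly chosen case outranks a randomly chosen non-case; a minimal sketch with toy scores:

```python
import numpy as np

def c_statistic(risk, event):
    """Concordance (c-statistic / AUROC): the probability that a randomly
    chosen patient who developed the outcome received a higher risk score
    than a randomly chosen patient who did not; ties count one half."""
    risk = np.asarray(risk, dtype=float)
    event = np.asarray(event, dtype=bool)
    pos, neg = risk[event], risk[~event]
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum())
                 / (len(pos) * len(neg)))

# Toy scores: 3 patients with the outcome (1) and 3 without (0)
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
events = [1, 1, 0, 1, 0, 0]
auc = c_statistic(scores, events)  # 8 of 9 case/non-case pairs concordant
```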

  13. Single-footprint retrievals for AIRS using a fast TwoSlab cloud-representation model and the SARTA all-sky infrared radiative transfer algorithm

    Science.gov (United States)

    DeSouza-Machado, Sergio; Larrabee Strow, L.; Tangborn, Andrew; Huang, Xianglei; Chen, Xiuhong; Liu, Xu; Wu, Wan; Yang, Qiguang

    2018-01-01

    One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2-4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT) which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. 
All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP thermodynamic and cloud

  14. Single-footprint retrievals for AIRS using a fast TwoSlab cloud-representation model and the SARTA all-sky infrared radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    S. DeSouza-Machado

    2018-01-01

Full Text Available One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2–4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT) which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP

  15. Algorithm theoretical baseline for formaldehyde retrievals from S5P TROPOMI and from the QA4ECV project

    Directory of Open Access Journals (Sweden)

    I. De Smedt

    2018-04-01

Full Text Available On board the Copernicus Sentinel-5 Precursor (S5P) platform, the TROPOspheric Monitoring Instrument (TROPOMI) is a double-channel, nadir-viewing grating spectrometer measuring solar back-scattered earthshine radiances in the ultraviolet, visible, near-infrared, and shortwave infrared with global daily coverage. In the ultraviolet range, its spectral resolution and radiometric performance are equivalent to those of its predecessor OMI, but its horizontal resolution at true nadir is improved by an order of magnitude. This paper introduces the formaldehyde (HCHO) tropospheric vertical column retrieval algorithm implemented in the S5P operational processor and comprehensively describes its various retrieval steps. Furthermore, algorithmic improvements developed in the framework of the EU FP7-project QA4ECV are described for future updates of the processor. Detailed error estimates are discussed in the light of Copernicus user requirements and needs for validation are highlighted. Finally, verification results based on the application of the algorithm to OMI measurements are presented, demonstrating the performances expected for TROPOMI.

  16. Development of target-tracking algorithms using neural network

    Energy Technology Data Exchange (ETDEWEB)

    Park, Dong Sun; Lee, Joon Whaoan; Yoon, Sook; Baek, Seong Hyun; Lee, Myung Jae [Chonbuk National University, Chonjoo (Korea)

    1998-04-01

The utilization of remote-control robot systems in atomic power plants or nuclear-related facilities is growing rapidly, to protect workers from high-radiation environments. Such applications require complete stability of the robot system, so precisely tracking the robot is essential for the whole system. This research accomplishes that goal by developing appropriate algorithms for remote-control robot systems. A neural network tracking system is designed and tested to trace a robot endpoint. This model aims to utilize the excellent capabilities of neural networks: nonlinear mapping between inputs and outputs, learning capability, and generalization capability. The neural tracker consists of two networks, for position detection and prediction. Tracking algorithms are developed and tested for the two models. Results of the experiments show that both models are promising as real-time target-tracking systems for remote-control robot systems. (author). 10 refs., 47 figs.
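The prediction half of such a tracker can be illustrated in a deliberately simplified form: a single linear layer fit by least squares that maps the last two observed endpoint positions to the next one (sufficient for constant-velocity motion; the original work used neural networks for both detection and prediction):

```python
import numpy as np

# Straight-line endpoint track sampled at 50 time steps
t = np.arange(50, dtype=float)
track = np.stack([2.0 + 0.5 * t, 1.0 + 0.3 * t], axis=1)

# Training pairs: (position at t, position at t+1) -> position at t+2
X = np.hstack([track[:-2], track[1:-1]])
Y = track[2:]
Xb = np.hstack([X, np.ones((len(X), 1))])     # bias column
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)    # fit the linear predictor

# One-step-ahead prediction from the last two observed positions
last_two = np.concatenate([track[-2], track[-1], [1.0]])
pred = last_two @ W
```

For this constant-velocity track the exact predictor is p(t+2) = 2 p(t+1) - p(t), so the fitted layer predicts the next position (27, 16) to machine precision; a real tracker would replace the linear layer with the trained networks.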

  17. Development of web-based reliability data analysis algorithm model and its application

    International Nuclear Information System (INIS)

    Hwang, Seok-Won; Oh, Ji-Yong; Moosung-Jae

    2010-01-01

For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diaries. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures, and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.
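The Fussell-Vesely importance measure used for the validation can be sketched under the rare-event approximation (the cut sets and probabilities below are illustrative, not plant data):

```python
def fussell_vesely(cut_sets, probs):
    """Fussell-Vesely importance: the fraction of top-event probability
    contributed by minimal cut sets containing each component, under the
    rare-event approximation P(top) ~ sum of cut-set products."""
    def cs_prob(cs):
        p = 1.0
        for comp in cs:
            p *= probs[comp]
        return p

    total = sum(cs_prob(cs) for cs in cut_sets)
    components = {comp for cs in cut_sets for comp in cs}
    return {comp: sum(cs_prob(cs) for cs in cut_sets if comp in cs) / total
            for comp in components}

# Illustrative fault tree: redundant pumps in one cut set, a single valve
cut_sets = [{"pumpA", "pumpB"}, {"valve"}]
probs = {"pumpA": 1e-2, "pumpB": 1e-2, "valve": 1e-4}
fv = fussell_vesely(cut_sets, probs)
```

Here both cut sets contribute 1e-4, so the valve and each pump carry a Fussell-Vesely importance of 0.5.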

  18. Development of web-based reliability data analysis algorithm model and its application

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seok-Won, E-mail: swhwang@khnp.co.k [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Oh, Ji-Yong [Korea Hydro and Nuclear Power Co. Ltd., Jang-Dong 25-1, Yuseong-Gu, 305-343 Daejeon (Korea, Republic of); Moosung-Jae [Department of Nuclear Engineering Hanyang University 17 Haengdang, Sungdong, Seoul (Korea, Republic of)

    2010-02-15

For this study, a database model of plant reliability was developed for the effective acquisition and management of plant-specific data that can be used in various applications of plant programs as well as in Probabilistic Safety Assessment (PSA). Through the development of a web-based reliability data analysis algorithm, this approach systematically gathers specific plant data such as component failure history, maintenance history, and shift diaries. First, for the application of the developed algorithm, this study reestablished the raw data types, data deposition procedures, and features of the Enterprise Resource Planning (ERP) system process. The component codes and system codes were standardized to make statistical analysis between different types of plants possible. This standardization contributes to the establishment of a flexible database model that allows the customization of reliability data for various applications depending on component types and systems. In addition, this approach makes it possible for users to perform trend analyses and data comparisons for the significant plant components and systems. The validation of the algorithm is performed through a comparison of the importance measure value (Fussell-Vesely) of the mathematical calculation and that of the algorithm application. The development of a reliability database algorithm is one of the best approaches for providing systematic management of plant-specific reliability data with transparency and continuity. This proposed algorithm reinforces the relationships between raw data and application results so that it can provide a comprehensive database that offers everything from basic plant-related data to final customized data.

  19. Face recognition in the thermal infrared domain

    Science.gov (United States)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods in face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images offer advantages over visible-light images and can be used to improve algorithms for human face recognition in several aspects. Mid-wavelength and far-wavelength infrared, also referred to as thermal infrared, therefore seem to be promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  20. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport.

  1. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    Science.gov (United States)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application to studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique (CST), calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), is applied from January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (with a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown both to enhance early-morning rainfall and to inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that the TMI estimates alone are possibly also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77, based on 72 station-months.
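The verification statistics quoted above (bias, bias-removed RMS difference, and correlation, each expressed relative to the gauge mean) can be reproduced for any estimate/gauge pair. The sketch below uses simulated station-month values, not the Ceara data:

```python
import numpy as np

def verify(est, gauge):
    """Bias (% of gauge mean), RMS difference after bias removal
    (% of gauge mean), and linear correlation coefficient."""
    gmean = gauge.mean()
    bias_pct = 100.0 * (est.mean() - gmean) / gmean
    resid = (est - est.mean()) - (gauge - gmean)   # difference after bias removal
    rms_pct = 100.0 * np.sqrt(np.mean(resid ** 2)) / gmean
    corr = np.corrcoef(est, gauge)[0, 1]
    return bias_pct, rms_pct, corr

rng = np.random.default_rng(0)
gauge = rng.uniform(50, 250, size=72)             # 72 simulated station-months
est = 1.33 * gauge + rng.normal(0, 30, size=72)   # a ~33% high-biased estimate
bias_pct, rms_pct, corr = verify(est, gauge)
```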

  2. Infrared Contrast Enhancement Through Log-Power Histogram Modification

    NARCIS (Netherlands)

    Toet, A.; Wu, T.

    2015-01-01

    A simple power-logarithm histogram modification operator is proposed to enhance infrared (IR) image contrast. The algorithm combines a logarithm operator that smoothes the input image histogram while retaining the relative ordering of the original bins, with a power operator that restores the
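One plausible reading of the log-power idea is to equalize against a histogram whose bins have been damped by a logarithm (smoothing dominant bins while preserving their ordering) and reshaped by a power term. The sketch below is illustrative, not the authors' exact operator; the exponent `p` is an assumed parameter:

```python
import numpy as np

def log_power_hist_eq(img, p=0.7, bins=256):
    """Histogram-equalize an 8-bit-range image against a log-power-modified
    histogram: log1p damps dominant bins, the power term partially restores
    their relative weight, and the resulting CDF maps gray levels."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    h = np.log1p(hist.astype(float)) ** p    # log then power modification
    cdf = np.cumsum(h)
    cdf = 255.0 * cdf / cdf[-1]              # normalized mapping to 0..255
    idx = np.clip(img.astype(int), 0, 255)   # pixel value -> histogram bin
    return cdf[idx]

rng = np.random.default_rng(1)
ir = rng.normal(120, 10, size=(64, 64)).clip(0, 255)   # low-contrast IR frame
enhanced = log_power_hist_eq(ir)             # contrast is stretched
```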

  3. Infrared and visible images registration with adaptable local-global feature integration for rail inspection

    Science.gov (United States)

    Tang, Chaoqing; Tian, Gui Yun; Chen, Xiaotian; Wu, Jianbo; Li, Kongjing; Meng, Hongying

    2017-12-01

    Active thermography provides infrared images that contain sub-surface defect information, while visible images reveal only surface information. Mapping infrared information onto visible images offers more comprehensive visualization for decision-making in rail inspection. However, the common information available for registration is limited because the modalities differ at both the local and global levels. For example, the rail track, which has low temperature contrast, reveals rich details in visible images but turns blurry in the infrared counterparts. This paper proposes a registration algorithm called Edge-Guided Speeded-Up-Robust-Features (EG-SURF) to address this issue. Rather than sequentially integrating local and global information in the matching stage, which suffers from the bucket effect, the algorithm adaptively integrates local and global information into a descriptor to gather more common information before matching. This adaptability has two facets: an adaptable weighting factor between local and global information, and an adaptable main-direction accuracy. The local information is extracted using SURF, while the global information is represented by shape context derived from edges. Meanwhile, in the shape context generation process, edges are weighted according to local scale and decomposed into bins in a vector-decomposition manner to provide a more accurate descriptor. The proposed algorithm is qualitatively and quantitatively validated on eddy current pulsed thermography scenes in the experiments. In comparison with other algorithms, better performance is achieved.
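A minimal sketch of the descriptor-level fusion idea (not the authors' exact EG-SURF construction): normalize a local SURF-like vector and a global shape-context vector, weight them by an adaptable factor `w`, and concatenate them into a single descriptor before matching:

```python
import numpy as np

def fused_descriptor(local_desc, global_desc, w):
    """Adaptive local/global integration: weight and concatenate a local
    (SURF-like) and a global (shape-context-like) descriptor so that a single
    nearest-neighbor match uses both cues. w would be chosen adaptively."""
    l = local_desc / (np.linalg.norm(local_desc) + 1e-12)
    g = global_desc / (np.linalg.norm(global_desc) + 1e-12)
    d = np.concatenate([(1.0 - w) * l, w * g])
    return d / (np.linalg.norm(d) + 1e-12)   # unit length for cosine matching

local_part = np.random.default_rng(2).random(64)    # 64-d SURF-like vector
global_part = np.random.default_rng(3).random(60)   # e.g. 5x12 shape-context bins
desc = fused_descriptor(local_part, global_part, w=0.4)
```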

  4. [Application of wavelength selection algorithm to measure the effective component of Chinese medicine based on near-infrared spectroscopy].

    Science.gov (United States)

    Gu, Xiao-Yu; Xu, Ke-Xin; Wang, Yan

    2006-09-01

    Near-infrared (NIR) spectroscopy has raised considerable interest in the pharmaceutical industry because it is a rapid and cost-effective analytical technique that requires no extensive sample preparation and is readily applied on-line. NIR technology can raise the quality-control standard of Chinese medicine and accelerate its entry into the international market. In the present paper, two wavelength selection methods are applied to the measurement of borneol. The first is multiple-chain stepwise selection, which tends to select many variables within the same informative spectral region; the second is a mixture genetic algorithm, which incorporates simulated annealing to improve the local searching ability while maintaining the global searching ability. The results show that the number of wavelengths is reduced to 16% of the original, while the prediction accuracy is increased by 47.6%. Therefore, wavelength selection is a good way to enhance prediction accuracy and simplify models in the NIR region.
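A toy genetic algorithm with an annealing-style (cooling) mutation rate illustrates the wavelength-selection idea. The data, fitness function, and parameters below are all invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy NIR data: 100 spectra x 50 wavelengths; the analyte affects only a few bands
X = rng.normal(size=(100, 50))
y = X[:, 10] + 0.5 * X[:, 25] + 0.1 * rng.normal(size=100)

def fitness(mask):
    """Negative (RMSE + sparsity penalty) of a least-squares model fitted on
    the selected wavelengths; the penalty stands in for model simplicity."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rmse = np.sqrt(np.mean((Xs @ coef - y) ** 2))
    return -(rmse + 0.01 * mask.sum())

pop = rng.random((20, 50)) < 0.3                    # random initial subsets
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]      # keep the best half
    children = []
    for _ in range(10):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        cut = rng.integers(1, 49)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        p_mut = 0.10 * (1.0 - gen / 40.0)           # annealing-style cooling
        child = child ^ (rng.random(50) < p_mut)    # bit-flip mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]    # selected wavelength mask
```

The cooling mutation rate plays the role the abstract assigns to simulated annealing: broad exploration early on, local refinement later.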

  5. Influence Of Nonuniformity On Infrared Focal Plane Array Performance

    Science.gov (United States)

    Milton, A. F.; Barone, F. R.; Kruer, M. R.

    1985-08-01

    It is well known that detector response nonuniformity results in pattern noise with staring sensors, a severe problem in the infrared due to the low intrinsic contrast of IR imagery. The pattern noise can be corrected by electronic processing; however, the ability to correct for pattern noise is limited by the interaction of interscene and intrascene variability with the dynamic range of the processor (number of bits) and, depending upon the algorithm used, by nonlinearities in the detector response. This paper quantifies these limitations and describes the interaction of detector gain nonuniformity and detector nonlinearities. Probabilistic models are developed to determine the maximum sensitivity that can be obtained using a two-point algorithm to correct a nonlinear response curve over a wide temperature range. Curves that permit a prediction of the noise equivalent differential temperature (NEΔT) under varying circumstances are presented. A piecewise-linear approach to dealing with severe detector response nonlinearities is presented and analyzed for its effectiveness.
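For reference, the two-point correction itself is straightforward when the detector is linear: per-pixel gain and offset are solved from two uniform reference frames. The sketch below, on synthetic data, shows the exact case that the paper's analysis perturbs with nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(1)
gain = 1.0 + 0.1 * rng.normal(size=(4, 4))    # per-detector gain nonuniformity
offset = 2.0 * rng.normal(size=(4, 4))        # per-detector offset

def detector(flux):
    """Idealized linear detector; the paper studies what happens when this
    linearity assumption breaks down over a wide temperature range."""
    return gain * flux + offset

# Two uniform (blackbody) reference frames at known flux levels
T_cold, T_hot = 10.0, 50.0
F_cold, F_hot = detector(T_cold), detector(T_hot)

# Per-pixel two-point correction coefficients
g = (T_hot - T_cold) / (F_hot - F_cold)
o = T_cold - g * F_cold

scene = detector(30.0)            # uniform 30-unit scene seen through the array
corrected = g * scene + o         # pattern noise removed: all pixels = 30.0
```

With a nonlinear response, the two reference points no longer determine the whole curve, which is what motivates the paper's piecewise-linear extension.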

  6. Optimal hemodynamic response model for functional near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad Kamran

    2015-06-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650-950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled using two Gamma functions with six unknown parameters. The measured signal is modeled as a linear combination of the HRF, a baseline, and physiological noises (whose amplitudes and frequencies are assumed unknown). An objective function is developed as the square of the residuals, with constraints on the twelve free parameters. The formulated problem is solved using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in the HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using ten real and fifteen simulated data sets. Ten healthy subjects participated in the experiment, and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity-strength parameters has been verified by statistical analysis, i.e., t-value > t_critical and p-value < 0.05.

  7. Optimal hemodynamic response model for functional near-infrared spectroscopy.

    Science.gov (United States)

    Kamran, Muhammad A; Jeong, Myung Yung; Mannan, Malik M N

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650-950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape, and the other two for scaling and baseline, respectively). The HRF model is supposed to be a linear combination of HRF, baseline, and physiological noises (amplitudes and frequencies of physiological noises are supposed to be unknown). An objective function is developed as a square of the residuals with constraints on 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRF for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > t_critical and p-value < 0.05).
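The double-gamma cHRF described above can be written down directly. The sketch below uses common default shape parameters (peak near 5 s, undershoot near 15 s) rather than the paper's fitted values, which are estimated per subject:

```python
import numpy as np
from math import gamma

def canonical_hrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, c=1/6.0):
    """Difference of two gamma densities: a positive response peaking at
    (a1-1)*b1 seconds minus a scaled undershoot peaking at (a2-1)*b2 seconds.
    In the paper, all six parameters are free and fitted to the fNIRS data."""
    t = np.asarray(t, dtype=float)
    g1 = t ** (a1 - 1) * np.exp(-t / b1) / (b1 ** a1 * gamma(a1))
    g2 = t ** (a2 - 1) * np.exp(-t / b2) / (b2 ** a2 * gamma(a2))
    return g1 - c * g2

t = np.arange(0.0, 30.0, 0.1)
hrf = canonical_hrf(t)
peak_time = t[np.argmax(hrf)]   # near 5 s for these default parameters
```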

  8. Application of DIRI dynamic infrared imaging in reconstructive surgery

    Science.gov (United States)

    Pawlowski, Marek; Wang, Chengpu; Jin, Feng; Salvitti, Matthew; Tenorio, Xavier

    2006-04-01

    We have developed the BioScanIR System based on a QWIP (Quantum Well Infrared Photodetector). Data collected by this sensor are processed using DIRI (Dynamic Infrared Imaging) algorithms. The combination of DIRI data processing methods with the unique characteristics of the QWIP sensor permits the creation of a new imaging modality capable of detecting minute changes in temperature at the surface of tissues and organs associated with blood perfusion in diseases such as cancer, vascular disease, and diabetes. The BioScanIR System has been successfully applied in reconstructive surgery to localize donor-flap feeding vessels (perforators) during the pre-surgical planning stage. The device is also used in post-surgical monitoring of skin-flap perfusion. Since the BioScanIR is mobile, it can be moved to the bedside for such monitoring. In comparison to other modalities, the BioScanIR can localize perforators in a single 20-second scan, with definitive results available in minutes. The algorithms used include the Fast Fourier Transform (FFT), motion artifact correction, spectral analysis, and thermal image scaling. The BioScanIR is completely non-invasive and non-toxic, requires no exogenous contrast agents, and is free of ionizing radiation. In addition to reconstructive surgery applications, the BioScanIR has shown promise as a useful functional imaging modality in neurosurgery, drug discovery in pre-clinical animal models, wound healing, and peripheral vascular disease management.
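The FFT-based spectral analysis step can be illustrated on a single pixel's temperature trace: the dominant low-frequency component of a simulated perfusion oscillation is recovered from noisy samples. The frame rate and signal values below are assumptions, not BioScanIR specifications:

```python
import numpy as np

fs = 10.0                                 # assumed frame rate (Hz)
t = np.arange(0, 30, 1 / fs)              # 30 s per-pixel temperature trace
rng = np.random.default_rng(4)
# Simulated skin temperature: slow perfusion oscillation buried in sensor noise
trace = 32.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(trace - trace.mean()))   # remove DC, take spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spec)]         # recovers the 0.1 Hz component
```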

  9. Science with High Spatial Resolution Far-Infrared Data

    Science.gov (United States)

    Terebey, Susan (Editor); Mazzarella, Joseph M. (Editor)

    1994-01-01

    The goal of this workshop was to discuss new science and techniques relevant to high spatial resolution processing of far-infrared data, with particular focus on high resolution processing of IRAS data. Users of the maximum correlation method, maximum entropy, and other resolution enhancement algorithms applicable to far-infrared data gathered at the Infrared Processing and Analysis Center (IPAC) for two days in June 1993 to compare techniques and discuss new results. During a special session on the third day, interested astronomers were introduced to IRAS HIRES processing, which is IPAC's implementation of the maximum correlation method to the IRAS data. Topics discussed during the workshop included: (1) image reconstruction; (2) random noise; (3) imagery; (4) interacting galaxies; (5) spiral galaxies; (6) galactic dust and elliptical galaxies; (7) star formation in Seyfert galaxies; (8) wavelet analysis; and (9) supernova remnants.

  10. Multi-robot system using low-cost infrared sensors

    Directory of Open Access Journals (Sweden)

    Anubhav Kakkar

    2013-03-01

    This paper presents a set of novel techniques, methods, and algorithms for simultaneous path planning, area exploration, area retrieval, obstacle avoidance, object detection, and object retrieval performed autonomously by a multi-robot system. The proposed methods and algorithms are built around low-cost infrared sensors, with the ultimate goal of efficiently exploring a given unknown area while identifying desired objects by analyzing the physical characteristics of the objects encountered during exploration. We demonstrate the scenario with a cooperative multi-robot system consisting of two autonomously operated robots equipped with low-cost, short-range infrared sensors that perform the assigned task by analyzing sudden changes in their environment. Along with identifying and retrieving the desired object, the proposed methodology also provides a comprehensive analysis of the area being explored. The novelties presented in the paper may provide a cost-effective solution to the problem of exploring an area and finding a known object in an unknown environment, by using infrared sensors instead of high-cost, long-range sensors and cameras. Additionally, the methodology provides a fast and uncomplicated way of traversing a complicated arena while performing the necessary, interrelated tasks of avoiding obstacles, analyzing the area and objects, and reconstructing the area from the information collected and interpreted in an unknown environment. The proposed methods and algorithms were simulated over a complex arena to demonstrate the operations, and manually tested in a physical environment, where they produced 78% correct results with respect to various randomly set parameters.

  11. Mid-infrared spectroscopy and multivariate analysis for determination of tetracycline residues in cow's milk

    Directory of Open Access Journals (Sweden)

    Lizeth Mariel Casarrubias-Torres

    2018-01-01

    Mid-infrared spectroscopy and chemometric analysis were tested for determining tetracycline residues in cow's milk. Cow's milk samples (n = 30) were spiked with tetracycline, chlortetracycline, and oxytetracycline in the range of 10-400 µg/l. Chemometric models to quantify each of the tetracycline residues were developed by applying Principal Component Regression and Partial Least Squares algorithms. A Soft Independent Modeling of Class Analogy (SIMCA) model was used to differentiate between pure milk and milk samples with tetracycline residues. The best models for predicting the levels of these antibiotics were obtained using the Partial Least Squares 1 (PLS1) algorithm (coefficient of determination between 0.997 and 0.999, standard error of calibration from 1.81 to 2.95). The SIMCA model showed well-separated groups, allowing discrimination between pure milk samples and milk samples with antibiotics. The results demonstrate the analytical potential of chemometrics coupled with mid-infrared spectroscopy for predicting antibiotics in cow's milk at microgram-per-litre (µg/l) concentrations. This technique can be used to verify the safety of milk rapidly and reliably.
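The regression side can be sketched with principal component regression on simulated spectra; the band shape, noise level, and component count below are invented for illustration and do not reproduce the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy mid-IR data: 30 samples x 200 wavenumbers; absorbance of one Gaussian
# band grows linearly with the (simulated) tetracycline concentration
conc = rng.uniform(10, 400, size=30)              # µg/l, simulated
band = np.exp(-0.5 * ((np.arange(200) - 120) / 8.0) ** 2)
X = conc[:, None] * band[None, :] * 1e-3 + 0.01 * rng.normal(size=(30, 200))

# Principal component regression: project onto leading PCs, then least squares
Xc = X - X.mean(axis=0)
yc = conc - conc.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                             # assumed number of components
T = Xc @ Vt[:k].T                                 # scores on the first k PCs
b, *_ = np.linalg.lstsq(T, yc, rcond=None)
pred = T @ b + conc.mean()

r2 = 1 - np.sum((pred - conc) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

PLS differs from PCR in that its components are chosen to covary with the response rather than to maximize spectral variance, but the project-then-regress structure is the same.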

  12. Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder.

    Science.gov (United States)

    Suppes, T; Swann, A C; Dennehy, E B; Habermacher, E D; Mason, M; Crismon, M L; Toprac, M G; Rush, A J; Shon, S P; Altshuler, K Z

    2001-06-01

    Use of treatment guidelines for major psychiatric illnesses has increased in recent years. The Texas Medication Algorithm Project (TMAP) was developed to study the feasibility and process of developing and implementing guidelines for bipolar disorder, major depressive disorder, and schizophrenia in the public mental health system of Texas. This article describes the consensus process used to develop the first set of TMAP algorithms for the Bipolar Disorder Module (Phase 1) and the trial testing the feasibility of their implementation in inpatient and outpatient psychiatric settings across Texas (Phase 2). The feasibility trial answered core questions regarding implementation of treatment guidelines for bipolar disorder. A total of 69 patients were treated with the original algorithms for bipolar disorder developed in Phase 1 of TMAP. The results support that physicians accepted the guidelines, followed recommendations to see patients at certain intervals, and utilized sequenced treatment steps differentially over the course of treatment. While improvements in clinical symptoms (24-item Brief Psychiatric Rating Scale) were observed over the course of enrollment in the trial, these conclusions are limited by the fact that physician volunteers were utilized for both treatment and ratings, and that there was no control group. Results from Phases 1 and 2 indicate that it is possible to develop and implement a treatment guideline for patients with a history of mania in public mental health clinics in Texas. TMAP Phase 3, a recently completed larger and controlled trial assessing the clinical and economic impact of treatment guidelines and patient and family education in the public mental health system of Texas, improves upon this methodology.

  13. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings.

    Science.gov (United States)

    Baldassano, Steven N; Brinkmann, Benjamin H; Ung, Hoameng; Blevins, Tyler; Conrad, Erin C; Leyde, Kent; Cook, Mark J; Khambhati, Ankit N; Wagenaar, Joost B; Worrell, Gregory A; Litt, Brian

    2017-06-01

    There exist significant clinical and basic research needs for accurate, automated seizure detection algorithms. These algorithms have translational potential in responsive neurostimulation devices and in automatic parsing of continuous intracranial electroencephalography data. An important barrier to developing accurate, validated algorithms for seizure detection is limited access to high-quality, expertly annotated seizure data from prolonged recordings. To overcome this, we hosted a kaggle.com competition to crowdsource the development of seizure detection algorithms using intracranial electroencephalography from canines and humans with epilepsy. The top three performing algorithms from the contest were then validated on out-of-sample patient data including standard clinical data and continuous ambulatory human data obtained over several years using the implantable NeuroVista seizure advisory system. Two hundred teams of data scientists from all over the world participated in the kaggle.com competition. The top performing teams submitted highly accurate algorithms with consistent performance in the out-of-sample validation study. The performance of these seizure detection algorithms, achieved using freely available code and data, sets a new reproducible benchmark for personalized seizure detection. We have also shared a 'plug and play' pipeline to allow other researchers to easily use these algorithms on their own datasets. The success of this competition demonstrates how sharing code and high quality data results in the creation of powerful translational tools with significant potential to impact patient care. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. [Near-infrared spectroscopy as an auxiliary tool in the study of child development].

    Science.gov (United States)

    Oliveira, Suelen Rosa de; Machado, Ana Carolina Cabral de Paula; Miranda, Débora Marques de; Campos, Flávio Dos Santos; Ribeiro, Cristina Oliveira; Magalhães, Lívia de Castro; Bouzada, Maria Cândida Ferrarez

    2015-01-01

    To investigate the applicability of near-infrared spectroscopy (NIRS) as a cortical hemodynamic assessment tool to aid the study of child development. A search was conducted in the PubMed and Lilacs databases using the keywords "psychomotor performance/child development/growth and development/neurodevelopment/spectroscopy/near-infrared" and their equivalents in Portuguese and Spanish. The review was performed according to criteria established by Cochrane, and the search was limited to 2003-2013. Articles in English, Portuguese, and Spanish were included. Of the 484 articles, 19 were selected: 17 cross-sectional and two longitudinal studies, all published in non-Brazilian journals. The analyzed articles were grouped into functional and non-functional studies of child development. Functional studies addressed object processing, social skills development, language, and cognitive development. Non-functional studies discussed the relationship between cerebral oxygen saturation and neurological outcomes, and compared the cortical hemodynamic responses of preterm and term newborns. NIRS has become an increasingly feasible alternative and a potentially useful technique for studying functional activity of the infant brain. Copyright © 2015 Associação de Pediatria de São Paulo. Published by Elsevier Editora Ltda. All rights reserved.

  15. Development of infrared communication in radiation protection and monitoring

    International Nuclear Information System (INIS)

    Thakur, Vaishali M.; Choithramani, S.J.; Sharma, D.N.; Abani, M.C.

    2003-01-01

    Infrared communication has many important applications in instrumentation and control. Different types of nuclear instruments are used in radiation protection and surveillance programs. Applying this mode of communication in these instruments helps in monitoring inaccessible or high-radiation-field areas while avoiding undue exposure to occupational workers. The demand for remotely controlled monitoring instruments and wireless data communication in the mobile computing environment has increased rapidly. This is due to the growing need for on-line radiological data analysis with minimal human intervention, especially when the monitoring is in a hazardous environment. Wireless communication can be achieved using different methodologies for short- and long-range communication. Infrared-based communication is used in various short-range applications up to 9-10 meters. This mode of communication has been implemented in some of the radiation monitoring instruments developed in-house. Data communication using this mode was evaluated for systems such as the Environmental Radiation Monitor (ERM), and the results showed a data communication error of less than 0.1% up to a 10-meter distance. (author)

  16. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    Science.gov (United States)

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky-scene observations. Ripple RNU seriously degrades imaging quality, especially for small-target detection, and is difficult to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm that uses fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading scene details. We test the algorithm on a real infrared sequence, comparing it to several well-established methods. The results show that the algorithm has clear advantages over the tested methods in terms of detail preservation and convergence speed for ripple RNU correction. Furthermore, we demonstrate our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resource consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
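A minimal temporal high-pass correction with a crude update gate, standing in for the paper's fuzzy scene classification, might look like the sketch below; the parameters and the flatness test are assumptions:

```python
import numpy as np

def temporal_highpass_nuc(frames, alpha=0.05, thresh=5.0):
    """Temporal high-pass NUC: a per-pixel low-pass estimate of the fixed
    pattern is subtracted from each frame. The gate only updates the
    estimate where the frame looks like flat background, so genuine scene
    details are not absorbed into the correction (the role the paper's
    fuzzy scene classification plays, far more carefully)."""
    f = frames[0].astype(float)               # running fixed-pattern estimate
    corrected = []
    for x in frames:
        x = x.astype(float)
        flat = np.abs(x - x.mean()) < thresh  # crude "flat sky" mask
        f = np.where(flat, (1 - alpha) * f + alpha * x, f)
        corrected.append(x - f + f.mean())    # remove pattern, keep mean level
    return corrected

rng = np.random.default_rng(5)
pattern = 3.0 * rng.normal(size=(32, 32))     # ripple-like fixed-pattern noise
frames = [100.0 + pattern + 0.2 * rng.normal(size=(32, 32)) for _ in range(50)]
out = temporal_highpass_nuc(frames)           # residual nonuniformity shrinks
```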

  17. FY 2005 Infrared Photonics Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C.; Allen, Paul J.; Ho, Nicolas; Krishnaswami, Kannan; Johnson, Bradley R.; Sundaram, S. K.; Riley, Bradley M.; Martinez, James E.; Qiao, Hong (Amy); Schultz, John F.

    2005-12-01

    Research done by the Infrared Photonics team at Pacific Northwest National Laboratory (PNNL) is focused on developing miniaturized integrated optics for mid-wave infrared (MWIR) and long-wave infrared (LWIR) sensing applications by exploiting the unique optical and material properties of chalcogenide glass. PNNL has developed thin-film deposition capabilities, direct laser writing techniques, infrared photonic device demonstration, holographic optical element design and fabrication, photonic device modeling, and advanced optical metrology—all specific to chalcogenide glass. Chalcogenide infrared photonics provides a pathway to quantum cascade laser (QCL) transmitter miniaturization. QCLs provide a viable infrared laser source for a new class of laser transmitters capable of meeting the performance requirements for a variety of national security sensing applications. The high output power, small size, and superb stability and modulation characteristics of QCLs make them amenable for integration as transmitters into ultra-sensitive, ultra-selective point sampling and remote short-range chemical sensors that are particularly useful for nuclear nonproliferation missions. During FY 2005, PNNL’s Infrared Photonics research team made measurable progress exploiting the extraordinary optical and material properties of chalcogenide glass to develop miniaturized integrated optics for mid-wave infrared (MWIR) and long-wave infrared (LWIR) sensing applications. We investigated sulfur purification methods that will eventually lead to routine production of optical quality chalcogenide glass. We also discovered a glass degradation phenomenon and our investigation uncovered the underlying surface chemistry mechanism and developed mitigation actions. Key research was performed to understand and control the photomodification properties. This research was then used to demonstrate several essential infrared photonic devices, including LWIR single-mode waveguide devices and

  18. Development of a MELCOR self-initialization algorithm for boiling water reactors

    International Nuclear Information System (INIS)

    Chien, C.S.; Wang, S.J.; Cheng, S.K.

    1996-01-01

    The MELCOR code, developed by Sandia National Laboratories, is suitable for calculating source terms and simulating severe accident phenomena in nuclear power plants. Prior to simulating a severe accident transient with MELCOR, the initial steady-state conditions must be generated in advance. The current MELCOR users' manuals do not provide a self-initialization procedure, so users have to adjust the initial conditions themselves through a trial-and-error approach. A MELCOR self-initialization algorithm for boiling water reactor plants has been developed that eliminates these tedious trial-and-error procedures and improves simulation accuracy. The algorithm automatically adjusts important plant variables, such as dome pressure, downcomer level, and core flow rate, to the desired conditions. It is implemented through input using the control functions provided in MELCOR, with the reactor power and feedwater temperature given as input data. Initialization of the full-power conditions of the Kuosheng nuclear power station is given as an example; these initial conditions are generated successfully with the developed algorithm. The generated initial conditions can be stored in a restart file and used for transient analysis. The methodology in this study improves the accuracy and consistency of transient calculations, and the algorithm provides all MELCOR users with an easy and correct method for establishing initial conditions.
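The adjustment idea, driving selected plant variables to their targets through feedback encoded in control functions, can be caricatured with a toy proportional loop. The plant model and gain below are purely illustrative and are not MELCOR control-function syntax:

```python
def settle(target, gain=0.5, tol=1e-6, max_iter=200):
    """Toy analogue of the self-initialization loop: iteratively adjust a
    control input u (think feedwater flow) until a plant variable (think
    downcomer level) reaches its desired steady-state value."""
    def plant(u):
        return 2.0 * u + 1.0           # assumed monotone plant response

    u = 0.0
    for _ in range(max_iter):
        err = target - plant(u)
        if abs(err) < tol:
            break                      # steady state reached
        u += gain * err                # proportional feedback adjustment
    return u, plant(u)

u, level = settle(5.0)                 # level converges to the 5.0 target
```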

  19. Poster abstract: A machine learning approach for vehicle classification using passive infrared and ultrasonic sensors

    KAUST Repository

    Warriach, Ehsan Ullah

    2013-01-01

    This article describes the implementation of four different machine learning techniques for vehicle classification using a dual ultrasonic/passive infrared traffic flow sensor. Using k-NN, Naive Bayes, SVM and KNN-SVM algorithms, we show that KNN-SVM significantly outperforms the other algorithms in terms of classification accuracy. We also show that some of these algorithms could run in real time on the prototype system. Copyright © 2013 ACM.
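    As a minimal, self-contained illustration of the simplest of the four techniques (not the authors' code), here is a plain k-NN classifier on two hypothetical features per vehicle: an ultrasonic length estimate and a passive-infrared energy value. The feature values and class labels are invented.

```python
# Minimal k-nearest-neighbours classifier (majority vote, Euclidean distance).
def knn_classify(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train, labels)
    )
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical (length_m, pir_energy) samples: cars short, trucks long.
train = [(4.2, 0.3), (4.5, 0.4), (4.1, 0.2),
         (12.0, 0.9), (11.5, 1.0), (13.0, 0.8)]
labels = ["car", "car", "car", "truck", "truck", "truck"]

knn_classify(train, labels, (4.4, 0.35))   # → "car"
knn_classify(train, labels, (12.5, 0.95))  # → "truck"
```

A KNN-SVM hybrid would go one step further and train a local SVM on the retrieved neighbours instead of taking a simple vote.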

  20. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images.

    Science.gov (United States)

    Kim, Sohyun; Jang, Gwang-Il; Kim, Sungho; Kim, Junmo

    2018-03-27

    This paper proposes automatic coast mode tracking for centroid trackers in infrared images to overcome target occlusion. The centroid tracking method, which uses only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose the track because they are highly vulnerable to the target being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate observed right before the loss of track. The proposed automatic coast mode tracking method decides when to enter coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects with target-like brightness and objects that may screen the target despite having different brightness. The second step issues inertial tracking commands to the servo. The last step re-locks the target based on target modeling of the histogram ratio. The effectiveness of the proposed algorithm is demonstrated with experimental results based on computer simulation with various test imagery sequences, compared with published tracking algorithms. The proposed algorithm is also tested in a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.
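    The three-step idea can be sketched on a 1-D brightness profile. This is purely illustrative: the occlusion test is reduced to a simple brightness threshold and the histogram-ratio re-lock to "the blob reappears", which is far simpler than the paper's formulation.

```python
# Centroid tracker with a coast mode: track a bright blob's centroid,
# coast at the last tracking rate while it is occluded, then re-lock.
def track(frames, lock_level=0.5):
    pos, rate = None, 0.0
    out = []
    for frame in frames:
        if max(frame) >= lock_level:               # target visible: centroid
            total = sum(frame)
            c = sum(i * v for i, v in enumerate(frame)) / total
            rate = 0.0 if pos is None else c - pos
            pos = c
        else:                                      # occluded: coast mode
            pos = pos + rate                       # hold last tracking rate
        out.append(pos)
    return out

def make_frame(p, n=20, visible=True):
    f = [0.0] * n
    if visible:
        f[p] = 1.0
    return f

# Target moves +1 pixel/frame and is occluded in frames 4-6.
frames = [make_frame(t + 2, visible=t not in (4, 5, 6)) for t in range(10)]
positions = track(frames)
# positions[7] → 9.0: coasting carried the gate to where the target re-appears
```

Because the coast keeps the last rate, the gate is already on top of the target when it emerges, so re-lock succeeds without a search.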

  1. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images

    Directory of Open Access Journals (Sweden)

    Sohyun Kim

    2018-03-01

    Full Text Available This paper proposes automatic coast mode tracking for centroid trackers in infrared images to overcome target occlusion. The centroid tracking method, which uses only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose the track because they are highly vulnerable to the target being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate observed right before the loss of track. The proposed automatic coast mode tracking method decides when to enter coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects with target-like brightness and objects that may screen the target despite having different brightness. The second step issues inertial tracking commands to the servo. The last step re-locks the target based on target modeling of the histogram ratio. The effectiveness of the proposed algorithm is demonstrated with experimental results based on computer simulation with various test imagery sequences, compared with published tracking algorithms. The proposed algorithm is also tested in a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.

  2. Development of an Infrared Remote Sensing System for Continuous Monitoring of Stromboli Volcano

    Science.gov (United States)

    Harig, R.; Burton, M.; Rausch, P.; Jordan, M.; Gorgas, J.; Gerhard, J.

    2009-04-01

    In order to monitor gases emitted by Stromboli volcano in the Aeolian archipelago, Italy, a remote sensing system based on Fourier-transform infrared spectroscopy has been developed and installed on the summit of Stromboli volcano. Hot rocks and lava are used as sources of infrared radiation. The system is based on an interferometer with a single detector element in combination with an azimuth-elevation scanning mirror system. The mirror system is used to align the field of view of the instrument. In addition, the system is equipped with an infrared camera. Two basic modes of operation have been implemented: the user may use the infrared image to align the system to a vent that is to be examined, or the scanning system may be used for (hyperspectral) imaging of the scene. In the imaging mode, the scanning mirror is set to move sequentially to all positions within a region of interest defined by the operator using the image generated by the infrared camera. The spectral range used for the measurements is 1600 - 4200 cm-1, allowing the quantification of many gases such as CO, CO2, SO2, and HCl. The spectral resolution is 0.5 cm-1. In order to protect the optical, mechanical and electrical parts of the system from the volcanic gases, all components are contained in a gas-tight aluminium housing. The system is controlled via TCP/IP (data transfer by WLAN), allowing the user to operate it from a remote PC. The infrared image of the scene and measured spectra are transferred to and displayed by a remote PC at INGV or TUHH in real time. However, the system is capable of autonomous operation on the volcano once a measurement has been started. Measurements are stored by an internal embedded PC.

  3. On-line monitoring the extract process of Fu-fang Shuanghua oral solution using near infrared spectroscopy and different PLS algorithms

    Science.gov (United States)

    Kang, Qian; Ru, Qingguo; Liu, Yan; Xu, Lingyan; Liu, Jia; Wang, Yifei; Zhang, Yewen; Li, Hui; Zhang, Qing; Wu, Qing

    2016-01-01

    An on-line near infrared (NIR) spectroscopy monitoring method with an appropriate multivariate calibration method was developed for the extraction process of Fu-fang Shuanghua oral solution (FSOS). On-line NIR spectra were collected through two fiber optic probes, which were designed to transmit NIR radiation through a 2 mm flange. Partial least squares (PLS), interval PLS (iPLS) and synergy interval PLS (siPLS) algorithms were compared for building the calibration regression models. During the extraction process, NIR spectroscopy was employed to determine the concentrations of chlorogenic acid (CA), total phenolic acids content (TPC), total flavonoids content (TFC) and soluble solids content (SSC). High performance liquid chromatography (HPLC), ultraviolet spectrophotometry (UV) and loss-on-drying methods were employed as reference methods. Experimental results showed that the siPLS model performed best among the three. The calibration models for CA, TPC, TFC and SSC had high determination coefficients (R2) (0.9948, 0.9992, 0.9950 and 0.9832) and low root mean square errors of cross validation (RMSECV) (0.0113, 0.0341, 0.1787 and 1.2158), which indicates a good correlation between reference values and NIR-predicted values. The overall results show that the on-line detection method is feasible in real applications and would be of great value for monitoring the mixed decoction process of FSOS and other Chinese patent medicines.
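    The interval-selection idea behind iPLS/siPLS can be sketched with ordinary least squares standing in for PLS: fit each spectral interval separately and keep the interval with the lowest cross-validation error. The synthetic "spectra", interval width, and the OLS substitution are all illustrative assumptions, not the paper's calibration.

```python
# iPLS-style interval selection on synthetic spectra: only wavelengths
# 60-69 carry information about the analyte concentration.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 100
conc = rng.uniform(0, 1, n_samples)                 # analyte concentration
spectra = rng.normal(0, 0.05, (n_samples, n_wavelengths))
spectra[:, 60:70] += np.outer(conc, np.ones(10))    # informative band

def loocv_rmse(X, y):
    """Leave-one-out RMSE of a least-squares fit on interval X."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errs.append((X[i] @ coef - y[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

intervals = [(s, s + 10) for s in range(0, n_wavelengths, 10)]
rmse = {iv: loocv_rmse(spectra[:, iv[0]:iv[1]], conc) for iv in intervals}
best = min(rmse, key=rmse.get)   # → (60, 70), the informative band
```

siPLS extends this by searching over combinations of intervals rather than single ones, at correspondingly higher cost.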

  4. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    Science.gov (United States)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays often suffer from undesired ripple fixed-pattern noise (FPN) when observing sky scenes. Ripple FPN seriously affects the imaging quality of a thermal imager, especially for small target detection and tracking, and it is hard to eliminate with calibration-based techniques or current scene-based nonuniformity correction algorithms. In this paper, we present a modified spatial low-pass and temporal high-pass nonuniformity correction algorithm using an adaptive time-domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared sequences in comparison with several previously published methods. The algorithm not only effectively corrects common FPN such as stripes, but also has a clear advantage over current methods in terms of detail preservation and convergence speed, especially for ripple FPN correction. Furthermore, we present our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The FPGA-based hardware implementation of the algorithm has two advantages: (1) low resource consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in an actual system.
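    A generic temporal high-pass correction with a ghosting guard can be sketched as follows. The guard threshold and update rule are illustrative, not the paper's exact THP&GM formulation: each pixel's slowly varying offset is tracked by a temporal low-pass filter that is frozen wherever the pixel deviates strongly from its spatial neighbourhood (a likely moving target), and the high-pass output removes the fixed pattern.

```python
# Temporal high-pass nonuniformity correction with a simple ghosting guard.
import numpy as np

def thp_nuc(frames, alpha=0.2, guard=0.5):
    lowpass = frames[0].astype(float)
    out = []
    for f in frames:
        f = f.astype(float)
        # 3x3 spatial box mean via wrap-around roll-sum
        box = sum(np.roll(np.roll(f, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
        # only update the offset where the pixel agrees with its
        # neighbourhood (freezing it elsewhere limits ghosting)
        update = np.abs(f - box) < guard
        lowpass[update] += alpha * (f - lowpass)[update]
        out.append(f - lowpass)          # temporal high-pass output
    return out

# A static ripple pattern on a flat scene is driven toward zero:
rng = np.random.default_rng(1)
fpn = 0.3 * np.sin(np.arange(16) / 2.0)[None, :] * np.ones((16, 1))
frames = [10.0 + fpn + rng.normal(0, 0.01, (16, 16)) for _ in range(60)]
corrected = thp_nuc(frames)
```

Since the scene is static here, the low-pass estimate converges to the mean frame and the residual output is only the temporal noise.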

  5. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Science.gov (United States)

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a very challenging task. The main difficulty is accounting for the appearance change of an object that is submerged in cluttered background. An efficient appearance model that exploits both a global template and a local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate a confidence value that assigns more weight to target templates than to negative background templates. In the CNGM model, simple cell feature maps are obtained by computing the convolution between target templates and fixed filters extracted from the target region in the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, thereby encoding its local structural information. All the maps then form a representation preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model, and the same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the drift problem. Finally, the collaborative confidence values of particles are used to generate the particles' importance weights. Experiments on various infrared sequences validate the tracking capability of the presented algorithm. Experimental results show that the algorithm runs in real time and provides higher accuracy than state-of-the-art algorithms.
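    The confidence idea, scoring each candidate patch against the target template, can be sketched with normalized cross-correlation standing in for the paper's sparse and convolutional confidence measures. The patch sizes and noise levels below are invented for illustration.

```python
# Normalized cross-correlation confidence of a candidate patch
# against the target template.
import numpy as np

def ncc_confidence(candidate, template):
    """Return a confidence in [-1, 1] for candidate vs. template."""
    a = candidate - candidate.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)

rng = np.random.default_rng(3)
template = rng.normal(0, 1, (8, 8))
on_target = template + rng.normal(0, 0.1, (8, 8))   # slight appearance change
background = rng.normal(0, 1, (8, 8))               # clutter patch
c1 = ncc_confidence(on_target, template)            # near 1
c2 = ncc_confidence(background, template)           # near 0
```

In a particle filter these per-candidate confidences would directly become the particles' importance weights.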

  6. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production, serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low SWaP (size, weight and power) camera that costs less than US $500, based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies, including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC, and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  7. Infrared Astronomy and Education: Linking Infrared Whole Sky Mapping with Teacher and Student Research

    Science.gov (United States)

    Borders, Kareen; Mendez, Bryan; Thaller, Michelle; Gorjian, Varoujan; Borders, Kyla; Pitman, Peter; Pereira, Vincent; Sepulveda, Babs; Stark, Ron; Knisely, Cindy; Dandrea, Amy; Winglee, Robert; Plecki, Marge; Goebel, Jeri; Condit, Matt; Kelly, Susan

    The Spitzer Space Telescope and the recently launched WISE (Wide Field Infrared Survey Explorer) observe the sky in infrared light. Among the objects WISE will study are asteroids, the coolest and dimmest stars, and the most luminous galaxies. Secondary students can do authentic research using infrared data. For example, students will use WISE data to measure physical properties of asteroids. In order to prepare students and teachers at this level with a high level of rigor and scientific understanding, the WISE and the Spitzer Space Telescope Education programs provided an immersive teacher professional development workshop in infrared astronomy. The lessons learned from the Spitzer and WISE teacher and student programs can be applied to other programs engaging them in authentic research experiences using data from space-borne observatories such as Herschel and Planck. Recently, WISE Educator Ambassadors and NASA Explorer School teachers developed and led an infrared astronomy workshop at Arecibo Observatory in Puerto Rico. As many common misconceptions involve scale and distance, teachers worked with Moon/Earth scale, solar system scale, and distance and age of objects in the Universe. Teachers built and used basic telescopes, learned about the history of telescopes, explored ground and satellite based telescopes, and explored and worked on models of the WISE Telescope. An in-depth explanation of the WISE and Spitzer telescopes gave participants background knowledge for infrared astronomy observations. We taught the electromagnetic spectrum through interactive stations. We will outline specific steps for secondary astronomy professional development, detail student involvement in infrared telescope data analysis, provide data demonstrating the impact of the above professional development on educator understanding and classroom use, and detail future plans for additional secondary professional development and student involvement in infrared astronomy. Funding was

  8. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Until recently, the algorithms of the numerical-analytical boundary elements method were implemented as programs written in the MATLAB language. Each program had a local character, i.e., it solved one particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. This research was aimed at a reasoned choice of programming language for a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and calculation results. The research shows that, among the wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is Java. This language provides tools not only for developing the calculation part of the CAD system, but also for building the graphical interface for constructing geometrical models and interpreting calculated results.

  9. the simple mono-canal (single-channel) algorithm for the temperature estimation of ...

    African Journals Online (AJOL)

    30 June 2010 ... the brightness temperature (Tb) at the sensor level. This algorithm ..... texture attributes and segmentation fusion: application to the zone ... retrieved from thermal infrared single-channel remote sensing data. 2004 ...

  10. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. The aim was to investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Assessment funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children presenting in primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these, for presence or absence and terminology, with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF, and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  11. Infrared and visible fusion face recognition based on NSCT domain

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is applied to the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain a robust representation of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
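    One of the descriptors used here, the local binary pattern, is easy to show in full. This is a minimal 3x3 LBP (illustrative; the paper additionally uses Gabor-filtered LGBP and applies the codes per NSCT subband):

```python
# 8-neighbour Local Binary Pattern codes for a 2-D image.
import numpy as np

def lbp_3x3(img):
    """LBP code for each interior pixel: one bit per neighbour >= centre."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=int)
    for bit, (di, dj) in enumerate(offsets):
        neigh = img[1 + di:img.shape[0] - 1 + di,
                    1 + dj:img.shape[1] - 1 + dj]
        code |= (neigh >= center).astype(int) << bit
    return code

# A flat patch yields the all-ones code (every neighbour >= centre):
lbp_3x3(np.full((5, 5), 7.0))[0, 0]   # → 255
```

Histograms of these codes over image blocks then feed the score-level fusion.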

  12. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
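    The graph-restriction idea can be sketched with a generic ring-graph EA on a toy bitstring problem (a single-objective stand-in; the paper's model is multi-objective and simulates antibiotic regimens, which is not reproduced here). Each individual occupies a graph vertex and may only recombine with its graph neighbours, which slows the spread of genes and preserves diversity.

```python
# Ring-graph evolutionary algorithm on a onemax toy problem:
# individuals only mate with their ring neighbours.
import random

random.seed(0)
N, L = 20, 16                        # ring of 20 individuals, 16-bit genomes
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

def fitness(g):
    return sum(g)                    # toy objective: maximise ones

for _ in range(300):
    i = random.randrange(N)
    j = (i + random.choice((-1, 1))) % N      # a neighbour on the ring graph
    cut = random.randrange(1, L)
    child = pop[i][:cut] + pop[j][cut:]       # one-point crossover
    child[random.randrange(L)] ^= 1           # point mutation
    # child replaces the worse of its two parents (local elitism)
    worse = i if fitness(pop[i]) < fitness(pop[j]) else j
    if fitness(child) >= fitness(pop[worse]):
        pop[worse] = child

best = max(map(fitness, pop))   # close to the optimum of 16
```

Replacing the ring with a denser graph speeds convergence but erodes the diversity that the paper exploits to map out the Pareto front.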

  13. Retrieval of Ice Cloud Properties Using an Optimal Estimation Algorithm and MODIS Infrared Observations. Part I: Forward Model, Error Analysis, and Information Content

    Science.gov (United States)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2016-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
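    For a linear forward model the optimal estimation retrieval reduces to the standard maximum a posteriori update, which can be checked numerically. The Jacobian, covariances and state values below are invented for illustration; the paper's forward model is a fast IR radiative-transfer code and the update is iterated.

```python
# Linear optimal-estimation (MAP) retrieval:
# x = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K xa)
import numpy as np

K = np.array([[1.0, 0.5], [0.2, 1.0], [0.8, 0.3]])  # Jacobian: 3 bands, 2 params
Se = np.diag([0.1, 0.1, 0.1])                       # measurement-error covariance
Sa = np.diag([4.0, 4.0])                            # weak a priori covariance
xa = np.array([0.0, 0.0])                           # a priori state

x_true = np.array([1.0, 2.0])
y = K @ x_true                                      # noise-free observation

Se_i, Sa_i = np.linalg.inv(Se), np.linalg.inv(Sa)
S_post = np.linalg.inv(K.T @ Se_i @ K + Sa_i)       # posterior covariance
x_hat = xa + S_post @ K.T @ Se_i @ (y - K @ xa)
# With weak priors and no noise, x_hat is close to x_true; the diagonal
# of S_post is the retrieval uncertainty used in information-content analysis.
```

The information content analysis in the record amounts to comparing S_post against Sa for different choices of rows of K (i.e., band combinations).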

  14. Retrieval of ice cloud properties using an optimal estimation algorithm and MODIS infrared observations. Part I: Forward model, error analysis, and information content

    Science.gov (United States)

    Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping

    2018-01-01

    An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470

  15. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the operation of the object. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure reflects the causal and temporal relationships between the events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are usually defined as loaded finite directed graphs whose vertices correspond to operators and whose arcs correspond to the variables bound by those operators. The language of algorithmic networks is highly expressive: the class of algorithms it can represent covers, in effect, arbitrary algorithms. Existing modeling automation systems based on algorithmic networks mainly use operators working with real numbers. Although this reduces their generality, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport, and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, while monitoring is based on analyzing gaps and deadlines in the graphs, there is no analysis predicting the execution of a schedule. The library described here is designed to build such predictive models: from the specified source data a set of projections is obtained, from which one is chosen and taken as the new plan.

  16. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by the combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line-array detector. The recent development of X-ray flat panel detectors has made fast CT imaging feasible and practical. This paper therefore describes the arrangement of a new detection system using the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. The project is thus divided into two major tasks: firstly, to develop the image reconstruction algorithm, and secondly, to integrate the X-ray imaging components into one CT system. An image reconstruction algorithm using the filtered back-projection method is developed and compared with other techniques. MATLAB was used for the simulations and computations in this project. (Author)
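    The filtered back-projection method mentioned above can be sketched end to end on a toy parallel-beam setup. This is a self-contained illustration with nearest-neighbour sampling and a Ram-Lak (ramp) filter, not the authors' MATLAB code; the phantom and geometry are invented.

```python
# Toy parallel-beam CT: forward projection, ramp filtering, back-projection.
import numpy as np

def radon(img, angles):
    """Parallel-beam projections via nearest-neighbour rotation sampling."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n] - c
    sino = np.zeros((len(angles), n))
    for a, th in enumerate(angles):
        xr = np.clip(np.round(np.cos(th) * xx + np.sin(th) * yy + c).astype(int), 0, n - 1)
        yr = np.clip(np.round(-np.sin(th) * xx + np.cos(th) * yy + c).astype(int), 0, n - 1)
        sino[a] = img[yr, xr].sum(axis=0)   # sum along the rotated rows
    return sino

def fbp(sino, angles):
    """Filtered back-projection with a Ram-Lak (ramp) filter."""
    n = sino.shape[1]
    c = (n - 1) / 2.0
    ramp = np.abs(np.fft.fftfreq(n))
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    yy, xx = np.mgrid[0:n, 0:n] - c
    recon = np.zeros((n, n))
    for a, th in enumerate(angles):
        # detector bin of each pixel for this view
        t = np.clip(np.round(np.cos(th) * xx - np.sin(th) * yy + c).astype(int), 0, n - 1)
        recon += filt[a][t]
    return recon * np.pi / len(angles)

n = 32
phantom = np.zeros((n, n))
phantom[20, 12] = 1.0                        # a single point object
angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = fbp(radon(phantom, angles), angles)
peak = np.unravel_index(np.argmax(recon), recon.shape)   # near (20, 12)
```

Without the ramp filter the back-projection alone would blur the point into a 1/r halo, which is exactly what the frequency-domain filtering removes.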

  17. Infrared emission and extragalactic starbursts

    International Nuclear Information System (INIS)

    Telesco, C.M.

    1985-01-01

    The paper examines the belief that recent star formation plays a significant role in determining many of the infrared properties of galaxies. Pertinent types of infrared observations and the infrared properties of starbursts are briefly summarized. Recently developed models which describe the evolution of starbursts are also considered. (U.K.)

  18. Handheld Longwave Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact handheld longwave infrared camera based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. Based on...

  19. A filtered backprojection algorithm with characteristics of the iterative Landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
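    The equivalence the note exploits is easy to verify numerically: k steps of the Landweber iteration x_(k+1) = x_k + lam * A^T (b - A x_k) filter each singular component of the naive solution by the factor (1 - (1 - lam*sigma^2)^k), and that factor is the kind of window one can build into FBP. A toy diagonal example (a generic illustration, not the paper's algorithm):

```python
# Landweber iteration vs. its closed-form filter factor on a diagonal system.
import numpy as np

A = np.diag([2.0, 0.5])           # toy operator with singular values 2 and 0.5
b = np.array([2.0, 1.0])          # exact solution is x = (1, 2)
lam, k = 0.2, 200                 # relaxation parameter and iteration count

x = np.zeros(2)
for _ in range(k):                # Landweber iteration
    x = x + lam * A.T @ (b - A @ x)

# closed-form: each component of the naive solution b/sigma is windowed
sigma = np.array([2.0, 0.5])
x_filt = (b / sigma) * (1 - (1 - lam * sigma**2) ** k)
# x and x_filt agree, and both approach the exact solution (1, 2)
```

Small singular values converge slowest, so truncating at finite k suppresses exactly the noise-amplifying components, which is the regularizing behaviour the FBP window reproduces analytically.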

  20. Infrared Sky Surveys

    Science.gov (United States)

    Price, Stephan D.

    2009-02-01

    A retrospective is given on infrared sky surveys from Thomas Edison’s proposal in the late 1870s to IRAS, the first sensitive mid- to far-infrared all-sky survey, and the mid-1990s experiments that filled in the IRAS deficiencies. The emerging technology for space-based surveys is highlighted, as is the prominent role the US Defense Department, particularly the Air Force, played in developing and applying detector and cryogenic sensor advances to early mid-infrared probe-rocket and satellite-based surveys. This technology was transitioned to the infrared astronomical community in relatively short order and was essential to the success of IRAS, COBE and ISO. Mention is made of several of the little known early observational programs that were superseded by more successful efforts.

  1. Development of a Thermal Equilibrium Prediction Algorithm

    International Nuclear Information System (INIS)

    Aviles-Ramos, Cuauhtemoc

    2002-01-01

    A thermal equilibrium prediction algorithm is developed and tested using a heat conduction model and data sets from calorimetric measurements. The physical model used in this study is the exact solution of a system of two partial differential equations that govern the heat conduction in the calorimeter. A multi-parameter estimation technique is developed and implemented to estimate the effective volumetric heat generation and thermal diffusivity in the calorimeter measurement chamber, and the effective thermal diffusivity of the heat flux sensor. These effective properties and the exact solution are used to predict the heat flux sensor voltage readings at thermal equilibrium. Thermal equilibrium predictions are carried out considering only 20% of the total measurement time required for thermal equilibrium. A comparison of the predicted and experimental thermal equilibrium voltages shows that the average percentage error from 330 data sets is only 0.1%. The data sets used in this study come from calorimeters of different sizes that use different kinds of heat flux sensors. Furthermore, different nuclear material matrices were assayed in the process of generating these data sets. This study shows that the integration of this algorithm into the calorimeter data acquisition software will result in an 80% reduction of measurement time. This reduction results in a significant cutback in operational costs for the calorimetric assay of nuclear materials. (authors)
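    The prediction idea, inferring the equilibrium sensor voltage from early data, can be illustrated with a single-exponential model (a generic sketch, not the authors' two-PDE multi-parameter method; the voltage values are invented). For V(t) = V_eq + A*exp(-t/tau), three equally spaced samples V1, V2, V3 determine the equilibrium in closed form because (V2 - V_eq)^2 = (V1 - V_eq)(V3 - V_eq).

```python
# Predict the equilibrium value of an exponential settling curve from
# three equally spaced early samples.
import math

def predict_equilibrium(v1, v2, v3):
    """Equilibrium value from three equally spaced samples of an exponential."""
    return (v1 * v3 - v2 * v2) / (v1 + v3 - 2.0 * v2)

def voltage(t, v_eq=5.0, amp=-3.0, tau=40.0):
    """Hypothetical sensor voltage settling toward v_eq."""
    return v_eq + amp * math.exp(-t / tau)

# using only early samples of a slow settling transient:
estimate = predict_equilibrium(voltage(0.0), voltage(10.0), voltage(20.0))  # → 5.0
```

In practice one would fit many noisy samples rather than exactly three, but the same geometric-progression structure is what makes prediction from a fraction of the settling time possible.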

  2. Developed algorithm for the application of the British method of concrete mix design

    African Journals Online (AJOL)

    t-iyke

    Most of the methods of concrete mix design developed over the years were geared towards a manual approach. ... Key words: Concrete mix design; British method; Manual approach; Algorithm. ..... Statistics for Science and Engineering.

  3. Spectrally-Tunable Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a SPECTRALLY-TUNABLE INFRARED CAMERA based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. This will build on...

  4. Non-uniformity Correction of Infrared Images by Midway Equalization

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-07-01

    Non-uniformity is a time-dependent noise caused by the lack of sensor equalization. We present the detailed algorithm, and an online demo, of the non-uniformity correction method by midway infrared equalization. The method was designed for infrared images, but it can also be applied to images produced, for example, by scanners or by push-broom satellites. The resulting single-image method works on static images, is fully automatic with no user parameters, and requires no registration. It needs no camera motion compensation and no closed-aperture sensor equalization, and it is able to correct a fully non-linear non-uniformity.
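The midway idea can be sketched in a few lines of numpy: each column's histogram is mapped, rank by rank, onto the average of all column histograms. This is a simplified single-image illustration of the principle, not the authors' full method.

```python
import numpy as np

def midway_equalize_columns(img):
    """Map every column's histogram onto the 'midway' histogram:
    the rank-by-rank average of all column histograms."""
    ranks = np.argsort(np.argsort(img, axis=0), axis=0)  # rank of each pixel in its column
    midway = np.sort(img, axis=0).mean(axis=1)           # averaged sorted profile
    return midway[ranks]

# Push-broom style non-uniformity: the same scene with a per-column offset
rng = np.random.default_rng(0)
scene = rng.normal(size=(64, 1)) * np.ones((1, 8))
noisy = scene + np.arange(8) * 0.5        # column j offset by 0.5*j
fixed = midway_equalize_columns(noisy)
```

For a pure per-column offset the corrected columns become identical, which is exactly the equalization property the method exploits.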

  5. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  6. Segmentation of knee injury swelling on infrared images

    Science.gov (United States)

    Puentes, John; Langet, Hélène; Herry, Christophe; Frize, Monique

    2011-03-01

    Interpretation of medical infrared images is complex due to thermal noise, absence of texture, and small temperature differences in pathological zones. An acute inflammatory response is a characteristic symptom of some knee injuries, such as anterior cruciate ligament sprains, muscle or tendon strains, and meniscus tears. Whereas artificial coloring of the original grey-level images may allow visual assessment of the extent of inflammation in the area, automated segmentation remains a challenging problem. This paper presents a hybrid segmentation algorithm to evaluate the extent of inflammation after knee injury in terms of temperature variations and surface shape. It is based on the intersection of rapid color segmentation and homogeneous region segmentation, to which a Laplacian of Gaussian filter is applied. While rapid color segmentation properly detects the observed core of the swollen area, homogeneous region segmentation identifies possible inflammation zones by combining homogeneous grey-level and hue area segmentation. The hybrid segmentation algorithm compares the potential inflammation regions partially detected by each method to identify overlapping areas. Noise filtering and edge segmentation are then applied to the common zones in order to segment the swelling surfaces of the injury. Experimental results on images of a patient with an anterior cruciate ligament sprain show the improved performance of the hybrid algorithm with respect to its separate components. The main contribution of this work is a meaningful automatic segmentation of abnormal skin temperature variations on infrared thermography images of knee injury swelling.
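A toy illustration of the hybrid idea: intersect two independently obtained candidate masks and keep the connected overlap. The thresholds, the "homogeneity" criterion and the synthetic thermogram below are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def hybrid_segmentation(temp, hot_thresh, homo_thresh):
    """Intersect a 'hot core' mask with a 'homogeneous region' mask
    and label the connected components of the overlap."""
    core = temp > hot_thresh                                # stand-in for rapid color segmentation
    local_mean = ndimage.uniform_filter(temp, size=5)
    homogeneous = np.abs(temp - local_mean) < homo_thresh   # stand-in for region segmentation
    overlap = core & homogeneous
    labels, n = ndimage.label(overlap)
    return overlap, n

# Synthetic thermogram: uniform skin temperature plus one smooth warm patch
img = np.full((50, 50), 30.0)
yy, xx = np.mgrid[:50, :50]
img += 4.0 * np.exp(-((yy - 25) ** 2 + (xx - 25) ** 2) / 60.0)
mask, n_regions = hybrid_segmentation(img, hot_thresh=31.0, homo_thresh=1.0)
```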

  7. Comparison of vehicle-mounted forward-looking polarimetric infrared and downward-looking infrared sensors for landmine detection

    NARCIS (Netherlands)

    Cremer, F.; Schavemaker, J.G.M.; Jong, W. de; Schutte, K.

    2003-01-01

    This paper gives a comparison of two vehicle-mounted infrared systems for landmine detection. The first system is a downward-looking standard infrared camera using processing methods developed within the EU project LOTUS. The second system uses a forward-looking polarimetric infrared camera.

  8. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports a new information-preserving data compression algorithm for computed tomographic (CT) images. The algorithm consists of the following five processes: 1. Pixels surrounding the human body with CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. The image data obtained with the above process are decomposed into bit planes. 5. The bit-state transitions in each bit plane are encoded by run-length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
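Steps 2, 4 and 5 of the pipeline can be sketched as follows; the custom difference code of step 3 is omitted and the sample row is illustrative, not real CT data.

```python
import numpy as np

def diff_encode(row):
    """Step 2: encode each pixel as the difference from its left neighbour
    (the first pixel is stored as-is, i.e. as its difference from 0)."""
    return np.diff(row, prepend=0)

def bitplane_rle(values, nbits=16):
    """Steps 4-5: decompose into bit planes (two's-complement view) and
    run-length code the alternating 0/1 runs of each plane."""
    u = values.astype(np.int64) & ((1 << nbits) - 1)
    planes = []
    for b in range(nbits):
        bits = (u >> b) & 1
        change = np.flatnonzero(np.diff(bits)) + 1
        runs = np.diff(np.concatenate(([0], change, [len(bits)])))
        planes.append((int(bits[0]), runs.tolist()))  # (first bit value, run lengths)
    return planes

row = np.array([-1000, -1000, -998, 40, 41, 41, 42], dtype=np.int64)
d = diff_encode(row)
planes = bitplane_rle(d)
```

Differential encoding concentrates most pixels near zero, which is why the bit planes become highly compressible by run-length coding.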

  9. Automatic recognition of ship types from infrared images using superstructure moment invariants

    Science.gov (United States)

    Li, Heng; Wang, Xinyu

    2007-11-01

    Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system for autonomous recognition of ship types in infrared images is proposed. First, an approach to segmentation based on detection of salient features of the target, with subsequent shadow removal, is proposed, which forms the basis of the subsequent object recognition. Considering that the differences between the shapes of various ships lie mainly in their superstructures, we then use superstructure moment functions that are invariant to translation, rotation, and scale differences in the input patterns, and develop a robust algorithm for extracting the ship superstructure. A back-propagation neural network is then used as the classifier in the recognition stage, with projection images of simulated three-dimensional ship models used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward-Looking Infrared (FLIR) sensor.
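Moment invariants of the kind described can be sketched with the first two Hu invariants of a binary silhouette, which are invariant to translation and scale (and, for phi1 and phi2, rotation); the paper's actual superstructure feature set may differ.

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu moment invariants of a binary silhouette."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)
    xc, yc = xs.mean(), ys.mean()
    def mu(p, q):                       # central moment
        return np.sum((xs - xc) ** p * (ys - yc) ** q)
    def eta(p, q):                      # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# Invariance check: the same rectangular shape, translated and scaled 2x
a = np.zeros((64, 64), bool);   a[10:20, 10:40] = True
b = np.zeros((128, 128), bool); b[60:80, 40:100] = True
p1a, p2a = hu_first_two(a)
p1b, p2b = hu_first_two(b)
```

Up to discretization error the two silhouettes yield the same invariants, which is what makes such features usable for classifying superstructure shapes at unknown range and position.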

  10. The development of controller and navigation algorithm for underwater wall crawler

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyung Suck; Kim, Kyung Hoon; Kim, Min Young [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-01-01

    In this project, the control system of an underwater robotic vehicle (URV) for underwater wall inspection in a nuclear reactor pool or related facilities has been developed. The following four sub-projects were studied: (1) development of the controller and motor driver for the URV; (2) development of the control algorithm for tracking control of the URV; (3) development of the localization system; (4) underwater experiments with the developed system. First, the dynamic characteristics of the thruster with its DC servo-motor were analyzed experimentally. Second, a controller board using the Intel 80C196 was designed and constructed, and the software for communication and motor control was developed. Third, the PWM motor driver was developed. Fourth, the localization system using a laser scanner and inclinometer was developed and tested in the pool. Fifth, the dynamics of the URV were studied and appropriate control algorithms were proposed. Lastly, validation of the integrated system was performed experimentally. (author). 27 refs., 51 figs., 8 tabs.

  11. Recent advances in infrared astronomy

    International Nuclear Information System (INIS)

    Robson, E.I.

    1980-01-01

    A background survey is given of developments in infrared astronomy during the last decade. The advantages of using infrared wavelengths to penetrate the Earth's atmosphere, and the detectors used for this work, are considered. Infrared studies of, among other subjects, stars, dust clouds, the centre of our galaxy, and the 3 K cosmic background radiation are discussed. (UK)

  12. Effect of motion artifacts and their correction on near-infrared spectroscopy oscillation data

    DEFF Research Database (Denmark)

    Selb, Juliette; Yücel, Meryem A; Phillip, Dorte

    2015-01-01

    Functional near-infrared spectroscopy is prone to contamination by motion artifacts (MAs). Motion correction algorithms have previously been proposed and their respective performance compared for evoked brain activation studies. We study instead the effect of MAs on "oscillation" data which...... in the frequency band around 0.1 and 0.04 Hz, suggesting a physiological origin for the difference. We emphasize the importance of considering MAs as a confounding factor in oscillation-based functional near-infrared spectroscopy studies....

  13. [Development of a portable mid-infrared rapid analyzer for oil concentration in water based on MEMS linear sensor array].

    Science.gov (United States)

    Gao, Zhi-fan; Zeng, Li-bo; Shi, Lei; Li, Kai; Yang, Yuan-zhou; Wu, Qiong-shui

    2014-06-01

    Aiming at problems of traditional spectral oil analyzers such as weak environmental adaptability, low analytic efficiency, and poor measurement repeatability, this paper describes the design of a portable mid-infrared rapid analyzer for oil concentration in water. To reduce the volume of the instrument, a non-symmetrical folded M-type Czerny-Turner optical structure was adopted for the core optical path. With a periodically rotating chopper, controlled by a digital PID algorithm, applied for infrared light modulation, the modulation accuracy reached ±0.5%. Unlike traditional grating-scanning spectrophotometers, this instrument uses a fixed grating for light dispersion and so avoids rotation error during the measurement procedure. A new MEMS infrared linear sensor array was applied for detection of the modulated spectral signals, which remarkably improved the measurement efficiency. Optical simulation and experimental results indicate that the spectral range is 2800-3200 cm⁻¹, the spectral resolution is 6 cm⁻¹ (at 3130 cm⁻¹), and the signal-to-noise ratio is up to 5200:1. The acquisition time is 13 ms per spectrogram, and the standard deviation of absorbance is less than 3×10⁻³. These performances fully meet the requirements of oil concentration measurement. Compared with traditional infrared spectral analyzers for oil concentration, the instrument presented in this paper has the advantages of smaller size, higher efficiency, higher precision, and stronger vibration and moisture isolation. It is especially suitable for environmental monitoring departments performing real-time field measurements of oil concentration in water, and hence has broad application prospects in water quality monitoring.
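A digital PID loop of the kind used to regulate the chopper speed can be sketched as follows; the gains, sample time, setpoint and first-order plant model are all hypothetical, not the instrument's actual values.

```python
class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order chopper model: speed relaxes toward the drive input
pid = PID(kp=0.8, ki=2.0, kd=0.0, dt=0.01)
speed = 0.0
for _ in range(2000):
    u = pid.step(100.0, speed)          # hypothetical target: 100 rev/s
    speed += (u - speed) * 0.05         # hypothetical plant response
```

The integral term drives the steady-state error to zero, which is what holds the modulation frequency (and hence the modulation accuracy) constant.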

  14. Technique of infrared synchrotron acceleration diagnostics

    International Nuclear Information System (INIS)

    Mal'tsev, A.A.; Mal'tsev, M.A.

    1997-01-01

    Techniques are presented for measuring the current and geometric parameters, and for evaluating the energy parameters, of a ring bunch of relativistic low-energy electrons. They are based on the synchrotron radiation effect in its infrared spectral region. Fast infrared detectors provide radiation detection in the spectral range Δλ ≅ 0.3-45 μm. Descriptions are given of several data monitoring and measuring systems developed at JINR to implement these infrared synchrotron acceleration diagnostics. Specially developed infrared optics elements are used in these systems.

  15. Correlation signatures of wet soils and snows. [algorithm development and computer programming

    Science.gov (United States)

    Phillips, M. R.

    1972-01-01

    Interpretation, analysis, and development of algorithms have provided the necessary computational programming tools for soil data processing, data handling, and analysis. The algorithms developed thus far are adequate and have proven successful for several preliminary and fundamental applications, such as software interfacing capabilities, probability distributions, grey-level print plotting, contour plotting, isometric data displays, joint probability distributions, boundary mapping, channel registration, and ground scene classification. A description is provided of an Earth Resources Flight Data Processor (ERFDP), which handles and processes earth resources data under user control.

  16. Properties of the Variation of the Infrared Emission of OH/IR Stars I. The K Band Light Curves

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2009-09-01

    To study the properties of the variation of the infrared emission of OH/IR stars, we collect and analyze infrared observational data in the K band for nine OH/IR stars. We use observational data obtained over about three decades, including recent data from the Two Micron All Sky Survey (2MASS) and the Deep Near Infrared Survey of the Southern Sky (DENIS). We use the Marquardt-Levenberg algorithm to determine the pulsation period and amplitude for each star and compare them with previous results of infrared and radio investigations.
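Period fitting with the Marquardt-Levenberg algorithm can be sketched with SciPy's `curve_fit` (`method="lm"`) on a synthetic K-band light curve; the model, parameters and sampling below are hypothetical, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_curve(t, mean_mag, amp, period, phase):
    """Simple sinusoidal pulsation model in magnitudes."""
    return mean_mag + amp * np.sin(2 * np.pi * t / period + phase)

# Synthetic, sparsely sampled light curve spanning ~3000 days
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3000.0, 120))
mag = light_curve(t, 6.5, 0.9, 1500.0, 0.3) + rng.normal(0, 0.05, t.size)

p0 = [mag.mean(), 0.5, 1450.0, 0.0]   # LM needs a sensible starting period
popt, pcov = curve_fit(light_curve, t, mag, p0=p0, method="lm")
```

Like all Levenberg-Marquardt fits, this converges only from a starting period in the right basin, which is why period searches are usually seeded from a periodogram or prior estimates.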

  17. Mid-infrared spectroscopic investigation

    International Nuclear Information System (INIS)

    Walter, L.; Vergo, N.; Salisbury, J.W.

    1987-01-01

    Mid-infrared spectroscopic research efforts are discussed: the development of new instrumentation to permit advanced measurements in the mid-infrared region of the spectrum, the development of a special library of well-characterized mineral and rock specimens for interpretation of remote sensing data, and cooperative measurements of the spectral signatures of analogues of materials that may be present on the surfaces of asteroids, planets, or their moons.

  18. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    Science.gov (United States)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the desires was to take this experience and turn it into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which does have cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as-is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed within the time frame of the internship; however, the IGOAL will continue to work on it in the future.
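The abstract only states the goal of procedurally generated clouds; one common approach is octave-summed ("fractal") value noise. The sketch below is a generic Python illustration of that technique, not the IGOAL/Unity code.

```python
import numpy as np

def value_noise(size, grid, rng):
    """Smooth 2D noise: bilinear interpolation of random lattice values."""
    lattice = rng.random((grid + 1, grid + 1))
    xs = np.linspace(0, grid, size, endpoint=False)
    i = xs.astype(int)
    f = xs - i
    f = f * f * (3 - 2 * f)                  # smoothstep easing
    a = lattice[np.ix_(i, i)]
    b = lattice[np.ix_(i, i + 1)]
    c = lattice[np.ix_(i + 1, i)]
    d = lattice[np.ix_(i + 1, i + 1)]
    fy, fx = f[:, None], f[None, :]
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)

def cloud_texture(size=128, octaves=4, seed=0):
    """Sum octaves of value noise with halving amplitude -> cloud-like field."""
    rng = np.random.default_rng(seed)
    out, amp, total = np.zeros((size, size)), 1.0, 0.0
    for o in range(octaves):
        out += amp * value_noise(size, 2 ** (o + 2), rng)
        total += amp
        amp *= 0.5
    return out / total                       # normalized to [0, 1]

tex = cloud_texture()
```

In an engine, the same field would typically be regenerated per frame (or scrolled) and thresholded to get cloud coverage.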

  19. In-situ volumetric topography of IC chips for defect detection using infrared confocal measurement with active structured light

    International Nuclear Information System (INIS)

    Chen, Liang-Chia; Le, Manh-Trung; Phuc, Dao Cong; Lin, Shyh-Tsong

    2014-01-01

    The article presents the development of in-situ integrated circuit (IC) chip defect detection techniques for automated clipping detection, by proposing infrared imaging and full-field volumetric topography. IC chip inspection, especially during or after IC packaging, has become an extremely critical procedure in IC fabrication for assuring manufacturing quality and reducing production costs. To address this, microscopic infrared imaging using an electromagnetic spectrum ranging from 0.9 to 1.7 µm is developed to perform volumetric inspection of IC chips, in order to identify important defects such as silicon clipping, cracking, or peeling. The main difficulty of infrared (IR) volumetric imaging lies in its poor image contrast, which prevents reliable inspection: infrared imaging is sensitive to temperature differences but insensitive to the geometric variance of materials, making it difficult to detect and quantify defects precisely. To overcome this, 3D volumetric topography based on 3D infrared confocal measurement with active structured light, together with light refractive matching principles, is developed to detect the size, shape, and position of defects in ICs. The experimental results show that the algorithm is effective and suitable for in-situ defect detection in IC semiconductor packaging. The quality of the defect detection, such as measurement repeatability and accuracy, is addressed. As confirmed by the experimental results, the depth measurement resolution can reach 0.3 µm, and the depth measurement uncertainty at one standard deviation was verified to be less than 1.0% of the full-scale depth-measuring range. (paper)

  20. Development of transmission dose estimation algorithm for in vivo dosimetry in high energy radiation treatment

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Hun, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2004-01-01

    In vivo dosimetry is very important for quality assurance in high-energy radiation treatment. Measurement of transmission dose is a new method of in vivo dosimetry which is noninvasive and easy to perform daily. This study develops a tumor dose estimation algorithm using measured transmission dose for open radiation fields. For basic beam data, transmission dose was measured for various field sizes (FS) of square radiation fields, phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. The source-to-chamber distance (SCD) was set to 150 cm. Measurements were conducted with a 0.6 cc Farmer-type ion chamber. By regression analysis of the measured basic beam data, a transmission dose estimation algorithm was developed. The accuracy of the algorithm was tested with a flat solid phantom of various thicknesses in various settings of rectangular fields and various PCDs. In the developed algorithm, the transmission dose is expressed as a quadratic function of log(A/P) (where A/P is the area-perimeter ratio), and the coefficients of the quadratic function are expressed as third-order (cubic) functions of PCD. The developed algorithm could estimate the radiation dose with errors within ±0.5% for open square fields, and within ±1.0% for open elongated radiation fields. The algorithm could therefore accurately estimate the transmission dose in open radiation fields for various treatment settings in high-energy radiation treatment. (author)
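The fitted form described, a quadratic in log(A/P) whose coefficients are cubic in PCD, is linear in its 12 unknown coefficients, so it can be fitted in one ordinary-least-squares step. The calibration data below are synthetic and the units hypothetical; this is a sketch of the regression structure, not the authors' beam data.

```python
import numpy as np

def design(log_ap, pcd):
    """Features (log(A/P))^i * PCD^j for i=0..2, j=0..3: 12 terms, so the
    quadratic-in-log(A/P), cubic-in-PCD model is linear in its coefficients."""
    cols = [log_ap ** i * pcd ** j for i in range(3) for j in range(4)]
    return np.column_stack(cols)

# Synthetic calibration set (hypothetical normalized units)
rng = np.random.default_rng(2)
log_ap = rng.uniform(0.5, 2.0, 400)
pcd = rng.uniform(1.0, 5.0, 400)
true_b = rng.normal(size=12)
dose = design(log_ap, pcd) @ true_b + rng.normal(0, 1e-3, 400)

b_hat, *_ = np.linalg.lstsq(design(log_ap, pcd), dose, rcond=None)
```

Once fitted, evaluating `design(log_ap_new, pcd_new) @ b_hat` gives the predicted transmission dose for a new field geometry.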

  1. jClustering, an open framework for the development of 4D clustering algorithms.

    Directory of Open Access Journals (Sweden)

    José María Mateos-Pérez

    We present jClustering, an open framework for the design of clustering algorithms in dynamic medical imaging. We developed this tool because of the difficulty involved in manually segmenting dynamic PET images and the lack of availability of source code for published segmentation algorithms. Providing an easily extensible open tool encourages publication of source code, facilitating the process of comparing algorithms and giving interested third parties the opportunity to review code. The internal structure of the framework allows an external developer to implement new algorithms easily and quickly, focusing only on the particulars of the method being implemented and not on image data handling and preprocessing. The tool is coded in Java and presented as an ImageJ plugin in order to take advantage of all the functionality offered by this image analysis platform. Both binary packages and source code have been published, the latter under a free software license (GNU General Public License) to allow modification if necessary.

  2. Theoretical algorithms for satellite-derived sea surface temperatures

    Science.gov (United States)

    Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.

    1989-03-01

    Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
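The abstract states that the derived SST algorithms take the form of linear equations in channel brightness temperatures. A standard split-window form consistent with that is SST = a0 + a1·T11 + a2·(T11 − T12). The sketch below fits illustrative coefficients to a synthetic stand-in for radiative transfer output; the coefficients and atmosphere model are hypothetical, not the paper's.

```python
import numpy as np

# Synthetic 'radiative transfer' set: the 11-um channel is cooled by water
# vapour, and the 11-12 um split grows with water vapour, so a linear
# combination of T11 and (T11 - T12) can undo the atmospheric effect.
rng = np.random.default_rng(3)
sst = rng.uniform(271.0, 303.0, 200)                    # true SST, Kelvin
wv = rng.uniform(0.0, 1.0, 200)                         # water-vapour proxy
t11 = sst - 1.0 - 4.0 * wv + rng.normal(0, 0.05, 200)   # hypothetical cooling
t12 = t11 - (0.2 + 1.5 * wv) + rng.normal(0, 0.05, 200)

# Fit the linear split-window algorithm SST = a0 + a1*T11 + a2*(T11 - T12)
X = np.column_stack([np.ones_like(t11), t11, t11 - t12])
coef, *_ = np.linalg.lstsq(X, sst, rcond=None)
retrieved = X @ coef
rmse = float(np.sqrt(np.mean((retrieved - sst) ** 2)))
```

The residual RMSE illustrates why channel noise matters: the split-window difference is multiplied by a coefficient of order 2-3, so noise in either channel is amplified in the retrieved SST.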

  3. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and the luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of the two separate stages (Level 2 and Event Filter) used in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  4. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance, and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an inadequate activated clotting time despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm appears valid and is supported by high-level evidence and clinician opinion. The next step is a randomized clinical trial in humans to test the clinical procedure guideline algorithm against current standard clinical practice.

  5. Overhead longwave infrared hyperspectral material identification using radiometric models

    Energy Technology Data Exchange (ETDEWEB)

    Zelinski, M. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-09

    Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification, performed as a secondary processing step on detected pixels, can help separate true from false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression, with the Bayesian Information Criterion used for model selection. The resulting product includes an optimal atmospheric profile and a full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
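The model-selection idea can be sketched as exhaustive subset unmixing scored by BIC. This toy version uses nonnegative least squares on synthetic spectra; the paper's actual chain works on radiance with its own constrained regression, so everything below (library, noise level, subset size) is illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import nnls

def bic_select(spectrum, library, max_size=2):
    """Unmix against every endmember subset up to max_size with nonnegative
    least squares and return (bic, subset, abundances) with the lowest BIC."""
    n = spectrum.size
    best = None
    for k in range(1, max_size + 1):
        for subset in combinations(range(library.shape[1]), k):
            x, rnorm = nnls(library[:, subset], spectrum)
            rss = max(rnorm ** 2, 1e-12)
            bic = n * np.log(rss / n) + k * np.log(n)   # Gaussian-error BIC
            if best is None or bic < best[0]:
                best = (bic, subset, x)
    return best

# Synthetic scene: a two-material mixture drawn from a 6-spectrum library
rng = np.random.default_rng(4)
lib = rng.random((50, 6))                   # 6 candidate spectra, 50 bands
meas = 0.7 * lib[:, 1] + 0.3 * lib[:, 4] + rng.normal(0, 0.005, 50)
bic, subset, abund = bic_select(meas, lib)
```

BIC penalizes each added endmember by log(n), so the selection prefers the smallest subset that explains the measured spectrum down to the noise floor.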

  6. The Continuous Monitoring of Desert Dust using an Infrared-based Dust Detection and Retrieval Method

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick; Trepte, Qing; Sun-Mack, Sunny

    2006-01-01

    Airborne dust and sand are significant aerosol sources that can impact the atmospheric and surface radiation budgets. Because airborne dust affects visibility and air quality, it is desirable to monitor the location and concentration of this aerosol for transportation and public health. Although aerosol retrievals have been derived for many years using visible and near-infrared reflectance measurements from satellites, the detection and quantification of dust from these channels is problematic over bright surfaces, or when dust concentrations are large. In addition, aerosol retrievals from polar-orbiting satellites lack the ability to monitor the progression and sources of dust storms. As a complement to current aerosol dust retrieval algorithms, multi-spectral thermal infrared (8-12 micron) data from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Meteosat-8 Spinning Enhanced Visible and Infrared Imager (SEVIRI) are used in the development of a prototype dust detection method and dust property retrieval that can monitor the progress of Saharan dust fields continuously, both night and day. The dust detection method is incorporated into the processing of CERES (Clouds and the Earth's Radiant Energy System) aerosol retrievals to produce dust property retrievals. Both MODIS (from Terra and Aqua) and SEVIRI data are used to develop the method.

  7. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for the controller of a large-format array, to reduce the dead time caused by readout and data transfer. Real-time processing has traditionally been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with the RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing tasks are integrated into the OS kernel as a real-time driver module, composed of two virtual devices: the clock-processor and frame-processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at low cost.

  8. Technical note: A new day- and night-time Meteosat Second Generation Cirrus Detection Algorithm MeCiDA

    Directory of Open Access Journals (Sweden)

    W. Krebs

    2007-12-01

    A new cirrus detection algorithm for the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG), MeCiDA, is presented. The algorithm uses the seven infrared channels of SEVIRI and thus provides a consistent scheme for cirrus detection by day and night. MeCiDA combines morphological and multi-spectral threshold tests and detects optically thick and thin ice clouds. The thresholds were determined by a comprehensive theoretical study using radiative transfer simulations for various atmospheric situations, as well as by manual evaluation of actual satellite observations. The cirrus detection has been optimized for mid- and high latitudes, but it could be adapted to other regions as well. The retrieved cirrus masks have been validated by comparison with the Moderate Resolution Imaging Spectroradiometer (MODIS) Cirrus Reflection Flag. To study possible seasonal variations in the performance of the algorithm, one scene per month of the year 2004 was randomly selected and compared with the MODIS flag. 81% of the pixels were classified identically by the two algorithms. In a comparison of monthly mean values for Europe and the North Atlantic, MeCiDA detected 29.3% cirrus coverage, while the MODIS SWIR cirrus coverage was 38.1%. A lower detection efficiency is to be expected for MeCiDA, as the spatial resolution of MODIS is considerably better and only the thermal infrared channels were used, in contrast to the MODIS algorithm, which uses infrared and visible radiances. The advantage of MeCiDA compared to retrievals from polar-orbiting instruments or previous geostationary satellites is that it permits the derivation of quantitative data every 15 min, 24 h a day. This high temporal resolution allows the study of diurnal variations and life-cycle aspects. MeCiDA is fast enough for near-real-time applications.
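A toy version of a multi-spectral threshold test: combine per-channel brightness-temperature tests and channel-difference tests into a single mask. The channel combinations are in the spirit of the description, but the thresholds and synthetic scene are illustrative, not MeCiDA's actual values.

```python
import numpy as np

def cirrus_mask(bt108, bt120, bt134, cold_thresh=260.0, split_thresh=1.5):
    """Toy multi-spectral test: flag cold 10.8-um pixels that also show a
    large split-window (10.8-12.0 um) difference or a strong 13.4-um
    depression. Thresholds are illustrative, not the MeCiDA values."""
    cold = bt108 < cold_thresh
    split = (bt108 - bt120) > split_thresh
    co2 = (bt108 - bt134) > 20.0
    return cold & (split | co2)

# Synthetic scene: warm clear sky with one semi-transparent cirrus patch
bt108 = np.full((8, 8), 285.0)
bt120 = bt108 - 0.5
bt134 = bt108 - 15.0
bt108[2:5, 2:5] = 240.0        # cirrus: cold in the 10.8-um window
bt120[2:5, 2:5] = 237.0        # strong split-window signal
bt134[2:5, 2:5] = 230.0
mask = cirrus_mask(bt108, bt120, bt134)
```

A real scheme would add the morphological tests and tune each threshold against radiative transfer simulations, as the abstract describes.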

  9. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    Science.gov (United States)

    1976-04-01

The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...

  10. A prediction algorithm for first onset of major depression in the general population: development and validation.

    Science.gov (United States)

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). Major depression occurring since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions was assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule (DSM-IV version). A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.
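The discriminative power reported above is quantified by the C statistic. As a minimal, self-contained sketch, the C statistic can be computed as a concordance probability; the outcome labels and predicted risks below are hypothetical, not NESARC data.

```python
def c_statistic(y_true, y_score):
    """C statistic (area under the ROC curve): the probability that a
    randomly chosen case (y=1) receives a higher predicted risk than a
    randomly chosen non-case (y=0); ties count as one half."""
    cases = [s for y, s in zip(y_true, y_score) if y == 1]
    noncases = [s for y, s in zip(y_true, y_score) if y == 0]
    concordant = 0.0
    for c in cases:
        for n in noncases:
            if c > n:
                concordant += 1.0
            elif c == n:
                concordant += 0.5
    return concordant / (len(cases) * len(noncases))

# hypothetical predicted risks from a fitted logistic regression model
y = [1, 0, 1, 0, 0, 1]
p = [0.8, 0.3, 0.6, 0.4, 0.2, 0.7]
auc = c_statistic(y, p)  # 1.0 here: every case outranks every non-case
```

A value of 0.5 corresponds to chance discrimination; the study's 0.75 means a randomly chosen future case outranks a non-case three times out of four.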

  11. Automated vehicle detection in forward-looking infrared imagery.

    Science.gov (United States)

    Der, Sandor; Chan, Alex; Nasrabadi, Nasser; Kwon, Heesung

    2004-01-10

    We describe an algorithm for the detection and clutter rejection of military vehicles in forward-looking infrared (FLIR) imagery. The detection algorithm is designed to be a prescreener that selects regions for further analysis and uses a spatial anomaly approach that looks for target-sized regions of the image that differ in texture, brightness, edge strength, or other spatial characteristics. The features are linearly combined to form a confidence image that is thresholded to find likely target locations. The clutter rejection portion uses target-specific information extracted from training samples to reduce the false alarms of the detector. The outputs of the clutter rejecter and detector are combined by a higher-level evidence integrator to improve performance over simple concatenation of the detector and clutter rejecter. The algorithm has been applied to a large number of FLIR imagery sets, and some of these results are presented here.
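The prescreener's confidence image, a linear combination of spatial feature maps followed by thresholding, can be sketched as follows. The feature maps, weights, and threshold are invented for illustration.

```python
import numpy as np

def confidence_image(features, weights):
    """Weighted sum of per-pixel feature maps (all the same shape)."""
    conf = np.zeros_like(features[0], dtype=float)
    for f, w in zip(features, weights):
        conf += w * f
    return conf

def detect(conf, threshold):
    """Return (row, col) indices of pixels exceeding the threshold."""
    return list(zip(*np.where(conf > threshold)))

# toy 2x2 feature maps: texture, brightness, and edge-strength anomalies
texture = np.array([[0.1, 0.9], [0.2, 0.1]])
brightness = np.array([[0.0, 0.8], [0.1, 0.0]])
edges = np.array([[0.2, 0.7], [0.3, 0.2]])
conf = confidence_image([texture, brightness, edges], [0.5, 0.3, 0.2])
hits = detect(conf, threshold=0.5)  # only pixel (0, 1) exceeds 0.5
```

In the actual system the detections would then be passed to the clutter rejecter and evidence integrator rather than reported directly.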

  12. Leadership development in the age of the algorithm.

    Science.gov (United States)

    Buckingham, Marcus

    2012-06-01

    By now we expect personalized content--it's routinely served up by online retailers and news services, for example. But the typical leadership development program still takes a formulaic, one-size-fits-all approach. And it rarely happens that an excellent technique can be effectively transferred from one leader to all others. Someone trying to adopt a practice from a leader with a different style usually seems stilted and off--a Franken-leader. Breakthrough work at Hilton Hotels and other organizations shows how companies can use an algorithmic model to deliver training tips uniquely suited to each individual's style. It's a five-step process: First, a company must choose a tool with which to identify each person's leadership type. Second, it should assess its best leaders, and third, it should interview them about their techniques. Fourth, it should use its algorithmic model to feed tips drawn from those techniques to developing leaders of the same type. And fifth, it should make the system dynamically intelligent, with user reactions sharpening the content and targeting of tips. The power of this kind of system--highly customized, based on peer-to-peer sharing, and continually evolving--will soon overturn the generic model of leadership development. And such systems will inevitably break through any one organization, until somewhere in the cloud the best leadership tips from all over are gathered, sorted, and distributed according to which ones suit which people best.

  13. DOOCS environment for FPGA-based cavity control system and control algorithms development

    International Nuclear Information System (INIS)

    Pucyk, P.; Koprek, W.; Kaleta, P.; Szewinski, J.; Pozniak, K.T.; Czarski, T.; Romaniuk, R.S.

    2005-01-01

The paper describes the concept and realization of the DOOCS control software for the FPGA-based TESLA cavity controller and simulator (SIMCON). It is based on universal software components created for laboratory purposes and used in a MATLAB-based control environment. These modules have been recently adapted to the DOOCS environment to ensure a unified software-to-hardware communication model. The presented solution can also be used as a general platform for control algorithm development. The proposed interfaces between the MATLAB and DOOCS modules allow the developed algorithm to be checked in the operational environment before implementation in the FPGA. As examples, two systems are presented. (orig.)

  14. Classification of diesel pool refinery streams through near infrared spectroscopy and support vector machines using C-SVC and ν-SVC.

    Science.gov (United States)

    Alves, Julio Cesar L; Henriques, Claudete B; Poppi, Ronei J

    2014-01-03

Near infrared (NIR) spectroscopy combined with chemometric methods has been widely used in the petroleum and petrochemical industry and provides suitable methods for process control and quality control. The support vector machines (SVM) algorithm has proven to be a powerful chemometric tool for developing classification models, owing to its nonlinear modeling ability and high generalization capability; these characteristics are especially important for treating NIR spectroscopy data of complex mixtures such as petroleum refinery streams. In this work, a study on the performance of the support vector machines algorithm for classification was carried out, using C-SVC and ν-SVC, applied to NIR spectroscopy data of the different types of streams that make up the diesel pool in a petroleum refinery: light gas oil, heavy gas oil, hydrotreated diesel, kerosene, heavy naphtha and external diesel. In addition to these six streams, the diesel final blend produced in the refinery was added to complete the data set. C-SVC and ν-SVC classification models with 2, 4, 6 and 7 classes were developed for comparison between their results and also for comparison with the results of soft independent modeling of class analogy (SIMCA) models. The results demonstrate the superior performance of the SVC models, especially ν-SVC, for the 6- and 7-class problems, improving sensitivity on the validation sample sets by 24% and 15%, respectively, when compared to SIMCA models, and providing better identification of the chemical compositions of different diesel pool refinery streams. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
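A hedged sketch of what weighted scoring of a nuclide identification result might look like: matches earn the nuclide's weight, false identifications are penalized, and the total is normalized. The weighting factors, penalty term, and formula below are assumptions for illustration, not the actual AIP scoring criteria described in the report.

```python
def score_identification(truth, identified, weights, false_id_penalty=0.5):
    """Score one identification against ground truth using per-nuclide
    weighting factors (hypothetical formula)."""
    matched = truth & identified          # correctly identified nuclides
    false_ids = identified - truth        # nuclides reported but absent
    score = sum(weights.get(n, 1.0) for n in matched)
    score -= false_id_penalty * sum(weights.get(n, 1.0) for n in false_ids)
    max_score = sum(weights.get(n, 1.0) for n in truth)
    return max(score, 0.0) / max_score if max_score else 0.0

# hypothetical weights: special nuclear material weighted more heavily
weights = {"Cs-137": 1.0, "Co-60": 1.0, "HEU": 2.0}
truth = {"Cs-137", "HEU"}
identified = {"Cs-137", "Co-60"}       # one hit, one miss, one false ID
s = score_identification(truth, identified, weights)  # (1.0 - 0.5) / 3.0
```

A spreadsheet application like the one described would apply such a formula row by row over many test spectra and aggregate with configuration weighting factors.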

  16. Aeolian system dynamics derived from thermal infrared data

    Science.gov (United States)

    Scheidt, Stephen Paul

Thermal infrared (TIR) remote-sensing and field-based observations were used to study aeolian systems, specifically sand transport pathways, dust emission sources and Saharan atmospheric dust. A method was developed for generating seamless and radiometrically accurate mosaics of thermal infrared data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument. Using a combination of high resolution thermal emission spectroscopy results of sand samples and mosaic satellite data, surface emissivity was derived to map surface composition, which led to improvement in the understanding of sand accumulation in the Gran Desierto of northern Sonora, Mexico. These methods were also used to map sand transport pathways in the Sahara Desert, where the interaction between sand saltation and dust emission sources was explored. The characteristics and dynamics of dust sources were studied at White Sands, NM and in the Sahara Desert. At White Sands, an application was developed for studying the response of dust sources to surface soil moisture based on the relationship between soil moisture, apparent thermal inertia and the erosion potential of dust sources. The dynamics of dust sources and the interaction with sand transport pathways were also studied, focusing on the Bodele Depression of Chad and large dust sources in Mali and Mauritania. A dust detection algorithm was developed using ASTER data, and the spectral emissivity of observed atmospheric dust was related to the dust source area in the Sahara. At the Izaña Atmospheric Observatory (IZO) in Tenerife, Spain, where direct measurements of the Saharan Air Layer could be made, the cycle of dust events occurring in July 2009 was examined. From the observation tower at the IZO, measurements of emitted longwave atmospheric radiance in the TIR wavelength region were made using a Forward Looking Infrared Radiometer (FLIR) handheld camera. The use of the FLIR to study atmospheric dust from the Sahara is a

  17. Development of paints with infrared radiation reflective properties

    Directory of Open Access Journals (Sweden)

    Eliane Coser

    2015-06-01

Full Text Available Large buildings situated in hot regions of the globe need to be agreeable to their residents. Air conditioning is extensively used to make these buildings comfortable, with consequent energy consumption. Absorption of solar visible and infrared radiation is responsible for heating objects on the surface of the Earth, including houses and buildings. To avoid excessive energy consumption, it is possible to use coatings formulated with special pigments that are able to reflect radiation in the near-infrared (NIR) spectrum. To evaluate this phenomenon, an experimental study of the reflectivity of paints containing infrared-reflective pigments was made. By irradiating with an IR source and by measuring the surface temperatures of the samples we evaluated: color according to ASTM D 2244-14, UV/VIS/NIR reflectance according to ASTM E 903-12 and thermal performance. Additionally, the spectral reflectance and the IR emittance were measured and the solar reflectance of the samples was calculated. The results showed that plates coated with paints containing IR-reflecting pigments displayed lower air temperature on the opposite side as compared to conventional coatings, indicating that they can be effective to reflect NIR and decrease the temperature of buildings when used in roofs and walls.
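The solar reflectance calculation mentioned above amounts to an irradiance-weighted average of the measured spectral reflectance (ASTM E903 prescribes a standard solar spectrum). A minimal sketch with invented spectra:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def solar_reflectance(wavelength_nm, reflectance, irradiance):
    """Solar-irradiance-weighted average reflectance over the spectrum."""
    num = trapezoid(reflectance * irradiance, wavelength_nm)
    den = trapezoid(irradiance, wavelength_nm)
    return num / den

# invented VIS-to-NIR spectra for a NIR-reflective pigment
wl = np.array([400.0, 700.0, 1100.0, 2500.0])   # wavelength, nm
refl = np.array([0.30, 0.35, 0.80, 0.85])       # high reflectance in NIR
irr = np.array([1.2, 1.4, 0.6, 0.1])            # solar irradiance, a.u.
sr = solar_reflectance(wl, refl, irr)
```

Because the weighting is positive, the result always lies between the minimum and maximum spectral reflectance; a pigment reflective in the NIR raises it even when the visible color is dark.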

  18. Infrared

    Science.gov (United States)

    Vollmer, M.

    2013-11-01

    'Infrared' is a very wide field in physics and the natural sciences which has evolved enormously in recent decades. It all started in 1800 with Friedrich Wilhelm Herschel's discovery of infrared (IR) radiation within the spectrum of the Sun. Thereafter a few important milestones towards widespread use of IR were the quantitative description of the laws of blackbody radiation by Max Planck in 1900; the application of quantum mechanics to understand the rotational-vibrational spectra of molecules starting in the first half of the 20th century; and the revolution in source and detector technologies due to micro-technological breakthroughs towards the end of the 20th century. This has led to much high-quality and sophisticated equipment in terms of detectors, sources and instruments in the IR spectral range, with a multitude of different applications in science and technology. This special issue tries to focus on a few aspects of the astonishing variety of different disciplines, techniques and applications concerning the general topic of infrared radiation. Part of the content is based upon an interdisciplinary international conference on the topic held in 2012 in Bad Honnef, Germany. It is hoped that the information provided here may be useful for teaching the general topic of electromagnetic radiation in the IR spectral range in advanced university courses for postgraduate students. In the most general terms, the infrared spectral range is defined to extend from wavelengths of 780 nm (upper range of the VIS spectral range) up to wavelengths of 1 mm (lower end of the microwave range). Various definitions of near, middle and far infrared or thermal infrared, and lately terahertz frequencies, are used, which all fall in this range. These special definitions often depend on the scientific field of research. 
Unfortunately, many of these fields seem to have developed independently from neighbouring disciplines, although they deal with very similar topics in respect of the

  19. Combining Passive Microwave Rain Rate Retrieval with Visible and Infrared Cloud Classification.

    Science.gov (United States)

    Miller, Shawn William

    The relation between cloud type and rain rate has been investigated here from different approaches. Previous studies and intercomparisons have indicated that no single passive microwave rain rate algorithm is an optimal choice for all types of precipitating systems. Motivated by the upcoming Tropical Rainfall Measuring Mission (TRMM), an algorithm which combines visible and infrared cloud classification with passive microwave rain rate estimation was developed and analyzed in a preliminary manner using data from the Tropical Ocean Global Atmosphere-Coupled Ocean Atmosphere Response Experiment (TOGA-COARE). Overall correlation with radar rain rate measurements across five case studies showed substantial improvement in the combined algorithm approach when compared to the use of any single microwave algorithm. An automated neural network cloud classifier for use over both land and ocean was independently developed and tested on Advanced Very High Resolution Radiometer (AVHRR) data. The global classifier achieved strict accuracy for 82% of the test samples, while a more localized version achieved strict accuracy for 89% of its own test set. These numbers provide hope for the eventual development of a global automated cloud classifier for use throughout the tropics and the temperate zones. The localized classifier was used in conjunction with gridded 15-minute averaged radar rain rates at 8km resolution produced from the current operational network of National Weather Service (NWS) radars, to investigate the relation between cloud type and rain rate over three regions of the continental United States and adjacent waters. The results indicate a substantially lower amount of available moisture in the Front Range of the Rocky Mountains than in the Midwest or in the eastern Gulf of Mexico.
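The combined algorithm's central idea, selecting a microwave rain-rate estimator according to the cloud class, can be sketched as a simple dispatch. The cloud classes, the estimator forms, and their coefficients below are invented for illustration and are not the TOGA-COARE algorithms.

```python
# hypothetical per-class rain-rate estimators from brightness temperature
def rain_rate_emission(tb_19v):
    """Emission-type sketch suited to stratiform rain over ocean."""
    return max(0.0, 0.05 * (260.0 - tb_19v))   # mm/h, invented coefficients

def rain_rate_scattering(tb_85v):
    """Scattering-type sketch suited to convective rain."""
    return max(0.0, 0.08 * (255.0 - tb_85v))   # mm/h, invented coefficients

ALGORITHM_BY_CLOUD_TYPE = {
    "stratiform": rain_rate_emission,
    "convective": rain_rate_scattering,
}

def combined_rain_rate(cloud_type, tb):
    """Dispatch to the estimator matched to the VIS/IR cloud class."""
    algorithm = ALGORITHM_BY_CLOUD_TYPE[cloud_type]
    return algorithm(tb)

r1 = combined_rain_rate("stratiform", tb=250.0)
r2 = combined_rain_rate("convective", tb=230.0)
```

The improvement reported in the abstract comes precisely from this matching: no single estimator is optimal for all precipitating systems, but each can be applied where it performs best.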

  20. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    Science.gov (United States)

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein using Fourier transform near infrared (FT-NIR) spectroscopy. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN ensemble learning algorithm was used to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. First, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra were preprocessed with the standard normal variate transformation (SNV) algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the base classifier to build a state recognition model identifying the fermentation samples in the validation set. The SRDA-NN model outperformed two other NN models, developed from the feature information of principal component analysis (PCA) and linear discriminant analysis (LDA), and achieved a correct recognition rate of 94.28% in the validation set. To further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. The prediction performance of the SRDA-NN model was further enhanced by the Adaboost boosting algorithm, and the correct

  1. FY 2006 Infrared Photonics Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C.; Allen, Paul J.; Bernacki, Bruce E.; Ho, Nicolas; Krishnaswami, Kannan; Qiao, Hong (Amy); Schultz, John F.

    2006-12-28

    Research done by the Infrared Photonics team at Pacific Northwest National Laboratory (PNNL) is focused on developing miniaturized integrated optics and optical fiber processing methods for mid-wave infrared (MWIR) and long-wave infrared (LWIR) sensing applications by exploiting the unique optical and material properties of chalcogenide glass. PNNL has developed thin-film deposition capabilities, direct laser writing techniques, infrared photonic device demonstration, holographic optical element design and fabrication, photonic device modeling, and advanced optical metrology—all specific to chalcogenide glass. Chalcogenide infrared photonics provides a pathway to quantum cascade laser (QCL) transmitter miniaturization. The high output power, small size, and superb stability and modulation characteristics of QCLs make them amenable for integration as transmitters into ultra-sensitive, ultra-selective point sampling and remote short-range chemical sensors that are particularly useful for nuclear nonproliferation missions.

  2. Infrared analyzers for breast milk analysis: fat levels can influence the accuracy of protein measurements.

    Science.gov (United States)

    Kwan, Celia; Fusch, Gerhard; Bahonjic, Aldin; Rochow, Niels; Fusch, Christoph

    2017-10-26

Currently, there is a growing interest in lacto-engineering in the neonatal intensive care unit, using infrared milk analyzers to rapidly measure the macronutrient content in breast milk before processing and feeding it to preterm infants. However, there is an overlap in the spectral information of different macronutrients, so they can potentially impact the robustness of the measurement. In this study, we investigate whether the measurement of protein is dependent on the level of fat present while using an infrared milk analyzer. Breast milk samples (n=25) were measured for fat and protein content before and after being completely defatted by centrifugation, using chemical reference methods and a near-infrared milk analyzer (Unity SpectraStar) with two different calibration algorithms provided by the manufacturer (released 2009 and 2015). While the protein content remained unchanged, as measured by elemental analysis, measurements by the infrared milk analyzer showed a dependence of protein readings on fat content; high fat content can lead to falsely high protein readings. This difference is less pronounced when measured using the more recent calibration algorithm. Milk analyzer users must be cautious of their devices' measurements, especially if they are changing the matrix of breast milk using more advanced lacto-engineering.

  3. Added value of far-infrared radiometry for remote sensing of ice clouds

    Science.gov (United States)

    Libois, Quentin; Blanchet, Jean-Pierre

    2017-06-01

Several cloud retrieval algorithms based on satellite observations in the infrared have been developed in the last decades. However, these observations only cover the mid-infrared (MIR, λ transparent in the FIR, using FIR channels would reduce by more than 50% the uncertainties on retrieved values of optical thickness, effective particle diameter, and cloud top altitude. Notably, this would extend the range of applicability of current retrieval methods to the polar regions and to clouds with large optical thickness, where MIR algorithms perform poorly. The high performance of solar reflection-based algorithms would thus be reached in nighttime conditions. Since the sensitivity of ice cloud thermal emission to effective particle diameter is approximately 5 times larger in the FIR than in the MIR, using FIR observations is a promising avenue for studying ice cloud microphysics and precipitation processes. This is highly relevant for cirrus clouds and convective towers. It is also essential for studying precipitation in the driest regions of the atmosphere, where strong feedbacks are at play between clouds and water vapor. The deployment in the near future of a FIR spaceborne radiometer is technologically feasible and should be strongly supported.

  4. Development of microfluidic devices for biomedical applications of synchrotron radiation infrared microspectroscopy

    OpenAIRE

    Birarda, Giovanni

    2011-01-01

2009/2010. The detection and measurement of biological processes in a complex living system is a discipline at the edge of physics, biology, and engineering, with major scientific challenges, new technological applications and a great potential impact on the dissection of phenomena occurring at the tissue, cell, and subcellular level. The ...

  5. Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors.

    Science.gov (United States)

    Jang, Woo-Yong; Hayat, Majeed M; Godoy, Sebastián E; Bender, Steven C; Zarkesh-Ha, Payman; Krishna, Sanjay

    2011-09-26

    While quantum dots-in-a-well (DWELL) infrared photodetectors have the feature that their spectral responses can be shifted continuously by varying the applied bias, the width of the spectral response at any applied bias is not sufficiently narrow for use in multispectral sensing without the aid of spectral filters. To achieve higher spectral resolutions without using physical spectral filters, algorithms have been developed for post-processing the DWELL's bias-dependent photocurrents resulting from probing an object of interest repeatedly over a wide range of applied biases. At the heart of these algorithms is the ability to approximate an arbitrary spectral filter, which we desire the DWELL-algorithm combination to mimic, by forming a weighted superposition of the DWELL's non-orthogonal spectral responses over a range of applied biases. However, these algorithms assume availability of abundant DWELL data over a large number of applied biases (>30), leading to large overall acquisition times in proportion with the number of biases. This paper reports a new multispectral sensing algorithm to substantially compress the number of necessary bias values subject to a prescribed performance level across multiple sensing applications. The algorithm identifies a minimal set of biases to be used in sensing only the relevant spectral information for remote-sensing applications of interest. Experimental results on target spectrometry and classification demonstrate a reduction in the number of required biases by a factor of 7 (e.g., from 30 to 4). The tradeoff between performance and bias compression is thoroughly investigated. © 2011 Optical Society of America
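The weighted-superposition step at the heart of these algorithms, approximating a desired narrow spectral filter by a linear combination of broad bias-dependent responses, can be sketched as a least-squares problem. The Gaussian response shapes and the four-bias set below are invented stand-ins for measured DWELL responses.

```python
import numpy as np

wavelengths = np.linspace(7.0, 11.0, 200)          # sampling grid, um

def gaussian(center, width):
    """Toy broad spectral response centered at `center` um."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# broad, overlapping responses at four hypothetical applied biases
# (columns of the superposition matrix A)
responses = np.column_stack([gaussian(c, 0.8) for c in (7.5, 8.5, 9.5, 10.5)])

# desired narrow filter the DWELL-algorithm combination should mimic
target = gaussian(9.0, 0.3)

# weights minimizing || A w - target ||_2
w, *_ = np.linalg.lstsq(responses, target, rcond=None)
approx = responses @ w
err = np.linalg.norm(approx - target) / np.linalg.norm(target)
```

Bias compression, as studied in the paper, then amounts to asking how few columns of A can be kept while the residual stays below a prescribed level.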

  6. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.

  7. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  8. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
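For context, the conventional performance metrics that scalability discussions build on, speedup and parallel efficiency derived from wall-clock timings, can be sketched as follows. The timings are hypothetical, and the thesis develops its own, more refined empirical definition of scalability on top of such measurements.

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processors):
    """Speedup normalized by processor count (1.0 = ideal scaling)."""
    return speedup(t_serial, t_parallel) / n_processors

# hypothetical timings (seconds) for one fixed problem size
timings = {1: 120.0, 4: 33.0, 16: 10.0}
t1 = timings[1]
results = {p: (speedup(t1, t), efficiency(t1, t, p))
           for p, t in timings.items() if p > 1}
# e.g. on 16 processors: speedup 12.0, efficiency 0.75
```

A scalability study repeats this across problem sizes and processor counts; an algorithm whose efficiency stays bounded away from zero as both grow is the informal notion the thesis makes precise.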

  9. Geostationary Sensor Based Forest Fire Detection and Monitoring: An Improved Version of the SFIDE Algorithm

    Directory of Open Access Journals (Sweden)

    Valeria Di Biase

    2018-05-01

Full Text Available The paper aims to present the results obtained in the development of a system allowing for the detection and monitoring of forest fires and the continuous comparison of their intensity when several events occur simultaneously, a common occurrence in European Mediterranean countries during the summer season. The system, called SFIDE (Satellite FIre DEtection), exploits a geostationary satellite sensor (SEVIRI, Spinning Enhanced Visible and InfraRed Imager) on board the MSG (Meteosat Second Generation) satellite series. The algorithm was developed several years ago in the framework of a project (SIGRI) funded by the Italian Space Agency (ASI). This algorithm has been completely revised in order to enhance its efficiency by reducing the false alarm rate while preserving a high sensitivity. Due to the very low spatial resolution of SEVIRI images (4 × 4 km2 at Mediterranean latitudes), the sensitivity of the algorithm must be very high to detect even small fires. The improvement of the algorithm has been obtained by introducing the sun elevation angle in the computation of the preliminary thresholds used to identify potential thermal anomalies (hot spots), and by introducing a contextual analysis in the detection of clouds and of night-time fires. The results of the algorithm have been validated in the Sardinia region using ground truth data provided by the regional Corpo Forestale e di Vigilanza Ambientale (CFVA). A significant reduction of the commission error (less than 10%) has been obtained with respect to the previous version of the algorithm and also with respect to fire-detection algorithms based on low-earth-orbit satellites.
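The sun-elevation-dependent thresholding described above can be sketched as follows: the brightness-temperature threshold in the fire-sensitive MIR channel is raised during the day to account for solar heating of the surface. The base value, the solar term, and the brightness temperatures below are invented for illustration and are not SFIDE's actual values.

```python
import math

def hotspot_threshold(sun_elevation_deg, base_k=310.0, solar_gain_k=12.0):
    """MIR brightness-temperature threshold (kelvin) for flagging a
    potential hot spot; the solar term applies only above the horizon."""
    solar = solar_gain_k * math.sin(math.radians(max(sun_elevation_deg, 0.0)))
    return base_k + solar

def is_potential_fire(bt_mir_k, sun_elevation_deg):
    return bt_mir_k > hotspot_threshold(sun_elevation_deg)

# the same pixel temperature is a hot spot at night but not at noon
night = is_potential_fire(315.0, sun_elevation_deg=-10.0)  # 315 > 310
noon = is_potential_fire(315.0, sun_elevation_deg=90.0)    # 315 < 322
```

The contextual analysis mentioned in the abstract would then confirm or reject these preliminary candidates by comparing each pixel with its cloud-free neighborhood.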

  10. Design requirements and development of an airborne descent path definition algorithm for time navigation

    Science.gov (United States)

    Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.

    1986-01-01

The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.

  11. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
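A greatly simplified sketch of the decomposition-and-fusion idea: here each image's low-rank (background) part is obtained by rank-1 SVD truncation rather than true RPCA, the compressed-sensing measurement and FCLALM reconstruction are omitted, and the SD and max-absolute fusion rules named in the abstract are applied in reduced form.

```python
import numpy as np

def lowrank_sparse_split(img, rank=1):
    """Crude stand-in for RPCA: low-rank part via truncated SVD,
    'sparse' part as the residual."""
    u, s, vt = np.linalg.svd(img, full_matrices=False)
    low = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    return low, img - low

def fuse(infrared, visible):
    low_ir, sparse_ir = lowrank_sparse_split(infrared)
    low_vis, sparse_vis = lowrank_sparse_split(visible)
    # SD rule (simplified): keep the sparse component with larger spread,
    # i.e. the one carrying the more salient structure
    sparse = sparse_ir if sparse_ir.std() >= sparse_vis.std() else sparse_vis
    # max-absolute rule, per pixel, for the low-rank backgrounds
    low = np.where(np.abs(low_ir) >= np.abs(low_vis), low_ir, low_vis)
    return low + sparse

ir = np.array([[0.2, 0.2], [0.2, 0.9]])    # hot target in one corner
vis = np.array([[0.6, 0.5], [0.5, 0.4]])   # textured background
fused = fuse(ir, vis)
```

The real framework differs in every component (RPCA instead of SVD truncation, CS measurements of the sparse coefficients, per-block rules), but the split-fuse-superpose structure is the same.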

  12. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
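    The basic concepts the abstract introduces (selection, crossover, mutation, survival of the fittest) can be made concrete with a minimal generational GA. This is a generic textbook sketch, not the software tool described, and all parameter values are arbitrary:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, n_gen=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point
    crossover, bit-flip mutation. `fitness` maps a bit list to a score."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gen):
        def select():  # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < p_cross:  # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):  # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] = 1 - child[i]
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# OneMax: fitness is the number of ones, so the all-ones string is optimal.
solution = genetic_algorithm(sum)
```

    On the OneMax problem the population converges to (or very near) the all-ones string within a few dozen generations.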

  13. Far-infrared spectroscopy of HII regions

    International Nuclear Information System (INIS)

    Emery, R.J.; Kessler, M.F.

    1984-01-01

    Interest in the astrophysics associated with far-infrared line emission from ionised regions has developed rapidly, following the development of spectroscopic instruments and observing facilities appropriate to those wavelengths. Far-infrared observations and their interpretation are now at the stage where the need for specific developments in theoretical and laboratory work has been identified. The need is also apparent for the development of models dealing with more realistic astrophysical situations. (Auth.)

  14. Adaptive infrared-reflecting systems inspired by cephalopods

    Science.gov (United States)

    Xu, Chengyi; Stiubianu, George T.; Gorodetsky, Alon A.

    2018-03-01

    Materials and systems that statically reflect radiation in the infrared region of the electromagnetic spectrum underpin the performance of many entrenched technologies, including building insulation, energy-conserving windows, spacecraft components, electronics shielding, container packaging, protective clothing, and camouflage platforms. The development of their adaptive variants, in which the infrared-reflecting properties dynamically change in response to external stimuli, has emerged as an important unmet scientific challenge. By drawing inspiration from cephalopod skin, we developed adaptive infrared-reflecting platforms that feature a simple actuation mechanism, low working temperature, tunable spectral range, weak angular dependence, fast response, stability to repeated cycling, amenability to patterning and multiplexing, autonomous operation, robust mechanical properties, and straightforward manufacturability. Our findings may open opportunities for infrared camouflage and other technologies that regulate infrared radiation.

  15. Discriminating Phytoplankton Functional Types (PFTs) in the Coastal Ocean Using the Inversion Algorithm Phydotax and Airborne Imaging Spectrometer Data

    Science.gov (United States)

    Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.

    2013-01-01

    There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provides for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in
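    PHYDOTax itself is not reproduced here; as a hedged illustration of what a taxon-discrimination inversion does, the sketch below solves a generic non-negative linear unmixing problem, with hypothetical per-taxon reference spectra as inputs:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_taxa(endmembers, spectrum):
    """Estimate non-negative taxon-specific contributions whose linear
    mixture best reproduces a measured spectrum (least squares).

    endmembers : (n_bands, n_taxa) array of per-taxon reference spectra
    spectrum   : (n_bands,) measured spectrum at one pixel
    Returns the (n_taxa,) abundance vector.
    """
    abundances, _residual = nnls(endmembers, spectrum)
    return abundances
```

    Real PFT inversions work with many more bands and taxon spectra measured under controlled conditions; the non-negativity constraint keeps the retrieved biomass fractions physical.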

  16. Development of real-time plasma analysis and control algorithms for the TCV tokamak using SIMULINK

    International Nuclear Information System (INIS)

    Felici, F.; Le, H.B.; Paley, J.I.; Duval, B.P.; Coda, S.; Moret, J.-M.; Bortolon, A.; Federspiel, L.; Goodman, T.P.; Hommen, G.; Karpushov, A.; Piras, F.; Pitzschke, A.; Romero, J.; Sevillano, G.; Sauter, O.; Vijvers, W.

    2014-01-01

    Highlights: • A new digital control system for the TCV tokamak has been commissioned. • The system is entirely programmable by SIMULINK, allowing rapid algorithm development. • Different control system nodes can run different algorithms at varying sampling times. • The previous control system functions have been emulated and improved. • New capabilities include MHD control, profile control, equilibrium reconstruction. - Abstract: One of the key features of the new digital plasma control system installed on the TCV tokamak is the possibility to rapidly design, test and deploy real-time algorithms. With this flexibility the new control system has been used for a large number of new experiments which exploit TCV's powerful actuators consisting of 16 individually controllable poloidal field coils and 7 real-time steerable electron cyclotron (EC) launchers. The system has been used for various applications, ranging from event-based real-time MHD control to real-time current diffusion simulations. These advances have propelled real-time control to one of the cornerstones of the TCV experimental program. Use of the SIMULINK graphical programming language to directly program the control system has greatly facilitated algorithm development and allowed a multitude of different algorithms to be deployed in a short time. This paper will give an overview of the developed algorithms and their application in physics experiments

  17. FAR-INFRARED EXTINCTION MAPPING OF INFRARED DARK CLOUDS

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Wanggi [Department of Astronomy, University of Florida, Gainesville, FL 32611 (United States); Tan, Jonathan C. [Departments of Astronomy and Physics, University of Florida, Gainesville, FL 32611 (United States)

    2014-01-10

    Progress in understanding star formation requires detailed observational constraints on the initial conditions, i.e., dense clumps and cores in giant molecular clouds that are on the verge of gravitational instability. Such structures have been studied by their extinction of near-infrared and, more recently, mid-infrared (MIR) background light. It has been somewhat more of a surprise to find that there are regions that appear as dark shadows at far-infrared (FIR) wavelengths as long as ∼100 μm! Here we develop analysis methods of FIR images from Spitzer-MIPS and Herschel-PACS that allow quantitative measurements of cloud mass surface density, Σ. The method builds on that developed for MIR extinction mapping by Butler and Tan, in particular involving a search for independently saturated, i.e., very opaque, regions that allow measurement of the foreground intensity. We focus on three massive starless core/clumps in the Infrared Dark Cloud (IRDC) G028.37+00.07, deriving mass surface density maps from 3.5 to 70 μm. A by-product of this analysis is the measurement of the spectral energy distribution of the diffuse foreground emission. The lower opacity at 70 μm allows us to probe to higher Σ values, up to ∼1 g cm{sup –2} in the densest parts of the core/clumps. Comparison of the Σ maps at different wavelengths constrains the shape of the MIR-FIR dust opacity law in IRDCs. We find that it is most consistent with the thick ice mantle models of Ossenkopf and Henning. There is tentative evidence for grain ice mantle growth as one goes from lower to higher Σ regions.

  18. FAR-INFRARED EXTINCTION MAPPING OF INFRARED DARK CLOUDS

    International Nuclear Information System (INIS)

    Lim, Wanggi; Tan, Jonathan C.

    2014-01-01

    Progress in understanding star formation requires detailed observational constraints on the initial conditions, i.e., dense clumps and cores in giant molecular clouds that are on the verge of gravitational instability. Such structures have been studied by their extinction of near-infrared and, more recently, mid-infrared (MIR) background light. It has been somewhat more of a surprise to find that there are regions that appear as dark shadows at far-infrared (FIR) wavelengths as long as ∼100 μm! Here we develop analysis methods of FIR images from Spitzer-MIPS and Herschel-PACS that allow quantitative measurements of cloud mass surface density, Σ. The method builds on that developed for MIR extinction mapping by Butler and Tan, in particular involving a search for independently saturated, i.e., very opaque, regions that allow measurement of the foreground intensity. We focus on three massive starless core/clumps in the Infrared Dark Cloud (IRDC) G028.37+00.07, deriving mass surface density maps from 3.5 to 70 μm. A by-product of this analysis is the measurement of the spectral energy distribution of the diffuse foreground emission. The lower opacity at 70 μm allows us to probe to higher Σ values, up to ∼1 g cm –2 in the densest parts of the core/clumps. Comparison of the Σ maps at different wavelengths constrains the shape of the MIR-FIR dust opacity law in IRDCs. We find that it is most consistent with the thick ice mantle models of Ossenkopf and Henning. There is tentative evidence for grain ice mantle growth as one goes from lower to higher Σ regions

  19. Nonlinear optics in germanium mid-infrared fiber material: Detuning oscillations in femtosecond mid-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    M. Ordu

    2017-09-01

    Full Text Available Germanium optical fibers hold great promise in extending semiconductor photonics into the fundamentally important mid-infrared region of the electromagnetic spectrum. The demonstration of nonlinear response in fabricated Ge fiber samples is a key step in the development of mid-infrared fiber materials. Here we report the observation of detuning oscillations in a germanium fiber in the mid-infrared region using femtosecond dispersed pump-probe spectroscopy. Detuning oscillations are observed in the frequency-resolved response when mid-infrared pump and probe pulses are overlapped in a fiber segment. The oscillations arise from the third-order nonlinear (χ(3)) response of the germanium semiconductor. Our work represents the first observation of coherent oscillations in the emerging field of germanium mid-infrared fiber optics.

  20. Infrared and optical observations of Nova Mus 1983

    International Nuclear Information System (INIS)

    Whitelock, P.A.; Carter, B.S.; Feast, M.W.; Glass, I.S.; Laney, D.; Menzies, J.W.

    1984-01-01

    Extensive optical (UBVRI) and infrared (JHKL) photometry of Nova Mus 1983 obtained over a period of 300 days is tabulated. Infrared and optical spectra are described. Although by classical definition this was a fast nova its later development was slower than for typical objects of this class. Surprisingly the development of infrared thermal dust emission did not occur. Throughout the period covered, the infrared emission was characteristic of a bound-free plus free-free plasma continuum with emission lines. (author)

  1. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  3. Prosthetic joint infection development of an evidence-based diagnostic algorithm.

    Science.gov (United States)

    Mühlhofer, Heinrich M L; Pohlig, Florian; Kanz, Karl-Georg; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; Kelch, Sarah; Harrasser, Norbert; von Eisenhart-Rothe, Rüdiger; Schauwecker, Johannes

    2017-03-09

    Increasing rates of prosthetic joint infection (PJI) have presented challenges for general practitioners, orthopedic surgeons and the health care system in recent years. The diagnosis of PJI is complex; multiple diagnostic tools are used in the attempt to correctly diagnose PJI. Evidence-based algorithms can help to identify PJI using standardized diagnostic steps. We reviewed relevant publications between 1990 and 2015 using a systematic literature search in MEDLINE and PubMed. The selected search results were then classified into levels of evidence. The keywords were prosthetic joint infection, biofilm, diagnosis, sonication, antibiotic treatment, implant-associated infection, Staph. aureus, rifampicin, implant retention, PCR, MALDI-TOF, serology, synovial fluid, C-reactive protein level, total hip arthroplasty (THA), total knee arthroplasty (TKA) and combinations of these terms. From an initial 768 publications, 156 publications were stringently reviewed. Publications with class I-III recommendations (EAST) were considered. We developed an algorithm that displays the complex diagnostic approach to PJI as a clear and logically structured process according to ISO 5807. The evidence-based standardized algorithm combines modern clinical requirements and evidence-based treatment principles, and provides a detailed, transparent standard operating procedure (SOP) for diagnosing PJI. Thus, consistently high, examiner-independent process quality is assured to meet the demands of modern quality management in PJI diagnosis.

  4. Shelf-life of infrared dry-roasted almonds

    Science.gov (United States)

    Infrared heating was recently used to develop a more efficient roasting technology than traditional hot air roasting. Therefore, in this study, we evaluated the shelf-life of almonds roasted with three different approaches, namely infrared [IR], sequential infrared and hot air [SIRHA], and regular h...

  5. Development of algorithm for continuous generation of a computer game in terms of usability and optimization of developed code in computer science

    Directory of Open Access Journals (Sweden)

    Tibor Skala

    2018-03-01

    Full Text Available As both hardware and software have become increasingly available and constantly improved, they contribute to technological advances in every field of technology and the arts. Digital tools for the creation and processing of graphical content are well developed and are designed to shorten the time required for content creation, in this case animation. Contemporary animation has experienced a surge in visual styles and visualization methods, and programming is built into everything currently in use. A variety of algorithms and software are the brain and moving force behind any idea created for a specific purpose and applicability in society. Art and technology combined make a direct, oriented medium for publishing and marketing in every industry, including those not closely related to visually driven work. The quality and consistency of an algorithm depend both on how the algorithm is designed and on its proper integration into the system it powers. The development of an endless algorithm and its effective use are demonstrated through a computer game. To assess the effect of various parameters, in the final phase of the game's development the endless algorithm was tested with a varying number of key input parameters (achieved time, score reached, pace of the game).

  6. Mechanisms of browning development in aggregates of marine organic matter formed under anoxic conditions: A study by mid-infrared and near-infrared spectroscopy

    Science.gov (United States)

    Mecozzi, Mauro; Acquistucci, Rita; Nisini, Laura; Conti, Marcelo Enrique

    2014-03-01

    In this paper we analyze some chemical aspects of the browning development associated with the aggregation of marine organic matter (MOM) occurring under anoxic conditions. Organic matter samples obtained by the degradation of different algal samples were taken daily to follow the evolution of the aggregation process and the associated browning. These samples were examined by Fourier transform mid-infrared (FTIR) and Fourier transform near-infrared (FTNIR) spectroscopy, and the colour changes occurring during the aggregation process were measured by means of Colour Indices (CIs). Spectral Cross Correlation Analysis (SCCA) was applied to correlate changes in CI values with the structural changes of MOM observed in the FTIR and FTNIR spectra, which were also submitted to Two-Dimensional Hetero Correlation Analysis (2HDCORR). SCCA results showed that all biomolecules present in MOM aggregates, such as carbohydrates, proteins and lipids, are involved in the browning development. In particular, SCCA results for algal mixtures suggest that the observed yellow-brown colour can be linked to the development of non-enzymatic (i.e., Maillard) browning reactions. SCCA results for MOM furthermore suggest that aggregates coming from brown algae also show evidence of browning related to enzymatic reactions. Finally, 2HDCORR results indicate that hydrogen-bond interactions among different molecules of MOM can play a significant role in the browning development.

  7. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms is discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs

  8. Far infrared supplement: Catalog of infrared observations, second edition

    International Nuclear Information System (INIS)

    Gezari, D.Y.; Schmitz, M.; Mead, J.M.

    1988-08-01

    The Far Infrared Supplement: Catalog of Infrared Observations summarizes all infrared astronomical observations at far infrared wavelengths (5 to 1000 microns) published in the scientific literature from 1965 through 1986. The Supplement listings contain 25 percent of the observations in the full Catalog of Infrared Observations (CIO), and essentially eliminate most visible stars from the listings. The Supplement is thus more compact than the main catalog, and is intended for easy reference during astronomical observations. The Far Infrared Supplement (2nd Edition) includes the Index of Infrared Source Positions and the Bibliography of Infrared Astronomy for the subset of far infrared observations listed.

  9. Recursive estimation techniques for detection of small objects in infrared image data

    Science.gov (United States)

    Zeidler, J. R.; Soni, T.; Ku, W. H.

    1992-04-01

    This paper describes a recursive detection scheme for point targets in infrared (IR) images. Estimation of the background noise is done using a weighted autocorrelation matrix update method and the detection statistic is calculated using a recursive technique. A weighting factor allows the algorithm to have finite memory and deal with nonstationary noise characteristics. The detection statistic is created by using a matched filter for colored noise, using the estimated noise autocorrelation matrix. The relationship between the weighting factor, the nonstationarity of the noise and the probability of detection is described. Some results on one- and two-dimensional infrared images are presented.
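    The scheme described (an exponentially weighted autocorrelation update plus a matched filter for colored noise) can be sketched for a 1-D signature as follows; the update rule and statistic follow the abstract's outline, while the initialization and parameter values are assumptions:

```python
import numpy as np

class RecursiveDetector:
    """Matched-filter detector for a known target signature in colored
    noise, with an exponentially weighted (finite-memory) estimate of
    the noise autocorrelation matrix.
    """
    def __init__(self, signature, forget=0.95):
        self.s = np.asarray(signature, float)
        self.lam = forget                # weighting factor (finite memory)
        self.R = np.eye(len(self.s))     # autocorrelation matrix estimate
    def update(self, x):
        """Fold a new data vector into the noise estimate and return the
        colored-noise matched-filter statistic s^T R^-1 x."""
        x = np.asarray(x, float)
        # Weighted autocorrelation update: old data decay by `forget`,
        # which lets the estimate track nonstationary noise.
        self.R = self.lam * self.R + np.outer(x, x)
        return self.s @ np.linalg.solve(self.R, x)
```

    A smaller forgetting factor shortens the memory, tracking nonstationarity faster at the cost of a noisier autocorrelation estimate, which is the trade-off the abstract relates to detection probability.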

  10. A Developed ESPRIT Algorithm for DOA Estimation

    Science.gov (United States)

    Fayad, Youssef; Wang, Caiyun; Cao, Qunsheng; Hafez, Alaa El-Din Sayed

    2015-05-01

    A novel algorithm for estimating the direction of arrival (DOA) of a target has been developed, with the aim of increasing estimation accuracy while decreasing calculation cost. It introduces time and space multiresolution into the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) method (TS-ESPRIT) to realize a subspace approach that decreases errors caused by the model's nonlinearity. The efficacy of the proposed algorithm is verified by Monte Carlo simulation, and the DOA estimation accuracy is evaluated against the closed-form Cramér-Rao bound (CRB), which reveals that the proposed algorithm's estimates are better than those of the normal ESPRIT method, enhancing estimator performance.
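    For reference, the standard single-resolution ESPRIT that TS-ESPRIT builds on can be sketched for a uniform linear array; the paper's time/space multiresolution refinement is not reproduced here:

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """Standard ESPRIT DOA estimator for a uniform linear array.

    X : (n_sensors, n_snapshots) complex data matrix
    d : element spacing in wavelengths
    Returns the estimated arrival angles in degrees.
    """
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues ascending
    Es = eigvecs[:, -n_sources:]             # signal subspace
    # Rotational invariance between the two overlapping subarrays:
    # Es[1:] = Es[:-1] @ Phi, with eig(Phi) = exp(j*2*pi*d*sin(theta)).
    phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    omegas = np.angle(np.linalg.eigvals(phi))
    return np.degrees(np.arcsin(omegas / (2 * np.pi * d)))

# Two uncorrelated sources at -20 and 30 degrees, 8-element half-wave ULA.
rng = np.random.default_rng(2)
angles = np.radians([-20.0, 30.0])
m, n = 8, 400
A = np.exp(2j * np.pi * 0.5 * np.arange(m)[:, None] * np.sin(angles)[None, :])
S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))
est = np.sort(esprit_doa(X, 2))
```

    At this SNR and snapshot count the angle errors are a small fraction of a degree, which is the baseline TS-ESPRIT is compared against.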

  11. Feasibility of infrared Earth tracking for deep-space optical communications.

    Science.gov (United States)

    Chen, Yijiang; Hemmati, Hamid; Ortiz, Gerry G

    2012-01-01

    Infrared (IR) Earth thermal tracking is a viable option for optical communications to distant planet and outer-planetary missions. However, blurring due to finite receiver aperture size distorts IR Earth images in the presence of Earth's nonuniform thermal emission and limits its applicability. We demonstrate a deconvolution algorithm that can overcome this limitation and reduce the error from blurring to a negligible level. The algorithm is applied successfully to Earth thermal images taken by the Mars Odyssey spacecraft. With the solution to this critical issue, IR Earth tracking is established as a viable means for distant planet and outer-planetary optical communications. © 2012 Optical Society of America
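    The abstract does not spell out the paper's specific deconvolution algorithm; a generic frequency-domain Wiener deconvolution illustrates how blurring by a known aperture response can be undone:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Frequency-domain Wiener deconvolution for a known point spread
    function. `psf` is zero-padded to the image size; `snr` is an
    assumed signal-to-noise ratio that regularizes near-zero
    frequencies of the blur.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

    In practice the `snr` parameter trades residual blur against noise amplification; with noiseless data and a well-conditioned blur the original image is recovered almost exactly.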

  12. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.
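    One common GPS-based retrieval, usable when the aircraft carries no airspeed sensor, exploits the fact that if the aircraft turns through many headings at constant airspeed, its GPS ground-velocity vectors lie on a circle centered at the wind vector with radius equal to the airspeed. A least-squares sketch of that idea (an illustrative method, not necessarily one of the three algorithms compared in the study):

```python
import numpy as np

def wind_from_gps(ve, vn):
    """Estimate horizontal wind from GPS ground-velocity samples taken
    while the aircraft turns through many headings at constant airspeed.

    ve, vn : arrays of east and north ground-velocity components (m/s)
    Returns ((wind_east, wind_north), airspeed).
    """
    # Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c,
    # where (a, b) is the circle center and c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * ve, 2 * vn, np.ones(len(ve))])
    b = ve**2 + vn**2
    (ce, cn, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    airspeed = np.sqrt(k + ce**2 + cn**2)
    return (ce, cn), airspeed
```

    With noisy GPS data the same fit is solved over a sliding window, and the constant-airspeed assumption becomes the method's main error source.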

  13. Development of pattern recognition algorithms for the central drift chamber of the Belle II detector

    Energy Technology Data Exchange (ETDEWEB)

    Trusov, Viktor

    2016-11-04

    In this thesis, the development of one of the pattern recognition algorithms for the Belle II experiment based on conformal and Legendre transformations is presented. In order to optimize the performance of the algorithm (CPU time and efficiency) specialized processing steps have been introduced. To show achieved results, Monte-Carlo based efficiency measurements of the tracking algorithms in the Central Drift Chamber (CDC) has been done.

  14. Design and implementation of an infrared small target real-time detection system based on pipeline technology

    Science.gov (United States)

    Sun, Lihui; Wang, Yongzhong; He, Yongqiang

    2007-01-01

    The detection of moving small targets in infrared image sequences has become a hot topic. A background suppression algorithm based on minimum-gradient median filtering and a temporal-recursion target detection algorithm are introduced. On this basis, a four-stage pipeline infrared small target detection system is designed and implemented, addressing the algorithmic complexity, large data volumes, high frame rates and demanding real-time requirements characteristic of this application. The logical structure of the system is introduced and its functions and signal flows are described. The system is composed of two FPGA chips and two TI DSP chips, and is divided by function into an image preprocessing stage, a target detection stage, a track association stage and an image output stage. Experiments running the algorithms on the system show that it can acquire and process 50 Hz, 240x320 digital images and reliably detect small targets with a signal-to-noise ratio greater than 3 in real time. The system offers large memory, high real-time throughput, good extensibility and a favorable interactive interface.
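    The single-frame background-suppression stage can be illustrated with a plain median filter standing in for the minimum-gradient median filter of the paper; the thresholding rule and all constants here are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_small_targets(frame, win=5, k=5.0):
    """Single-frame stage of small-target detection: estimate the
    background with a median filter (a simplification of the
    minimum-gradient median filter in the text), subtract it, and
    threshold the residual at k times its robust standard deviation.
    Returns a boolean detection mask.
    """
    background = median_filter(frame, size=win)
    residual = frame - background
    # Robust noise scale from the median absolute deviation.
    sigma = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    return residual > k * max(sigma, 1e-12)
```

    The temporal-recursion stage of the paper would then confirm detections that persist along plausible tracks across frames, which is what suppresses the remaining single-frame false alarms.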

  15. Poster abstract: A machine learning approach for vehicle classification using passive infrared and ultrasonic sensors

    KAUST Repository

    Warriach, Ehsan Ullah; Claudel, Christian G.

    2013-01-01

    This article describes the implementation of four different machine learning techniques for vehicle classification in a dual ultrasonic/passive infrared traffic flow sensors. Using k-NN, Naive Bayes, SVM and KNN-SVM algorithms, we show that KNN

  16. Germanium blocked impurity band far infrared detectors

    International Nuclear Information System (INIS)

    Rossington, C.S.

    1988-04-01

    The infrared portion of the electromagnetic spectrum has been of interest to scientists since the eighteenth century, when Sir William Herschel discovered the infrared as he measured temperatures in the sun's spectrum and found that there was energy beyond the red. In the late nineteenth century, Thomas Edison established himself as the first infrared astronomer to look beyond the solar system when he observed the star Arcturus in the infrared. Significant advances in infrared technology and physics since Edison's time have resulted in many scientific developments, such as the Infrared Astronomical Satellite (IRAS), which was launched in 1983, semiconductor infrared detectors for materials characterization, and military equipment such as night-vision goggles and infrared surveillance equipment. It is now planned that cooled semiconductor infrared detectors will play a major role in the ''Star Wars'' nuclear defense scheme proposed by the Reagan administration.

  17. Genetic Algorithms for Development of New Financial Products

    Directory of Open Access Journals (Sweden)

    Eder Oliveira Abensur

    2007-06-01

    Full Text Available New Product Development (NPD) is recognized as a fundamental activity that has a relevant impact on the performance of companies. Despite the relevance of the financial market, there is a lack of work on new financial product development. The aim of this research is to propose the use of Genetic Algorithms (GA) as an alternative procedure for evaluating the most favorable combination of variables for the product launch. The paper focuses on: (i) determining the essential variables of the financial product studied (an investment fund); (ii) determining how to evaluate the success of a new investment fund launch; and (iii) how GA can be applied to the financial product development problem. The proposed framework was tested using 4 years of real data from the Brazilian financial market, and the results suggest that this is an innovative development methodology useful for designing complex financial products with many attributes.

  18. Development of a near-infrared spectroscopy instrument for applications in urology.

    Science.gov (United States)

    Macnab, Andrew J; Stothers, Lynn

    2008-10-01

    Near infrared spectroscopy (NIRS) is an established technology that uses photons of light in the near-infrared spectrum to monitor changes in naturally occurring tissue chromophores, including oxygenated and deoxygenated hemoglobin. The technology and methodology have been validated for measurement of a range of physiologic parameters. NIRS has been applied successfully in urology research; however, current instruments are designed principally for brain and muscle study. We describe the development of a NIRS instrument specifically designed for monitoring changes in chromophore concentration in the bladder detrusor in real time, to facilitate research establishing the role of this non-invasive technology in the evaluation of patients with voiding dysfunction. The portable continuous-wave NIRS instrument has a three-laser-diode light source (785, 808 and 830 nanometers), fiber optic cables for light transmission, a self-adhesive patient interface patch with an emitter and sensor, and software to detect the difference between the light transmitted and received by the instrument. The incorporated software auto-attenuates the optical signals and converts raw optical data into chromophore concentrations displayed graphically. The prototype was designed, tested, and iteratively developed to achieve optimal suprapubic transcutaneous monitoring of the detrusor in human subjects during bladder filling and emptying. Evaluation with simultaneous invasive urodynamic measurement in men and women indicates good specificity and sensitivity of NIRS chromophore concentration changes by receiver operating characteristic curve analysis, and correlation between NIRS data and urodynamic pressures. Urological monitoring with this NIRS instrument is feasible and generates data of potential diagnostic value.
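    Converting measured attenuation changes at the three laser wavelengths into chromophore concentration changes is conventionally done with the modified Beer-Lambert law. A sketch, with an illustrative (not instrument-calibrated) extinction matrix for the wavelengths named above:

```python
import numpy as np

def chromophore_changes(dA, eps, pathlength):
    """Modified Beer-Lambert law: convert changes in optical density at
    several wavelengths into concentration changes of oxygenated and
    deoxygenated hemoglobin.

    dA         : (n_wavelengths,) change in optical density
    eps        : (n_wavelengths, 2) extinction coefficients [O2Hb, HHb]
    pathlength : effective optical pathlength (source-detector distance
                 times the differential pathlength factor)
    Returns [dC_O2Hb, dC_HHb] by least squares.
    """
    dC, *_ = np.linalg.lstsq(eps * pathlength, dA, rcond=None)
    return dC

# Illustrative extinction matrix for 785, 808 and 830 nm (placeholder
# values; a real instrument uses tabulated coefficients).
eps = np.array([[0.74, 1.10],
                [0.88, 0.83],
                [1.06, 0.78]])
```

    Using three wavelengths for two chromophores makes the system overdetermined, which helps reject noise at any single wavelength.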

  19. Land Surface Temperature and Emissivity Separation from Cross-Track Infrared Sounder Data with Atmospheric Reanalysis Data and ISSTES Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Ze Zhang

    2017-01-01

    The Cross-track Infrared Sounder (CrIS) is one of the most advanced hyperspectral instruments and has been used for various atmospheric applications such as atmospheric retrievals and weather forecast modeling. However, because of the specific design purpose of CrIS, little attention has been paid to retrieving land surface parameters from CrIS data. To take full advantage of the rich spectral information in CrIS data to improve land surface retrievals, particularly the acquisition of a continuous Land Surface Emissivity (LSE) spectrum, this paper attempts to simultaneously retrieve a continuous LSE spectrum and the Land Surface Temperature (LST) from CrIS data with atmospheric reanalysis data and the Iterative Spectrally Smooth Temperature and Emissivity Separation (ISSTES) algorithm. The results show that the accuracy of the retrieved LSEs and LST is comparable with current land products. The overall differences of the LST and LSE retrievals are approximately 1.3 K and 1.48%, respectively. However, the LSEs in our study can be provided as a continuous spectrum instead of the single-channel values in traditional products. The retrieved LST and LSEs can now be better used to further analyze surface properties or improve the retrieval of atmospheric parameters.
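
    The ISSTES principle referred to above can be sketched in a few lines: for each candidate surface temperature, derive the emissivity spectrum implied by the surface-leaving radiance, and pick the temperature whose spectrum is spectrally smoothest (a wrong temperature leaves residual atmospheric line structure in the emissivity). The wavelengths, downwelling radiances and emissivities below are synthetic illustration values, not CrIS data.

```python
import math

C1, C2 = 1.191042e8, 1.4387752e4  # Planck radiation constants (micron-based)

def planck(wl_um, t_k):
    """Blackbody spectral radiance (W m-2 sr-1 um-1) at wavelength wl_um."""
    return C1 / (wl_um ** 5 * (math.exp(C2 / (wl_um * t_k)) - 1.0))

def roughness(spec):
    """Spectral smoothness measure: sum of squared first differences."""
    return sum((spec[i + 1] - spec[i]) ** 2 for i in range(len(spec) - 1))

def isstes(wls, l_surf, l_down, t_candidates):
    """Pick the candidate LST whose implied emissivity spectrum is smoothest."""
    def emissivity(t):
        return [(l_surf[i] - l_down[i]) / (planck(w, t) - l_down[i])
                for i, w in enumerate(wls)]
    return min(t_candidates, key=lambda t: roughness(emissivity(t)))

# synthetic scene: smooth emissivity, spiky atmospheric downwelling, LST 300 K
wls = [8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0]
emis = [0.960, 0.965, 0.970, 0.972, 0.974, 0.975, 0.976]
l_down = [2.0, 6.0, 1.5, 7.0, 2.5, 6.5, 2.0]
l_surf = [e * planck(w, 300.0) + (1 - e) * d
          for e, w, d in zip(emis, wls, l_down)]
t_est = isstes(wls, l_surf, l_down, [296.0 + 0.5 * i for i in range(17)])
```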

  20. Development of cryo-cell for infrared Raman laser

    International Nuclear Information System (INIS)

    Harada, Tetsuro; Ohmori, Takao; Saito, Hideaki

    1984-01-01

    Laser isotope separation (LIS) for uranium enrichment is remarkable for its higher efficiency and cost effectiveness over the gaseous diffusion process. A prototype Raman laser apparatus for uranium enrichment was developed and manufactured by IHI for the Institute of Physical and Chemical Research. This apparatus is capable of emitting a tunable infrared laser beam with a wavelength from 13 μm to 17 μm from its multiple-pass resonator by injecting a highly coherent CO2 laser beam into the para-hydrogen gas vessel (kept at 100 K) to induce Raman scattering. This paper describes the laser oscillation mechanism and the structure of the multiple-pass cell; it also discusses the technical aspects that are essential for a Raman laser apparatus. Moreover, the cooling characteristics of the present apparatus are reported by analyzing the results of tests conducted in actual service thermal conditions. (author)

  1. Particle swarm optimization-based local entropy weighted histogram equalization for infrared image enhancement

    Science.gov (United States)

    Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier

    2018-06-01

    Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and the contrast between foreground and background. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance, in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantized evaluations.
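
    The clipping step that the double plateau thresholds control can be illustrated as below. Here the two thresholds are fixed numbers and a plain histogram is used, whereas in the method above they are selected by particle swarm optimization over a local-entropy-weighted histogram.

```python
# Sketch of double-plateau histogram equalization with fixed thresholds.
# Large bins are limited by t_up (prevents the background from dominating
# the gray-level mapping); small nonzero bins are raised to t_down
# (protects faint details from being compressed away).
def plateau_equalize(hist, t_up, t_down, levels=256):
    clipped = [min(max(h, t_down), t_up) if h > 0 else 0 for h in hist]
    total = sum(clipped)
    # cumulative distribution -> gray-level look-up table
    mapping, cum = [], 0
    for c in clipped:
        cum += c
        mapping.append(round((levels - 1) * cum / total))
    return mapping

hist = [1000, 0, 4, 4, 0, 2000] + [1] * 250  # toy 256-bin histogram
lut = plateau_equalize(hist, t_up=100, t_down=5)
```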

  2. Development of an ultra-compact mid-infrared attenuated total reflectance spectrophotometer

    Science.gov (United States)

    Kim, Dong Soo; Lee, Tae-Ro; Yoon, Gilwon

    2014-07-01

    Mid-infrared spectroscopy has been an important tool widely used for qualitative analysis in various fields. However, both Fourier transform infrared and attenuated total reflectance (ATR) spectrophotometers are prohibitive in size and cost for portable or personal use. In this study, we developed an ultra-compact ATR spectrophotometer covering the 5.5-11.0 μm wavelength band. We used miniature components, such as a light source fabricated by semiconductor technology, a linear variable filter, and a pyroelectric array detector. There were no moving parts. An optimal design based on two light sources, a zippered configuration of the array detector, and ATR optics could produce absorption spectra usable for qualitative analysis. A microprocessor synchronized the pulsed light sources and detector, and all the signals were processed digitally. The size was 13.5×8.5×3.5 cm3 and the weight was 300 grams. Due to its low cost, our spectrophotometer can replace many online monitoring devices. Another application could be a u-healthcare system installed in the bathroom or attached to a smartphone for monitoring substances in body fluids.

  3. Digital Breast Tomosynthesis guided Near Infrared Spectroscopy: Volumetric estimates of fibroglandular fraction and breast density from tomosynthesis reconstructions.

    Science.gov (United States)

    Vedantham, Srinivasan; Shi, Linxi; Michaelsen, Kelly E; Krishnaswamy, Venkataramanan; Pogue, Brian W; Poplack, Steven P; Karellas, Andrew; Paulsen, Keith D

    A multimodality system combining a clinical prototype digital breast tomosynthesis system, with its imaging geometry modified to facilitate near-infrared spectroscopic imaging, has been developed. The accuracy of parameters recovered from near-infrared spectroscopy is dependent on fibroglandular tissue content. Hence, in this study, volumetric estimates of fibroglandular tissue from tomosynthesis reconstructions were determined. A kernel-based fuzzy c-means algorithm was implemented to segment tomosynthesis reconstructed slices in order to estimate fibroglandular content and to provide anatomic priors for near-infrared spectroscopy. This algorithm was used to determine volumetric breast density (VBD), defined as the ratio of fibroglandular tissue volume to total breast volume, expressed as a percentage, from 62 tomosynthesis reconstructions of 34 study participants. For a subset of study participants who subsequently underwent mammography, VBD from mammography matched for subject, breast laterality and mammographic view was quantified using commercial software and statistically analyzed to determine if it differed from tomosynthesis. Summary statistics of the VBD from all study participants were compared with prior independent studies. The fibroglandular volumes from tomosynthesis and mammography were not statistically different (p = 0.211, paired t-test). After accounting for the compressed breast thickness, which differed between tomosynthesis and mammography, the VBD from tomosynthesis was correlated with (r = 0.809, p 0.99, paired t-test), and was linearly related to, the VBD from mammography. Summary statistics of the VBD from tomosynthesis were not statistically different from prior studies using high-resolution dedicated breast computed tomography. The observation of correlation and linear association in VBD between mammography and tomosynthesis suggests that breast density-associated risk measures determined for mammography are translatable to tomosynthesis.
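
    For illustration, the sketch below implements plain (non-kernelized) fuzzy c-means on scalar intensities, the clustering idea underlying the segmentation step; the kernel-based variant used in the study replaces the Euclidean distances with kernel-induced ones, and the sample intensities here are invented.

```python
# Sketch of two-cluster fuzzy c-means on scalar voxel intensities,
# e.g. separating adipose-like (low) from fibroglandular-like (high)
# values. Deterministic initialization at the intensity extremes.
def fcm(values, m=2.0, iters=50):
    centers = [min(values), max(values)]
    n, c = len(values), 2
    for _ in range(iters):
        # membership update: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
        u = []
        for x in values:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # center update: mean weighted by memberships raised to power m
        centers = [sum((u[k][i] ** m) * values[k] for k in range(n)) /
                   sum(u[k][i] ** m for k in range(n)) for i in range(c)]
    return sorted(centers)

vals = [10.0, 11.0, 9.0, 10.5, 50.0, 52.0, 49.0, 51.5]
lo, hi = fcm(vals)
```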

  4. Towards the mid-infrared optical biopsy

    DEFF Research Database (Denmark)

    Seddon, Angela B.; Benson, Trevor M.; Sujecki, Slawomir

    2016-01-01

    We are establishing a new paradigm in mid-infrared molecular sensing, mapping and imaging to open up the mid-infrared spectral region for in vivo (i.e. in person) medical diagnostics and surgery. Thus, we are working towards the mid-infrared optical biopsy ('opsy' look at, bio the biology) in situ...... in the body for real-time diagnosis. This new paradigm will be enabled through focused development of devices and systems which are robust, functionally designed, safe, compact and cost effective and are based on active and passive mid-infrared optical fibers. In particular, this will enable early diagnosis...... of a bright mid-infrared wideband source in a portable package as a first step for medical fiber-based systems operating in the mid-infrared. Moreover, mid-infrared molecular mapping and imaging is potentially a disruptive technology to give improved monitoring of the environment, energy efficiency, security...

  5. A novel visual saliency detection method for infrared video sequences

    Science.gov (United States)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit greatly from visual saliency detection, which is essentially a method to automatically localize the 'important' content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
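
    Two of the ingredients above, the multi-frame symmetric difference and the adaptive fusion of spatial and temporal maps, can be sketched as follows. Images are nested lists, and the energy-based weighting is a simplified stand-in for the paper's adaptive fusion strategy.

```python
# Three-frame symmetric difference: the per-pixel minimum of the backward
# and forward differences keeps moving objects but suppresses the
# background the object just uncovered.
def symmetric_difference(prev, cur, nxt):
    return [[min(abs(c - p), abs(n - c)) for p, c, n in zip(rp, rc, rn)]
            for rp, rc, rn in zip(prev, cur, nxt)]

# Adaptive fusion: weight each map by its share of the total map energy
# (a simplified stand-in for the paper's fusion rule).
def fuse(spatial, temporal):
    es = sum(map(sum, spatial)) + 1e-12
    et = sum(map(sum, temporal)) + 1e-12
    ws, wt = es / (es + et), et / (es + et)
    return [[ws * s + wt * t for s, t in zip(rs, rt)]
            for rs, rt in zip(spatial, temporal)]
```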

  6. Development Modules for Specification of Requirements for a System of Verification of Parallel Algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliy Yu. Meltsov

    2012-05-01

    This paper presents the results of the development of one module of a system for the verification of parallel algorithms, used to verify the inference engine. This module is designed to build the specification of requirements whose feasibility on the algorithm must be proved (tested).

  7. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  8. [Near infrared distance sensing method for Chang'e-3 alpha particle X-ray spectrometer].

    Science.gov (United States)

    Liang, Xiao-Hua; Wu, Ming-Ye; Wang, Huan-Yu; Peng, Wen-Xi; Zhang, Cheng-Mo; Cui, Xing-Zhu; Wang, Jin-Zhou; Zhang, Jia-Yu; Yang, Jia-Wei; Fan, Rui-Rui; Gao, Min; Liu, Ya-Qing; Zhang, Fei; Dong, Yi-Fan; Guo, Dong-Ya

    2013-05-01

    Alpha particle X-ray spectrometer (APXS) is one of the payloads of the Chang'E-3 lunar rover; its scientific objective is in-situ observation and off-line analysis of lunar regolith and rock. Distance measurement is one of the important functions APXS needs to perform effective detection on the moon. The present paper first gives a brief introduction to APXS, then analyzes the specific requirements and constraints of realizing distance measurement, and finally presents a new near infrared distance sensing algorithm that uses the inflection point of the response curve. Theoretical analysis and experimental results verify the feasibility of this algorithm. Although the theoretical analysis shows that the method is not sensitive to the operating temperature or the reflectance of the lunar surface, the solar infrared radiant intensity may saturate the photosensor. The solutions are to reduce the gain of the device and to avoid direct exposure to sunlight.
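
    The inflection-point idea can be illustrated with a discrete second difference: the distance cue is taken where the response curve changes concavity. The logistic response below is synthetic; the real APXS response curve and its calibration are mission-specific and not reproduced here.

```python
import math

def inflection_index(y):
    """Index where the discrete second difference of y changes sign."""
    d2 = [y[i + 1] - 2 * y[i] + y[i - 1] for i in range(1, len(y) - 1)]
    for i in range(len(d2) - 1):
        if d2[i] == 0 or d2[i] * d2[i + 1] < 0:
            return i + 1  # offset for the leading point dropped above
    return None

# logistic-like response: concave up, then concave down, inflection mid-curve
curve = [1 / (1 + math.exp(-(x - 5) / 1.5)) for x in range(11)]
idx = inflection_index(curve)
```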

  9. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  10. Mid-infrared Semiconductor Optoelectronics

    CERN Document Server

    Krier, Anthony

    2006-01-01

    The practical realisation of optoelectronic devices operating in the 2–10 µm (mid-infrared) wavelength range offers potential applications in a variety of areas from environmental gas monitoring around oil rigs and landfill sites to the detection of pharmaceuticals, particularly narcotics. In addition, an atmospheric transmission window exists between 3 µm and 5 µm that enables free-space optical communications, thermal imaging applications and the development of infrared measures for "homeland security". Consequently, the mid-infrared is very attractive for the development of sensitive optical sensor instrumentation. Unfortunately, the nature of the likely applications dictates stringent requirements in terms of laser operation, miniaturisation and cost that are difficult to meet. Many of the necessary improvements are linked to a better ability to fabricate and to understand the optoelectronic properties of suitable high-quality epitaxial materials and device structures. Substantial progress in these m...

  11. A correlated-k model of radiative transfer in the near-infrared windows of Venus

    International Nuclear Information System (INIS)

    Tsang, C.C.C.; Irwin, P.G.J.; Taylor, F.W.; Wilson, C.F.

    2008-01-01

    We present a correlated-k-based model for generating synthetic spectra in the near-infrared window regions, from 1.0 to 2.5 μm, emitted from the deep atmosphere of Venus on the nightside. This approach is applicable to any near-infrared instrument, ground-based or space-borne, for analysis of the thermal emissions in this spectral range. We also approach this work with the view of using the model, in conjunction with a retrieval algorithm, to retrieve minor species from the Venus Express/VIRTIS instrument. An existing radiative-transfer model was adapted for Venusian conditions to deal with the prevailing high pressures and temperatures and other conditions. A comprehensive four-modal cloud structure model based on Pollack et al. [Near-infrared light from Venus' nightside: a spectroscopic analysis. Icarus 1993;103:1-42], using refractive indices for a 75% H2SO4 / 25% H2O mixture from Palmer and Williams [Optical constants of sulfuric acid; application to the clouds of Venus? Appl Opt 1975;14(1):208-19], was also implemented. We then utilized a Mie scattering algorithm to account for the multiple scattering between cloud and haze layers that occurs in the Venusian atmosphere. The correlated-k model is shown to produce good agreement with ground-based spectra of Venus in the near infrared, and to match the output from a line-by-line radiative-transfer model to better than 10%.
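
    The correlated-k idea itself is compact enough to sketch: within a band, the absorption coefficients are sorted into their cumulative distribution k(g), and the band-mean transmission is integrated over a few g ordinates instead of line by line. The synthetic k spectrum below is arbitrary, not a Venusian gas band.

```python
import math

def transmission_lbl(ks, u):
    """'Line-by-line' band-mean transmission over absorber amount u (reference)."""
    return sum(math.exp(-k * u) for k in ks) / len(ks)

def transmission_ck(ks, u, n_g=10):
    """Correlated-k estimate with n_g equally weighted g ordinates."""
    kg = sorted(ks)                 # k as a function of cumulative probability g
    step = len(kg) / n_g
    # sample the middle of each g interval
    return sum(math.exp(-kg[int((i + 0.5) * step)] * u)
               for i in range(n_g)) / n_g

ks = [0.1 + 5 * math.sin(i / 7.0) ** 2 for i in range(700)]  # synthetic band
```

    The sorted distribution varies far more smoothly in g than the raw coefficients do in wavenumber, which is why a handful of quadrature points suffices.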

  12. Development of Data Processing Algorithms for the Upgraded LHCb Vertex Locator

    CERN Document Server

    AUTHOR|(CDS)2101352

    The LHCb detector will see a major upgrade during LHC Long Shutdown II, which is planned for 2019/20. The silicon Vertex Locator subdetector will be upgraded for operation under the new run conditions. The detector will be read out using a data acquisition board based on an FPGA. The work presented in this thesis is concerned with the development of the data processing algorithms to be used in this data acquisition board. In particular, work in three different areas of the FPGA is covered: the data processing block, the low level interface, and the post router block. The algorithms produced have been simulated and tested, and shown to provide the required performance. Errors in the initial implementation of the Gigabit Wireline Transmitter serialized data in the low level interface were discovered and corrected. The data scrambling algorithm and the post router block have been incorporated in the front end readout chip.

  13. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    Science.gov (United States)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5-1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is assigned with the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission that is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen-A band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from the geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes, amongst other elements, the height of this layer and its aerosol optical

  14. Development of a meta-algorithm for guiding primary care encounters for patients with multimorbidity using evidence-based and case-based guideline development methodology.

    Science.gov (United States)

    Muche-Borowski, Cathleen; Lühmann, Dagmar; Schäfer, Ingmar; Mundt, Rebekka; Wagner, Hans-Otto; Scherer, Martin

    2017-06-22

    The study aimed to develop a comprehensive algorithm (meta-algorithm) for primary care encounters of patients with multimorbidity. We used a novel, case-based and evidence-based procedure to overcome methodological difficulties in guideline development for patients with complex care needs. Systematic guideline development methodology including systematic evidence retrieval (guideline synopses), expert opinions and informal and formal consensus procedures. Primary care. The meta-algorithm was developed in six steps: 1. Designing 10 case vignettes of patients with multimorbidity (common, epidemiologically confirmed disease patterns and/or particularly challenging health care needs) in a multidisciplinary workshop. 2. Based on the main diagnoses, a systematic guideline synopsis of evidence-based and consensus-based clinical practice guidelines was prepared; the recommendations were prioritised according to the clinical and psychosocial characteristics of the case vignettes. 3. Case vignettes, along with the respective guideline recommendations, were validated and specifically commented on by an external panel of practicing general practitioners (GPs). 4. Guideline recommendations and experts' opinions were summarised as case-specific management recommendations (N-of-one guidelines). 5. Healthcare preferences of patients with multimorbidity were elicited from a systematic literature review and supplemented with information from qualitative interviews. 6. All N-of-one guidelines were analysed using pattern recognition to identify common decision nodes and care elements. These elements were put together to form a generic meta-algorithm. The resulting meta-algorithm reflects the logic of a GP's encounter with a patient with multimorbidity regarding decision-making situations, communication needs and priorities. It can be filled with the complex problems of individual patients and thereby offer guidance to the practitioner. Contrary to simple, symptom-oriented algorithms, the meta-algorithm

  15. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    In order to detect invisible leaking gas, which is usually dangerous and easily leads to fire or explosion, in time, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, all the moving regions of a video frame can be detected as leaking gas regions by the existing infrared video based gas leak detection methods, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas by current gas leak detection methods. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical property of the mFAST features extracted from gas regions differs from that of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
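
    As a simplified stand-in for the GMM background-modeling step, the sketch below keeps a single running Gaussian per pixel and flags pixels that deviate by more than k standard deviations; the full method maintains a mixture per pixel and then filters the resulting regions with mFAST features. All parameters here are illustrative defaults.

```python
# Per-pixel background model in the spirit of GMM-based subtraction,
# simplified to one running Gaussian per pixel: a pixel is foreground
# when it deviates from its mean by more than k standard deviations,
# and the model is updated with learning rate alpha.
class RunningGaussianBG:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [list(row) for row in first_frame]
        self.var = [[25.0] * len(row) for row in first_frame]
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        fg = []
        for r, row in enumerate(frame):
            out = []
            for c, x in enumerate(row):
                m, v = self.mean[r][c], self.var[r][c]
                d = x - m
                out.append(1 if d * d > (self.k ** 2) * v else 0)
                # drift the model toward the current frame
                self.mean[r][c] = m + self.alpha * d
                self.var[r][c] = (1 - self.alpha) * v + self.alpha * d * d
            fg.append(out)
        return fg
```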

  16. Deriving Total Suspended Matter Concentration from the Near-Infrared-Based Inherent Optical Properties over Turbid Waters: A Case Study in Lake Taihu

    Directory of Open Access Journals (Sweden)

    Wei Shi

    2018-02-01

    Normalized water-leaving radiance spectra nLw(λ), particle backscattering coefficients bbp(λ) in the near-infrared (NIR) wavelengths, and total suspended matter (TSM) concentrations over turbid waters are analytically correlated. To demonstrate the use of bbp(λ) in the NIR wavelengths in coastal and inland waters, we used in situ optics and TSM data to develop two TSM algorithms for measurements of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP), using backscattering coefficients at the two NIR bands, bbp(745) and bbp(862), for Lake Taihu. The correlation coefficients between the in situ TSM and the TSM concentrations modeled from bbp(745) and bbp(862) are 0.93 and 0.92, respectively. A different in situ dataset acquired between 2012 and 2016 for Lake Taihu was used to validate the performance of the NIR TSM algorithms for VIIRS-SNPP observations. TSM concentrations derived from VIIRS-SNPP observations with these two NIR bbp(λ)-based TSM algorithms matched well with in situ TSM concentrations in Lake Taihu between 2012 and 2016. The normalized root mean square errors (NRMSEs) for the two NIR algorithms are 0.234 and 0.226, respectively. The two NIR-based TSM algorithms were used to compute satellite-derived TSM concentrations to study the seasonal and interannual variability of the TSM concentration in Lake Taihu between 2012 and 2016. In fact, the NIR-based TSM algorithms are analytically based, with minimal in situ data needed to tune the coefficients. They are not sensitive to the possible nLw(λ) saturation in the visible bands for highly turbid waters, and have the potential to be used for estimating TSM concentrations in turbid waters with NIR nLw(λ) spectra similar to those in Lake Taihu.
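
    Backscattering-based TSM algorithms of this kind generically take a power-law form, TSM = A * bbp^B, with coefficients tuned on in situ matchups. The sketch below fits such a model by least squares in log-log space; the sample points and coefficients are fabricated for demonstration and are not the Lake Taihu values.

```python
import math

def fit_power_law(bbp, tsm):
    """Fit TSM = A * bbp**B by ordinary least squares in log-log space."""
    xs = [math.log(x) for x in bbp]
    ys = [math.log(y) for y in tsm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# synthetic "in situ" matchups following TSM = 30 * bbp**0.9 exactly
bbp = [0.01, 0.05, 0.1, 0.5, 1.0]
tsm = [30 * x ** 0.9 for x in bbp]
A, B = fit_power_law(bbp, tsm)
```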

  17. The operational methane retrieval algorithm for TROPOMI

    Directory of Open Access Journals (Sweden)

    H. Hu

    2016-11-01

    This work presents the operational methane retrieval algorithm for the Sentinel 5 Precursor (S5P) satellite and its performance tested on realistic ensembles of simulated measurements. The target product is the column-averaged dry air volume mixing ratio of methane (XCH4), which will be retrieved simultaneously with scattering properties of the atmosphere. The algorithm attempts to fit spectra observed by the shortwave and near-infrared channels of the TROPOspheric Monitoring Instrument (TROPOMI) spectrometer aboard S5P. The sensitivity of the retrieval performance to atmospheric scattering properties, atmospheric input data and instrument calibration errors is evaluated. In addition, we investigate the effect of inhomogeneous slit illumination on the instrument spectral response function. Finally, we discuss the cloud filters to be used operationally and as backup. We show that the required accuracy and precision of < 1% for the XCH4 product are met for clear-sky measurements over land surfaces and after appropriate filtering of difficult scenes. The algorithm is very stable, having a convergence rate of 99%. The forward model error is less than 1% for about 95% of the valid retrievals. Model errors in the input profile of water do not noticeably influence the retrieval outcome. The methane product is expected to meet the requirements if errors in input profiles of pressure and temperature remain below 0.3% and 2 K, respectively. We further find that, of all instrument calibration errors investigated here, our retrievals are most sensitive to an error in the instrument spectral response function of the shortwave infrared channel.

  18. Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.

    Science.gov (United States)

    Wang, Jiao; Deng, Zhiqiang

    2017-06-01

    A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from the US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm could be utilized to map SST in both deep offshore and particularly shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore to nearshore waters. Applications of the ANN algorithm require only the remotely sensed values from the two MODIS Aqua thermal bands 31 and 32 as input data. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms either are not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
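
    A minimal version of the regression setup, one hidden tanh layer trained by stochastic gradient descent to map two (normalized) thermal-band values to SST, can be sketched as follows. The synthetic target mimics a split-window-style linear relation; the operational algorithm is trained on MODIS/in situ matchups, not on this toy data, and its architecture is not specified here.

```python
import math
import random

def train(data, n_hidden=4, lr=0.05, epochs=3000, seed=0):
    """Fit a tiny 2-input, one-hidden-layer tanh network by plain SGD."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in data:
            h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
                 for j in range(n_hidden)]
            y = sum(w2[j] * h[j] for j in range(n_hidden)) + b2
            e = y - t                              # output error
            for j in range(n_hidden):
                g = e * w2[j] * (1 - h[j] ** 2)    # backprop through tanh
                w2[j] -= lr * e * h[j]
                w1[j][0] -= lr * g * x[0]
                w1[j][1] -= lr * g * x[1]
                b1[j] -= lr * g
            b2 -= lr * e
    def predict(x):
        h = [math.tanh(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j])
             for j in range(n_hidden)]
        return sum(w2[j] * h[j] for j in range(n_hidden)) + b2
    return predict

# synthetic matchups: normalized band values in [-1, 1] and a
# split-window-like linear "SST" target
data = [((a, b), 0.3 * a + 0.5 * b)
        for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)]
sst = train(data)
```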

  19. Light-leaking region segmentation of FOG fiber based on quality evaluation of infrared image

    Science.gov (United States)

    Liu, Haoting; Wang, Wei; Gao, Feng; Shan, Lianjie; Ma, Yuzhou; Ge, Wenqian

    2014-07-01

    To improve the assembly reliability of Fiber Optic Gyroscope (FOG), a light leakage detection system and method is developed. First, an agile movement control platform is designed to implement the pose control of FOG optical path component in 6 Degrees of Freedom (DOF). Second, an infrared camera is employed to capture the working state images of corresponding fibers in optical path component after the manual assembly of FOG; therefore the entire light transmission process of key sections in light-path can be recorded. Third, an image quality evaluation based region segmentation method is developed for the light leakage images. In contrast to the traditional methods, the image quality metrics, including the region contrast, the edge blur, and the image noise level, are firstly considered to distinguish the image characters of infrared image; then the robust segmentation algorithms, including graph cut and flood fill, are all developed for region segmentation according to the specific image quality. Finally, after the image segmentation of light leakage region, the typical light-leaking type, such as the point defect, the wedge defect, and the surface defect can be identified. By using the image quality based method, the applicability of our proposed system can be improved dramatically. Many experiment results have proved the validity and effectiveness of this method.

  20. Validation of near infrared satellite based algorithms to relative atmospheric water vapour content over land

    International Nuclear Information System (INIS)

    Serpolla, A.; Bonafoni, S.; Basili, P.; Biondi, R.; Arino, O.

    2009-01-01

    This paper presents the validation results of ENVISAT MERIS and TERRA MODIS retrieval algorithms for atmospheric Water Vapour Content (WVC) estimation over land in clear sky conditions. The MERIS algorithm exploits the radiance ratio of the absorbing channel at 900 nm with the almost absorption-free reference at 890 nm, while the MODIS one is based on the ratio of measurements centred near 0.905, 0.936, and 0.94 μm with atmospheric window reflectances at 0.865 and 1.24 μm. The first test was performed in the Mediterranean area using WVC provided by both ECMWF and AERONET. As a second step, the performances of the algorithms were tested exploiting WVC computed from radiosonde observations (RAOBs) in North East Australia. The comparisons with respect to reference WVC values showed an overestimation of WVC by MODIS (root mean square error percentage greater than 20%) and an acceptable performance of the MERIS algorithm (root mean square error percentage around 10%).
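
    The two-channel ratio principle behind such near-IR algorithms can be sketched with the commonly used transmittance model tau = exp(alpha - beta * sqrt(W)): the absorbing-to-window radiance ratio gives the water-vapour transmittance, which is then inverted for the column amount W. The coefficients below are placeholders, not the operational MERIS or MODIS values.

```python
import math

# placeholder fit coefficients of the assumed transmittance model
ALPHA, BETA = 0.02, 0.65

def wvc_from_ratio(l_absorbing, l_window):
    """Invert the two-channel ratio for the water vapour column W (g/cm^2)."""
    tau = l_absorbing / l_window          # water vapour transmittance
    return ((ALPHA - math.log(tau)) / BETA) ** 2

def ratio_from_wvc(w):
    """Forward model: transmittance for a given column, tau = exp(a - b*sqrt(W))."""
    return math.exp(ALPHA - BETA * math.sqrt(w))
```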

  1. CINE: Comet INfrared Excitation

    Science.gov (United States)

    de Val-Borro, Miguel; Cordiner, Martin A.; Milam, Stefanie N.; Charnley, Steven B.

    2017-08-01

    CINE calculates infrared pumping efficiencies that can be applied to the most common molecules found in cometary comae such as water, hydrogen cyanide or methanol. One of the main mechanisms for molecular excitation in comets is the fluorescence by the solar radiation followed by radiative decay to the ground vibrational state. This command-line tool calculates the effective pumping rates for rotational levels in the ground vibrational state scaled by the heliocentric distance of the comet. Fluorescence coefficients are useful for modeling rotational emission lines observed in cometary spectra at sub-millimeter wavelengths. Combined with computational methods to solve the radiative transfer equations based, e.g., on the Monte Carlo algorithm, this model can retrieve production rates and rotational temperatures from the observed emission spectrum.
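
    The heliocentric-distance scaling mentioned above follows from the 1/r² dilution of the solar radiation field; a one-line sketch (not CINE's actual interface):

```python
def pump_rate(g_1au, r_h):
    """Effective pumping rate (s^-1) at heliocentric distance r_h (au),
    given the rate g_1au at 1 au; the solar flux scales as 1/r^2."""
    return g_1au / r_h ** 2
```

    For example, a rate tabulated at 1 au drops by a factor of four for a comet at 2 au.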

  2. On developing B-spline registration algorithms for multi-core processors

    International Nuclear Information System (INIS)

    Shackleford, J A; Kandasamy, N; Sharp, G C

    2010-01-01

    Spline-based deformable registration methods are quite popular within the medical-imaging community due to their flexibility and robustness. However, they require a large amount of computing time to obtain adequate results. This paper makes two contributions towards accelerating B-spline-based registration. First, we propose a grid-alignment scheme and associated data structures that greatly reduce the complexity of the registration algorithm. Based on this grid-alignment scheme, we then develop highly data parallel designs for B-spline registration within the stream-processing model, suitable for implementation on multi-core processors such as graphics processing units (GPUs). Particular attention is focused on an optimal method for performing analytic gradient computations in a data parallel fashion. CPU and GPU versions are validated for execution time and registration quality. Performance results on large images show that our GPU algorithm achieves a speedup of 15 times over the single-threaded CPU implementation whereas our multi-core CPU algorithm achieves a speedup of 8 times over the single-threaded implementation. The CPU and GPU versions achieve near-identical registration quality in terms of RMS differences between the generated vector fields.
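
    The registration pipeline itself is not reproduced here, but the uniform cubic B-spline basis at the heart of such methods is standard. Evaluating the four interpolation weights for a fractional grid offset t (they form a partition of unity, which is what makes the grid-aligned evaluation cheap) can be sketched as:

```python
def cubic_bspline_weights(t):
    """Four uniform cubic B-spline basis weights for a fractional
    offset t in [0, 1); they are non-negative and sum to 1."""
    return (
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    )
```

    With the grid-alignment scheme the paper proposes, t takes only a small fixed set of values per tile, so these weights can be precomputed once and reused across all voxels, which is what makes the data-parallel GPU formulation effective.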

  3. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms owing to the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers first have to put in considerable effort to construct it. To save researchers this time and effort, here we develop a web tool that implements our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. 

  4. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the diagnostic imaging examination most commonly used for the evaluation of the sequelae of Paracoccidioidomycosis (PCM). Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of the healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)

  5. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of the allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. Developing an algorithm for the allocation of depreciation costs is particularly relevant in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Provided its adequacy conditions are met, such an algorithm allows evaluating the appropriateness of investments in fixed assets and studying the final financial results of an industrial enterprise as they depend on management decisions in the depreciation policy. It should be noted that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, corresponding to structural elements unable to generate fixed assets (some of the service units, households, and corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for a flowchart for subsequent implementation in software. The construction of such an algorithm and its use in dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.

  6. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  7. High throughput assessment of cells and tissues: Bayesian classification of spectral metrics from infrared vibrational spectroscopic imaging data.

    Science.gov (United States)

    Bhargava, Rohit; Fernandez, Daniel C; Hewitt, Stephen M; Levin, Ira W

    2006-07-01

    Vibrational spectroscopy allows a visualization of tissue constituents based on intrinsic chemical composition and provides a potential route to obtaining diagnostic markers of diseases. Characterizations utilizing infrared vibrational spectroscopy, in particular, are conventionally low throughput in data acquisition, generally lacking in spatial resolution with the resulting data requiring intensive numerical computations to extract information. These factors impair the ability of infrared spectroscopic measurements to represent accurately the spatial heterogeneity in tissue, to incorporate robustly the diversity introduced by patient cohorts or preparative artifacts and to validate developed protocols in large population studies. In this manuscript, we demonstrate a combination of Fourier transform infrared (FTIR) spectroscopic imaging, tissue microarrays (TMAs) and fast numerical analysis as a paradigm for the rapid analysis, development and validation of high throughput spectroscopic characterization protocols. We provide an extended description of the data treatment algorithm and a discussion of various factors that may influence decision-making using this approach. Finally, a number of prostate tissue biopsies, arranged in an array modality, are employed to examine the efficacy of this approach in histologic recognition of epithelial cell polarization in patients displaying a variety of normal, malignant and hyperplastic conditions. An index of epithelial cell polarization, derived from a combined spectral and morphological analysis, is determined to be a potentially useful diagnostic marker.

  8. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimation models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In many parts of the world, inclusive building inventories suitable for use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories, including incomplete or out-dated information on critical attributes as well as missing or erroneous attribute values. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images are presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, and irregularity index. The associated errors due to the interactive nature of the algorithm are quantified, and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and the results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, the concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  9. The Day-1 GPM Combined Precipitation Algorithm: IMERG

    Science.gov (United States)

    Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.

    2012-12-01

    The Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) mission (IMERG) algorithm will provide the at-launch combined-sensor precipitation dataset being produced by the U.S. GPM Science Team. IMERG is being developed as a unified U.S. algorithm that takes advantage of strengths in three current U.S. algorithms: - the TRMM Multi-satellite Precipitation Analysis (TMPA), which addresses inter-satellite calibration of precipitation estimates and monthly scale combination of satellite and gauge analyses; - the CPC Morphing algorithm with Kalman Filtering (KF-CMORPH), which provides quality-weighted time interpolation of precipitation patterns following storm motion; and - the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS), which provides a neural-network-based scheme for generating microwave-calibrated precipitation estimates from geosynchronous infrared brightness temperatures, and filters out some non-raining cold clouds. The goal is to provide a long-term, fine-scale record of global precipitation from the entire constellation of precipitation-relevant satellite sensors, with input from surface precipitation gauges. The record will begin January 1998 at the start of the Tropical Rainfall Measuring Mission (TRMM) and extend as GPM records additional data. Although homogeneity is considered desirable, the use of diverse and evolving data sources works against the strict long-term homogeneity that characterizes a Climate Data Record (CDR). 
This talk will briefly review the design requirements for IMERG, including multiple runs at different latencies (most likely around 4 hours, 12 hours, and 2 months after observation time), various intermediate data fields as part of the IMERG data file, and the plans to bring up IMERG with calibration by TRMM initially, transitioning to GPM when its individual-sensor precipitation algorithms are fully functional

  10. DEVELOPMENT OF THE ALGORITHM FOR CHOOSING THE OPTIMAL SCENARIO FOR THE DEVELOPMENT OF THE REGION'S ECONOMY

    Directory of Open Access Journals (Sweden)

    I. S. Borisova

    2018-01-01

    Full Text Available Purpose: the article deals with the development of an algorithm for choosing the optimal scenario for the development of the regional economy. Since the "Strategy for socio-economic development of the Lipetsk region for the period until 2020" does not contain scenarios for the development of the region, the algorithm for choosing the optimal scenario for the development of the regional economy is formalized. The scenarios for the development of the economy of the Lipetsk region are calculated according to the indicators of the Program of social and economic development: "Quality of life index", "Average monthly nominal wage", "Level of registered unemployment", "Growth rate of gross regional product", "Share of innovative products in the total volume of goods shipped, works performed and services rendered by industrial organizations", "Total volume of atmospheric pollution per unit of GRP" and "Satisfaction of the population with the activity of executive bodies of state power of the region". Based on these scenario calculations, the dynamics of the values of these indicators under each scenario for the development of the economy of the Lipetsk region in 2016–2020 was derived. The discounted financial costs incurred by economic participants in implementing the scenarios for the development of the economy of the Lipetsk region are estimated. It is shown that the current situation in the economy of the Russian Federation presupposes the choice of a paradigm of innovative development of territories and requires all participants in economic relations at the regional level to concentrate their resources on the creation of new science-intensive products. An assessment of the effects of implementing the proposed scenarios for the development of the economy of the Lipetsk region was carried out. It is shown that the most acceptable is the "base" scenario, which assumes a consistent change in the main indicators. The specific economic

  11. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    International Nuclear Information System (INIS)

    Balabin, Roman M.; Smirnov, Sergey V.

    2011-01-01

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm⁻¹) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIR). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural networks (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic
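
    As an illustration of the simplest strategy in such comparisons, a greedy forward variable selection wrapped around ordinary least squares (a toy stand-in for MLR-step, not the authors' implementation) can be sketched as:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedily pick k columns (wavelengths) of X that most reduce the
    least-squares residual when regressing y on the chosen columns."""
    chosen = []
    rest = list(range(X.shape[1]))
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in rest:
            A = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = ((y - A @ coef) ** 2).sum()
            if rss < best_rss:
                best, best_rss = j, rss
        chosen.append(best)
        rest.remove(best)
    return chosen
```

    Real VS methods for spectra (iPLS, UVE, GAs) differ mainly in how they search the wavelength space and in using PLS rather than plain least squares, but the wrapper structure is the same.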

  12. Development of Speckle Interferometry Algorithm and System

    International Nuclear Information System (INIS)

    Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.

    2011-01-01

    Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example for the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to track the change of the speckle pattern due to an applied force. In this project, an experimental ESPI setup is proposed to analyze a stainless steel plate using a 632.8 nm (red) laser wavelength.

  13. Development of a Framework for Genetic Algorithms

    OpenAIRE

    Wååg, Håkan

    2009-01-01

    Genetic algorithms are an optimization method that can be used to solve many different kinds of problems. This thesis focuses on developing a framework for genetic algorithms that is capable of solving at least the two problems explored in the work. Other problems are supported by allowing user-made extensions. The purpose of this thesis is to explore the possibilities of genetic algorithms for optimization problems and artificial intelligence applications. To test the framework, two applications are...

  14. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First, we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From the data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference in PageRank values...
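
    The iteration described (recompute values from the link lists until the change is negligible) is the classic damped power iteration; a minimal sketch, assuming every page has at least one outgoing link and links are given as a dict of adjacency lists:

```python
def pagerank(links, d=0.85, tol=1e-10):
    """links: dict page -> list of outgoing-link targets.
    Iterate the damped PageRank update until values change by < tol."""
    pages = sorted(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    while True:
        new = {}
        for p in pages:
            # Each page q passes its rank equally to the pages it links to.
            incoming = sum(pr[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        if max(abs(new[p] - pr[p]) for p in pages) < tol:
            return new
        pr = new
```

    With all pages having outlinks, the values sum to 1 at every iteration, which is a convenient invariant to display in a teaching web application.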

  15. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    How do understandings of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself.

  16. Synergies Between Visible/Near-Infrared Imaging Spectrometry and the Thermal Infrared in an Urban Environment: An Evaluation of the Hyperspectral Infrared Imager (HyspIRI) Mission

    Science.gov (United States)

    Roberts, Dar A.; Quattrochi, Dale A.; Hulley, Glynn C.; Hook, Simon J.; Green, Robert O.

    2012-01-01

    A majority of the human population lives in urban areas and, as such, the quality of urban environments is becoming increasingly important to the human population. Furthermore, these areas are major sources of environmental contaminants and sinks of energy and materials. Remote sensing provides an improved understanding of urban areas and their impacts by mapping urban extent, urban composition (vegetation and impervious cover fractions), and urban radiation balance through measures of albedo, emissivity and land surface temperature (LST). Recently, the National Research Council (NRC) completed an assessment of remote sensing needs for the next decade (NRC, 2007), proposing several missions suitable for urban studies, including a visible, near-infrared and shortwave infrared (VSWIR) imaging spectrometer and a multispectral thermal infrared (TIR) instrument, together forming the Hyperspectral Infrared Imager (HyspIRI) mission. In this talk, we introduce the HyspIRI mission, focusing on potential synergies between VSWIR and TIR data in an urban area. We evaluate potential synergies using an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and MODIS-ASTER (MASTER) image pair acquired over Santa Barbara, United States. The data were analyzed at their native spatial resolutions (7.5 m VSWIR and 15 m TIR) and aggregated to a 60 m spatial resolution similar to HyspIRI. Surface reflectance was calculated using ACORN and a ground reflectance target to remove atmospheric and sensor artifacts. MASTER data were processed to generate estimates of spectral emissivity and LST using the Modtran radiative transfer code and the ASTER Temperature Emissivity Separation algorithm. A spectral library of common urban materials, including urban vegetation, roofs and roads, was assembled from combined AVIRIS and field-measured reflectance spectra. LST and emissivity were also retrieved from MASTER, and reflectance/emissivity spectra for a subset of urban materials were retrieved from co-located MASTER and

  17. STAR FORMATION ACTIVITY OF CORES WITHIN INFRARED DARK CLOUDS

    International Nuclear Information System (INIS)

    Chambers, E. T.; Jackson, J. M.; Rathborne, J. M.; Simon, R.

    2009-01-01

    Infrared Dark Clouds (IRDCs) contain compact cores which probably host the early stages of high-mass star formation. Many of these cores contain regions of extended, enhanced 4.5 μm emission, the so-called 'green fuzzies', which indicate shocked gas. Many cores also contain 24 μm emission, presumably from heated dust which indicates embedded protostars. Because 'green fuzzies' and 24 μm point sources both indicate star formation, we have developed an algorithm to identify star-forming cores within IRDCs by searching for the simultaneous presence of these two distinct indicators. We employ this algorithm on a sample of 190 cores found toward IRDCs, and classify the cores as 'active' if they contain a green fuzzy coincident with an embedded 24 μm source, and as 'quiescent' if they contain neither IR signature. We hypothesize that the 'quiescent' cores represent the earliest 'preprotostellar' (starless) core phase, before the development of a warm protostar, and that the 'active' cores represent a later phase, after the development of a protostar. We test this idea by comparing the sizes, densities, and maser activity of the 'active' and 'quiescent' cores. We find that, on average, 'active' cores have smaller sizes, higher densities, and more pronounced water and methanol maser activity than the 'quiescent' cores. This is expected if the 'quiescent' cores are in an earlier evolutionary state than the 'active' cores. The masses of 'active' cores suggest that they may be forming high-mass stars. The highest mass 'quiescent' cores are excellent candidates for the elusive high-mass starless cores.
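
    The two-indicator classification rule described above is simple enough to state directly. The label for cores showing exactly one of the two signatures ('intermediate' below) is added for completeness and is an assumption of this sketch:

```python
def classify_core(has_green_fuzzy, has_24um_source):
    """Classify an IRDC core from its two star-formation indicators:
    'active' if both the shocked-gas (green fuzzy) and the embedded
    protostar (24 um) signatures are present, 'quiescent' if neither."""
    if has_green_fuzzy and has_24um_source:
        return "active"
    if not has_green_fuzzy and not has_24um_source:
        return "quiescent"
    return "intermediate"
```

    Applied to the 190-core sample, this rule is what separates the populations whose sizes, densities, and maser activity are then compared.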

  18. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    Science.gov (United States)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
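
    GARLIC's quadrature options are not reproduced here, but the Beer integral it evaluates, transmission as the exponential of the path-integrated absorption, can be sketched with the simplest choice, a trapezoidal rule:

```python
import math

def transmission(k, z):
    """Beer integral via trapezoidal quadrature: T = exp(-tau), where
    tau is the absorption coefficient k(z) integrated along the path z."""
    tau = sum(0.5 * (k[i] + k[i + 1]) * (z[i + 1] - z[i])
              for i in range(len(z) - 1))
    return math.exp(-tau)
```

    For a homogeneous path this reduces to T = exp(-kL); comparing quadrature rules against such closed-form cases is exactly the kind of assessment the paper reports.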

  19. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    Science.gov (United States)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images that tackles this problem using superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we cast the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than the traditional algorithms.
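
    One plausible reading of the multi-scale total variation feature is the mean absolute gradient of a patch evaluated at several downsampling scales (the exact definition and scales below are assumptions of this sketch, not taken from the paper). Smooth sea regions score near zero while textured land scores high:

```python
import numpy as np

def multiscale_tv(img, scales=(1, 2, 4)):
    """Mean absolute gradient ('total variation') of an image patch at
    several downsampling scales, as a texture feature separating
    smooth sea from textured land."""
    feats = []
    for s in scales:
        sub = img[::s, ::s].astype(float)
        tv = (np.abs(np.diff(sub, axis=0)).sum()
              + np.abs(np.diff(sub, axis=1)).sum())
        feats.append(tv / sub.size)
    return feats
```

    In a superpixel pipeline, such a feature vector would be computed per superpixel and combined with the gray histogram before classification.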

  20. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).
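
    The two evaluation quantities used above (scene cloud-fraction error and pixel-level agreement with a reference mask) are straightforward to compute from boolean masks; a minimal sketch:

```python
import numpy as np

def mask_scores(pred, truth):
    """Scene cloud-fraction error and pixel agreement between a
    predicted boolean cloud mask and a reference (manual) mask."""
    cf_error = abs(pred.mean() - truth.mean())   # scene fraction error
    agreement = (pred == truth).mean()           # fraction of matching pixels
    return cf_error, agreement
```

    Averaging `cf_error` (or its RMS) over a collection of scenes gives the scene-level statistics reported in the letter, while `agreement` corresponds to the pixel-level figure.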

  1. Additive Manufacturing Infrared Inspection

    Science.gov (United States)

    Gaddy, Darrell; Nettles, Mindy

    2015-01-01

    The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible, and by reducing the time and cost of additively manufactured parts through automated real-time dimensional inspections that eliminate the need for post-production inspections.

  2. Quasar Photometric Redshifts and Candidate Selection: A New Algorithm Based on Optical and Mid-infrared Photometric Data

    Science.gov (United States)

    Yang, Qian; Wu, Xue-Bing; Fan, Xiaohui; Jiang, Linhua; McGreer, Ian; Green, Richard; Yang, Jinyi; Schindler, Jan-Torge; Wang, Feige; Zuo, Wenwen; Fu, Yuming

    2017-12-01

    We present a new algorithm to estimate quasar photometric redshifts (photo-zs) by considering the asymmetries in the relative flux distributions of quasars. The relative flux models are built with multivariate Skew-t distributions in the multidimensional space of relative fluxes as a function of redshift and magnitude. For 151,392 quasars in the SDSS, we achieve a photo-z accuracy, defined as the fraction of quasars with the difference between the photo-z (z_p) and the spectroscopic redshift (z_s), |Δz| = |z_s - z_p|/(1 + z_s), within 0.1, of 74%. Combining the WISE W1 and W2 infrared data with the SDSS data, the photo-z accuracy is enhanced to 87%. Using the Pan-STARRS1 or DECaLS photometry with WISE W1 and W2 data, the photo-z accuracies are 79% and 72%, respectively. The prior probabilities as a function of magnitude for quasars, stars, and galaxies are calculated based on (1) the quasar luminosity function, (2) the Milky Way synthetic simulation with the Besançon model, and (3) the Bayesian Galaxy Photometric Redshift estimation, respectively. The relative fluxes of stars are obtained with the Padova isochrones, and the relative fluxes of galaxies are modeled through galaxy templates. We test our classification method to select quasars using the DECaLS g, r, z, and WISE W1 and W2 photometry. The quasar selection completeness is higher than 70% over a wide redshift range beginning at z = 0.5, and the method is publicly available.
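The accuracy metric used above is concrete: a photo-z counts as accurate when |z_s - z_p|/(1 + z_s) is within 0.1. A small sketch with invented redshifts:

```python
# Fraction of objects whose normalized redshift error is within a threshold.

def photoz_accuracy(z_spec, z_phot, threshold=0.1):
    """|z_s - z_p| / (1 + z_s) <= threshold counts as an accurate photo-z."""
    good = sum(abs(zs - zp) / (1.0 + zs) <= threshold
               for zs, zp in zip(z_spec, z_phot))
    return good / len(z_spec)

z_spec = [0.5, 1.2, 2.0, 3.1]    # spectroscopic redshifts (invented)
z_phot = [0.55, 1.1, 2.5, 3.0]   # photometric estimates (invented)
print(photoz_accuracy(z_spec, z_phot))  # 0.75
```

The (1 + z_s) normalization makes the tolerance scale with redshift, which is why the third object (error 0.5 at z = 2) fails while a 0.05 error at z = 0.5 passes.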

  3. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The topics covered include sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.

  4. Determination of zinc oxide content of mineral medicine calamine using near-infrared spectroscopy based on MIV and BP-ANN algorithm

    Science.gov (United States)

    Zhang, Xiaodong; Chen, Long; Sun, Yangbo; Bai, Yu; Huang, Bisheng; Chen, Keli

    2018-03-01

    Near-infrared (NIR) spectroscopy has been widely used in the analysis of traditional Chinese medicine. It has the advantages of fast analysis, no damage to samples and no pollution. In this research, a fast quantitative model for the zinc oxide (ZnO) content of the mineral medicine calamine was explored based on NIR spectroscopy. NIR spectra of 57 batches of calamine samples were collected and the first derivative (FD) method was adopted for spectral pretreatment. The ZnO content of each calamine sample was determined by ethylenediaminetetraacetic acid (EDTA) titration and taken as the reference value for NIR spectroscopy. The 57 batches of calamine samples were divided into calibration and prediction sets using the Kennard-Stone (K-S) algorithm. Firstly, in the calibration set, the correlation coefficient (r) between the absorbance and the ZnO content of the corresponding samples was calculated at each wave number. Next, the 50 wave numbers with the highest squared correlation coefficients (r2) were selected to compose the characteristic spectral bands (4081.8-4096.3, 4188.9-4274.7, 4335.4, 4763.6, 4794.4-4802.1, 4809.9, 4817.6-4875.4 cm-1), which were used to establish a quantitative model of ZnO content with the back-propagation artificial neural network (BP-ANN) algorithm. The 50 wave numbers were then processed with the mean impact value (MIV) algorithm, and those whose absolute MIV was greater than or equal to 25 were retained, giving the optimal characteristic spectral bands (4875.4-4836.9, 4223.6-4080.9 cm-1). Both internal cross validation and external validation were then used to screen the number of hidden-layer nodes of the BP-ANN, and four hidden-layer nodes were chosen as the best. Finally, the BP-ANN model was found to enjoy high accuracy and strong forecasting capacity for analyzing ZnO content in calamine samples ranging within 42.05-69.98%, with a relative mean square error of cross validation (RMSECV) of 1.66% and coefficient of
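The band-selection step described above (ranking wave numbers by r^2 between absorbance and the EDTA reference values) can be sketched as follows; the spectra, wave numbers, and ZnO values below are invented for illustration:

```python
# Select the k wave numbers whose absorbance correlates best with the
# reference ZnO content (Pearson r^2 ranking).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def top_k_wavenumbers(spectra, y, wavenumbers, k):
    """Rank wave numbers by r^2 between absorbance column and reference y."""
    scored = [(pearson_r([s[j] for s in spectra], y) ** 2, wn)
              for j, wn in enumerate(wavenumbers)]
    return [wn for _, wn in sorted(scored, reverse=True)[:k]]

wavenumbers = [4080.9, 4500.0, 4875.4]   # cm^-1, invented
spectra = [[0.10, 0.50, 0.20],           # one row of absorbances per sample
           [0.20, 0.48, 0.40],
           [0.30, 0.52, 0.61],
           [0.40, 0.49, 0.80]]
zno = [42.0, 50.0, 60.0, 70.0]           # EDTA reference values, invented
print(top_k_wavenumbers(spectra, zno, wavenumbers, k=2))
```

The middle column is uncorrelated noise, so the two informative wave numbers are kept. In the paper this ranking feeds the BP-ANN, and the MIV step then prunes the 50 selected wave numbers further.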

  5. A semi-learning algorithm for noise rejection: an fNIRS study on ADHD children

    Science.gov (United States)

    Sutoko, Stephanie; Funane, Tsukasa; Katura, Takusige; Sato, Hiroki; Kiguchi, Masashi; Maki, Atsushi; Monden, Yukifumi; Nagashima, Masako; Yamagata, Takanori; Dan, Ippeita

    2017-02-01

    In pediatric studies, the quality of functional near infrared spectroscopy (fNIRS) signals is often reduced by motion artifacts. These artifacts can mislead brain functionality analysis, causing false discoveries. While noise correction methods and their performance have been investigated, these methods require several parameter assumptions that can result in noise overfitting. In contrast, the rejection of noisy signals is a preferable method because it maintains the originality of the signal waveform. Here, we describe a semi-learning algorithm to detect and eliminate noisy signals. The algorithm dynamically adjusts noise detection according to predetermined noise criteria: spikes, unusual activation values (averaged amplitude signals within the brain activation period), and high activation variances (among trials). The criteria were organized sequentially in the algorithm, and signals were assessed in order against each criterion. By initially setting an acceptable rejection rate, particular criteria causing excessive data rejections are neglected, whereas those with tolerable rejection rates effectively eliminate noise. fNIRS data measured during an attention response paradigm (oddball task) in children with attention deficit/hyperactivity disorder (ADHD) were utilized to evaluate and optimize the algorithm's performance. This algorithm successfully substituted for the visual noise identification performed in previous studies and consistently found significantly lower activation of the right prefrontal and parietal cortices in ADHD patients than in typically developing children. Thus, we conclude that the semi-learning algorithm confers more objective and standardized judgment for noise rejection and presents a promising alternative to visual noise rejection.
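The sequential, rejection-rate-capped logic described above can be sketched as follows; the criteria and toy signal features are illustrative, not the study's actual thresholds:

```python
# Apply noise criteria in order; skip any criterion that would reject more
# than the acceptable fraction of the original signals.

def semi_learning_reject(signals, criteria, max_reject_rate=0.3):
    """criteria: list of predicates; True means 'this signal is noisy'."""
    kept = list(signals)
    for is_noisy in criteria:
        flagged = [s for s in kept if is_noisy(s)]
        if len(flagged) / len(signals) > max_reject_rate:
            continue  # criterion too aggressive for this dataset: neglect it
        kept = [s for s in kept if not is_noisy(s)]
    return kept

# toy signals: (peak_amplitude, trial_variance)
signals = [(1.0, 0.1), (9.0, 0.1), (1.2, 0.1), (1.1, 5.0), (1.0, 0.2)]
criteria = [
    lambda s: s[0] > 5.0,   # spike criterion
    lambda s: s[1] > 1.0,   # high variance among trials
    lambda s: s[0] > 0.5,   # overly strict: would reject everything
]
print(semi_learning_reject(signals, criteria))
```

The third criterion flags every remaining signal and is therefore neglected, which is exactly the behavior the abstract describes for criteria causing excessive rejections.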

  6. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The data sets were used to estimate abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning" (Gandola et al., 2016) [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; spline curves and the least squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, in order to perform precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
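The Sobel convolution chosen for the denoising step is a standard 3x3 gradient filter; a pure-Python illustration on a toy image (not the ACQUA implementation itself):

```python
# Gradient-magnitude image from the two 3x3 Sobel kernels.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude for interior pixels of a 2-D list of intensities."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# a vertical bright stripe, loosely mimicking a filament against background
img = [[0, 0, 9, 0, 0] for _ in range(5)]
mag = sobel_magnitude(img)
print(mag[2])  # strong response at the stripe's two edges
```

In ACQUA the filter separates filament edges from the smoother background signal; the spline fitting then operates on the detected filaments.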

  7. Algorithm development and verification of UASCM for multi-dimension and multi-group neutron kinetics model

    International Nuclear Information System (INIS)

    Si, S.

    2012-01-01

    The Universal Algorithm of Stiffness Confinement Method (UASCM) for neutron kinetics model of multi-dimensional and multi-group transport equations or diffusion equations has been developed. The numerical experiments based on transport theory code MGSNM and diffusion theory code MGNEM have demonstrated that the algorithm has sufficient accuracy and stability. (authors)

  8. Development of an algorithm for quantifying extremity biological tissue

    International Nuclear Information System (INIS)

    Pavan, Ana L.M.; Miranda, Jose R.A.; Pina, Diana R. de

    2013-01-01

    Computed radiography (CR) has become the most widely used technology for image acquisition and production since its introduction in the 1980s. Detection and early diagnosis, obtained via CR, are important for the successful treatment of diseases such as arthritis, metabolic bone diseases, tumors, infections and fractures. However, the standards used for optimization of these images are based on international protocols. Therefore, it is necessary to compose radiographic techniques for the CR system that provide a secure medical diagnosis with doses as low as reasonably achievable. To this end, the aim of this work was to develop a tissue-quantifying algorithm, allowing the construction of a homogeneous phantom used to compose such techniques. A database of computed tomography images of the hand and wrist of adult patients was developed. Using Matlab® software, a computational algorithm was developed to quantify the average thickness of soft tissue and bone present in the anatomical region under study, as well as the corresponding thickness of simulator materials (aluminium and lucite). This was possible through the application of masking and Gaussian histogram removal techniques. As a result, an average soft tissue thickness of 18.97 mm and bone tissue thickness of 6.15 mm were obtained, with equivalents in simulator materials of 23.87 mm of acrylic and 1.07 mm of aluminum. The results agreed with the average thickness of biological tissues in a standard patient's hand, enabling the construction of a homogeneous phantom.

  9. Design and development of wafer-level near-infrared micro-camera

    Science.gov (United States)

    Zeller, John W.; Rouse, Caitlin; Efstathiadis, Harry; Haldar, Pradeep; Dhar, Nibir K.; Lewis, Jay S.; Wijewarnasuriya, Priyalal; Puri, Yash R.; Sood, Ashok K.

    2015-08-01

    SiGe offers a low-cost alternative to conventional infrared sensor material systems such as InGaAs, InSb, and HgCdTe for developing near-infrared (NIR) photodetector devices that do not require cooling and can offer high bandwidths and responsivities. As a result of the significant difference in thermal expansion coefficients between germanium and silicon, tensile strain incorporated into Ge epitaxial layers deposited on Si utilizing specialized growth processes can extend the operational range of detection to 1600 nm and longer wavelengths. We have fabricated SiGe based PIN detector devices on 300 mm diameter Si wafers in order to take advantage of high throughput, large-area complementary metal-oxide semiconductor (CMOS) technology. This device fabrication process involves low temperature epitaxial deposition of Ge to form a thin p+ seed/buffer layer, followed by higher temperature deposition of a thicker Ge intrinsic layer. An n+-Ge layer formed by ion implantation of phosphorus, passivating oxide cap, and then top copper contacts complete the PIN photodetector design. Various techniques including transmission electron microscopy (TEM) and secondary ion mass spectrometry (SIMS) have been employed to characterize the material and structural properties of the epitaxial growth and fabricated detector devices. In addition, electrical characterization was performed to compare the I-V dark current vs. photocurrent response as well as the time and wavelength varying photoresponse properties of the fabricated devices, results of which are likewise presented.

  10. Infrared laser-induced chemical reactions

    International Nuclear Information System (INIS)

    Katayama, Mikio

    1978-01-01

    An experimental means of clearly distinguishing infrared-induced reactions from thermal reactions first became available when the development of the infrared laser provided an intense monochromatic light source. Consequently, infrared laser-induced chemical reactions have started to develop as one field of chemical reaction research. Research on laser-induced chemical reactions became a new means of studying chemical reactions after being highlighted as a promising new technique for isotope separation. Specifically, since success in 235U separation using lasers was reported in 1974, this method has been compared with conventional separation techniques from the economic point of view, and some estimated that laser isotope separation is cheaper. This report briefly describes the excitation of oscillation and reaction rate, and introduces the chemical reactions induced by CW lasers and TEA CO2 lasers. The dependence of reaction yield on laser power, measurement of the absorbed quantity of infrared radiation, and the excitation mechanism are explained. Next, isomerization reactions are reported, and finally, isotope separation is explained. It was found that infrared laser-induced chemical reactions have selectivity for isotopes. Since it is evident that there are many examples different from thermal and photochemical reactions, future collection of data is expected. (Wakatsuki, Y.)

  11. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    Science.gov (United States)

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nL(w)(λ) and diffuse attenuation coefficient at 490 nm (K(d)(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.
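The iterative scheme is described above only at a high level. The sketch below shows the generic fixed-point structure such a loop can take; the nLw(NIR)-vs-Kd(490) relationship and the Kd retrieval used here are placeholder stand-ins invented for illustration, not the published regional model:

```python
# Generic fixed-point iteration: assume zero NIR water-leaving signal,
# retrieve Kd(490), update the NIR water contribution from the empirical
# relation, and repeat until the estimate converges.

def iterate_nir_correction(rho_nir_obs, retrieve_kd, nir_from_kd,
                           tol=1e-6, max_iter=20):
    nir_water = 0.0
    kd = None
    for _ in range(max_iter):
        aerosol = rho_nir_obs - nir_water   # NIR signal attributed to aerosol
        kd = retrieve_kd(aerosol)           # stand-in for the ocean retrieval
        new_nir_water = nir_from_kd(kd)     # stand-in empirical relation
        if abs(new_nir_water - nir_water) < tol:
            break
        nir_water = new_nir_water
    return nir_water, kd

# placeholder relationships, chosen only so the loop has a fixed point
retrieve_kd = lambda aerosol: 0.1 + 2.0 * aerosol
nir_from_kd = lambda kd: 0.05 * kd
nir_water, kd = iterate_nir_correction(0.02, retrieve_kd, nir_from_kd)
print(round(nir_water, 6), round(kd, 6))
```

The point of the iteration is that the NIR water-leaving signal and the retrieved water properties depend on each other in turbid water, so neither can be computed in a single pass.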

  12. The Development of Novel Near-Infrared (NIR) Tetraarylazadipyrromethene Fluorescent Dyes

    Directory of Open Access Journals (Sweden)

    Young-Tae Chang

    2013-05-01

    Novel structures of a near-infrared (NIR) tetraarylazadipyrromethene (aza-BODIPY) series have been prepared. We designed the core structure containing two amido groups at the para-position of the aromatic rings. The amido group was incorporated to secure insensitivity to pH and to ensure a bathochromic shift to the NIR region. Forty aza-BODIPY compounds were synthesized by substitution of the acetyl group with commercial amines on the alpha bromide. The physicochemical properties and photostability were investigated, and the fluorescence emission maxima (745-755 nm) were found to be in the near-infrared (NIR) range.

  13. SENSOR++: Simulation of Remote Sensing Systems from Visible to Thermal Infrared

    Science.gov (United States)

    Paproth, C.; Schlüßler, E.; Scherbaum, P.; Börner, A.

    2012-07-01

    During the development process of a remote sensing system, the optimization and verification of the sensor system are important tasks. To support these tasks, simulation of the sensor and its output is valuable. This enables developers to test algorithms, estimate errors, and evaluate the capabilities of the whole sensor system before the final remote sensing system is available and produces real data. The presented simulation concept, SENSOR++, consists of three parts. The first part is the geometric simulation, which calculates where the sensor looks by using a ray tracing algorithm; this also determines whether the observed part of the scene is shadowed. The second part describes the radiometry and yields the spectral at-sensor radiance from the visible spectrum to the thermal infrared, according to the simulated sensor type. In the case of Earth remote sensing, it also includes a model of the radiative transfer through the atmosphere. The final part uses the at-sensor radiance to generate digital images with an optical and an electronic sensor model. Using SENSOR++ for an optimization requires the additional application of task-specific data processing algorithms. The principle of the simulation approach is explained, all relevant concepts of SENSOR++ are discussed, and first examples of its use are given, for example a camera simulation for a moon lander. Finally, the verification of SENSOR++ is demonstrated.

  14. Comparative investigation of Fourier transform infrared (FT-IR) spectroscopy and X-ray diffraction (XRD) in the determination of cotton fiber crystallinity.

    Science.gov (United States)

    Liu, Yongliang; Thibodeaux, Devron; Gamble, Gary; Bauer, Philip; VanDerveer, Don

    2012-08-01

    Despite considerable efforts in developing curve-fitting protocols to evaluate the crystallinity index (CI) from X-ray diffraction (XRD) measurements, in its present state XRD can only provide a qualitative or semi-quantitative assessment of the amounts of crystalline or amorphous fraction in a sample. The greatest barrier to establishing quantitative XRD is the lack of appropriate cellulose standards, which are needed to calibrate the XRD measurements. In practice, samples with known CI are very difficult to prepare or determine. In a previous study, we reported the development of a simple algorithm for determining fiber crystallinity information from Fourier transform infrared (FT-IR) spectroscopy. Hence, in this study we not only compared the fiber crystallinity information between FT-IR and XRD measurements, by developing a simple XRD algorithm in place of a time-consuming and subjective curve-fitting process, but we also suggested a direct way of determining cotton cellulose CI by calibrating XRD with the use of CI(IR) as references.
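The record does not spell out its "simple XRD algorithm". The best-known two-point alternative to curve fitting is the Segal crystallinity index, CI = (I002 - Iam)/I002 x 100, using the (002) peak height near 22.7 degrees 2-theta for cellulose I and the intensity minimum near 18 degrees; the sketch below illustrates that idea only and may differ from the authors' method:

```python
# Two-point (Segal-style) crystallinity index from a diffractogram given as
# parallel lists of 2-theta angles and intensities. Ranges are typical values
# for cellulose I, not taken from the paper.

def segal_ci(two_theta, intensity,
             peak_range=(22.0, 23.5), am_range=(17.5, 19.0)):
    def max_in(lo, hi):
        return max(i for t, i in zip(two_theta, intensity) if lo <= t <= hi)
    def min_in(lo, hi):
        return min(i for t, i in zip(two_theta, intensity) if lo <= t <= hi)
    i_002 = max_in(*peak_range)   # crystalline (002) peak height
    i_am = min_in(*am_range)      # amorphous background intensity
    return 100.0 * (i_002 - i_am) / i_002

two_theta = [17.5, 18.0, 18.5, 22.0, 22.7, 23.5]   # degrees, invented
intensity = [310.0, 300.0, 320.0, 900.0, 1500.0, 800.0]
print(round(segal_ci(two_theta, intensity), 1))  # 80.0
```

Such a two-point index avoids the subjective peak deconvolution the abstract criticizes, which is precisely why a "simple" XRD algorithm is attractive for calibration against FT-IR.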

  15. Advancements in the Development of an Operational Lightning Jump Algorithm for GOES-R GLM

    Science.gov (United States)

    Shultz, Chris; Petersen, Walter; Carey, Lawrence

    2011-01-01

    Rapid increases in total lightning have been shown to precede the manifestation of severe weather at the surface. These rapid increases have been termed lightning jumps and are the current focus of algorithm development for the GOES-R Geostationary Lightning Mapper (GLM). Recent lightning jump algorithm work has focused on evaluation of algorithms in three additional regions of the country, as well as markedly increasing the number of thunderstorms in order to evaluate each algorithm's performance on a larger population of storms. Lightning characteristics of just over 600 thunderstorms have been studied over the past four years. The 2σ lightning jump algorithm continues to show the most promise for an operational lightning jump algorithm, with a probability of detection of 82%, a false alarm rate of 35%, a critical success index of 57%, and a Heidke Skill Score of 0.73 on the entire population of thunderstorms. Average lead time for the 2σ algorithm on all severe weather is 21.15 minutes, with a standard deviation of +/- 14.68 minutes. For tornadoes alone, the average lead time is 18.71 minutes, with a standard deviation of +/- 14.88 minutes. Moreover, removing the 2σ lightning jumps that occur after a jump has been detected, and before severe weather is detected at the ground, drops the 2σ lightning jump algorithm's false alarm rate from 35% to 21%. Cold season, low-topped, and tropical environments cause problems for the 2σ lightning jump algorithm, due to their relative dearth of lightning as compared to a supercellular or summertime airmass thunderstorm environment.
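The skill metrics quoted above follow from a 2x2 contingency table of hits (a), false alarms (b), misses (c), and correct nulls (d). A sketch with counts chosen only to reproduce the quoted POD, FAR, and CSI (the correct-null count is invented, so the HSS here differs from the quoted value):

```python
# Standard forecast verification scores from a 2x2 contingency table.

def skill_scores(a, b, c, d):
    pod = a / (a + c)                     # probability of detection
    far = b / (a + b)                     # false alarm ratio
    csi = a / (a + b + c)                 # critical success index
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (b + d) * (c + d)) / n
    hss = (a + d - expected) / (n - expected)  # Heidke Skill Score
    return pod, far, csi, hss

pod, far, csi, hss = skill_scores(a=82, b=44, c=18, d=100)
print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} HSS={hss:.2f}")
```

Note how removing false alarms (shrinking b) directly lowers FAR while leaving POD untouched, which is the effect described for discarding jumps that occur after an initial detection.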

  16. A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements

    Science.gov (United States)

    Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.

    2016-01-01

    The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
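A toy reduction of the two-wavelength gradient idea above: flag a cloud top where the radiance drops sharply with altitude at both a visible and a near-IR wavelength (aerosols tend to show a stronger spectral dependence). The profiles and threshold are invented; this is not the operational OMPS/LP algorithm:

```python
# Vertical radiance gradients at two wavelengths; a layer counts as a cloud
# top only when the gradient is strongly negative at BOTH wavelengths.

def vertical_gradient(radiance, altitude):
    return [(radiance[i + 1] - radiance[i]) / (altitude[i + 1] - altitude[i])
            for i in range(len(radiance) - 1)]

def cloud_top_index(rad_vis, rad_nir, altitude, threshold=-0.5):
    g_vis = vertical_gradient(rad_vis, altitude)
    g_nir = vertical_gradient(rad_nir, altitude)
    for i in range(len(g_vis)):
        if g_vis[i] < threshold and g_nir[i] < threshold:
            return i          # index of the layer containing the cloud top
    return None

altitude = [8.0, 9.0, 10.0, 11.0]   # km
rad_vis  = [5.0, 4.9, 3.0, 2.9]     # arbitrary units, invented
rad_nir  = [4.0, 3.9, 2.1, 2.0]
print(cloud_top_index(rad_vis, rad_nir, altitude))
```

Requiring agreement at both wavelengths is the spectral-dependence test that, per the abstract, discriminates clouds from aerosols better than a single wavelength can.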

  17. Technical Note: Interference errors in infrared remote sounding of the atmosphere

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2007-07-01

    Classical error analysis in remote sounding distinguishes between four classes: "smoothing errors," "model parameter errors," "forward model errors," and "retrieval noise errors". For infrared sounding "interference errors", which, in general, cannot be described by these four terms, can be significant. Interference errors originate from spectral residuals due to "interfering species" whose spectral features overlap with the signatures of the target species. A general method for quantification of interference errors is presented, which covers all possible algorithmic implementations, i.e., fine-grid retrievals of the interfering species or coarse-grid retrievals, and cases where the interfering species are not retrieved. In classical retrieval setups interference errors can exceed smoothing errors and can vary by orders of magnitude due to state dependency. An optimum strategy is suggested which practically eliminates interference errors by systematically minimizing the regularization strength applied to joint profile retrieval of the interfering species. This leads to an interfering-species selective deweighting of the retrieval. Details of microwindow selection are no longer critical for this optimum retrieval and widened microwindows even lead to reduced overall (smoothing and interference errors. Since computational power will increase, more and more operational algorithms will be able to utilize this optimum strategy in the future. The findings of this paper can be applied to soundings of all infrared-active atmospheric species, which include more than two dozen different gases relevant to climate and ozone. This holds for all kinds of infrared remote sounding systems, i.e., retrievals from ground-based, balloon-borne, airborne, or satellite spectroradiometers.

  18. Detection of Melamine in Soybean Meal Using Near-Infrared Microscopy Imaging with Pure Component Spectra as the Evaluation Criteria

    Directory of Open Access Journals (Sweden)

    Zengling Yang

    2016-01-01

    Soybean meal has been adulterated with melamine to boost its apparent protein content for unlawful profit. In recent years, the near-infrared (NIR) spectroscopy technique has been widely used for guaranteeing food and feed security owing to its fast, nondestructive, and pollution-free characteristics. However, there are problems with using NIR spectroscopy for detecting samples with low contaminant concentrations because of instrument noise and sampling issues. In addition, methods based on NIR are indirect and depend on calibration models. NIR microscopy imaging offers the opportunity to investigate the chemical species present in food and feed at the microscale level (the minimum spot size is a few micrometers), thus avoiding the problem of the spectral features of contaminants being diluted by scanning. The aim of this work was to investigate the feasibility of using NIR microscopy imaging to identify melamine particles in soybean meal using only the pure component spectrum. The results presented indicate that using the classical least squares (CLS) algorithm with the nonnegative least squares (NNLS) algorithm, without needing first to develop a calibration model, could identify soybean meal both uncontaminated and contaminated with melamine particles at levels as low as 50 mg kg−1.

  19. An automatic driving system for a Baker's garlic [Allium chinense] planter: Development of the infrared beam guidance system

    International Nuclear Information System (INIS)

    Tanaka, H.; Iwasaki, M.; Takeda, H.

    2002-01-01

    We have developed a tractor-attachment-type semi-automatic Baker's garlic (shallot) planter to reduce the hard labor required during planting. The velocity of the tractor in operation is so slow (2 to 3 m/min) that the driver's hands are tied up for a long time. This is an obstacle to the planter's adoption, because in most Japanese farmhouses the farm managers have to drive their own tractors by themselves, yet they have other jobs to do during the planting season. We designed a new automatic driving system that consists of one infrared beam radiator and two infrared beam receivers to solve this problem. The infrared radiator is located in front of the tractor and projects an infrared guide line along the tractor's path. The infrared receivers are mounted on the front of the tractor and detect the infrared beam from the radiator. The receivers are arranged symmetrically at 4.5 degrees from the center of the tractor, so misalignment of the tractor creates a difference in sensitivity between the receivers and makes it possible to determine the tractor's direction relative to the infrared beam. This system was tested under sand dune field conditions with a tractor converted to automatic driving. The results show the system can effectively steer about 80 m automatically along an almost straight path, with an error from the starting point within 0.1 m.
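The two-receiver guidance logic can be modeled as a differential intensity measurement: each receiver's response falls off with its angular offset from the beam, and the difference between the two responses gives the steering direction. The response function and gain below are invented for illustration:

```python
import math

def receiver_intensity(receiver_angle_deg, beam_angle_deg, beamwidth_deg=10.0):
    """Idealized receiver response, falling off with offset from the beam."""
    offset = receiver_angle_deg - beam_angle_deg
    return math.exp(-(offset / beamwidth_deg) ** 2)

def steering_command(beam_apparent_deg, gain=1.0):
    """beam_apparent_deg: beam direction seen in the tractor frame
    (positive = to the right). Positive command steers right."""
    left = receiver_intensity(-4.5, beam_apparent_deg)    # receiver at -4.5 deg
    right = receiver_intensity(+4.5, beam_apparent_deg)   # receiver at +4.5 deg
    return gain * (right - left)

print(steering_command(0.0))       # aligned: balanced receivers, command 0.0
print(steering_command(3.0) > 0)   # beam appears to the right: steer right
```

The symmetric +/- 4.5 degree placement makes the balanced condition (equal intensities) correspond exactly to the tractor pointing along the beam.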

  20. Remote Estimation of Chlorophyll-a in Inland Waters by a NIR-Red-Based Algorithm: Validation in Asian Lakes

    Directory of Open Access Journals (Sweden)

    Gongliang Yu

    2014-04-01

    Satellite remote sensing is a highly useful tool for monitoring chlorophyll-a concentration (Chl-a) in water bodies. Remote sensing algorithms based on near-infrared and red (NIR-red) wavelengths have demonstrated great potential for retrieving Chl-a in inland waters. This study tested the performance of a recently developed NIR-red based algorithm, SAMO-LUT (Semi-Analytical Model Optimizing and Look-Up Tables), using an extensive dataset collected from five Asian lakes. Results demonstrated that Chl-a retrieved by the SAMO-LUT algorithm was strongly correlated with measured Chl-a (R2 = 0.94), and the root-mean-square error (RMSE) and normalized root-mean-square error (NRMS) were 8.9 mg∙m−3 and 72.6%, respectively. However, the SAMO-LUT algorithm yielded large errors for sites where Chl-a was less than 10 mg∙m−3 (RMSE = 1.8 mg∙m−3 and NRMS = 217.9%). This was because differences in water-leaving radiances at the NIR-red wavelengths used in the SAMO-LUT (i.e., 665 nm, 705 nm and 754 nm) were too small due to low concentrations of water constituents. Using a blue-green algorithm (OC4E) instead of the SAMO-LUT for waters with low constituent concentrations would have reduced the RMSE and NRMS to 1.0 mg∙m−3 and 16.0%, respectively. This indicates that (1) the NIR-red algorithm does not work well when water constituent concentrations are relatively low; (2) different algorithms should be used in light of water constituent concentration; and thus (3) it is necessary to develop a classification method for selecting the appropriate algorithm.
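The RMSE and NRMS figures quoted above follow standard definitions; here NRMS is assumed to be the RMSE normalized by the mean of the measured values (the paper may normalize differently), and the Chl-a values are invented:

```python
import math

def rmse(measured, estimated):
    """Root-mean-square error between paired measurements and estimates."""
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated))
                     / len(measured))

def nrms(measured, estimated):
    """RMSE as a percentage of the mean measured value (assumed definition)."""
    mean = sum(measured) / len(measured)
    return 100.0 * rmse(measured, estimated) / mean

chl_measured  = [5.0, 20.0, 35.0, 60.0]   # mg m^-3, illustrative
chl_estimated = [6.0, 18.0, 38.0, 58.0]
print(round(rmse(chl_measured, chl_estimated), 3))
print(round(nrms(chl_measured, chl_estimated), 1))
```

A percentage-based NRMS explains why the low-Chl-a sites show a small absolute RMSE (1.8 mg m^-3) yet a very large NRMS (217.9%): the errors are small in absolute terms but large relative to the tiny concentrations involved.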

  1. Forecasting pulsatory motion for non-invasive cardiac radiosurgery: an analysis of algorithms from respiratory motion prediction.

    Science.gov (United States)

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2011-01-01

    Recently, radiosurgical treatment of cardiac arrhythmia, especially atrial fibrillation, has been proposed. Using the CyberKnife, focussed radiation will be used to create ablation lines on the beating heart to block unwanted electrical activity. Since this procedure requires high accuracy, the inevitable latency of the system (i.e., the robotic manipulator following the motion of the heart) has to be compensated for. We examine the applicability of prediction algorithms developed for respiratory motion prediction to the prediction of pulsatory motion. We evaluated the MULIN, nLMS, wLMS, SVRpred and EKF algorithms. The test data used has been recorded using external infrared position sensors, 3D ultrasound and the NavX catheter systems. With this data, we have shown that the error from latency can be reduced by at least 10 and as much as 75% (44% average), depending on the type of signal. It has also been shown that, although the SVRpred algorithm was successful in most cases, it was outperformed by the simple nLMS algorithm, the EKF or the wLMS algorithm in a number of cases. We have shown that prediction of cardiac motion is possible and that the algorithms known from respiratory motion prediction are applicable. Since pulsation is more regular than respiration, more research will have to be done to improve frequency-tracking algorithms, like the EKF method, which performed better than expected from their behaviour on respiratory motion traces.
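Of the algorithms compared above, nLMS is the simplest: an adaptive FIR filter whose weights are updated from the normalized prediction error. A minimal one-step-ahead sketch (filter length, step size, and the sinusoidal stand-in for a pulsatile trace are arbitrary choices, not values from the study):

```python
import math

def nlms_predict(signal, m=4, mu=0.5, eps=1e-8):
    """One-step-ahead predictions for signal[m:] with a normalized LMS filter."""
    w = [0.0] * m
    predictions = []
    for t in range(m, len(signal)):
        x = signal[t - m:t]                           # last m samples
        y_hat = sum(wi * xi for wi, xi in zip(w, x))  # predict signal[t]
        predictions.append(y_hat)
        err = signal[t] - y_hat
        norm = sum(xi * xi for xi in x) + eps         # input-power normalization
        w = [wi + mu * err * xi / norm for wi, xi in zip(w, x)]
    return predictions

signal = [math.sin(0.3 * t) for t in range(200)]      # stand-in pulsatile trace
preds = nlms_predict(signal)
late_err = max(abs(p - s) for p, s in zip(preds[-50:], signal[-50:]))
print(late_err < 0.05)  # the filter has adapted by the end of the trace
```

Because pulsation is more regular than respiration, even such a simple adaptive predictor locks onto a quasi-periodic trace quickly, which is consistent with the finding that nLMS sometimes outperformed the more elaborate SVRpred.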

  2. A Cross-Sectional Survey of Near-Infrared Spectroscopy Use in Pediatric Cardiac ICUs in the United Kingdom, Ireland, Italy, and Germany.

    Science.gov (United States)

    Hoskote, Aparna U; Tume, Lyvonne N; Trieschmann, Uwe; Menzel, Christoph; Cogo, Paola; Brown, Katherine L; Broadhead, Michael W

    2016-01-01

    Despite the increasing use of near-infrared spectroscopy across pediatric cardiac ICUs, there is significant variability and equipoise, with no universally accepted management algorithms. We aimed to explore the use of near-infrared spectroscopy in pediatric cardiac ICUs in the United Kingdom, Ireland, Italy, and Germany. A cross-sectional multicenter, multinational electronic survey of one consultant in each pediatric cardiac ICU. Pediatric cardiac ICUs in the United Kingdom and Ireland (n = 13), Italy (n = 12), and Germany (n = 33). Questionnaire targeted to establish use, targets, protocols/thresholds for intervention, and perceived usefulness of near-infrared spectroscopy monitoring. Overall, 42 of 58 pediatric cardiac ICUs (72%) responded: United Kingdom and Ireland, 11 of 13 (84.6%); Italy, 12 of 12 (100%); and Germany, 19 of 33 (57%, including all major centers). Near-infrared spectroscopy usage varied: 35% (15/42) reported that near-infrared spectroscopy was not used at all (7/42) or only occasionally (8/42); near-infrared spectroscopy use was much less common in the United Kingdom (46%) when compared with 78% in Germany and all (100%) in Italy. Only four units had a near-infrared spectroscopy protocol, and 18 specifically used near-infrared spectroscopy in high-risk patients; 37 respondents believed that near-infrared spectroscopy added value to standard monitoring and 23 believed that it gave an earlier indication of deterioration, but only 19 would respond based on near-infrared spectroscopy data alone. Targets for absolute values and critical thresholds for intervention varied widely between units. The reasons cited for not or only occasionally using near-infrared spectroscopy were expense (n = 6), limited evidence and uncertainty on how it guides management (n = 4), difficulty in interpretation, and unreliability of data (n = 3). Amongst the regular or occasional near-infrared spectroscopy users (n = 35), 28 (66%) agreed that a multicenter study is warranted.

  3. Geographic atrophy segmentation in infrared and autofluorescent retina images using supervised learning.

    Science.gov (United States)

    Devisetti, K; Karnowski, T P; Giancardo, L; Li, Y; Chaum, E

    2011-01-01

    Geographic atrophy (GA) of the retinal pigment epithelium (RPE) is an advanced form of atrophic age-related macular degeneration (AMD) and is responsible for about 20% of AMD-related legal blindness in the United States. Two different retinal imaging modalities, infrared imaging and autofluorescence imaging, serve as interesting complementary technologies for highlighting GA. In this work we explore the use of neural network classifiers for segmenting GA in registered infrared (IR) and autofluorescence (AF) images. Our segmentation achieved 82.5% sensitivity and 92.9% specificity on a per-pixel basis using hold-one-out validation testing. The algorithm, feature extraction, data set and experimental results are discussed.
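    The per-pixel sensitivity and specificity figures quoted above can be computed from a predicted segmentation mask and a ground-truth mask; a minimal sketch with toy masks (not the paper's data):

```python
def pixel_metrics(pred, truth):
    """Per-pixel sensitivity and specificity for flat binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 0, 0, 0, 0, 0]   # ground-truth GA pixels
pred  = [1, 1, 0, 0, 0, 0, 1, 0]   # classifier output
sens, spec = pixel_metrics(pred, truth)   # 2/3 and 4/5
```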

  4. An improved algorithm for calculating cloud radiation

    International Nuclear Information System (INIS)

    Yuan Guibin; Sun Xiaogang; Dai Jingmin

    2005-01-01

    Cloud radiation characteristics are very important in cloud scene simulation, weather forecasting, pattern recognition, and other fields. In order to detect missiles against cloud backgrounds and to enhance the fidelity of simulation, it is critical to understand a cloud's thermal radiation model. First, the definition of cloud layer infrared emittance is given. Second, the conditions for judging whether a pixel of a satellite focal plane sees daytime or nighttime are shown and the corresponding equations are given. Radiance components such as reflected solar radiance, solar scattering, diffuse solar radiance, solar and thermal skyshine, solar and thermal path radiance, cloud blackbody radiance and background radiance are taken into account. Third, the methods for computing background radiance for daytime and nighttime are given. Through simulations and comparison, this algorithm is shown to be an effective method for calculating cloud radiation.
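    The bookkeeping described above, discriminating daytime from nighttime pixels and summing the applicable radiance components, can be sketched as follows; the discrimination test and component values are placeholders, not the paper's actual equations:

```python
def pixel_radiance(solar_elevation_deg, components):
    """Sum the radiance components that apply to a pixel;
    solar terms contribute only in daytime."""
    daytime = solar_elevation_deg > 0.0   # assumed discrimination condition
    solar_terms = ("reflected_solar", "solar_scattering", "diffuse_solar",
                   "solar_skyshine", "solar_path")
    total = (components["cloud_thermal"] + components["thermal_skyshine"]
             + components["thermal_path"] + components["background"])
    if daytime:
        total += sum(components[k] for k in solar_terms)
    return total

comp = {"cloud_thermal": 5.0, "thermal_skyshine": 1.0, "thermal_path": 0.5,
        "background": 2.0, "reflected_solar": 3.0, "solar_scattering": 0.7,
        "diffuse_solar": 0.4, "solar_skyshine": 0.2, "solar_path": 0.3}
day = pixel_radiance(30.0, comp)     # all components
night = pixel_radiance(-10.0, comp)  # thermal terms only
```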

  5. Next-generation mid-infrared sources

    Science.gov (United States)

    Jung, D.; Bank, S.; Lee, M. L.; Wasserman, D.

    2017-12-01

    The mid-infrared (mid-IR) is a wavelength range with a variety of technologically vital applications in molecular sensing, security and defense, energy conservation, and potentially in free-space communication. The recent development and rapid commercialization of new coherent mid-infrared sources have spurred significant interest in the development of mid-infrared optical systems for the above applications. However, optical systems designers still do not have the extensive optical infrastructure available to them that exists at shorter wavelengths (for instance, in the visible and near-IR/telecom wavelengths). Even in the field of optoelectronic sources, which has largely driven the growing interest in the mid-infrared, the inherent limitations of state-of-the-art sources and the gaps in spectral coverage offer opportunities for the development of new classes of lasers, light emitting diodes and emitters for a range of potential applications. In this topical review, we will first present an overview of the current state-of-the-art mid-IR sources, in particular thermal emitters, which have long been utilized, and the relatively new quantum- and interband-cascade lasers, as well as the applications served by these sources. Subsequently, we will discuss potential mid-infrared applications and wavelength ranges which are poorly served by the current stable of mid-IR sources, with an emphasis on understanding the fundamental limitations of the current source technology. The bulk of the manuscript will then explore both past and recent developments in mid-infrared source technology, including narrow bandgap quantum well lasers, type-I and type-II quantum dot materials, type-II superlattices, highly mismatched alloys, lead-salts and transition-metal-doped II-VI materials. We will discuss both the advantages and limitations of each of the above material systems, as well as the potential new applications which they might serve. All in all, this topical review does not aim

  6. Viability of infrared FEL facilities

    International Nuclear Information System (INIS)

    Schwettman, H.A.

    2004-01-01

    Infrared FELs have broken important ground in optical science in the past decade. The rapid development of optical parametric amplifiers and oscillators, and THz sources, however, has changed the competitive landscape and compelled FEL facilities to identify and exploit their unique advantages. The viability of infrared FEL facilities depends on targeting unique world-class science and providing adequate experimental beam time at competitive costs

  7. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
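    The kind of iterative reconstruction such building blocks compose can be illustrated with a SIRT-style loop on a tiny dense system. This is a generic sketch, not the ASTRA API; real toolboxes apply projector operators on GPUs rather than explicit matrices:

```python
def sirt(A, b, iterations=200):
    """SIRT-style update x <- x + C A^T R (b - A x), with row/column
    sums of A as the R and C preconditioners."""
    rows, cols = len(A), len(A[0])
    R = [1.0 / max(sum(abs(v) for v in A[i]), 1e-12) for i in range(rows)]
    C = [1.0 / max(sum(abs(A[i][j]) for i in range(rows)), 1e-12)
         for j in range(cols)]
    x = [0.0] * cols
    for _ in range(iterations):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(cols))
             for i in range(rows)]                       # residual
        x = [x[j] + C[j] * sum(A[i][j] * R[i] * r[i] for i in range(rows))
             for j in range(cols)]                       # back-projected update
    return x

A = [[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]]   # toy "projection" matrix
b = [3.0, 1.0, 2.0]                        # consistent data for x = (1, 2)
x = sirt(A, b)                             # converges to (1, 2)
```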

  8. A sonification algorithm for developing the off-roads models for driving simulators

    Science.gov (United States)

    Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia

    2018-01-01

    In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, which is suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, which lead to inhomogeneities of the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and unrealistic distributions of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems, and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual-reality environment with increased realism for driving simulators and higher user flexibility.

  9. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  10. Development of a Crosstalk Suppression Algorithm for KID Readout

    Science.gov (United States)

    Lee, Kyungmin; Ishitsuka, H.; Oguri, S.; Suzuki, J.; Tajima, O.; Tomita, N.; Won, Eunil; Yoshida, M.

    2018-06-01

    The GroundBIRD telescope aims to detect the B-mode polarization of the cosmic microwave background radiation using a kinetic inductance detector (KID) array as a polarimeter. For the readout of the signal from the detector array, we have developed a frequency-division multiplexing readout system based on a digital down-converter method. Such techniques generally suffer from leakage problems caused by crosstalk. A window function was applied in the field-programmable gate arrays to mitigate these problems and was tested at the algorithm level.
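    The leakage-mitigation idea, applying a window function before the channelizing transform, can be illustrated with a plain DFT: an off-bin tone leaks far less into a distant channel once a Hann window is applied. The tone frequency, bin choices and window are illustrative:

```python
import cmath, math

def dft_bin(samples, k):
    """Single DFT bin of a real signal."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples))

N = 256
tone = [math.cos(2 * math.pi * 10.5 * i / N) for i in range(N)]      # off-bin tone
hann = [0.5 - 0.5 * math.cos(2 * math.pi * i / N) for i in range(N)]  # periodic Hann

far_bin = 40  # a "neighboring channel" well away from the tone
leak_rect = abs(dft_bin(tone, far_bin))
leak_hann = abs(dft_bin([t * w for t, w in zip(tone, hann)], far_bin))
```

    The window also halves the in-band amplitude (coherent gain 0.5); what matters here is the many-fold drop in out-of-band leakage.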

  11. Infrared source test

    Energy Technology Data Exchange (ETDEWEB)

    Ott, L.

    1994-11-15

    The purpose of the Infrared Source Test (IRST) is to demonstrate the ability to track a ground target with an infrared sensor from an airplane. The system is being developed within the Advanced Technology Program's Theater Missile Defense/Unmanned Aerial Vehicle (UAV) section. The IRST payload consists of an Amber Radiance 1 infrared camera system, a computer, a gimbaled mirror, and a hard disk. The processor is a custom R3000 CPU board made by Risq Modular Systems, Inc. for LLNL. The board has ethernet, SCSI, parallel I/O, and serial ports, a DMA channel, a video (frame buffer) interface, and eight MBytes of main memory. The real-time operating system VxWorks has been ported to the processor. The application code is written in C on a host SUN 4 UNIX workstation. The IRST is the result of a combined effort by physicists, electrical and mechanical engineers, and computer scientists.

  12. Latest developments in GaN-based quantum devices for infrared optoelectronics

    OpenAIRE

    Monroy, Eva; Guillot, Fabien; Leconte, Sylvain; Nevou, Laurent; Doyennette, Laeticia; Tchernycheva, Maria; Julien, François H.; Baumann, Esther; Giorgetta, Fabrizio R.; Hofstetter, Daniel

    2011-01-01

    In this work, we summarize the latest progress in intersubband devices based on GaN/AlN nanostructures for operation in the near-infrared. We first discuss the growth and characterization of ultra-thin GaN/AlN quantum well and quantum dot superlattices by plasma-assisted molecular-beam epitaxy. Then, we present the performance of nitride-based infrared photodetectors and electro-optical modulators operating at 1.55 μm. Finally, we discuss the progress towards intersubband light emitters, incl...

  13. Alteration mineral mapping in inaccessible regions using target detection algorithms to ASTER data

    International Nuclear Information System (INIS)

    Pour, A B; Hashim, M; Park, Y

    2017-01-01

    In this study, the application of target detection algorithms such as Constrained Energy Minimization (CEM), Orthogonal Subspace Projection (OSP) and the Adaptive Coherence Estimator (ACE) to shortwave infrared bands of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data was investigated to extract geological information for alteration mineral mapping in poorly exposed lithologies in inaccessible domains. The Oscar II Coast area of north-eastern Graham Land, Antarctic Peninsula (AP), was selected for this satellite-based remote sensing mapping study; the region is inaccessible because of the remoteness of many rock exposures and the need to travel over severe mountainous and glacier-covered terrain for geological field mapping and sample collection. Fractional abundances of alteration minerals such as muscovite, kaolinite, illite, montmorillonite, epidote, chlorite and biotite were identified in alteration zones using the CEM, OSP and ACE algorithms in poorly mapped and unmapped zones at district scale. The results of this investigation demonstrate the applicability of ASTER shortwave infrared spectral data for lithological and alteration mineral mapping in poorly exposed lithologies and inaccessible regions, particularly using image processing algorithms capable of detecting sub-pixel targets in remotely sensed images where no prior information is available. (paper)
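    Of the three detectors, CEM has the most compact closed form: w = R^-1 d / (d^T R^-1 d), where R is the sample correlation matrix of the scene and d the target spectrum; the per-pixel output is w^T x. A dependency-free sketch with two bands and a hand-coded 2x2 inverse (real SWIR data has many more bands):

```python
def cem_filter(pixels, target):
    """Constrained Energy Minimization scores for 2-band pixels."""
    n = len(pixels)
    # sample correlation matrix R = (1/n) sum x x^T  (2x2 case)
    r11 = sum(x[0] * x[0] for x in pixels) / n
    r12 = sum(x[0] * x[1] for x in pixels) / n
    r22 = sum(x[1] * x[1] for x in pixels) / n
    det = r11 * r22 - r12 * r12
    inv = ((r22 / det, -r12 / det), (-r12 / det, r11 / det))
    rd = [inv[i][0] * target[0] + inv[i][1] * target[1] for i in (0, 1)]
    scale = target[0] * rd[0] + target[1] * rd[1]      # d^T R^-1 d
    w = [v / scale for v in rd]
    return [w[0] * x[0] + w[1] * x[1] for x in pixels]

scene = [(1.0, 0.9), (0.9, 1.1), (1.1, 1.0)] * 20 + [(0.2, 1.5)]
scores = cem_filter(scene, (0.2, 1.5))   # last pixel is the target
```

    By construction the filter responds with exactly 1.0 on the target spectrum while minimizing average output energy over the scene.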

  14. Derivation of Land Surface Temperature for Landsat-8 TIRS Using a Split Window Algorithm

    Directory of Open Access Journals (Sweden)

    Offer Rozenstein

    2014-03-01

    Land surface temperature (LST) is one of the most important variables measured by satellite remote sensing. Public domain data are available from the newly operational Landsat-8 Thermal Infrared Sensor (TIRS). This paper presents an adjustment of the split-window algorithm (SWA) for TIRS that uses atmospheric transmittance and land surface emissivity (LSE) as inputs. Various alternatives for estimating these SWA inputs are reviewed, and a sensitivity analysis of the SWA to misestimation of the input parameters is performed. The accuracy of the current development was assessed using simulated MODTRAN data; the root mean square error (RMSE) of the simulated LST was calculated as 0.93 °C. This SWA development is leading to progress in the determination of LST by Landsat-8 TIRS.
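    A generic split-window retrieval has the widely used quadratic form sketched below, combining the two brightness temperatures with emissivity and water-vapour corrections. The coefficients in this sketch are placeholders, not the calibrated values from the paper:

```python
def split_window_lst(tb10, tb11, e, de, w,
                     c=(0.0, 1.4, 0.32, 45.0, -0.8, -43.0, 17.0)):
    """Generic split-window form:
    LST = Tb10 + c1*dT + c2*dT^2 + c0 + (c3 + c4*W)*(1 - e) + (c5 + c6*W)*de
    with brightness temperatures Tb (K), mean emissivity e, emissivity
    difference de, and column water vapour W. Coefficients are illustrative."""
    c0, c1, c2, c3, c4, c5, c6 = c
    dt = tb10 - tb11
    return (tb10 + c1 * dt + c2 * dt * dt + c0
            + (c3 + c4 * w) * (1.0 - e) + (c5 + c6 * w) * de)

lst = split_window_lst(tb10=300.2, tb11=298.9, e=0.98, de=0.005, w=2.0)
```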

  15. Hybrid active pixel sensors in infrared astronomy

    International Nuclear Information System (INIS)

    Finger, Gert; Dorn, Reinhold J.; Meyer, Manfred; Mehrgan, Leander; Stegmeier, Joerg; Moorwood, Alan

    2005-01-01

    Infrared astronomy is currently benefiting from three main technologies providing high-performance hybrid active pixel sensors. In the near infrared, from 1 to 5 μm, two technologies, both aiming for buttable 2K×2K mosaics, are competing: InSb, and HgCdTe grown by LPE or MBE on Al₂O₃, Si or CdZnTe substrates. Blocked impurity band Si:As arrays cover the mid-infrared spectral range from 8 to 28 μm. Adaptive optics combined with multiple integral field units feeding high-resolution spectrographs drives the requirements for the array format of infrared sensors used at ground-based infrared observatories. Pixel performance is now approaching fundamental limits. In view of this development, a detection limit for the photon flux of the ideal detector is derived, depending only on the temperature and the impedance of the detector. It is shown that this limit is approached by state-of-the-art infrared arrays for long on-chip integrations. Different detector materials are compared and strategies to populate large focal planes are discussed. The need for the development of small-format low-noise sensors for adaptive optics and interferometry is pointed out.

  16. Development of independent MU/treatment time verification algorithm for non-IMRT treatment planning: A clinical experience

    Science.gov (United States)

    Tatli, Hamza; Yucel, Derya; Yilmaz, Sercan; Fayda, Merdan

    2018-02-01

    The aim of this study is to develop an algorithm for independent MU/treatment time (TT) verification of non-IMRT treatment plans, as part of a QA program to ensure treatment delivery accuracy. Two radiotherapy delivery units and their treatment planning systems (TPS) were commissioned at the Liv Hospital Radiation Medicine Center, Tbilisi, Georgia. Beam data were collected according to the vendors' collection guidelines and AAPM report recommendations, and processed in Microsoft Excel during in-house algorithm development. The algorithm is designed and optimized for calculating SSD and SAD treatment plans, based on the AAPM TG-114 dose calculation recommendations, and is coded and embedded in an MS Excel spreadsheet as a preliminary verification algorithm (VA). Treatment verification plans were created in the TPSs based on IAEA TRS-430 recommendations and also calculated by the VA; point measurements were collected with a solid water phantom and compared. The study showed that the in-house VA can be used for MU/TT verification of non-IMRT plans.
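    A hedged sketch of the kind of hand calculation such a verification algorithm automates for an SSD point, following the general TG-114 factor-product form; the beam data values below are illustrative, not commissioned data:

```python
def ssd_mu(dose_cgy, output_cgy_per_mu, sc, sp, pdd_percent,
           wedge_factor=1.0, tray_factor=1.0):
    """MU = D / (D0 * Sc * Sp * (PDD/100) * WF * TF) for an SSD setup,
    with reference output D0 (cGy/MU), collimator and phantom scatter
    factors Sc/Sp, percent depth dose PDD, wedge and tray factors."""
    return dose_cgy / (output_cgy_per_mu * sc * sp *
                       (pdd_percent / 100.0) * wedge_factor * tray_factor)

mu = ssd_mu(dose_cgy=200.0, output_cgy_per_mu=1.0, sc=1.01, sp=0.995,
            pdd_percent=66.7)
```

    Agreement between this independent MU and the TPS MU within a preset action level (commonly a few percent) passes the check.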

  17. Development of Image Reconstruction Algorithms in electrical Capacitance Tomography

    International Nuclear Information System (INIS)

    Fernandez Marron, J. L.; Alberdi Primicia, J.; Barcala Riveira, J. M.

    2007-01-01

    Electrical capacitance tomography (ECT) has not yet matured enough for use at the industrial level. That is due, first, to difficulties in measuring very small capacitances (in the femtofarad range) and, second, to the problem of on-line image reconstruction, compounded by the small number of electrodes (at most 16), which makes the usual reconstruction algorithms error-prone. This work describes a new, purely geometrical method that could be used for this purpose. (Author) 4 refs
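    For context, the usual baseline that such reconstruction work improves upon is linear back-projection (LBP), whose errors grow with so few electrode pairs; a toy sketch with a made-up sensitivity map and capacitance readings:

```python
def lbp(sensitivity, cap_norm):
    """Linear back-projection: each pixel's permittivity estimate is the
    sensitivity-weighted average of the normalized capacitances."""
    pixels = len(sensitivity[0])
    g = []
    for j in range(pixels):
        num = sum(sensitivity[m][j] * cap_norm[m] for m in range(len(cap_norm)))
        den = sum(sensitivity[m][j] for m in range(len(cap_norm)))
        g.append(num / den if den else 0.0)
    return g

S = [[0.7, 0.2, 0.1],   # toy sensitivity map: 3 electrode pairs x 3 pixels
     [0.1, 0.2, 0.7],
     [0.3, 0.4, 0.3]]
c = [1.0, 0.0, 0.4]     # normalized capacitances
g = lbp(S, c)           # strongest response where the high-c pair is sensitive
```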

  18. Volkov transform generalized projection algorithm for attosecond pulse characterization

    International Nuclear Information System (INIS)

    Keathley, P D; Bhardwaj, S; Moses, J; Laurent, G; Kärtner, F X

    2016-01-01

    An algorithm for characterizing attosecond extreme ultraviolet pulses that is not bandwidth-limited, requires no interpolation of the experimental data, and makes no approximations beyond the strong-field approximation is introduced. This approach fully incorporates the dipole transition matrix element into the retrieval process. Unlike attosecond retrieval methods such as phase retrieval by omega oscillation filtering (PROOF), or improved PROOF, it simultaneously retrieves both the attosecond and infrared (IR) pulses, without placing fundamental restrictions on the IR pulse duration, intensity or bandwidth. The new algorithm is validated both numerically and experimentally, and is also found to have practical advantages. These include an increased robustness to noise, and relaxed requirements for the size of the experimental dataset and the intensity of the streaking pulse. (paper)

  19. Mineral Potential in India Using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) Data

    Science.gov (United States)

    Oommen, T.; Chatterjee, S.

    2017-12-01

    NASA and the Indian Space Research Organization (ISRO) are generating Earth surface feature data using the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within the 380 to 2500 nm spectral range. This research focuses on utilizing such data to better understand the mineral potential of India and to demonstrate the application of spectral data to rock type discrimination and mapping for mineral exploration using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data were integrated with field data (laboratory-scale compositional analysis, mineralogy, and a spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. Ground truth data for the study areas were obtained from the existing literature and from collaborators in India. A Bayesian spectral unmixing algorithm was applied to the AVIRIS-NG data for endmember selection, and classification maps of the minerals and rock types were developed using a support vector machine algorithm; the ground truth data were used to verify the mineral maps.

  20. An improved algorithm for small and cool fire detection using MODIS data: A preliminary study in the southeastern United States

    Science.gov (United States)

    Wanting Wang; John J. Qu; Xianjun Hao; Yongqiang Liu; William T. Sommers

    2006-01-01

    Traditional fire detection algorithms mainly rely on hot spot detection using thermal infrared (TIR) channels with fixed or contextual thresholds. Three solar reflectance channels (0.65 μm, 0.86 μm, and 2.1 μm) were recently adopted into the MODIS version 4 contextual algorithm to improve the active fire detection. In the southeastern United...

  1. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While many reconstruction algorithms are available for brain EIT, studies comparing their performance in the context of dynamic brain monitoring are still lacking. To address this problem, we develop a framework for evaluating current algorithms by their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape error with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which differences between the conductivities of different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. The results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, reducing the image error by an average of 30.0% compared to DLS.
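    The one-step linearized form of damped least squares, dx = (J^T J + lambda*I)^-1 J^T dv, maps a boundary-voltage change dv to a conductivity change dx. A two-parameter toy system with a hand-coded 2x2 inverse keeps the sketch self-contained (real EIT Jacobians are large and ill-conditioned):

```python
def dls_step(J, dv, lam=0.1):
    """Damped least-squares update for a 2-parameter model."""
    # normal-equations matrix A = J^T J + lam*I (2x2 case)
    a11 = sum(row[0] * row[0] for row in J) + lam
    a12 = sum(row[0] * row[1] for row in J)
    a22 = sum(row[1] * row[1] for row in J) + lam
    b0 = sum(row[0] * v for row, v in zip(J, dv))
    b1 = sum(row[1] * v for row, v in zip(J, dv))
    det = a11 * a22 - a12 * a12
    return [(a22 * b0 - a12 * b1) / det, (a11 * b1 - a12 * b0) / det]

J = [[1.0, 0.2], [0.1, 0.8], [0.5, 0.5]]   # toy sensitivity (Jacobian) matrix
true_dx = [0.3, -0.2]                      # simulated conductivity change
dv = [row[0] * true_dx[0] + row[1] * true_dx[1] for row in J]
dx = dls_step(J, dv, lam=0.01)             # close to true_dx; larger lam shrinks it
```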

  2. Development of an improved genetic algorithm and its application in the optimal design of ship nuclear power system

    International Nuclear Information System (INIS)

    Jia Baoshan; Yu Jiyang; You Songbo

    2005-01-01

    This article focuses on the development of an improved genetic algorithm and its application to the optimal design of a ship nuclear reactor system, whose goal is to find a combination of system parameter values that minimizes the mass or volume of the system given the power capacity requirement and safety criteria. An improved genetic algorithm (IGA) was developed using an 'average fitness value' grouping + 'specified survival probability' rank selection method and a 'separate-recombine' duplication operator. Combined with a simulated annealing algorithm (SAA) that continues the local search after the IGA reaches a satisfactory point, the algorithm gave satisfactory optimization results from both search efficiency and accuracy perspectives. This IGA-SAA algorithm successfully solved the design optimization problem of the ship nuclear power system. It is an advanced and efficient methodology that can be applied to similar optimization problems in other areas. (authors)
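    The IGA-SAA pipeline, rank-based selection with duplicate-and-perturb recombination followed by a simulated-annealing local search from the GA's best point, can be sketched on a toy one-parameter objective; the operator details below are illustrative, not the authors' exact scheme:

```python
import math, random

def objective(x):
    # stand-in for "system mass given a design parameter"; minimum at x = 3.2
    return (x - 3.2) ** 2 + 1.0

def iga_saa(pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                 # rank individuals by fitness
        survivors = pop[:pop_size // 2]         # rank-based survival
        children = [rng.choice(survivors) + rng.gauss(0.0, 0.3)
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children              # duplicate-and-perturb step
    best = min(pop, key=objective)
    # simulated-annealing local search from the GA's best point
    current, t = best, 1.0
    while t > 1e-4:
        cand = current + rng.gauss(0.0, t)
        d = objective(cand) - objective(current)
        if d < 0 or rng.random() < math.exp(-d / t):
            current = cand
        if objective(current) < objective(best):
            best = current                      # keep the best point seen
        t *= 0.98                               # geometric cooling
    return best

x_opt = iga_saa()   # lands near the optimum at 3.2
```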

  3. Detection of surface algal blooms using the newly developed algorithm surface algal bloom index (SABI)

    Science.gov (United States)

    Alawadi, Fahad

    2010-10-01

    Quantifying ocean colour properties has evolved over the past two decades from merely detecting biological activity to estimating chlorophyll concentration using optical satellite sensors like MODIS and MERIS. The production of chlorophyll spatial distribution maps is a good indicator of plankton biomass (primary production) and is useful for tracing oceanographic currents, jets and blooms, including harmful algal blooms (HABs). Depending on the type of HABs involved and the environmental conditions, if their concentration rises above a critical threshold, they can impact the flora and fauna of the aquatic habitat through the so-called "red tide" phenomenon. The estimation of chlorophyll concentration is derived from quantifying the spectral relationship between the blue and green bands reflected from the water column. This spectral relationship is employed in the standard ocean colour chlorophyll-a (Chlor-a) product, but it is incapable of detecting certain macro-algal species that float near or at the water surface in the form of dense filaments or mats. The ability to accurately identify algal formations that sometimes appear as oil-spill look-alikes in satellite imagery contributes to reducing false-positive incidents in oil spill monitoring operations. Algal formations that occur in relatively high concentrations may exhibit, as in land vegetation, what is known as the "red-edge" effect. This phenomenon occurs at the steep reflectance slope between the maximum absorption in the red, due to the surrounding ocean water, and the maximum reflectance in the infrared, due to the photosynthetic pigments present in the surface algae. A new algorithm, termed the surface algal bloom index (SABI), has been proposed to delineate the spatial distributions of floating micro-algal species such as cyanobacteria, or of exposed inter-tidal vegetation such as seagrass. This algorithm was
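    As commonly cited, SABI is a band-ratio index that contrasts the NIR/red "red-edge" signal against a blue/green normalization; the exact band definition should be checked against the paper, and the reflectances below are toy values:

```python
def sabi(nir, red, blue, green):
    """Surface algal bloom index as commonly cited:
    (NIR - red) / (blue + green), with reflectances in [0, 1]."""
    return (nir - red) / (blue + green)

surface_bloom = sabi(nir=0.30, red=0.05, blue=0.06, green=0.08)  # strongly positive
clear_water   = sabi(nir=0.02, red=0.04, blue=0.10, green=0.08)  # near zero / negative
```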

  4. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    Science.gov (United States)

    Guler, Hasan; Kilic, Ugur

    2018-03-01

    Weaning is important for patients and for clinicians, who must determine the correct weaning time so that patients do not become ventilator-dependent. Several predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index, yet these predictors sometimes ignore important dimensions of weaning. This study attempts to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the present predictors. Sixteen vital parameters listed in the published literature were used to determine the weaning decisions in the developed system. Because too many individual parameters would otherwise be considered, related parameters were grouped together to characterize acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulations and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios, and a clinician evaluated each clinical scenario independently. Student's t test was used to assess the statistical differences between the developed weaning algorithm, the RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed methods and the clinician's evaluations.
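    For contrast with the fuzzy system, the RSBI baseline mentioned above is a single ratio, respiratory rate over tidal volume, with values below roughly 105 breaths/min/L classically predicting weaning success:

```python
def rsbi(resp_rate_bpm, tidal_volume_l):
    """Rapid shallow breathing index: breaths/min per litre of tidal volume."""
    return resp_rate_bpm / tidal_volume_l

def rsbi_predicts_success(resp_rate_bpm, tidal_volume_l, threshold=105.0):
    """Classical single-threshold decision rule (Yang-Tobin cutoff)."""
    return rsbi(resp_rate_bpm, tidal_volume_l) < threshold

ok = rsbi_predicts_success(22, 0.45)    # RSBI ~ 48.9 -> predicts success
fail = rsbi_predicts_success(35, 0.25)  # RSBI = 140  -> predicts failure
```

    A single cutoff like this is exactly the kind of one-dimensional decision the fuzzy system is designed to improve on by grouping sixteen parameters into five clinical aspects.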

  5. Multi-sparse dictionary colorization algorithm based on the feature classification and detail enhancement

    Science.gov (United States)

    Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing

    2018-02-01

    To address the missing-detail and performance problems of colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and on this framework we build a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC). The algorithm achieves a natural colorization effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the classification accuracy, a corresponding local constraint algorithm is proposed. Finally, we propose a detail-enhancement step based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only colorizes ordinary gray-scale images, but can also be applied to other areas, such as color transfer between color images and the colorization of gray-scale fusion images and infrared images.

  6. The Development of Several Electromagnetic Monitoring Strategies and Algorithms for Validating Pre-Earthquake Electromagnetic Signals

    Science.gov (United States)

    Bleier, T. E.; Dunson, J. C.; Roth, S.; Mueller, S.; Lindholm, C.; Heraud, J. A.

    2012-12-01

    QuakeFinder, a private research group in California, reports on the development of a 100+ station network consisting of 3-axis induction magnetometers and air conductivity sensors to collect and characterize pre-seismic electromagnetic (EM) signals. These signals are combined with daily infrared (IR) data collected from the GOES weather satellite IR instrument to compare and correlate with the ground EM signals, both from actual earthquakes and from boulder-stressing experiments. This presentation describes the efforts QuakeFinder has undertaken to automatically detect these pulse patterns using their historical data as a reference, and to develop other discriminative algorithms that can be used with the air conductivity sensors and the GOES IR instruments. The overall results of the QuakeFinder experiment are presented. In 2007, QuakeFinder discovered strong uni-polar pulses in their magnetometer coil data whose rate of occurrence increased dramatically prior to the M5.1 earthquake at Alum Rock, California. Suggestions that these pulses might have been lightning or power-line arcing did not fit the data actually recorded, as reported in Bleier [2009]. A second earthquake then occurred near the same site on January 7, 2010, as reported in Dunson [2011], and the pattern of pulse-count increases before the earthquake resembled the 2007 event. There were fewer pulses and their amplitudes were smaller, both consistent with the fact that the earthquake was smaller (M4.0 vs M5.4) and farther away (7 km vs 2 km). Similar effects were also observed at the QuakeFinder Tacna, Peru site before the May 5, 2010 M6.2 earthquake and a cluster of several M4-5 earthquakes.
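
    The pulse-rate monitoring described above can be caricatured as counting threshold crossings per day and flagging days far above the baseline. The threshold and the k-sigma rule below are assumptions for illustration, not QuakeFinder's actual detector, which discriminates pulse shape as well as rate.

```python
import numpy as np

def daily_pulse_counts(signal, samples_per_day, thresh):
    """Count upward threshold crossings (candidate uni-polar pulses) per day."""
    above = signal > thresh
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    n_days = -(-len(signal) // samples_per_day)          # ceiling division
    return np.bincount(onsets // samples_per_day, minlength=n_days)

def anomalous_days(counts, k=3.0):
    """Days whose pulse count exceeds the baseline mean by k standard deviations."""
    return np.flatnonzero(counts > counts.mean() + k * counts.std())
```

    In practice the baseline statistics would come from a long quiet period rather than the whole record, so that a pre-seismic increase does not inflate its own threshold.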

  7. Infrared spectroscopy by use of synchrotron radiation

    International Nuclear Information System (INIS)

    Nanba, Takao

    1991-01-01

    In the five years since the author wrote a paper on the utilization of synchrotron radiation in the long-wavelength region, it has come to be recognized that synchrotron radiation from the infrared to the millimeter-wave region can be utilized and is considerably useful. Recently, research on coherent synchrotron radiation in this region using an electron linac has been developed by a Tohoku University group, and the high capability of synchrotron radiation as a light source has been verified. This paper reports on infrared spectroscopic research using incoherent synchrotron radiation obtained from the bending-magnet sections of electron storage rings. Synchrotron radiation is a high-brightness white light source spanning from X-rays to microwaves. Examples of research that the author carried out at UVSOR are reported, and near-future prospects are discussed. The following topics are described: synchrotron radiation as a light source for infrared spectroscopy, the intensity and dimensions of the light source, the far-infrared and mid-infrared regions, far-infrared high-pressure spectroscopic experiments, and increasing the brightness of synchrotron radiation as an infrared light source. (K.I.)

  8. A Novel Algorithm (G-JPSO) and Its Development for the Optimal Control of Pumps in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Rasoul Rajabpour

    2017-01-01

    Full Text Available Recent decades have witnessed growing applications of metaheuristic techniques as efficient tools for solving complex engineering problems. One such method is the JPSO algorithm. In this study, innovative modifications were made to the jump algorithm JPSO to make it capable of coping with graph-based solutions, which led to the development of a new algorithm called 'G-JPSO'. The new algorithm was then used to solve the Fletcher-Powell optimal control problem, and its application to the optimal control of pumps in water distribution networks was evaluated. Optimal control of pumps consists of determining an optimal operation timetable (on and off) for each pump over the desired time interval. The maximum number of on/off switches for each pump was introduced into the objective function as a constraint, so that power consumption at each node is reduced while problem requirements such as the minimum pressure required at each node and the minimum/maximum storage tank levels are met. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The model proposed by van Zyl was used to evaluate the optimal operation of the distribution network. Finally, the results obtained from the proposed algorithm were compared with those obtained from the ant colony, genetic, and JPSO algorithms to show the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computational cost.
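
    The constrained objective described above (energy cost plus a limit on pump switching) can be sketched as a fitness function that a G-JPSO-like search would minimize. The tariff values and the penalty scheme below are illustrative assumptions; pressure and tank-level constraints would be added the same way, with the hydraulic quantities supplied by a network simulator.

```python
def schedule_cost(schedule, power_kw, tariff, max_switches, penalty=1e6):
    """Fitness of a binary pump schedule (1 = pump on in that interval):
    energy cost plus a large penalty if on/off switching exceeds the limit."""
    energy = sum(on * power_kw * price for on, price in zip(schedule, tariff))
    switches = sum(a != b for a, b in zip(schedule, schedule[1:]))
    return energy + (penalty if switches > max_switches else 0.0)

# Example: a 6-interval day with the pump run mostly in cheap tariff periods
cost = schedule_cost([1, 1, 0, 0, 1, 0], power_kw=10.0,
                     tariff=[1.0, 1.0, 2.0, 2.0, 3.0, 3.0], max_switches=4)
```

    A metaheuristic such as G-JPSO would evaluate this fitness for each candidate schedule in the swarm and drive the population toward feasible, low-cost timetables.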

  9. Detecting Plastic PFM-1 Butterfly Mines Using Thermal Infrared Sensing

    Science.gov (United States)

    Baur, J.; de Smet, T.; Nikulin, A.

    2017-12-01

    Remnant plastic-composite landmines, such as the mass-produced PFM-1, represent an ongoing humanitarian threat aggravated by the high costs associated with traditional demining efforts. These unexploded ordnance (UXO) devices pose a challenge to conventional geophysical detection methods due to their plastic-body design and small size. Additionally, the PFM-1 is a particularly heinous UXO due to its low (25 lb) trigger limit and "butterfly" wing design, earning it the reputation of a "toy mine" that disproportionately impacts children across post-conflict areas. We developed a detection algorithm based on data acquired by a thermal infrared camera mounted on a commercial UAV to detect the time-variable temperature difference between the PFM-1 and the surrounding environment. We present results of a field study focused on thermal detection and identification of PFM-1 anti-personnel landmines from a remotely operated unmanned aerial vehicle (UAV). We conducted a series of field detection experiments meant to simulate the mountainous terrains where PFM-1 mines were historically deployed and remain in place. In our tests, 18 inert PFM-1 mines, along with the aluminum KSF-1 casing, were randomly dispersed to mimic an ellipsoidal minefield of roughly 8-10 x 18-20 m in a de-vegetated rubble yard at Chenango Valley State Park (New York State). We collected multiple thermal infrared imagery datasets over these model minefields with a FLIR Vue Pro R attached to a 3DR Solo UAV flying at approximately 2 m. We identified different environmental variables to constrain the optimal time of day and daily temperature variations that reveal the presence of these plastic UXOs. We show that in the early-morning hours, when the thermal inertia contrast is greatest, the PFM-1 mines can be detected based on their differential thermal inertia. Because the mines have statistically different temperatures than the background and a characteristic shape, we were able to train a
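
    The thermal-contrast detection step can be sketched as robust outlier flagging on a temperature frame. The median/MAD statistics and the k = 3 cut below are generic choices, not the authors' trained detector, which additionally exploited the mines' characteristic shape.

```python
import numpy as np

def thermal_anomalies(frame, k=3.0):
    """Flag pixels whose temperature departs from the scene background by more
    than k robust standard deviations (median/MAD), as mine-sized hot/cold spots."""
    med = np.median(frame)
    mad = 1.4826 * np.median(np.abs(frame - med))  # ~sigma under Gaussian noise
    if mad == 0:
        return np.zeros(frame.shape, dtype=bool)
    return np.abs(frame - med) > k * mad
```

    Median and MAD are preferred over mean and standard deviation here because the anomalies themselves would otherwise inflate the background statistics.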

  10. Development of a general learning algorithm with applications in nuclear reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs.
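
    The idea of fitting a functional relationship between descriptive attributes and the predicted feature can be illustrated with a least-squares fit on synthetic data. The attribute names below mirror the abstract, but the numbers and the linear form are assumptions for demonstration; the actual HFIR study also used pattern matching and clustering.

```python
import numpy as np

# Hypothetical attributes per fuel cycle: [initial reactivity, target count, plate exposure]
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 0.0, 0.0], [1.5, 30.0, 100.0], size=(40, 3))

# Assumed 'true' functional relationship, used only to generate example data
true_w = np.array([10.0, -0.2, 0.05])
rod_position = X @ true_w + 2.0 + rng.normal(0.0, 0.01, size=40)

# Fit the functional relationship (attributes -> critical rod position) by least squares
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, rod_position, rcond=None)
```

    Cycles whose residuals are large under the fitted relationship would be the analogue of the "noisy" or unpredictable fuel cycles the study reports.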

  11. Development of a general learning algorithm with applications in nuclear reactor systems

    International Nuclear Information System (INIS)

    Brittain, C.R.; Otaduy, P.J.; Perez, R.B.

    1989-12-01

    The objective of this study was development of a generalized learning algorithm that can learn to predict a particular feature of a process by observation of a set of representative input examples. The algorithm uses pattern matching and statistical analysis techniques to find a functional relationship between descriptive attributes of the input examples and the feature to be predicted. The algorithm was tested by applying it to a set of examples consisting of performance descriptions for 277 fuel cycles of Oak Ridge National Laboratory's High Flux Isotope Reactor (HFIR). The program learned to predict the critical rod position for the HFIR from core configuration data prior to reactor startup. The functional relationship bases its predictions on initial core reactivity, the number of certain targets placed in the center of the reactor, and the total exposure of the control plates. Twelve characteristic fuel cycle clusters were identified. Nine fuel cycles were diagnosed as having noisy data, and one could not be predicted by the functional relationship. 13 refs., 6 figs

  12. Far infrared photoconductors

    International Nuclear Information System (INIS)

    Leotin, J.; Meny, C.

    1990-01-01

    This paper presents the development of far infrared photoconductors for the focal plane of a spaceborne instrument named SAFIRE. SAFIRE (Spectroscopy of the Atmosphere using Far-Infrared Emission) belongs to the EOS (Earth Observing System) program and is now in the definition phase. It is a joint effort by scientists from the United States, Great Britain, Italy and France toward a new generation of atmospheric sensor. The overall goal of the SAFIRE experiment is to improve understanding of the ozone distribution in the middle atmosphere by conducting global-scale measurements of the important chemical, radiative and dynamical processes that influence its changes. This will be accomplished by measuring the far infrared thermal limb emission in seven spectral channels covering the range 80 to 400 cm-1 with a maximum resolution of 0.004 cm-1. For example, key gases like OH, O, HO2 and N2O5 will be probed for the first time. Achieving the required detector sensitivity in the far infrared imposes the choice of photoconductive detectors operating at liquid helium temperatures. Gallium-doped germanium is selected for six channels, whereas beryllium-doped germanium is suitable for the N2O5 channel. Both photoconductors, Ge:Ga and Ge:Be, benefit from a well-established material technology. A better wavelength coverage of channel 1 is achieved by applying a small uniaxial stress of the order of 0.1 GPa to the Ge:Ga photoconductors. The channel 6B wavelength coverage could be improved by using zinc-doped germanium (Ge:Zn) or, better still, a blocked impurity band silicon detector doped with antimony (BIB Si:Sb). The latter is being developed as an option.

  13. The infrared retina

    International Nuclear Information System (INIS)

    Krishna, Sanjay

    2009-01-01

    As infrared imaging systems have evolved from the first generation of linear devices to the second generation of small-format staring arrays to the present 'third-gen' systems, there is an increased emphasis on large-area focal plane arrays (FPAs) with multicolour operation and higher operating temperature. In this paper, we discuss the need to develop increased functionality at the pixel level for these next-generation FPAs. This functionality could manifest itself as spectral, polarization, phase or dynamic-range signatures that extract more information from a given scene. This leads to the concept of an infrared retina: an array that works like the human eye, which has a 'single' FPA but multiple cones, the photoreceptor cells in the retina that enable the perception of colour. These cones are coupled with powerful signal processing techniques that allow colour information to be processed from a scene, even with a limited basis of colour cones. Unlike present-day multispectral or hyperspectral systems, which are bulky and expensive, the idea would be to build a poor man's 'infrared colour' camera. We discuss examples such as plasmonic tailoring of the resonance, bias-dependent dynamic tuning based on the quantum-confined Stark effect, and the incorporation of avalanche gain to achieve embodiments of the infrared retina.

  14. Rapid assessment of selected free amino acids during Edam cheese ripening by near infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Jiří Mlček

    2013-01-01

    Full Text Available The study focuses on rapid determination of the free amino acids produced during cheese ripening, using near infrared spectroscopy. Analyses of 96 samples of Edam cheese (30% and 45% fat in dry matter) were performed at monthly intervals up to a ripening age of 6 months. In total, 19 amino acids were analysed with an infrared spectrometer using two different methods: in reflectance mode in the integrating sphere of the apparatus, or with a fibre optic probe. Reference data based on high-performance liquid chromatography were used for calibration of the spectrometer. Calibration models were developed using a partial least squares algorithm and tested by means of cross-validation. When measured with the integrating sphere and with the probe, the correlation coefficients ranged from 0.835 to 0.993 and from 0.739 to 0.995, respectively. A paired t-test did not show significant differences between the reference and predicted values (P < 0.05). The results of this new calibration method demonstrate the potential of near infrared technology for fast determination of the free amino acids that occur during the ripening of Edam cheese. Knowing the content of free amino acids allows Edam cheese to be prepared quickly and efficiently for sale, or as material for processed cheese.
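
    The calibration step (partial least squares regression of HPLC reference values on NIR spectra) can be sketched with a minimal PLS1/NIPALS implementation. The synthetic "spectra" below are random stand-ins; a real calibration would use measured absorbances and cross-validation, as the authors did.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 (NIPALS): regress one response y on spectra X."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)       # weight vector
        t = Xc @ w                       # scores
        tt = t @ t
        p = Xc.T @ t / tt                # spectral loadings
        qk = (yc @ t) / tt               # response loading
        Xc = Xc - np.outer(t, p)         # deflate spectra
        yc = yc - qk * t                 # deflate response
        W.append(w); P.append(p); q.append(qk)
    W, P = np.array(W).T, np.array(P).T
    coef = W @ np.linalg.solve(P.T @ W, np.array(q))
    return coef, X.mean(0), y.mean()

def pls1_predict(model, Xnew):
    coef, x_mean, y_mean = model
    return (Xnew - x_mean) @ coef + y_mean
```

    In practice the number of components would be chosen by cross-validation, exactly the step the abstract reports for the cheese spectra.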

  15. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in an image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine (SVM). The combination of these methods allows successful detection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
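
    The HOG-plus-SVM pipeline can be caricatured in a few lines: an orientation histogram weighted by gradient magnitude as the feature, and a linear decision rule on top. Real HOG uses overlapping cells with block normalization, and the paper's classifier is an SVM; here a least-squares linear classifier stands in for the SVM to keep the sketch dependency-free.

```python
import numpy as np

def hog_feature(img, n_bins=9):
    """One unsigned-orientation histogram over the whole patch, weighted by
    gradient magnitude (real HOG: per-cell histograms + block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                 # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)

# Toy two-class problem: vertical-stripe patches vs. horizontal-stripe patches
rng = np.random.default_rng(1)
stripes = np.tile((np.arange(16) % 4 < 2).astype(float), (16, 1))
vertical = [stripes + 0.05 * rng.normal(size=(16, 16)) for _ in range(10)]
horizontal = [p.T for p in vertical]

X = np.array([hog_feature(p) for p in vertical + horizontal])
y = np.array([1] * 10 + [-1] * 10)

# Least-squares linear classifier as a stand-in for the SVM
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = np.sign(A @ w)
```

    The orientation histograms separate the two stripe directions cleanly, which is the same property that makes HOG features effective for upright-pedestrian silhouettes.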

  16. Impact of Missing Passive Microwave Sensors on Multi-Satellite Precipitation Retrieval Algorithm

    Directory of Open Access Journals (Sweden)

    Bin Yong

    2015-01-01

    Full Text Available The impact of one or two missing passive microwave (PMW) input sensors on the end product of multi-satellite precipitation products is an interesting but obscure issue for both algorithm developers and data users. On 28 January 2013, the Version-7 TRMM Multi-satellite Precipitation Analysis (TMPA) products were reproduced and re-released by the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center because the Advanced Microwave Sounding Unit-B (AMSU-B) and the Special Sensor Microwave Imager-Sounder-F16 (SSMIS-F16) input data had been unintentionally disregarded in the prior retrieval. Thus, this study investigates the sensitivity of TMPA algorithm results to missing PMW sensors by intercomparing the "early" and "late" Version-7 TMPA real-time (TMPA-RT) precipitation estimates (i.e., without and with the AMSU-B and SSMIS-F16 sensors) against an independent high-density network of 200 tipping-bucket rain gauges over the Chinese Jinghe river basin (45,421 km2). The retrieval counts and retrieval frequency of the various PMW and infrared (IR) sensors incorporated into the TMPA system were also analyzed to identify and diagnose the impacts of sensor availability on TMPA-RT retrieval accuracy. Results show that incorporating AMSU-B and SSMIS-F16 substantially reduced systematic errors, with improvements exhibiting rather strong seasonal and topographic dependencies. Our analyses suggest that one or two individual PMW sensors can play a key role in determining the end product of current combined microwave-infrared precipitation estimates. This finding supports algorithm developers' current endeavor to spatiotemporally incorporate as many PMW sensors as possible in the multi-satellite precipitation retrieval system called the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement mission (IMERG). This study also recommends users of satellite precipitation products to switch to the newest Version-7 TMPA datasets and
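
    Gauge-based evaluations of this kind typically reduce to a few simple scores. The sketch below shows relative bias (the systematic-error measure the study reports improvements in) and RMSE; the exact metric set used in the study is not spelled out in the abstract.

```python
import numpy as np

def relative_bias(sat, gauge):
    """Systematic over/underestimation of satellite totals vs. gauge totals (%)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    return 100.0 * (sat.sum() - gauge.sum()) / gauge.sum()

def rmse(sat, gauge):
    """Root-mean-square error of satellite estimates against gauge values."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    return float(np.sqrt(np.mean((sat - gauge) ** 2)))
```

    Comparing these scores for the "early" (sensors missing) and "late" (sensors restored) retrievals over the same gauge network is the essence of the sensitivity test described above.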

  17. Secondary cell wall development in cotton fibers as examined with attenuated total reflection Fourier transform infrared spectroscopy

    Science.gov (United States)

    Cotton fibers harvested at 18, 20, 24, 28, 32, 36 and 40 days after flowering were examined using attenuated total reflection Fourier transform-infrared (ATR FT-IR) spectroscopy. The selected harvesting points coincide with secondary cell wall (SCW) development in the fibers. Progressive but moderat...

  18. The study of infrared target recognition at sea background based on visual attention computational model

    Science.gov (United States)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images of sea backgrounds are notorious for their low signal-to-noise ratio, which makes target recognition in such images very difficult with traditional methods. In this paper, we present a novel target recognition method that integrates a visual attention computational model with a conventional approach (selective filtering and segmentation), combining the two image-processing techniques so as to exploit the strengths of both. The visual attention algorithm searches for salient regions automatically, represents them by a set of winner points, and marks the salient regions with circles centered at these points. This provides a priori knowledge for the filtering and segmentation process. Around each winner point we construct a rectangular region to facilitate filtering and segmentation, and a labeling operation is then applied selectively as required. Using the labeled information, we obtain the position of the region of interest from the final segmentation result, mark its centroid on the corresponding original image, and thereby localize the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was applied to the recognition of several kinds of real infrared images, and the experimental results demonstrate the effectiveness of the algorithm presented in this paper.
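
    The attention stage (salient-region search plus winner points) can be sketched with a center-surround contrast map and greedy winner-take-all selection. This is a drastic simplification of visual attention computational models of the Itti-Koch type; the window sizes below are arbitrary choices for illustration.

```python
import numpy as np

def saliency(img, surround=5):
    """Center-surround contrast: |pixel - local mean| as a crude saliency map."""
    p = np.pad(img.astype(float), surround, mode='edge')
    h, w = img.shape
    k = 2 * surround + 1
    local = sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k ** 2
    return np.abs(img - local)

def winners(sal, n=3, suppress=4):
    """Greedy winner-take-all: take n peaks, suppressing a window around each
    (inhibition of return), and return their (row, col) coordinates."""
    s = sal.astype(float).copy()
    pts = []
    for _ in range(n):
        r, c = np.unravel_index(np.argmax(s), s.shape)
        pts.append((int(r), int(c)))
        s[max(0, r - suppress):r + suppress + 1,
          max(0, c - suppress):c + suppress + 1] = -np.inf
    return pts
```

    Each returned winner point would seed the rectangular region on which the selective filtering, segmentation, and labeling described above are run.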

  19. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high-performance processing algorithms for single sonar images or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high-fidelity sonar images for a given scene of objects, sea-bed AUV path, etc. In the paper the main components of the SSIE are described and examples of different...... processing steps are given...

  20. Advanced infrared optically black baffle materials

    International Nuclear Information System (INIS)

    Seals, R.D.; Egert, C.M.; Allred, D.D.

    1990-01-01

    Infrared optically black baffle surfaces are an essential component of many advanced optical systems. All internal surfaces in advanced infrared optical sensors that require stray-light management to achieve resolution are of primary concern in baffle design. Current industrial materials need improvements to meet the optical, survivability, and endurability requirements of advanced optical sensor systems, since baffles must survive and operate in potentially severe environments. Optical baffle materials are required to provide robust diffuse-absorptive black surfaces that are thermally and mechanically stable against x-ray, launch, and in-flight maneuver conditions, with specific densities that allow an acceptable weight load, and that are handleable during assembly, cleanable, and adaptable to affordable manufacturing. This paper gives an overview of recently developed advanced infrared optical baffle materials, requirements, manufacturing strategies, and the Optics MODIL (Manufacturing Operations Development and Integration Laboratory) Advanced Baffle Program.

  1. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    Energy Technology Data Exchange (ETDEWEB)

    Murari, A.; Barana, O. [Consorzio RFX Associazione EURATOM ENEA per la Fusione, Corso Stati Uniti 4, Padua (Italy); Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F. [Euratom/UKAEA Fusion Assoc., Culham Science Centre, Abingdon, Oxon (United Kingdom); Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D. [Association EURATOM-CEA, CEA Cadarache, 13 - Saint-Paul-lez-Durance (France); Albanese, R. [Assoc. Euratom-ENEA-CREATE, Univ. Mediterranea RC (Italy); Arena, P.; Bruno, M. [Assoc. Euratom-ENEA-CREATE, Univ.di Catania (Italy); Ambrosino, G.; Ariola, M. [Assoc. Euratom-ENEA-CREATE, Univ. Napoli Federico Napoli (Italy); Crisanti, F. [Associazone EURATOM ENEA sulla Fusione, C.R. Frascati (Italy); Luna, E. de la; Sanchez, J. [Associacion EURATOM CIEMAT para Fusion, Madrid (Spain)

    2004-07-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  2. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Murari, A.; Barana, O.; Albanese, R.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with internal transport barriers. Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  3. Development of real time diagnostics and feedback algorithms for JET in view of the next step

    International Nuclear Information System (INIS)

    Murari, A.; Barana, O.; Murari, A.; Felton, R.; Zabeo, L.; Piccolo, F.; Sartori, F.; Joffrin, E.; Mazon, D.; Laborde, L.; Moreau, D.; Albanese, R.; Arena, P.; Bruno, M.; Ambrosino, G.; Ariola, M.; Crisanti, F.; Luna, E. de la; Sanchez, J.

    2004-01-01

    Real time control of many plasma parameters will be an essential aspect in the development of reliable high performance operation of Next Step Tokamaks. The main prerequisites for any feedback scheme are the precise real-time determination of the quantities to be controlled, requiring top quality and highly reliable diagnostics, and the availability of robust control algorithms. A new set of real time diagnostics was recently implemented on JET to prove the feasibility of determining, with high accuracy and time resolution, the most important plasma quantities. With regard to feedback algorithms, new model-based controllers were developed to allow a more robust control of several plasma parameters. Both diagnostics and algorithms were successfully used in several experiments, ranging from H-mode plasmas to configurations with ITBs (internal transport barriers). Since elaboration of computationally heavy measurements is often required, significant attention was devoted to non-algorithmic methods like Digital or Cellular Neural/Nonlinear Networks. The adopted real-time hardware and software architectures are also described, with particular attention to their relevance to ITER. (authors)

  4. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  5. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To this end, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly implemented the proposed method, good results were obtained, encouraging us to continue to the next step of the research: validating SPECTEM against experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)
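
    The adapted expectation-maximization step can be sketched with the standard MLEM update on a toy 4-pixel, 4-projection system. The geometry matrix below is an assumed full-rank example, not the SPECTEM detector model, and the simulated counts are noise-free.

```python
import numpy as np

def mlem(A, counts, n_iter=5000):
    """MLEM update for emission tomography:
    x <- x * A^T(counts / Ax) / A^T 1  (multiplicative, keeps x nonnegative)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity term A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)  # forward projection, guard against /0
        x *= (A.T @ (counts / proj)) / sens
    return x

# Toy geometry: 4 "rays" viewing a 4-pixel plane (assumed example)
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 1, 0]], float)
true_x = np.array([4.0, 1.0, 2.0, 3.0])
counts = A @ true_x                      # noise-free simulated projections
x_hat = mlem(A, counts)
```

    With only four projections the real problem is ill-posed, which is why the EM iteration's built-in nonnegativity and its tolerance of incomplete data make it the natural choice the abstract describes.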

  6. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers) involving a very small number of projection data. The algorithm was designed for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object from only four projections. To this end, counts of photons emitted by selected radioactive sources in the plane, simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation, and the properties of the gamma radiation. Because the code correctly implemented the proposed method, good results were obtained, encouraging us to continue to the next step of the research: validating SPECTEM against experimental data to check its real performance. We expect this code to improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  7. Development of estimation algorithm of loose parts and analysis of impact test data

    International Nuclear Information System (INIS)

    Kim, Jung Soo; Ham, Chang Sik; Jung, Chul Hwan; Hwang, In Koo; Kim, Tak Hwane; Kim, Tae Hwane; Park, Jin Ho

    1999-11-01

    Loose parts are produced by becoming detached from the structure of the reactor coolant system (RCS) or by entering the RCS from the outside during test operation, refueling, and overhaul. These loose parts are carried by the reactor coolant and collide with RCS components. When loose parts occur within the RCS, it is necessary to estimate their impact point and mass. In this report, an analysis algorithm for estimating the impact point and mass of loose parts is developed. The developed algorithm was tested with impact test data from Yonggwang-3. The impact point estimated using the proposed algorithm had a 5 percent error relative to the real test data. The estimated mass was within a 28 percent error bound using the same unit's data. We analyzed the characteristic frequency of each sensor because this frequency affects the estimation of the impact point and mass. The characteristic frequency of the background noise during normal operation was compared with that of the impact test data. The comparison showed that the characteristic frequency bandwidth of the impact test data was lower than that of the background noise during normal operation. Through this comparison, the integrity of the sensors and the monitoring system could also be checked. (author)

  8. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimum transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of indicators and the method of their calculation in relation to the transport subsystem of airport infrastructure. The work also addresses the evaluation of the possibilities of algorithmic computational mechanisms to improve the tools of public administration of transport subsystems.

  9. Rapid mental computation system as a tool for algorithmic thinking of elementary school students development

    OpenAIRE

    Ziatdinov, Rushan; Musa, Sajid

    2013-01-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms which can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system allows one to form the basis for the study of computer science in secondary school.
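
    As an illustration of the kind of readily memorized operation such systems rely on (this classic trick is our own example, not necessarily one taken from the paper), consider squaring a number ending in 5:

    ```python
    def square_ending_in_5(n: int) -> int:
        """Classic mental shortcut: for a number ending in 5, take the
        leading digits d, compute d*(d+1), and append 25.
        E.g. 35^2: d=3, 3*4=12, append 25 -> 1225."""
        assert n % 10 == 5
        d = n // 10
        return d * (d + 1) * 100 + 25

    print(square_ending_in_5(35))   # 1225
    print(square_ending_in_5(85))   # 7225
    ```

    The pupil memorizes a two-step rule, not a table of squares, which is precisely the algorithmic habit the authors aim to cultivate.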

  10. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. However, conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these issues, automation has not been actively adopted, even though human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'Alpha-Go', developed by 'Google Deepmind' based on deep learning techniques to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that purpose, the EWS (early warning score) concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e., fully replacing human operators) may require much more time for validation and for investigating side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.
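
    The medical EWS idea borrowed above can be sketched simply: each monitored parameter is scored against bands, and the sub-scores are summed into a single real-time severity value. The parameter name and band limits below are illustrative assumptions, not the authors' criteria:

    ```python
    # Minimal Early Warning Score (EWS) sketch: band-score each reading,
    # then sum. Bands and parameters here are made-up examples.
    def band_score(value, bands):
        """bands: list of (low, high, score); first matching band wins."""
        for low, high, score in bands:
            if low <= value < high:
                return score
        return 3  # outside all defined bands -> worst sub-score

    # Hypothetical bands for one vital sign (score 0 = normal).
    HEART_RATE_BANDS = [(51, 91, 0), (41, 51, 1), (91, 111, 1), (111, 131, 2)]

    def ews(readings, band_table):
        return sum(band_score(readings[name], bands)
                   for name, bands in band_table.items())

    print(ews({"heart_rate": 72}, {"heart_rate": HEART_RATE_BANDS}))   # 0
    print(ews({"heart_rate": 140}, {"heart_rate": HEART_RATE_BANDS}))  # 3
    ```

    For an NPP analogue one would replace vital signs with plant parameters and tune the bands; the appeal is that the score is quantitative, cheap, and available in real time as a training criterion.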

  11. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. However, conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these issues, automation has not been actively adopted, even though human error probability drastically increases during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'Alpha-Go', developed by 'Google Deepmind' based on deep learning techniques to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm which can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that purpose, the EWS (early warning score) concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e., fully replacing human operators) may require much more time for validation and for investigating side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  12. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms; validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, T. E.; O'Dell, C. W.; Frankenberg, C.; Partain, P.; Cronk, H. Q.; Savtchenko, A.; Nelson, R. R.; Rosenthal, E. J.; Chang, A. Y.; Fisher, B.; Osterman, G.; Pollock, R. H.; Crisp, D.; Eldering, A.; Gunson, M. R.

    2015-12-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols within the instrument's field of view (FOV). Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 μm O2 A-band, neglecting scattering by clouds and aerosols, which introduce photon path-length (PPL) differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A-Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 μm (weak CO2 band) and 2.06 μm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which key off of different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectrometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning to allow throughputs of ≃ 30 %, agreement between the OCO-2 and MODIS cloud screening methods is found to be
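
    The complementary logic of the two screens can be shown schematically: ABP compares retrieved and expected surface pressure, IDP compares CO2 column estimates from the weak and strong bands, and only soundings passing both are sent to the L2 retrieval. All threshold values below are illustrative assumptions, not the operational OCO-2 settings:

    ```python
    # Schematic combination of the ABP and IDP cloud screens described above.
    # Thresholds are made-up examples, not mission parameters.
    def abp_clear(p_retrieved_hpa: float, p_expected_hpa: float,
                  tol_hpa: float = 25.0) -> bool:
        """ABP: clear-sky surface pressure should match the expected value."""
        return abs(p_retrieved_hpa - p_expected_hpa) < tol_hpa

    def idp_clear(co2_weak_ppm: float, co2_strong_ppm: float,
                  tol_ratio: float = 0.04) -> bool:
        """IDP: CO2 columns from the two bands should agree without scattering."""
        return abs(co2_weak_ppm / co2_strong_ppm - 1.0) < tol_ratio

    def sounding_is_clear(p_ret, p_exp, co2_w, co2_s) -> bool:
        # Pass to the expensive L2 retrieval only if both screens agree.
        return abp_clear(p_ret, p_exp) and idp_clear(co2_w, co2_s)

    print(sounding_is_clear(1008.0, 1013.0, 400.0, 402.0))  # True: both consistent
    print(sounding_is_clear(950.0, 1013.0, 400.0, 402.0))   # False: pressure deviates
    ```

    Because the two tests key off different spectral features, a cloud that slips past one screen is likely to be caught by the other.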

  13. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to variations in illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features of the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the missing detail features of the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to separate classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
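
    Two pieces of this pipeline can be sketched compactly: the basic 3×3 LBP code, and a weighted-sum decision-level fusion over per-identity classifier scores. The DCT feature extraction and the real classifiers are omitted; the fusion weight, scores, and patch values are made-up examples:

    ```python
    import numpy as np

    def lbp_code(patch3x3: np.ndarray) -> int:
        """Threshold the 8 neighbours against the centre pixel, clockwise
        from top-left, and pack the bits into one LBP code (0..255)."""
        c = patch3x3[1, 1]
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        bits = [1 if patch3x3[i, j] >= c else 0 for i, j in order]
        return sum(b << k for k, b in enumerate(bits))

    def fuse_decisions(scores_nir: dict, scores_vis: dict, w_nir: float = 0.6) -> str:
        """Weighted-sum decision-level fusion of per-identity scores."""
        ids = set(scores_nir) | set(scores_vis)
        fused = {i: w_nir * scores_nir.get(i, 0.0) + (1 - w_nir) * scores_vis.get(i, 0.0)
                 for i in ids}
        return max(fused, key=fused.get)

    patch = np.array([[5, 9, 1], [4, 6, 7], [2, 8, 3]])
    print(lbp_code(patch))                                            # 42
    print(fuse_decisions({"A": 0.7, "B": 0.3}, {"A": 0.4, "B": 0.6}))  # "A"
    ```

    In the paper's scheme, each modality's classifier votes independently and fusion happens only at the decision level, so a noisy NIR frame cannot corrupt the visible-light features directly.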

  14. Developing a Random Forest Algorithm for MODIS Global Burned Area Classification

    Directory of Open Access Journals (Sweden)

    Rubén Ramo

    2017-11-01

    Full Text Available This paper aims to develop a global burned area (BA) algorithm for MODIS BRDF-corrected images based on the Random Forest (RF) classifier. Two RF models were generated, including: (1) all MODIS reflective bands; and (2) only the red (R) and near infrared (NIR) bands. Active fire information, vegetation indices and auxiliary variables were taken into account as well. Both RF models were trained using a statistically designed sample of 130 reference sites, which took into account the global diversity of fire conditions. For each site, fire perimeters were obtained from multitemporal pairs of Landsat TM/ETM+ images acquired in 2008. Those fire perimeters were used to extract burned and unburned areas to train the RF models. Using the standard MCD43A4 resolution (500 × 500 m), the training dataset included 48,365 burned pixels and 6,293,205 unburned pixels. Different combinations of number of trees and number of parameters were tested. The final RF models included 600 trees and 5 attributes. The RF full model (considering all bands) provided a balanced accuracy of 0.94, while the RF RNIR model reached 0.93. As a first assessment of these RF models, they were used to classify daily MCD43A4 images in three test sites for three consecutive years (2006–2008). The selected sites included different ecosystems: tropical (Australia), boreal (Canada) and temperate (California), and extended coverage (totaling more than 2,500,000 km2). Results from both RF models for those sites were compared with national fire perimeters, as well as with two existing MODIS BA products: the MCD45 and MCD64. Considering all three years and three sites, the commission error for the RF full model was 0.16, with an omission error of 0.23. For the RF RNIR model, these errors were 0.19 and 0.21, respectively. The existing MODIS BA products had lower commission errors, but higher omission errors (0.09 and 0.33 for the MCD45, and 0.10 and 0.29 for the MCD64) than those obtained with the RF models, and
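
    The balanced accuracy reported above is a natural choice given the extreme class imbalance (48,365 burned vs. 6,293,205 unburned training pixels). A minimal sketch of the metric, the mean of per-class recalls, shows why plain accuracy would be misleading here:

    ```python
    import numpy as np

    def balanced_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        """Mean of per-class recalls: a trivial 'always majority class'
        classifier scores 0.5 instead of near 1.0."""
        recalls = []
        for cls in np.unique(y_true):
            mask = y_true == cls
            recalls.append(float((y_pred[mask] == cls).mean()))
        return float(np.mean(recalls))

    # Synthetic example with 2% burned pixels, mimicking the imbalance.
    y_true = np.array([0] * 98 + [1] * 2)
    always_unburned = np.zeros(100, dtype=int)
    print(balanced_accuracy(y_true, always_unburned))  # 0.5, not 0.98
    ```

    Plain accuracy for the same trivial classifier would be 0.98, which is why the paper's 0.94 balanced accuracy is a far stronger claim.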

  15. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support the development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
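
    The "augment/modify the data stream" capability can be sketched as a playback wrapper that injects a simulated fault into recorded sensor frames. The channel names and the step-bias fault shape are assumptions for illustration, not PHALT's actual interface:

    ```python
    # Sketch of fault injection during test-data playback: replay recorded
    # frames, adding a bias to one channel from a chosen time step onward.
    def playback_with_fault(samples, channel, t_start, bias):
        """Yield recorded frames, corrupting `channel` with `bias` from t_start."""
        for t, frame in enumerate(samples):
            frame = dict(frame)          # never mutate the recorded data
            if t >= t_start:
                frame[channel] = frame[channel] + bias
            yield frame

    recorded = [{"pressure": 100.0, "temp": 20.0} for _ in range(5)]
    out = list(playback_with_fault(recorded, "pressure", t_start=3, bias=7.5))
    print([f["pressure"] for f in out])  # [100.0, 100.0, 100.0, 107.5, 107.5]
    ```

    Feeding a diagnostic algorithm such a corrupted replay is a cheap way to check that it flags the fault at (and not before) the injection time.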

  16. New maxillofacial infrared detection technologies

    Energy Technology Data Exchange (ETDEWEB)

    Reshetnikov, A. P.; Kopylov, M. V.; Nasyrov, M. R., E-mail: marat.1994@me.com; Fisher, E. L.; Chernova, L. V. [Izhevsk State Medical Academy, Izhevsk, Russia (426034, Izhevsk, Kommunarov street, 281) (Russian Federation); Soicher, E. M. [Moscow State University of Medicine and Dentistry named after A.I. Evdokimov of the Ministry of Health of the Russian Federation, Moscow, Russia, (127473, Moscow, Delegatskaya str., 20/1) (Russian Federation)

    2015-11-17

    At the dental clinic, the infrared radiation spectrum of tissues was used to study the dynamics of local temperature and structure of the skin, subcutaneous fat, and other tissues of the maxillofacial area in healthy adult volunteers and patients. In particular, we studied the dynamics of the local temperature of the mucous membranes of the mouth, the teeth, and dental structures, both in the norm and in various pathological conditions of the lips, gums, teeth, tongue, palate, and cheeks before, during and after chewing food, drinking water, taking medication, and inhaling air. The high safety and information content of infrared thermography make it promising for the development of diagnostics in medicine. We have developed three new infrared detection methods protected by patents in Russia.

  17. Scientific Payload Of The Emirates Mars Mission: Emirates Mars Infrared Spectrometer (Emirs) Overview.

    Science.gov (United States)

    Altunaiji, E. S.; Edwards, C. S.; Christensen, P. R.; Smith, M. D.; Badri, K. M., Sr.

    2017-12-01

    The Emirates Mars Mission (EMM) will launch in 2020 to explore the dynamics of the atmosphere of Mars on a global scale. EMM carries three scientific instruments that will contribute to an improved understanding of circulation and weather in the Martian lower and middle atmosphere. Two of EMM's instruments, the Emirates eXploration Imager (EXI) and the Emirates Mars Infrared Spectrometer (EMIRS), will focus on the lower atmosphere, observing dust, ice clouds, water vapor and ozone. The third instrument, the Emirates Mars Ultraviolet Spectrometer (EMUS), will focus on both the thermosphere of the planet and its exosphere. The EMIRS instrument, shown in Figure 1, is an interferometric thermal infrared spectrometer jointly developed by Arizona State University (ASU) and the Mohammed Bin Rashid Space Centre (MBRSC). It builds on a long heritage of thermal infrared spectrometers designed, built, and managed by ASU's Mars Space Flight Facility, including the Thermal Emission Spectrometer (TES), the Miniature Thermal Emission Spectrometer (Mini-TES), and the OSIRIS-REx Thermal Emission Spectrometer (OTES). EMIRS operates in the 6-40+ µm range with 5 cm-1 spectral sampling, enabled by a Chemical Vapor-Deposited (CVD) diamond beamsplitter and state-of-the-art electronics. The instrument utilizes a 3×3 detector array and a scan mirror to make high-precision infrared radiance measurements over most of a Martian hemisphere. EMIRS is optimized to capture the integrated lower-middle atmosphere dynamics over a Martian hemisphere and will capture 60 global images per week (~20 images per orbit) at a resolution of 100-300 km/pixel. After processing through an atmospheric retrieval algorithm, EMIRS will determine vertical temperature profiles up to 50 km altitude and measure the column-integrated global distribution and abundances of key atmospheric constituents (e.g. dust, water ice (clouds) and water vapor) over the Martian day, seasons and year.

  18. Development of models for thermal infrared radiation above and within plant canopies

    Science.gov (United States)

    Paw u, Kyaw T.

    1992-01-01

    Any significant angular dependence of the emitted longwave radiation could result in errors in remotely estimated energy budgets or evapotranspiration. Empirical data and thermal infrared radiation models are reviewed in reference to anisotropic emissions from the plant canopy. The biometeorological aspects of linking longwave models with plant canopy energy budgets and micrometeorology are discussed. A new soil plant atmosphere model applied to anisotropic longwave emissions from a canopy is presented. Time variation of thermal infrared emission measurements is discussed.

  19. A novel method for surface defect inspection of optic cable with short-wave infrared illuminance

    Science.gov (United States)

    Chen, Xiaohong; Liu, Ning; You, Bo; Xiao, Bin

    2016-07-01

    Intelligent on-line detection of cable quality is a crucial issue for optic cable factories, and defects on the surface of an optic cable can dramatically lower the cable's grade. Manual inspection of optic cable quality cannot keep up with the development of the optic cable industry due to its low detection efficiency and high labor cost. Therefore, real-time automated inspection is highly demanded by industry to replace the subjective and repetitive process of manual inspection, and automatic cable defect inspection has become a trend. In this paper, a novel method for surface defect inspection of optic cable under short-wave infrared illuminance is presented. The special conditions of short-wave infrared not only provide illumination compensation in weak-illumination environments, but also avoid the overexposure problem of visible-light illuminance, which affects the accuracy of the inspection algorithm. A series of image processing algorithms is set up to analyze the cable image to verify the real-time performance and veracity of the detection method. Unlike some existing detection algorithms which concentrate on the characteristics of defects using an active search, the proposed method passively removes the non-defective areas of the image during image processing, which eliminates a large amount of computation. The Otsu algorithm is used to convert the gray image to a binary image. Furthermore, a threshold window is designed to eliminate false defects, where the threshold represents the minimum considered defect size ε. In addition, a new regional suppression method is proposed to deal with the edge burrs of the cable, which shows superior performance in boundary processing compared with the open-close operation of mathematical morphology. Experimental results on 10,000 samples show that the miss detection and false detection rates are 2.35% and 0.78%, respectively, when ε equals 0.5 mm, and the average processing period of one frame
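
    The binarization step can be illustrated with a self-contained Otsu threshold, which maximizes the between-class variance over the gray-level histogram. The tiny synthetic "image" below stands in for a real SWIR frame; the ε-based size filtering and regional suppression are omitted:

    ```python
    import numpy as np

    def otsu_threshold(img: np.ndarray) -> int:
        """Return the gray level maximizing between-class variance."""
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        best_t, best_var = 0, -1.0
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * p[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_t, best_var = t, var_between
        return best_t

    # Synthetic bimodal frame: dark cable surface plus bright defect pixels.
    rng = np.random.default_rng(1)
    img = np.concatenate([rng.integers(10, 60, 500),
                          rng.integers(180, 240, 100)]).astype(np.uint8)
    t = otsu_threshold(img.reshape(20, 30))
    print(t)  # falls between the two intensity modes
    ```

    Pixels at or above the threshold form the binary defect candidates, which the paper then filters by the ε-sized threshold window.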

  20. Development of a generally applicable morphokinetic algorithm capable of predicting the implantation potential of embryos transferred on Day 3

    Science.gov (United States)

    Petersen, Bjørn Molt; Boel, Mikkel; Montag, Markus; Gardner, David K.

    2016-01-01

    STUDY QUESTION Can a generally applicable morphokinetic algorithm suitable for Day 3 transfers of time-lapse monitored embryos originating from different culture conditions and fertilization methods be developed for the purpose of supporting the embryologist's decision on which embryo to transfer back to the patient in assisted reproduction? SUMMARY ANSWER The algorithm presented here can be used independently of culture conditions and fertilization method and provides predictive power not surpassed by other published algorithms for ranking embryos according to their blastocyst formation potential. WHAT IS KNOWN ALREADY Generally applicable algorithms have so far been developed only for predicting blastocyst formation. A number of clinics have reported validated implantation prediction algorithms, which have been developed based on clinic-specific culture conditions and clinical environment. However, a generally applicable embryo evaluation algorithm based on actual implantation outcome has not yet been reported. STUDY DESIGN, SIZE, DURATION Retrospective evaluation of data extracted from a database of known implantation data (KID) originating from 3275 embryos transferred on Day 3 conducted in 24 clinics between 2009 and 2014. The data represented different culture conditions (reduced and ambient oxygen with various culture medium strategies) and fertilization methods (IVF, ICSI). The capability to predict blastocyst formation was evaluated on an independent set of morphokinetic data from 11 218 embryos which had been cultured to Day 5. PARTICIPANTS/MATERIALS, SETTING, METHODS The algorithm was developed by applying automated recursive partitioning to a large number of annotation types and derived equations, progressing to a five-fold cross-validation test of the complete data set and a validation test of different incubation conditions and fertilization methods. The results were expressed as receiver operating characteristics curves using the area under the
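
    The results above are expressed as receiver operating characteristic (ROC) curves and the area under them (AUC). A compact sketch uses the rank interpretation of AUC: the probability that a randomly chosen implanted (KID-positive) embryo is scored above a randomly chosen non-implanted one. The scores below are made-up examples:

    ```python
    # AUC computed directly from its rank interpretation: the fraction of
    # (positive, negative) pairs the model orders correctly, ties counted 0.5.
    def roc_auc(scores_pos, scores_neg):
        wins = sum((p > n) + 0.5 * (p == n)
                   for p in scores_pos for n in scores_neg)
        return wins / (len(scores_pos) * len(scores_neg))

    implanted = [0.9, 0.8, 0.6]      # hypothetical scores for KID-positive embryos
    not_implanted = [0.7, 0.4, 0.3]  # hypothetical scores for KID-negative embryos
    print(roc_auc(implanted, not_implanted))  # 8 of 9 pairs ordered correctly
    ```

    An AUC of 0.5 means the ranking is no better than chance, which is why ranking algorithms such as this one are compared by their area under the ROC curve.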

  1. Powerful infrared emitting diodes

    Directory of Open Access Journals (Sweden)

    Kogan L. M.

    2012-02-01

    Full Text Available Powerful infrared LEDs with emission wavelengths of 805 ± 10, 870 ± 20 and 940 ± 10 nm, developed at SPC OED "OPTEL", are presented in this article. The radiant intensity of the beam diodes is up to 4 W/sr in continuous mode and up to 100 W/sr in pulse mode. The radiation power of wide-angle LEDs reaches 1 W in continuous mode. The external quantum efficiency of the emitting IR diodes runs up to 30%. Infrared diode modules with a block of flat Fresnel lenses, with radiant intensity up to 70 W/sr, have also been created.

  2. The development of gamma energy identify algorithm for compact radiation sensors using stepwise refinement technique

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Jun [Div. of Radiation Regulation, Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Ye Won; Kim, Hyun Duk; Cho, Gyu Seong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Yi, Yun [Dept. of of Electronics and Information Engineering, Korea University, Seoul (Korea, Republic of)

    2017-06-15

    A gamma energy identification algorithm using spectral decomposition combined with a smoothing method is suggested to confirm the existence of artificial radioisotopes. The algorithm combines the original pattern recognition method with a smoothing method to enhance the gamma energy identification performance of radiation sensors that have low energy resolution. The gamma energy identification algorithm for the compact radiation sensor is a three-step refinement process. Firstly, the magnitude set is calculated by the original spectral decomposition. Secondly, the modeling error in the magnitude set is reduced by the smoothing method. Thirdly, the expected gamma energy is finally decided based on the enhanced magnitude set resulting from the spectral decomposition with the smoothing method. The algorithm was optimized for the designed radiation sensor, composed of a CsI(Tl) scintillator and a silicon PIN diode. The two performance parameters used to evaluate the algorithm are the accuracy of the expected gamma energy and the number of repeated calculations. With this modeling-error reduction method, the gamma energy was accurately identified for a single-energy gamma source, and the average error decreased by half for multi-energy gamma sources in comparison to the original spectral decomposition. In addition, the number of repeated calculations also decreased by half, even in low-fluence conditions under 10{sup 4} per 0.09 cm{sup 2} of the scintillator surface. Through the development of this algorithm, we have confirmed the possibility of developing a product that can identify nearby artificial radionuclides using inexpensive radiation sensors that are easy for the public to use. Therefore, it can contribute to reducing public anxiety about exposure by determining the presence of artificial radionuclides in the vicinity.
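
    The decompose-smooth-decide idea can be sketched with synthetic data: smooth a noisy low-resolution spectrum, express it as a combination of reference responses at known gamma energies, and report the energy with the largest magnitude. The Gaussian reference shapes, isotope lines, plain least-squares solver, and the choice to smooth the spectrum before decomposition are illustrative simplifications, not the authors' model:

    ```python
    import numpy as np

    # Reference response of the (assumed) low-resolution detector to a
    # single gamma line: a broad Gaussian centred on the line energy.
    def gaussian_response(centers, energy, width=30.0):
        return np.exp(-0.5 * ((centers - energy) / width) ** 2)

    channels = np.linspace(0, 1500, 256)           # keV per channel centre
    library_energies = [662.0, 1173.0, 1332.0]     # Cs-137 and Co-60 lines
    R = np.stack([gaussian_response(channels, e) for e in library_energies], axis=1)

    # Simulated measurement: a Cs-137 source plus detector noise.
    measured = 5.0 * gaussian_response(channels, 662.0)
    measured += np.random.default_rng(0).normal(0, 0.2, channels.size)

    kernel = np.ones(5) / 5.0                      # smoothing step (moving average)
    smoothed = np.convolve(measured, kernel, mode="same")

    # Decomposition step: magnitude set from least squares against the library.
    mags, *_ = np.linalg.lstsq(R, smoothed, rcond=None)
    best = library_energies[int(np.argmax(mags))]  # decision step
    print(best)  # the Cs-137 line is identified
    ```

    Smoothing suppresses the noise that would otherwise leak into the magnitudes of the wrong library entries, which mirrors the abstract's claim that the error halves for multi-energy sources.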

  3. Performance of a convective, infrared and combined infrared- convective heated conveyor-belt dryer.

    Science.gov (United States)

    El-Mesery, Hany S; Mwithiga, Gikuru

    2015-05-01

    A conveyor-belt dryer was developed using a combined infrared and hot-air heating system that can be used for drying fruits and vegetables. The drying system, having two chambers, was fitted with infrared radiation heaters, and through-flow hot air was provided by a convective heating system. The system was designed to operate under either infrared radiation and cold air (IR-CA) settings of 2000 W/m(2) with forced ambient air at 30 °C and an air flow of 0.6 m/s, or a combined infrared and hot air convection (IR-HA) setting with infrared intensity set at 2000 W/m(2) and hot air at 60 °C blown through the dryer at a velocity of 0.6 m/s, or hot air convection (HA) at an air temperature of 60 °C and an air flow velocity of 0.6 m/s but without infrared heating. Apple slices dried under the different dryer settings were evaluated for quality and energy requirements. It was found that drying of apple (Golden Delicious) slices took place in the falling-rate drying period, and no constant-rate period of drying was observed under any of the test conditions. The IR-HA setting was 57.5 and 39.1 % faster than the IR-CA and HA settings, respectively. Specific energy consumption was lower and thermal efficiency was higher for the IR-HA setting when compared to both the IR-CA and HA settings. The rehydration ratio, shrinkage and colour properties of apples dried under IR-HA conditions were better than for either IR-CA or HA.

  4. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  5. Challenges to Global Implementation of Infrared Thermography Technology: Current Perspective.

    Science.gov (United States)

    Shterenshis, Michael

    2017-01-01

    Medical infrared thermography (IT) produces an image of the infrared waves emitted by the human body as part of the thermoregulation process, which can vary in intensity based on the health of the person. This review analyzes recent developments in the use of infrared thermography as a screening and diagnostic tool in clinical and nonclinical settings, and identifies possible future routes for improvement of the method. Currently, infrared thermography is not considered a fully reliable diagnostic method. If a standard infrared protocol is established and a normative database becomes available, infrared thermography may become a reliable method for detecting inflammatory processes.

  6. Applications of infrared technology; Proceedings of the Meeting, London, England, June 9, 10, 1988

    International Nuclear Information System (INIS)

    Williams, T.L.

    1988-01-01

    Recent developments in thermal imaging and other infrared systems relating to military, industrial, medical, and scientific applications are reviewed. Papers are presented on a new thermal imager using a linear pyroelectric detector array; a multichannel near-infrared spectroradiometer; technological constraints on the use of thermal imagery for remote sensing; and the infrared optical system of the improved stratospheric and mesospheric sounder. Other topics discussed include infrared thermography development for composite material evaluation, an infrared process linescanner, and an optical infrared staring radiometer.

  7. FY 2004 Infrared Photonics Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anheier, Norman C.; Allen, Paul J.; Keller, Paul E.; Bennett, Wendy D.; Martin, Peter M.; Johnson, Bradley R.; Sundaram, S. K.; Riley, Brian J.; Martinez, James E.; Qiao, Hong (Amy); Schultz, John F.

    2004-10-01

    Research done by the Infrared Photonics team at PNNL is focused on developing miniaturized integrated optics for the MWIR and LWIR by exploiting the unique optical and material properties of chalcogenide glass. PNNL has developed thin film deposition capabilities, direct-laser writing techniques, IR photonic device demonstration, holographic optical element design and fabrication, photonic device modeling, and advanced optical metrology - all specific to chalcogenide glass. Chalcogenide infrared photonics provides a pathway to Quantum Cascade Laser (QCL) transmitter miniaturization. QCLs provide a viable infrared laser source for a new class of laser transmitters capable of meeting the performance requirements for a variety of national security sensing applications. The high output power, small size, and superb stability and modulation characteristics of QCLs make them amenable for integration as transmitters into ultra-sensitive, ultra-selective point sampling and remote short-range chemical sensors that are particularly useful for nuclear nonproliferation missions.

  8. Orbiting Carbon Observatory-2 (OCO-2) cloud screening algorithms: validation against collocated MODIS and CALIOP data

    Science.gov (United States)

    Taylor, Thomas E.; O'Dell, Christopher W.; Frankenberg, Christian; Partain, Philip T.; Cronk, Heather Q.; Savtchenko, Andrey; Nelson, Robert R.; Rosenthal, Emily J.; Chang, Albert Y.; Fisher, Brenden; Osterman, Gregory B.; Pollock, Randy H.; Crisp, David; Eldering, Annmarie; Gunson, Michael R.

    2016-03-01

    The objective of the National Aeronautics and Space Administration's (NASA) Orbiting Carbon Observatory-2 (OCO-2) mission is to retrieve the column-averaged carbon dioxide (CO2) dry air mole fraction (XCO2) from satellite measurements of reflected sunlight in the near-infrared. These estimates can be biased by clouds and aerosols, i.e., contamination, within the instrument's field of view. Screening of the most contaminated soundings minimizes unnecessary calls to the computationally expensive Level 2 (L2) XCO2 retrieval algorithm. Hence, robust cloud screening methods have been an important focus of the OCO-2 algorithm development team. Two distinct, computationally inexpensive cloud screening algorithms have been developed for this application. The A-Band Preprocessor (ABP) retrieves the surface pressure using measurements in the 0.76 µm O2 A band, neglecting scattering by clouds and aerosols, which introduce photon path-length differences that can cause large deviations between the expected and retrieved surface pressure. The Iterative Maximum A Posteriori (IMAP) Differential Optical Absorption Spectroscopy (DOAS) Preprocessor (IDP) retrieves independent estimates of the CO2 and H2O column abundances using observations taken at 1.61 µm (weak CO2 band) and 2.06 µm (strong CO2 band), while neglecting atmospheric scattering. The CO2 and H2O column abundances retrieved in these two spectral regions differ significantly in the presence of cloud and scattering aerosols. The combination of these two algorithms, which are sensitive to different features in the spectra, provides the basis for cloud screening of the OCO-2 data set. To validate the OCO-2 cloud screening approach, collocated measurements from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the Aqua platform, were compared to results from the two OCO-2 cloud screening algorithms. With tuning of algorithmic threshold parameters that allows for processing of ≃ 20-25 % of all OCO-2 soundings
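
    The way the two preprocessor tests combine into a single screen can be sketched as follows. This is illustrative only, not the operational OCO-2 code; the thresholds and function names are hypothetical placeholders for the tuned parameters described above.

```python
# Illustrative sketch (not the operational OCO-2 code) of combining two
# independent preprocessor tests into one cloud screen. The threshold
# values below are hypothetical placeholders for the tuned parameters.

def abp_flag(retrieved_psurf_hpa, prior_psurf_hpa, threshold_hpa=25.0):
    """A-Band Preprocessor test: a large surface-pressure deviation
    suggests photon path-length modification by cloud or aerosol."""
    return abs(retrieved_psurf_hpa - prior_psurf_hpa) > threshold_hpa

def idp_flag(co2_weak, co2_strong, max_ratio_dev=0.04):
    """IDP test: CO2 columns retrieved from the weak (1.61 um) and
    strong (2.06 um) bands should agree for clear scenes."""
    return abs(co2_weak / co2_strong - 1.0) > max_ratio_dev

def sounding_is_cloudy(retrieved_psurf, prior_psurf, co2_weak, co2_strong):
    # A sounding is screened out if either test flags it.
    return (abp_flag(retrieved_psurf, prior_psurf)
            or idp_flag(co2_weak, co2_strong))

clear_flag = sounding_is_cloudy(1008.0, 1010.0, co2_weak=399.0, co2_strong=400.0)
cloudy_flag = sounding_is_cloudy(960.0, 1010.0, co2_weak=380.0, co2_strong=400.0)
```

    The either/or combination reflects the complementarity noted above: each test is sensitive to a different spectral signature of contamination.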

  9. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D.

  11. Type Ia Supernova Light Curve Inference: Hierarchical Models for Nearby SN Ia in the Optical and Near Infrared

    Science.gov (United States)

    Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.

    2010-01-01

    I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework coherently incorporates multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, peculiar velocity dispersion, and distances, enabling probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov Chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
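
    The Gibbs-sampling machinery behind BayeSN can be illustrated on a toy target: alternately drawing each parameter from its conditional distribution, here for a bivariate Gaussian with correlation rho standing in for the full hierarchical posterior. This sketch is not the BayeSN code itself.

```python
# Toy Gibbs sampler: alternately sample each coordinate from its
# conditional distribution of a standard bivariate normal with
# correlation rho. A stand-in for the hierarchical SN Ia posterior.
import math
import random

def gibbs_bivariate_normal(rho, n_steps=20000, seed=42):
    random.seed(seed)
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)    # conditional standard deviation
    samples = []
    for _ in range(n_steps):
        x = random.gauss(rho * y, s)  # x | y ~ N(rho*y, 1 - rho^2)
        y = random.gauss(rho * x, s)  # y | x ~ N(rho*x, 1 - rho^2)
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8)
burn = samples[1000:]                 # discard burn-in
mean_x = sum(p[0] for p in burn) / len(burn)      # should be near 0
corr_est = sum(p[0] * p[1] for p in burn) / len(burn)  # near rho = 0.8
```

    The same alternating-conditional structure, applied block by block to supernova-level and population-level parameters, is what allows a Gibbs sampler to explore a high-dimensional hierarchical posterior.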

  12. Feasibility of field portable near infrared (NIR) spectroscopy to determine cyanide concentrations in soil

    Science.gov (United States)

    Sut, Magdalena; Fischer, Thomas; Repmann, Frank; Raab, Thomas

    2013-04-01

    In Germany, at more than 1000 sites, soil is polluted with an anthropogenic contaminant in the form of iron-cyanide complexes. These contaminations were caused by former Manufactured Gas Plants (MGPs), where gas for lighting was produced by coal gasification. The production of manufactured gas was curtailed around 1950, leading to the closure of the MGPs. Our study describes the application of a Polychromix Handheld Field Portable Near-Infrared (NIR) Analyzer to predict cyanide concentrations in soil. Now that soil remediation is of major importance, there is a need to develop rapid and non-destructive methods for contaminant determination in the field. In situ analysis enables localization of 'hot spots' and is cheap and time saving in comparison to laboratory methods. This paper presents a novel use of NIR spectroscopy, in which a calibration model was developed using multivariate calibration algorithms in order to determine the NIR spectral response to the cyanide concentration in soil samples. As a control, the contaminant concentration was determined using conventional Flow Injection Analysis (FIA). The experiments revealed that portable near-infrared spectrometers can be a reliable device for identifying contamination 'hot spots' where cyanide concentrations are higher than 2400 mg kg-1 in the field and >1750 mg kg-1 after sample preparation in the laboratory, but cannot replace traditional laboratory analyses due to high limits of detection.
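
    Since the abstract does not name the multivariate calibration algorithm, the sketch below uses ordinary least squares on synthetic spectra as an illustrative stand-in for building and applying such a calibration model; all numbers are synthetic.

```python
# Illustrative multivariate calibration of spectra against reference
# concentrations. Ordinary least squares stands in for the unspecified
# calibration algorithm; the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 30, 5
true_coef = np.array([120.0, -40.0, 8.0, 0.0, 15.0])  # hypothetical

X = rng.normal(size=(n_samples, n_wavelengths))            # spectra
y = X @ true_coef + rng.normal(scale=0.5, size=n_samples)  # reference conc.

# Calibration: regress concentration on spectra (with intercept term).
Xa = np.hstack([X, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)

# Prediction for a new, unseen spectrum.
x_new = rng.normal(size=n_wavelengths)
y_pred = np.append(x_new, 1.0) @ coef
y_true = x_new @ true_coef
```

    In practice, reference values from a laboratory method (here, FIA) play the role of `y`, and the calibrated model is then applied to field spectra.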

  13. An improved non-uniformity correction algorithm and its GPU parallel implementation

    Science.gov (United States)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is seriously affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint was proposed. We put forward a new way to estimate the spatial low-frequency component. First, the details and contours of the input image were obtained by minimizing the local Gaussian curvature and the mean curvature of the image surface, respectively. Then, a guided filter was utilized to combine these two parts into an estimate of the spatial low-frequency component. Finally, this SLP component was brought into the SLP-THP method to achieve non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicated that the proposed algorithm can reduce the non-uniformity without losing detail. After that, a GPU-based parallel implementation that runs 150 times faster than the CPU version was presented, showing that the proposed algorithm has great potential for real-time application.
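
    The underlying SLP-THP idea can be illustrated with a simplified numpy sketch: estimate the fixed-pattern noise as the temporally averaged difference between each frame and its spatial low-pass component, then subtract it. A plain box blur stands in here for the curvature-constrained, guided-filter SLP estimate of the proposed method.

```python
# Simplified SLP-THP sketch: a box blur stands in for the SLP filter,
# and the temporal mean of (frame - SLP(frame)) estimates the
# fixed-pattern non-uniformity, which is then removed.
import numpy as np

def box_blur(img, k=5):
    """Separable box filter with edge padding (the SLP stand-in)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    ker = np.ones(k) / k
    h = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="valid"), 0, h)

def slp_thp_correct(frames, k=5):
    # residual = frame - SLP(frame); its temporal mean approximates the
    # fixed-pattern noise shared by all frames.
    residuals = np.stack([f - box_blur(f, k) for f in frames])
    fpn_estimate = residuals.mean(axis=0)
    return [f - fpn_estimate for f in frames]

rng = np.random.default_rng(1)
fpn = rng.normal(scale=5.0, size=(32, 32))            # per-pixel offsets
scenes = [np.full((32, 32), 100.0 + 10 * t) for t in range(8)]
raw = [s + fpn for s in scenes]                       # simulated raw frames
corrected = slp_thp_correct(raw)
raw_std = float(np.std(raw[0] - scenes[0]))           # error before
residual_std = float(np.std(corrected[0] - scenes[0]))  # error after
```

    The paper's contribution is precisely a better SLP estimate than the naive blur used here, so that scene detail is not mistaken for non-uniformity.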

  14. A physics-based algorithm for retrieving land-surface emissivity and temperature from EOS/MODIS data

    International Nuclear Information System (INIS)

    Wan, Z.; Li, Z.L.

    1997-01-01

    The authors have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4 and 0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 µm IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2 and 3 K.

  15. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    Development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation studies have not been actively conducted. As one of the efforts for reducing human error in NPPs and as a technical advance toward automation, the ultimate goal of this research is to develop a human-level decision-making algorithm for NPPs during emergency situations. The concepts of supervised learning (SL), reinforcement learning (RL), policy networks, value networks, and Monte Carlo tree search (MCTS), which have been applied to decision-making algorithms in other fields, are introduced and combined with nuclear-field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  16. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine function. In the algorithm, as many random individuals as the number of search agents are created, with a uniform distribution over each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas expected to give good results are scanned instead of the whole solution space. In the tests performed, Gold-SA achieves better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and converges faster, which increases the method's practical value.
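
    The sine-driven update and golden-section narrowing can be sketched as follows on the sphere test function. This is an illustrative reconstruction, not the authors' reference implementation, and the parameter choices are assumptions.

```python
# Illustrative Gold-SA-style minimizer: positions move toward the best
# solution, modulated by sine terms, with golden-section coefficients
# x1, x2 narrowing the search. Parameter settings are assumptions.
import math
import random

def gold_sa_minimize(f, dim=2, agents=20, iters=200, lo=-10.0, hi=10.0, seed=3):
    random.seed(seed)
    tau = (math.sqrt(5) - 1) / 2          # golden ratio conjugate
    a, b = -math.pi, math.pi              # golden-section interval
    x1 = a * (1 - tau) + b * tau
    x2 = a * tau + b * (1 - tau)
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(agents)]
    best = min(pop, key=f)[:]
    init_val = f(best)
    for _ in range(iters):
        for p in pop:
            r1 = random.uniform(0, 2 * math.pi)
            r2 = random.uniform(0, math.pi)
            for d in range(dim):
                # sine-modulated move toward the best-known position
                p[d] = p[d] * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(
                    x1 * best[d] - x2 * p[d])
                p[d] = min(max(p[d], lo), hi)
            if f(p) < f(best):
                best = p[:]
    return best, f(best), init_val

sphere = lambda v: sum(x * x for x in v)
best, val, init_val = gold_sa_minimize(sphere)
```

    Because the move depends only on a few random draws and the current best, the operator count stays small, which is the source of the low parameter overhead claimed above.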

  17. Development of Electronic Nose and Near Infrared Spectroscopy Analysis Techniques to Monitor the Critical Time in SSF Process of Feed Protein

    Directory of Open Access Journals (Sweden)

    Hui Jiang

    2014-10-01

    In order to assure the consistency of final product quality, fast and effective process monitoring is a growing need in the solid state fermentation (SSF) industry. This work investigated the potential of non-invasive techniques combined with chemometric methods to monitor time-related changes that occur during the SSF process of feed protein. Four fermentation trials were monitored by an electronic nose device and a near infrared spectroscopy (NIRS) spectrometer. First, principal component analysis (PCA) and independent component analysis (ICA) were applied to feature extraction and information fusion, respectively. Then, the BP_AdaBoost algorithm was used to develop the fused model for monitoring the critical time in the SSF process of feed protein. Experimental results showed that the identification results of the fusion model are much better than those of the single-technique models in both the training and validation sets, and that the fusion model is also less complex. The overall results demonstrate the high potential of integrating electronic nose and NIRS techniques for online monitoring of the critical time in the SSF process; data fusion from multiple techniques can significantly improve monitoring performance.

  18. Handbook of infrared standards II with spectral coverage between

    CERN Document Server

    Meurant, Gerard

    1993-01-01

    This timely compilation of infrared standards has been developed for use by infrared researchers in chemistry, physics, engineering, astrophysics, and laser and atmospheric sciences. Providing maps of closely spaced molecular spectra along with their measured wavenumbers between 1.4 µm and 4 µm, this handbook will complement the 1986 Handbook of Infrared Standards that included spectral coverage between 3 and 2600 µm. It will serve as a necessary reference for all researchers conducting spectroscopic investigations in the near-infrared region. Key Features: - Provides all new spec

  19. Near infrared spectroscopy in the development of solid dosage forms.

    Science.gov (United States)

    Räsänen, Eetu; Sandler, Niklas

    2007-02-01

    The use of near infrared (NIR) spectroscopy has rapidly grown partly due to demands of process analytical applications in the pharmaceutical industry. Furthermore, newest regulatory guidelines have advanced the increase of the use of NIR technologies. The non-destructive and non-invasive nature of measurements makes NIR a powerful tool in characterization of pharmaceutical solids. These benefits among others often make NIR advantageous over traditional analytical methods. However, in addition to NIR, a wide variety of other tools are naturally also available for analysis in pharmaceutical development and manufacturing, and those can often be more suitable for a given application. The versatility and rapidness of NIR will ensure its contribution to increased process understanding, better process control and improved quality of drug products. This review concentrates on the use of NIR spectroscopy from a process research perspective and highlights recent applications in the field.

  20. Collaboration space division in collaborative product development based on a genetic algorithm

    Science.gov (United States)

    Qian, Xueming; Ma, Yanqiao; Feng, Huan

    2018-02-01

    Advances in the global environment, rapidly changing markets, and information technology have created a new stage for design. In such an environment, one strategy for success is Collaborative Product Development (CPD). Organizing people effectively is the goal of Collaborative Product Development, and it must be done with a degree of foresight. The development group's activities are influenced not only by the methods and decisions available, but also by the correlations among personnel. Grouping the personnel according to their correlation intensity is defined as collaboration space division (CSD). Upon establishment of a correlation matrix (CM) of personnel and an analysis of the collaboration space, the genetic algorithm (GA) and the minimum description length (MDL) principle may be used as tools in optimizing the collaboration space. The MDL principle is used in setting up the objective function, and the GA is used as the search methodology. The algorithm encodes spatial information as a binary chromosome. After repeated crossover, mutation, selection, and reproduction, a robust chromosome is found, which can be decoded into an optimal collaboration space. This new method can determine the members of each sub-space and the individual groupings within the staff. Furthermore, the intersection of sub-spaces and the public persons belonging to all sub-spaces can be determined simultaneously.
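
    The chromosome encoding and evolutionary loop can be sketched on a toy correlation matrix. Because the MDL-based objective is not spelled out in the abstract, a simple correlation-cut objective with a penalty for degenerate one-group solutions stands in for it here.

```python
# GA sketch for collaboration space division: a binary chromosome
# assigns each person to one of two sub-spaces; fitness minimizes the
# correlation "cut" between sub-spaces (a stand-in for the MDL
# objective) and penalizes degenerate one-group solutions.
import random

random.seed(7)
N = 8
# Symmetric correlation matrix with two obvious 4-person clusters:
# high correlation (0.9) inside a cluster, low (0.1) across clusters.
CM = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i != j:
            CM[i][j] = 0.9 if (i < 4) == (j < 4) else 0.1

def fitness(chrom):
    if len(set(chrom)) < 2:
        return -1e9                         # reject one-group solutions
    cut = sum(CM[i][j] for i in range(N) for j in range(i + 1, N)
              if chrom[i] != chrom[j])
    return -cut                             # higher is better

def evolve(pop_size=30, gens=60, pmut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    init_best = max(fitness(c) for c in pop)
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, N - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < pmut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best), init_best

best, best_fit, init_fit = evolve()
groups = ([i for i in range(N) if best[i] == 0],
          [i for i in range(N) if best[i] == 1])
```

    Extending the chromosome to more than two sub-spaces (more bits per person) follows the same pattern.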

  1. Wireless infrared computer control

    Science.gov (United States)

    Chen, George C.; He, Xiaofei

    2004-04-01

    A wireless mouse is not restricted by a cable's length and thus has an advantage over its wired counterpart. However, the mice available in the market have a detection range of less than 2 meters and angular coverage of less than 180 degrees. Furthermore, commercial infrared mice rely on a track ball and rollers to detect movement. This restricts their use on occasions where users want dynamic movement, such as presentations and meetings. This paper presents our newly developed infrared wireless mouse, which has a detection range of 6 meters and angular coverage of 180 degrees. The new mouse uses buttons instead of a traditional track ball and is designed as a hand-held device like a remote controller. It enables users to control the cursor from a distance, freeing mouse operation from the computer.

  2. Development of a new diffuse near-infrared food measuring

    Science.gov (United States)

    Zhang, Jun; Piao, Renguan

    2006-11-01

    Industries from agriculture to petrochemistry have found near infrared (NIR) spectroscopic analysis useful for quality control and quantitative analysis of materials and products. The general chemical, polymer, petrochemical, agriculture, food, and textile industries currently use NIR spectroscopic methods for analysis. In this study, we developed a new type of NIR instrument for food measurement. The instrument consists of a light source with 12 filters serving as the wavelength-selecting element. Its special feature is that a mirror is used to split the light into two beams, and two PbS detectors are used. One detector collects the radiation of one beam directly, and its value serves as the reference instead of a standard white surface. The other beam irradiates the sample surface, and the diffusely reflected light is collected by the second detector. The values from the two detectors are compared and the absorbance is computed. We tested the performance of the NIR instrument in determining the protein and fat content of milk powder. The calibration showed the accuracy of the instrument in practice.

  3. SIBI: A compact hyperspectral camera in the mid-infrared

    Science.gov (United States)

    Pola Fossi, Armande; Ferrec, Yann; Domel, Roland; Coudrain, Christophe; Guerineau, Nicolas; Roux, Nicolas; D'Almeida, Oscar; Bousquet, Marc; Kling, Emmanuel; Sauer, Hervé

    2015-10-01

    Recent developments in unmanned aerial vehicles have increased the demand for more and more compact optical systems. In order to bring solutions to this demand, several infrared systems are being developed at ONERA such as spectrometers, imaging devices, multispectral and hyperspectral imaging systems. In the field of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated to develop a prototype called SIBI, which stands for "Spectro-Imageur Birefringent Infrarouge". It is a static Fourier transform imaging spectrometer which operates in the mid-wavelength infrared spectral range and uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have not been often used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, contrary to the visible spectral domain where the production of uniaxial crystals like calcite are mastered for various optical applications. In the following, we will present the design and the realization of SIBI as well as the first experimental results.

  4. Challenges to Global Implementation of Infrared Thermography Technology: Current Perspective

    Directory of Open Access Journals (Sweden)

    Michael Shterenshis

    2017-10-01

    Medical infrared thermography (IT) produces an image of the infrared waves emitted by the human body as part of the thermoregulation process that can vary in intensity based on the health of the person. This review analyzes recent developments in the use of infrared thermography as a screening and diagnostic tool in clinical and nonclinical settings, and identifies possible future routes for improvement of the method. Currently, infrared thermography is not considered to be a fully reliable diagnostic method. If a standard infrared protocol is established and a normative database is available, infrared thermography may become a reliable method for detecting inflammatory processes.

  5. Infrared irradiation of skin for the development of non-invasive health monitoring technologies

    Science.gov (United States)

    Abdussamad Abbas, Hisham; Triplett, Gregory

    2015-06-01

    Infrared radiation was employed to study the optical transmission properties of pigskin and the factors that influence transmission at room temperature. Skin samples from the forehead of piglets were irradiated using a pulsed infrared source, varying beam properties such as optical power, power density, and duty cycle, as well as sample thickness. Because infrared radiation in select instances can penetrate through thick, fleshy skin more easily than visible radiation, temperature fluctuations observed within the skin samples stemming from exposure-dependent absorption revealed interesting transmission properties and the limits of optical exposure. Pigskin was selected for this study since its structure most closely resembles that of human skin. Furthermore, the pulsed beam technique offers more precise control of heat generation within the skin than continuous operation. Through this effort, the correlated pulsed-beam parameters that influence infrared transmission were identified and varied to minimize the internal absorption losses through the dermis layers. The two most significant parameters for reducing absorption losses were the frequency and duty cycle of the pulsed beam. Using the Bouguer-Beer-Lambert law, the absorption coefficient is approximated from empirical data, while accepting that the absorption coefficient is neither uniform nor linear. Given that the optical source used in this study was single mode, the infrared spectra obtained from irradiated samples also reveal characteristics of the skin structure. Realization of appropriate sample conditions and exposure parameters that reduce light attenuation within the skin and sample degradation could open the way to novel non-invasive measuring techniques for health monitoring purposes.
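
    The absorption-coefficient approximation from the Bouguer-Beer-Lambert law, I = I0·exp(-αd), can be sketched directly; the transmission readings below are hypothetical.

```python
# Mean absorption coefficient from a single transmission measurement
# via the Bouguer-Beer-Lambert law, I = I0 * exp(-alpha * d).
# The readings below are hypothetical; a real sample is neither
# homogeneous nor scattering-free, as the abstract notes.
import math

def absorption_coefficient(i_transmitted, i_incident, thickness_mm):
    """Approximate mean absorption coefficient (per mm); assumes a
    homogeneous sample and ignores scattering losses."""
    return -math.log(i_transmitted / i_incident) / thickness_mm

# 25% of the incident intensity transmitted through a 2 mm sample.
alpha = absorption_coefficient(i_transmitted=0.25, i_incident=1.0,
                               thickness_mm=2.0)
```

    Repeating this estimate across thicknesses and pulse settings is how the non-uniform, nonlinear behavior noted above would show up, as a thickness-dependent alpha.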

  6. Genetic evolutionary taboo search for optimal marker placement in infrared patient setup

    International Nuclear Information System (INIS)

    Riboldi, M; Baroni, G; Spadea, M F; Tagaste, B; Garibaldi, C; Cambria, R; Orecchia, R; Pedotti, A

    2007-01-01

    In infrared patient setup, adequate selection of the external fiducial configuration is required for compensating inner target displacements (target registration error, TRE). Genetic algorithms (GA) and taboo search (TS) were applied in a newly designed approach to optimal marker placement: the genetic evolutionary taboo search (GETS) algorithm. In the GETS paradigm, multiple solutions are simultaneously tested in a stochastic evolutionary scheme, where taboo-based decision making and adaptive memory guide the optimization process. The GETS algorithm was tested on a group of ten prostate patients and compared to standard optimization and to randomly selected configurations. The changes in the optimal marker configuration when TRE is minimized for OARs were specifically examined. Optimal GETS configurations ensured a 26.5% mean decrease in the TRE value, versus 19.4% for conventional quasi-Newton optimization. Common features in GETS marker configurations were highlighted in the dataset of ten patients, even when multiple runs of the stochastic algorithm were performed. Including OARs in TRE minimization did not considerably affect the spatial distribution of GETS marker configurations. In conclusion, the GETS algorithm proved to be highly effective in solving the optimal marker placement problem. Further work is needed to embed site-specific deformation models in the optimization process.

  7. Chronic wrist pain: diagnosis and management. Development and use of a new algorithm

    NARCIS (Netherlands)

    van Vugt, R. M.; Bijlsma, J. W.; van Vugt, A. C.

    1999-01-01

    Chronic wrist pain can be difficult to manage and the differential diagnosis is extensive. To provide guidelines for assessment of the painful wrist an algorithm was developed to encourage a structured approach to the diagnosis and management of these patients. A review of the literature on causes

  8. Infrared astronomy

    International Nuclear Information System (INIS)

    Setti, G.; Fazio, G.

    1978-01-01

    This volume contains lectures describing the important achievements in infrared astronomy. The topics included are galactic infrared sources and their role in star formation, the nature of the interstellar medium and galactic structure, the interpretation of infrared, optical and radio observations of extra-galactic sources and their role in the origin and structure of the universe, instrumental techniques and a review of future space observations. (C.F.)

  9. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    Science.gov (United States)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hulls in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
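
    The direct least-squares ellipse-fitting step can be sketched without OpenCV by fitting a general conic to candidate LED positions and recovering its centre. The system described above uses OpenCV's ellipse fitting, so this numpy version is only an illustrative stand-in, with synthetic "LED" coordinates.

```python
# Direct least-squares conic fit: solve a*x^2 + b*x*y + c*y^2 + d*x
# + e*y = 1 for candidate marker positions, then recover the centre of
# the fitted ellipse (e.g. of the drogue's LED ring).
import numpy as np

def fit_conic_center(xs, ys):
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coef, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    a, b, c, d, e = coef
    # Centre of the conic a x^2 + b xy + c y^2 + d x + e y - 1 = 0
    # solves [[2a, b], [b, 2c]] @ centre = [-d, -e].
    M = np.array([[2 * a, b], [b, 2 * c]])
    return np.linalg.solve(M, np.array([-d, -e]))

# Synthetic "LED" positions on an ellipse centred at (2, 3).
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
xs = 2.0 + 5.0 * np.cos(t)
ys = 3.0 + 2.0 * np.sin(t)
cx, cy = fit_conic_center(xs, ys)
```

    A least-squares fit of this kind tolerates a few missing LEDs, which is why the pose pipeline can recover occluded points from the fitted geometry.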

  10. Scheduling language and algorithm development study. Appendix: Study approach and activity summary

    Science.gov (United States)

    1974-01-01

    The approach and organization of the study to develop a high level computer programming language and a program library are presented. The algorithm and problem modeling analyses are summarized. The approach used to identify and specify the capabilities required in the basic language is described. Results of the analyses used to define specifications for the scheduling module library are presented.

  11. Robust infrared target tracking using discriminative and generative approaches

    Science.gov (United States)

    Asha, C. S.; Narasimhadhan, A. V.

    2017-09-01

    The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much progress has been made on RGB video over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative stage makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments were conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision was accomplished compared with existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers clearly demonstrates an overall increase in performance.

  12. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm employing the two ANN models was developed. • The algorithm demonstrated improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm demonstrating improved thermal comfort and building energy efficiency for accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required to restore the current indoor temperature to the normal set-point temperature. Using numerical simulation, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The analysis showed that both ANN models achieved acceptable prediction accuracy when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than two conventional control methods, which employed, respectively, a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with two predictive and adaptive ANN models can be used to provide a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
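
    The role of the two predictors in such a control algorithm can be sketched as follows; the two functions are crude stand-ins for the trained ANN models, and their names and coefficients are illustrative assumptions, not the paper's models:

    ```python
    def predicted_cooling_energy(setback_c):
        # Stand-in for ANN #1: energy used during the unoccupied period
        # falls as the setback temperature rises (illustrative linear model).
        return max(0.0, 100.0 - 8.0 * (setback_c - 25.0))

    def predicted_restore_minutes(setback_c):
        # Stand-in for ANN #2: time needed to pull the room back to the
        # normal set-point grows with the setback temperature.
        return 10.0 * (setback_c - 25.0)

    def choose_setback(candidates, occupancy_in_minutes):
        """Pick the highest setback (lowest predicted energy) whose predicted
        restore time still fits before the occupants return."""
        feasible = [t for t in candidates
                    if predicted_restore_minutes(t) <= occupancy_in_minutes]
        return max(feasible) if feasible else min(candidates)

    # Unoccupied setback range from the study: 25-28 degC; occupants in 25 min.
    print(choose_setback([25, 26, 27, 28], occupancy_in_minutes=25))  # -> 27
    ```

    The adaptive part of the real algorithm comes from retraining the two models on observed building data, so the trade-off between setback depth and restore time tracks the actual building physics.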

  13. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 4: Preliminary nonscanner models and count conversion algorithms

    Science.gov (United States)

    Halyo, Nesim; Choi, Sang H.

    1987-01-01

    Two count conversion algorithms and the associated dynamic sensor model for the M/WFOV nonscanner radiometers are defined. The sensor model provides and updates the constants necessary for the conversion algorithms. The analysis develops mathematical models for the conversion of irradiance at the sensor field-of-view (FOV) limiter into data counts, derives from this model two algorithms for converting data counts to irradiance at the sensor FOV aperture, and develops measurement models that account for a specific target source together with a sensor. The resulting algorithms are of the gain/offset and Kalman filter types. The gain/offset algorithm was chosen because it provides sufficient accuracy with simpler computations.
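
    A gain/offset count conversion of the kind described is a linear map between irradiance and counts; a minimal sketch (the symbols `gain` and `offset` and the numbers are illustrative, the actual ERBE constants come from the sensor model):

    ```python
    def irradiance_to_counts(irradiance, gain, offset):
        """Forward sensor model: irradiance at the FOV aperture -> data counts."""
        return gain * irradiance + offset

    def counts_to_irradiance(counts, gain, offset):
        """Gain/offset count conversion: invert C = g*E + b for irradiance E."""
        return (counts - offset) / gain

    c = irradiance_to_counts(250.0, gain=4.0, offset=120.0)
    print(counts_to_irradiance(c, gain=4.0, offset=120.0))  # -> 250.0
    ```

    The dynamic sensor model's job in this scheme is to keep `gain` and `offset` up to date as the instrument drifts; the Kalman filter variant instead estimates the irradiance recursively from the count stream.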

  14. Challenges to Global Implementation of Infrared Thermography Technology: Current Perspective

    OpenAIRE

    Michael Shterenshis

    2017-01-01

    Medical infrared thermography (IT) produces an image of the infrared waves emitted by the human body as part of the thermoregulation process that can vary in intensity based on the health of the person. This review analyzes recent developments in the use of infrared thermography as a screening and diagnostic tool in clinical and nonclinical settings, and identifies possible future routes for improvement of the method. Currently, infrared thermography is not considered to be a fully reliable d...

  15. Development of a parallel genetic algorithm using MPI and its application in a nuclear reactor core. Design optimization

    International Nuclear Information System (INIS)

    Waintraub, Marcel; Pereira, Claudio M.N.A.; Baptista, Rafael P.

    2005-01-01

    This work presents the development of a distributed parallel genetic algorithm applied to nuclear reactor core design optimization. The parallelism was implemented with the Message Passing Interface (MPI) library, the standard for parallel computation on distributed-memory platforms; another important characteristic of MPI is its portability across architectures. The main objectives of this paper are: validation of the results obtained by applying the algorithm to a nuclear reactor core optimization problem, through comparison with previous results presented by Pereira et al.; and a performance test of the Brazilian Nuclear Engineering Institute (IEN) cluster on reactor physics optimization problems. The experiments demonstrated that the parallel genetic algorithm developed with the MPI library yielded significant gains in the obtained results and a marked reduction in processing time. Such results support the use of parallel genetic algorithms for the solution of nuclear reactor core optimization problems. (author)
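
    A distributed GA of this kind typically follows an island (deme) pattern; the sketch below mimics it serially, with plain list appends standing in for MPI message passing, and a toy 1-D objective standing in for the reactor-core fitness (all parameters are illustrative):

    ```python
    import random
    random.seed(1)

    def fitness(x):
        # Toy objective standing in for the core-design figure of merit.
        return -(x - 3.14) ** 2

    def step(pop):
        """One generation: binary tournament selection + Gaussian mutation."""
        new = []
        for _ in range(len(pop)):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) > fitness(b) else b
            new.append(parent + random.gauss(0, 0.1))
        return new

    # Two demes; in the MPI version each deme would run on its own node and
    # migrants would travel via MPI_Send / MPI_Recv instead of list appends.
    demes = [[random.uniform(-10, 10) for _ in range(20)] for _ in range(2)]
    for gen in range(60):
        demes = [step(p) for p in demes]
        if gen % 10 == 9:  # periodic migration: exchange current best individuals
            best0 = max(demes[0], key=fitness)
            best1 = max(demes[1], key=fitness)
            demes[0].append(best1)
            demes[1].append(best0)

    best = max((max(p, key=fitness) for p in demes), key=fitness)
    print(round(best, 2))
    ```

    Because each deme evolves independently between migrations, the pattern parallelizes with almost no communication, which is why MPI on a distributed-memory cluster suits it well.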

  16. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity, and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communication among computing nodes; allocation is therefore a key factor in the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost of each computing node in order to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms that optimize the allocation by considering spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm, from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K), integrating geometric and coordinate-free methods by merging local and global partitioning; and 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared on different factors. Further, we adopt the K&K algorithm for an experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. 
    This method can also be adopted for other relevant atmospheric and numerical
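
    The load-balancing core of subdomain allocation can be sketched with a greedy longest-processing-time (LPT) heuristic, which is far simpler than the ILP, K&K, and ASRG algorithms of the paper and ignores communication cost entirely (subdomain names and costs below are made up):

    ```python
    import heapq

    def allocate(subdomain_costs, n_nodes):
        """Assign subdomains to compute nodes, always giving the next most
        expensive subdomain to the currently least-loaded node (LPT heuristic)."""
        heap = [(0.0, node, []) for node in range(n_nodes)]
        heapq.heapify(heap)
        for sd, cost in sorted(subdomain_costs.items(), key=lambda kv: -kv[1]):
            load, node, assigned = heapq.heappop(heap)   # least-loaded node
            assigned.append(sd)
            heapq.heappush(heap, (load + cost, node, assigned))
        return {node: (load, assigned) for load, node, assigned in heap}

    # Hypothetical per-subdomain compute costs (e.g. grid cells per subdomain).
    costs = {"sd0": 7, "sd1": 5, "sd2": 4, "sd3": 4, "sd4": 3, "sd5": 1}
    for node, (load, sds) in sorted(allocate(costs, n_nodes=2).items()):
        print(node, load, sorted(sds))
    ```

    On this toy input the heuristic balances both nodes exactly (load 12 each); the paper's algorithms additionally weigh the spatial adjacency of subdomains so that neighboring subdomains, which must exchange halo data every timestep, land on the same or nearby nodes.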

  17. Computational and experimental research on infrared trace by human being contact

    Energy Technology Data Exchange (ETDEWEB)

    Xiong Zonglong; Yang Kuntao; Ding Wenxiu; Zhang Nanyangsheng; Zheng Wenheng

    2010-06-20

    The indoor detection of the human body's thermal trace plays an important role in infrared detection, scouting, infrared camouflage, and infrared rescue and tracking. Currently, quantitative description and analysis of this technology are lacking, owing to the absence of human infrared radiation analysis. To address this problem, we study the heating and cooling processes that follow body contact with, and removal from, an object. Through finite-element simulation and carefully designed experiments, an analytical model of the infrared trace of body contact is developed based on infrared physics and heat transfer theory. Using this model, the impact of body temperature on material thermal parameters is investigated. The sensitivity of the material thermal parameters, the thermal distribution, and the changes in thermograph contrast are then found and analyzed. Excellent agreement between the simulation and the experiments demonstrates the strong impact of temperature on material thermal parameters. In conclusion, the new model, simulation, and experimental results are beneficial to the future development and implementation of infrared trace technology.
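
    The cooling phase after contact can be approximated by Newtonian (exponential) cooling; the sketch below estimates how long a trace stays detectable. The time constant and NETD values are illustrative assumptions, not numbers from the paper:

    ```python
    import math

    def trace_contrast(delta_t0, tau_s, t_s):
        """Temperature contrast of the trace t_s seconds after hand removal,
        assuming Newtonian (exponential) cooling with time constant tau_s."""
        return delta_t0 * math.exp(-t_s / tau_s)

    def detectable_for(delta_t0, tau_s, netd):
        """Seconds for which the trace stays above the camera's
        noise-equivalent temperature difference (NETD)."""
        return tau_s * math.log(delta_t0 / netd)

    # A hand leaves a 4 K contrast on a surface with a 60 s time constant;
    # the camera NETD is 0.05 K.
    print(round(detectable_for(4.0, 60.0, 0.05), 1))  # seconds
    ```

    The time constant bundles the material's thermal parameters (conductivity, heat capacity, surface exchange), which is why the paper's sensitivity analysis of those parameters matters for predicting how long a trace survives.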

  18. Computational and experimental research on infrared trace by human being contact

    International Nuclear Information System (INIS)

    Xiong Zonglong; Yang Kuntao; Ding Wenxiu; Zhang Nanyangsheng; Zheng Wenheng

    2010-01-01

    The indoor detection of the human body's thermal trace plays an important role in infrared detection, scouting, infrared camouflage, and infrared rescue and tracking. Currently, quantitative description and analysis of this technology are lacking, owing to the absence of human infrared radiation analysis. To address this problem, we study the heating and cooling processes that follow body contact with, and removal from, an object. Through finite-element simulation and carefully designed experiments, an analytical model of the infrared trace of body contact is developed based on infrared physics and heat transfer theory. Using this model, the impact of body temperature on material thermal parameters is investigated. The sensitivity of the material thermal parameters, the thermal distribution, and the changes in thermograph contrast are then found and analyzed. Excellent agreement between the simulation and the experiments demonstrates the strong impact of temperature on material thermal parameters. In conclusion, the new model, simulation, and experimental results are beneficial to the future development and implementation of infrared trace technology.

  19. Observations of the Hubble Deep Field with the Infrared Space Observatory .2. Source detection and photometry

    DEFF Research Database (Denmark)

    Goldschmidt, P.; Oliver, S.J.; Serjeant, S.B.G.

    1997-01-01

    We present positions and fluxes of point sources found in the Infrared Space Observatory (ISO) images of the Hubble Deep Field (HDF) at 6.7 and 15 μm. We have constructed algorithmically selected 'complete' flux-limited samples of 19 sources in the 15-μm image, and seven sources in the 6.7-μm...

  20. Measuring river from the cloud - River width algorithm development on Google Earth Engine

    Science.gov (United States)

    Yang, X.; Pavelsky, T.; Allen, G. H.; Donchyts, G.

    2017-12-01

    Rivers are some of the most dynamic features of the terrestrial land surface. They help distribute freshwater, nutrients, and sediment, and they are also responsible for some of the greatest natural hazards. Despite their importance, our understanding of river behavior is limited at the global scale, in part because we do not have a river observational dataset that spans both time and space. Remote sensing data represent a rich, largely untapped resource for observing river dynamics. In particular, publicly accessible archives of satellite optical imagery, which date back to the 1970s, can be used to study the planview morphodynamics of rivers at the global scale. Here we present an image processing algorithm, developed on the Google Earth Engine cloud-based platform, that automatically extracts river centerlines and widths from Landsat 5, 7, and 8 scenes at 30 m resolution. Our algorithm uses the latest monthly global surface water history dataset and the existing Global River Width from Landsat (GRWL) dataset to efficiently extract river masks from each Landsat scene. A combination of distance transform and skeletonization techniques is then used to extract river centerlines. Finally, our algorithm calculates the wetted river width at each centerline pixel, perpendicular to the local centerline direction. We validated this algorithm using in situ data from 16 USGS gauge stations (N=1781). We find that 92% of the width differences are within 60 m (i.e., the minimum length of two Landsat pixels). Leveraging Earth Engine's infrastructure of collocated data and processing power, our goal is to use this algorithm to reconstruct the morphodynamic history of rivers globally by processing over 100,000 Landsat 5 scenes, covering 1984 to 2013.
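
    The centerline-and-width step can be illustrated on a synthetic, axis-aligned river mask; for simplicity this sketch takes the centerline as the per-column midpoint of wet pixels and the width as the wet-pixel count, rather than the full distance-transform and skeletonization pipeline the algorithm applies to real Landsat scenes:

    ```python
    import numpy as np

    def centerline_and_width(mask):
        """For each image column crossed by the river, return the centerline
        row (midpoint of wet pixels) and the wetted width in pixels."""
        rows, widths = [], []
        for col in range(mask.shape[1]):
            wet = np.flatnonzero(mask[:, col])
            if wet.size:
                rows.append((wet[0] + wet[-1]) / 2.0)
                widths.append(int(wet.size))
        return rows, widths

    # Synthetic 20x10 scene with a horizontal river 4 pixels wide.
    mask = np.zeros((20, 10), dtype=bool)
    mask[8:12, :] = True
    rows, widths = centerline_and_width(mask)
    print(rows[0], widths[0])  # -> 9.5 4
    ```

    At Landsat's 30 m resolution a 4-pixel width corresponds to 120 m; the real algorithm measures the width along the perpendicular to the local centerline direction so that meandering rivers are handled correctly.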

  1. Systems Engineering Approach to Develop Guidance, Navigation and Control Algorithms for Unmanned Ground Vehicle

    Science.gov (United States)

    2016-09-01

    GPS: Global Positioning System; HNA: hybrid navigation algorithm; HRI: human-robot interface; IED: Improvised Explosive Device; IMU: inertial measurement unit; ... Potential Field Method; R&D: research and development; RDT&E: research, development, test and evaluation; RF: radiofrequency; RGB: red, green and blue; ROE: ... were radiofrequency (RF) controlled and pneumatically actuated upon receiving the wireless commands from the radio operator. The pairing of such an

  2. Rapid Mental Computation System as a Tool for Algorithmic Thinking of Elementary School Students Development

    Directory of Open Access Journals (Sweden)

    Rushan Ziatdinov

    2012-07-01

    In this paper, we describe the possibilities of using a rapid mental computation system in elementary education. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. These operations are in fact simple algorithms that can develop or improve the algorithmic thinking of pupils. Using a rapid mental computation system thus lays a basis for the study of computer science in secondary school.
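
    One classic trick of the kind such systems teach (this specific trick is our illustrative example, not necessarily one from the paper) is multiplying a two-digit number by 11: write the two digits and insert their sum between them, carrying into the leading digit if the sum exceeds 9:

    ```python
    def times_11(n):
        """Mental-math multiplication of a two-digit number by 11:
        'ab' * 11 = a, (a+b), b  -- carrying if a+b > 9."""
        a, b = divmod(n, 10)
        middle = a + b
        return (a + middle // 10) * 100 + (middle % 10) * 10 + b

    print(times_11(23), times_11(87))  # -> 253 957
    ```

    The point the abstract makes is exactly this: each trick is a small, exact algorithm, so practicing it exercises the same stepwise thinking later needed for programming.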

  3. Retrieval of macrophysical cloud parameters from MIPAS: algorithm description

    Directory of Open Access Journals (Sweden)

    J. Hurley

    2011-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) onboard ENVISAT has the potential to be particularly useful for studying high, thin clouds, which have been difficult to observe in the past. This paper details the development, implementation and testing of an optimal-estimation-type retrieval of three macrophysical cloud parameters (cloud top height, cloud top temperature and cloud extinction coefficient) from infrared spectra measured by MIPAS. A preliminary parameterisation of the optical and geometrical filling of the measurement field-of-view by cloud is employed as the first step of the retrieval, to improve the choice of a priori for the macrophysical parameters themselves.

    Preliminary application to single-scattering simulations indicates that the retrieval error stemming from uncertainties introduced by noise and by a priori variances in the retrieval process itself is small, although it should be noted that these retrieval errors do not include the significant errors stemming from the assumption of homogeneity and the non-scattering nature of the forward model. Such errors are preliminarily and qualitatively assessed here, and are likely to be the dominant error sources. The retrieval converges for 99% of input cases, although it sometimes fails to converge for vertically thin (<1 km) clouds. The retrieval algorithm is applied to MIPAS data, and the results are qualitatively compared with CALIPSO cloud top heights and PARASOL cloud opacities. From comparison with CALIPSO cloud products, it must be noted that the cloud detection method used in this algorithm may misdetect stratospheric aerosol layers as cloud.

    This algorithm has been adopted by the European Space Agency's "MIPclouds" project.
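
    At the core of an optimal-estimation retrieval is the maximum a posteriori update for a linearised forward model, x_hat = x_a + (Kᵀ Sε⁻¹ K + Sa⁻¹)⁻¹ Kᵀ Sε⁻¹ (y − K x_a). A sketch with a toy forward model (the matrices below are illustrative, not MIPAS quantities):

    ```python
    import numpy as np

    def oe_retrieval(y, K, x_a, S_a, S_e):
        """One optimal-estimation (MAP) update for a linear forward model
        y = K x + noise, with a priori x_a (covariance S_a) and
        measurement-error covariance S_e."""
        Se_inv = np.linalg.inv(S_e)
        Sa_inv = np.linalg.inv(S_a)
        S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)  # posterior covariance
        return x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)

    # Illustrative 3-parameter state (cloud top height [km], temperature [K],
    # extinction [km^-1]) seen through a toy 4-channel linear forward model.
    rng = np.random.default_rng(0)
    K = rng.normal(size=(4, 3))
    x_true = np.array([12.0, 220.0, 0.01])
    y = K @ x_true                        # noise-free synthetic measurement
    x_a = np.array([10.0, 230.0, 0.02])   # a priori state
    S_a = np.diag([4.0, 100.0, 1e-4])
    S_e = np.diag([1e-6] * 4)             # tiny measurement error
    print(np.round(oe_retrieval(y, K, x_a, S_a, S_e), 3))
    ```

    With a nonlinear forward model, as for real MIPAS spectra, this update is iterated (Gauss-Newton), relinearising K about the current state estimate at each step.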

  4. Space Infrared Telescope Facility (SIRTF) - Operations concept. [decreasing development and operations cost

    Science.gov (United States)

    Miller, Richard B.

    1992-01-01

    The development and operations costs of the Space IR Telescope Facility (SIRTF) are discussed with a view to minimizing total outlays and optimizing efficiency. The development phase cannot extend into the post-launch segment, which is planned to support only system verification and calibration, followed by operations with a 70-percent efficiency goal. The importance of reducing the ground-support staff is demonstrated, and the value of the highly sensitive observations to the general astronomical community is described. The failure-protection algorithm for the SIRTF is designed around the 5-yr lifetime and the continuous venting of cryogen, and a science-driven ground/operations system is described. Attention is given to balancing cost and performance, prototyping during the development phase, incremental development, the use of standards, and the integration of the ground system/operations with flight system integration and test.

  5. The development of a new algorithm to calculate a survival function in non-parametric ways

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    In this study, a generalized formula for the Kaplan-Meier method is developed. The idea of the algorithm is that the result of the Kaplan-Meier estimator is the same as that of the redistribute-to-the-right algorithm; hence, the result of the Kaplan-Meier estimator is used when redistributing to the right. This can be explained in the following steps. First, the same mass is assigned to every point. Second, on reaching a censored point, its mass is redistributed to the points on its right according to the following rule: normalize the masses located to the right of the censored point, and redistribute the mass of the censored point to the right in proportion to the normalized masses. This illustrates the main idea of the algorithm: it is more efficient than the PL-estimator in the sense that it decreases the mass beyond that area, and, like the redistribute-to-the-right algorithm, it is sound from the standpoint of probability theory.
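
    The redistribute-to-the-right rule described above can be implemented directly and checked against the product-limit (Kaplan-Meier) values; a minimal sketch:

    ```python
    def km_redistribute(times, censored):
        """Survival function via redistribute-to-the-right.

        times: sorted observation times; censored[i] is True if observation i
        is right-censored. Returns [(t, S(t)), ...] at the event times.
        Each point starts with mass 1/n; a censored point's mass is passed
        on to the points strictly to its right, in proportion to their
        (normalized) current masses.
        """
        n = len(times)
        mass = [1.0 / n] * n
        for i in range(n):
            if censored[i]:
                right = range(i + 1, n)
                total = sum(mass[j] for j in right)
                if total > 0:
                    for j in right:
                        mass[j] += mass[i] * mass[j] / total
                mass[i] = 0.0
        surv, s = [], 1.0
        for i in range(n):
            if not censored[i]:
                s -= mass[i]
                surv.append((times[i], s))
        return surv

    # Events at t = 1, 3, 4; the observation at t = 2 is censored.
    print(km_redistribute([1, 2, 3, 4], [False, True, False, False]))
    ```

    The product-limit formula gives S(1) = 3/4, S(3) = 3/4 · 1/2 = 3/8, S(4) = 0 for this data, exactly what the redistribution produces: the censored mass at t = 2 is split equally between the two survivors to its right.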

  6. Free-Form Deformation Approach for Registration of Visible and Infrared Facial Images in Fever Screening

    Directory of Open Access Journals (Sweden)

    Yedukondala Narendra Dwith Chenna

    2018-01-01

    Fever screening based on infrared (IR) thermographs (IRTs) is an approach that has been implemented during infectious disease pandemics, such as Ebola and Severe Acute Respiratory Syndrome. A recently published international standard indicates that the regions medially adjacent to the inner canthi provide accurate estimates of core body temperature and are the preferred sites for fever screening. Therefore, rapid, automated identification of the canthi regions within facial IR images may greatly facilitate rapid fever screening of asymptomatic travelers. However, it is more difficult to accurately identify the canthi regions in IR images than in visible images, which are rich with exploitable features. In this study, we developed and evaluated techniques for multi-modality image registration (MMIR) of simultaneously captured visible and IR facial images for fever screening. We used free-form deformation (FFD) models based on edge maps to improve registration accuracy after an affine transformation. Two FFD models widely used in medical image registration, based on the Demons and cubic B-spline algorithms, were qualitatively compared. The results showed that the Demons algorithm outperformed the cubic B-spline algorithm, likely due to overfitting of outliers by the latter method. The quantitative measure of registration accuracy, obtained through selected control point correspondence, was within 2.8 ± 1.2 mm, which enables accurate and automatic localization of the canthi regions in IR images for temperature measurement.

  7. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained by fusing infrared and low-light-level images, and contain the information of both channels. Fusion images help observers understand multichannel imagery comprehensively. However, simple fusion may lose target information, because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
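
    The starting point of such methods, a simple false-color channel mapping of the two bands, can be sketched as follows. This is a naive baseline of the "simple fusion" kind the abstract criticizes, not the perception-based method of the paper:

    ```python
    import numpy as np

    def naive_color_fusion(lll, ir):
        """Map a low-light-level image and an infrared image (both float
        arrays in [0, 1]) into an RGB false-color image: IR drives red,
        the visible band drives green, their difference drives blue."""
        rgb = np.stack([ir, lll, np.abs(lll - ir)], axis=-1)
        return np.clip(rgb, 0.0, 1.0)

    lll = np.full((4, 4), 0.6)                # uniform low-light scene
    ir = np.zeros((4, 4)); ir[1, 1] = 0.9     # one warm target
    fused = naive_color_fusion(lll, ir)
    print(fused[1, 1])  # target pixel stands out in red
    ```

    In such a mapping a faint target barely brighter than the background produces barely different colors, which is exactly why the paper adds target extraction and learning on top of the channel fusion.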

  8. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  9. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc.), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based on the collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...
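
    As a concrete instance of the swarm-intelligence family listed above, a minimal particle swarm optimization on a toy quadratic objective (parameters are textbook defaults, not tied to any particular application):

    ```python
    import random
    random.seed(42)

    def pso(f, dim=2, n_particles=15, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimization of f over [-5, 5]^dim.
        Each particle is pulled toward its personal best and the swarm best."""
        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=f)[:]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if f(pos[i]) < f(pbest[i]):
                    pbest[i] = pos[i][:]
                    if f(pos[i]) < f(gbest):
                        gbest = pos[i][:]
        return gbest

    sphere = lambda x: sum(v * v for v in x)
    best = pso(sphere)
    print(sphere(best))  # close to 0
    ```

    The inertia term `w` and the two attraction coefficients `c1`, `c2` are the knobs that trade exploration against convergence, a pattern shared by most of the population-based methods the survey covers.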

  10. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  11. Infrared sensor for water pollution and monitoring

    Science.gov (United States)

    Baudet, E.; Gutierrez-Arrovo, A.; Bailleul, M.; Rinnert, E.; Nemec, P.; Charrier, J.; Bodiou, L.; Colas, F.; Compère, C.; Boussard, C.; Bureau, B.; Michel, K.; Nazabal, V.

    2017-05-01

    The development of mid-infrared sensors for the detection of biochemical molecules is a challenge of great importance. The mid-infrared range (4000–400 cm⁻¹) contains the absorption bands related to the vibrations of organic molecules (nitrates, hydrocarbons, pesticides, etc.). Chalcogenide glasses are an important class of amorphous materials appropriate for sensing applications. Indeed, they are mainly studied and used for their wide transparency in the infrared range (up to 15 μm for selenide glasses) and high refractive index (between 2 and 3). The aim of this study is to synthesize and characterize chalcogenide thin films for developing mid-IR optical waveguides. Two (GeSe2)100-x(Sb2Se3)x chalcogenide glasses, with x = 10 and 50, were chosen for their good mid-IR transparency, high stability against crystallization, and a refractive index contrast suitable for mid-IR waveguiding. The chalcogenide glasses were prepared using the conventional melting and quenching method and then used for RF magnetron sputtering deposition. The sputtered thin films were characterized to determine the dispersion of the refractive index in the UV-Vis-NIR-MIR. The results were used for the simulation of the optical design in the mid-infrared (λ = 7.7 μm). Selenide ridge waveguides were prepared by an RIE-ICP dry etching process. Single-mode propagation at 7.7 μm was observed. Optical losses of 0.7 ± 0.3 dB cm⁻¹ and 2.5 ± 0.1 dB cm⁻¹ were measured in the near-infrared (λ = 1.55 μm) and mid-infrared (λ = 7.7 μm), respectively. The achieved results are promising for the fabrication of an integrated optical sensor operating in the mid-infrared.
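
    Losses quoted in dB cm⁻¹ follow from the ratio of input to transmitted power over a known waveguide length; a quick sketch (the power values are illustrative):

    ```python
    import math

    def loss_db_per_cm(p_in, p_out, length_cm):
        """Propagation loss: alpha = (10 / L) * log10(P_in / P_out)."""
        return 10.0 * math.log10(p_in / p_out) / length_cm

    # E.g. a 2 cm guide transmitting 0.316 mW out of 1 mW in
    # corresponds to roughly 2.5 dB/cm.
    print(round(loss_db_per_cm(1.0, 0.316, 2.0), 2))
    ```

    In practice the in/out coupling losses must be calibrated out (e.g. by the cut-back method on guides of several lengths) before attributing the remainder to propagation.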

  12. Generation of Mid-Infrared Frequency Combs for Spectroscopic Applications

    Science.gov (United States)

    Maser, Daniel L.

    Mid-infrared laser sources prove to be a valuable tool in exploring a vast array of phenomena, finding their way into applications ranging from trace gas detection to X-ray generation and carbon dating. Mid-infrared frequency combs, in particular, are well suited for many of these applications, owing to their inherently low-noise and broadband nature. Frequency comb technology is well developed in the near-infrared as a result of immense technological development by the telecommunication industry in silica fiber and the existence of readily available glass dopants such as ytterbium and erbium that enable oscillators at 1 and 1.5 μm. However, options become substantially more limited at longer wavelengths, as silica is no longer transparent and the components required in a mid-infrared frequency comb system (oscillators, fibers, and both fiber and free-space components) are far less technologically mature. This thesis explores several different approaches to generating frequency comb sources in the mid-infrared region, and the development of sources used in the nonlinear processes implemented to reach these wavelengths. An optical parametric oscillator, two approaches to difference frequency generation, and nonlinear spectral broadening in chip-scale waveguides are developed and characterized, and the spectroscopic potential of these techniques is demonstrated. The source used for these nonlinear processes, the erbium-doped fiber amplifier, is also studied and discussed throughout the design and optimization process. The nonlinear optical processes critical to this work are numerically modeled and used to confirm and predict experimental behavior.

  13. Convolutional Neural Network Based on Extreme Learning Machine for Maritime Ships Recognition in Infrared Images.

    Science.gov (United States)

    Khellal, Atmane; Ma, Hongbin; Fei, Qing

    2018-05-09

    The success of Deep Learning models, notably convolutional neural networks (CNNs), makes them the favored solution for object recognition systems in both the visible and infrared domains. However, the lack of training data in the case of maritime ships leads to poor performance due to overfitting. In addition, the back-propagation algorithm used to train CNNs is very slow and requires tuning many hyperparameters. To overcome these weaknesses, we introduce a new approach fully based on the Extreme Learning Machine (ELM) to learn useful CNN features and perform fast and accurate classification, which is suitable for infrared-based recognition systems. The proposed approach combines an ELM-based learning algorithm to train the CNN for discriminative feature extraction and an ELM-based ensemble for classification. The experimental results on the VAIS dataset, the largest dataset of maritime ships, confirm that the proposed approach outperforms state-of-the-art models in terms of generalization performance and training speed. For instance, the proposed model is up to 950 times faster than the traditional back-propagation-based training of convolutional neural networks, primarily for low-level feature extraction.
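
    The core of an ELM is a fixed random hidden layer with a closed-form least-squares readout, avoiding back-propagation entirely; a minimal sketch (the 2-D toy features stand in for CNN features, and all dimensions are illustrative, not the paper's architecture):

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    class ELM:
        """Extreme Learning Machine: a random, untrained hidden layer plus a
        linear readout solved in closed form by least squares."""
        def __init__(self, n_in, n_hidden=50):
            self.W = rng.normal(size=(n_in, n_hidden))  # fixed random weights
            self.b = rng.normal(size=n_hidden)
            self.beta = None
        def _h(self, X):
            return np.tanh(X @ self.W + self.b)
        def fit(self, X, y):
            # Single least-squares solve -- no iterative training.
            self.beta, *_ = np.linalg.lstsq(self._h(X), y, rcond=None)
            return self
        def predict(self, X):
            return np.sign(self._h(X) @ self.beta)

    # Two separable 2-D Gaussian blobs standing in for ship / non-ship features.
    X = np.vstack([rng.normal(-2, 1, size=(100, 2)), rng.normal(2, 1, size=(100, 2))])
    y = np.array([-1.0] * 100 + [1.0] * 100)
    acc = (ELM(2).fit(X, y).predict(X) == y).mean()
    print(acc)
    ```

    Because training reduces to one linear solve, the speedup over iterative back-propagation, reported as up to 950× in the paper, is structural rather than an implementation detail.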

  14. Development of a differential infrared absorption method to measure the deuterium content of natural water

    International Nuclear Information System (INIS)

    D'Alessio, Enrique; Bonadeo, Hernan; Karaianev de Del Carril, Stiliana.

    1975-07-01

    A system to measure the deuterium content of natural water using differential infrared spectroscopy is described. Parameters conducive to an optimized design are analyzed, and the construction of the system is described. A Perkin Elmer 225 infrared spectrometer, to which a scale expansion system has been added, is used. Sample and reference waters are alternately introduced by a pneumatic-mechanical system into a single thermostatized CaF2 infrared cell. The results and calibration curves show that the system is capable of measuring deuterium content with a precision of 1 part per million. (author)

  15. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  16. Validation of new satellite aerosol optical depth retrieval algorithm using Raman lidar observations at radiative transfer laboratory in Warsaw

    Science.gov (United States)

    Zawadzka, Olga; Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Nemuc, Anca; Stebel, Kerstin

    2018-04-01

    During an exceptionally warm September 2016, unique, stable weather conditions over Poland allowed extensive testing of a new algorithm developed to improve the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) aerosol optical depth (AOD) retrieval. The development was conducted in the frame of the ESA-ESRIN SAMIRA project. The new AOD algorithm aims at providing aerosol optical depth maps over the territory of Poland with a high temporal resolution of 15 minutes. It was tested on the data set obtained between 11 and 16 September 2016, in which a day of relatively clean atmospheric background, related to an Arctic air-mass inflow, was surrounded by several days of markedly increased aerosol load of different origins. On the clean reference day, the AOD forecast available online via the Copernicus Atmosphere Monitoring Service (CAMS) was used to estimate surface reflectance. The obtained AOD maps were validated against AODs available within the Poland-AOD and AERONET networks, and against AOD values obtained from the PollyXT-UW lidar of the University of Warsaw (UW).

  17. Toward optimal spatial and spectral quality in widefield infrared spectromicroscopy of IR labelled single cells.

    Science.gov (United States)

    Mattson, Eric C; Unger, Miriam; Clède, Sylvain; Lambert, François; Policar, Clotilde; Imtiaz, Asher; D'Souza, Roshan; Hirschmugl, Carol J

    2013-10-07

    Advancements in widefield infrared spectromicroscopy have recently been demonstrated following the commissioning of IRENI (InfraRed ENvironmental Imaging), a Fourier transform infrared (FTIR) chemical imaging beamline at the Synchrotron Radiation Center. The present study demonstrates the effects of magnification, spatial oversampling, spectral pre-processing and deconvolution, focusing on the intracellular detection and distribution of an exogenous metal tris-carbonyl derivative 1 in a single MDA-MB-231 breast cancer cell. We demonstrate here that spatial oversampling for synchrotron-based infrared imaging is critical to obtain accurate diffraction-limited images at all wavelengths simultaneously. Resolution criteria and results from raw and deconvolved images for two Schwarzschild objectives (36×, NA 0.5 and 74×, NA 0.65) are compared to each other and to prior reports for raster-scanned, confocal microscopes. The resolution of the imaging data can be improved by deconvolving the instrumental broadening determined from the measured PSFs, implemented on a GPU programming architecture for fast hyperspectral processing. High-definition, rapidly acquired FTIR chemical images of the respective spectral signatures of the cell and of 1 show that 1 is localized next to the phosphate- and amide-rich regions, in agreement with previous infrared and luminescence studies. The infrared image contrast, localization and definition are improved after applying proven spectral pre-processing (principal component analysis based noise reduction and RMie scattering correction algorithms) to individual pixel spectra in the hyperspectral cube.
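
    Deconvolving the measured PSF from the image, as described above, is commonly done with the Richardson-Lucy iteration. The abstract does not specify the deconvolution method, so the 1-D sketch below uses Richardson-Lucy as a standard stand-in; the kernel and signal are synthetic.

```python
# Richardson-Lucy deconvolution sketch in 1-D: restore a point-like signal
# blurred by a known PSF. Multiplicative updates keep the estimate nonnegative.

def convolve(x, k):
    """'same'-size convolution with a short kernel k (odd length)."""
    half = len(k) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, kv in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(x):
                s += x[idx] * kv
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    est = [1.0] * len(observed)                    # flat initial estimate
    for _ in range(iterations):
        blurred = convolve(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf[::-1])    # correlate with flipped PSF
        est = [e * c for e, c in zip(est, correction)]
    return est

psf = [0.25, 0.5, 0.25]
truth = [0, 0, 0, 10, 0, 0, 0]
observed = convolve(truth, psf)                    # blurred measurement
restored = richardson_lucy(observed, psf)          # re-concentrated peak
```

After a few tens of iterations the blurred peak is pulled back toward its original position, which is the effect exploited to sharpen the hyperspectral images.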

  18. Molecular Convergence of Infrared Vision in Snakes

    Science.gov (United States)

    Yokoyama, Shozo; Altun, Ahmet; DeNardo, Dale F.

    2011-01-01

    It has been discovered that the transient receptor potential ankyrin 1 (TRPA1) proteins of Boidae (boas), Pythonidae (pythons), and Crotalinae (pit vipers) are used to detect infrared radiation, but the molecular mechanism for detecting the infrared radiation is unknown. Here, by relating amino acid substitutions in their TRPA1 proteins to their functional differentiation, we propose that three parallel amino acid changes (L330M, Q391H, and S434T) are responsible for the development of infrared vision in the three groups of snakes. Protein modeling shows that the three amino acid changes alter the structures of the central region of their ankyrin repeats. PMID:20937734

  19. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

    Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress in developing countries, including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory distress, Y: Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP, and set out to validate it in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training, using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
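
    The shape of a TRY-style rule, combining tone, signs of respiratory distress, and birth weight into a single yes/no decision, can be sketched as below. The specific vital-sign and weight thresholds are hypothetical placeholders, not the validated cut-offs from the Malawi study.

```python
# Minimal sketch of a TRY-style decision rule (Tone, Respiratory distress, Yes).
# All thresholds here are illustrative, NOT clinically validated values.

def try_cpap(tone_is_good, respiratory_rate, grunting, retractions, birth_weight_g):
    """Return True if this sketch would start the neonate on CPAP."""
    if not tone_is_good:                  # T: poor tone routes elsewhere here
        return False
    distress = (respiratory_rate > 60) or grunting or retractions   # R
    eligible_weight = birth_weight_g >= 1000                        # illustrative cutoff
    return distress and eligible_weight                             # Y

print(try_cpap(True, 75, False, True, 1400))   # -> True
```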

  20. DEVELOPMENT OF 2D HUMAN BODY MODELING USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Srinivasan

    2010-11-01

    Full Text Available Monitoring the behavior and activities of people in video surveillance has gained many applications in computer vision. This paper proposes a new approach to model the human body in 2D view for activity analysis using a thinning algorithm. The first step of this work is background subtraction, which is achieved by a frame differencing algorithm. A thinning algorithm has been used to find the skeleton of the human body. After thinning, thirteen feature points, such as terminating points, intersecting points, and shoulder, elbow, and knee points, have been extracted. This research work represents the body model in three different ways: a stick figure model, a patch model, and a rectangle body model. The activities of humans have been analyzed with the help of the 2D model for pre-defined poses from monocular video data. Finally, the time consumption and efficiency of the proposed algorithm have been evaluated.
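
    The first stage described above, background subtraction by frame differencing, amounts to thresholding the per-pixel absolute difference of consecutive grayscale frames. A minimal sketch (threshold value is illustrative):

```python
# Frame-differencing background subtraction: pixels whose intensity changed by
# more than a threshold between consecutive frames are marked as foreground.

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Per-pixel absolute difference of two grayscale frames -> 0/1 mask."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

prev = [[10, 10, 10],
        [10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],
        [10, 200, 10],
        [10, 10, 10]]
mask = frame_difference(prev, curr)
print(mask)  # [[0, 1, 0], [0, 1, 0], [0, 0, 0]]
```

The binary mask would then be skeletonized by the thinning algorithm to extract the thirteen feature points.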

  1. Greenhouse Gas Concentration Data Recovery Algorithm for a Low Cost, Laser Heterodyne Radiometer

    Science.gov (United States)

    Miller, J. H.; Melroy, H.; Ott, L.; McLinden, M. L.; Holben, B. N.; Wilson, E. L.

    2012-12-01

    The goal of a coordinated effort between groups at GWU and NASA GSFC is the development of a low-cost, global, surface instrument network that continuously monitors three key carbon cycle gases in the atmospheric column: carbon dioxide (CO2), methane (CH4), and carbon monoxide (CO), as well as oxygen (O2) for atmospheric pressure profiles. The network will implement a low-cost, miniaturized, laser heterodyne radiometer (mini-LHR) that has recently been developed at NASA Goddard Space Flight Center. This mini-LHR is designed to operate in tandem with the passive aerosol sensor currently used in AERONET (a well-established network of more than 450 ground aerosol monitoring instruments worldwide), and could be rapidly deployed into this established global network. Laser heterodyne radiometry is a well-established technique for detecting weak signals that was adapted from radio receiver technology. Here, a weak light signal that has undergone absorption by atmospheric components is mixed with light from a distributed feedback (DFB) telecommunications laser on a single-mode optical fiber. The RF component of the signal is detected on a fast photoreceiver. Scanning the laser through an absorption feature in the infrared results in a scanned heterodyne signal in the RF. Deconvolution of this signal through the retrieval algorithm allows for the extraction of altitude contributions to the column signal. The retrieval algorithm is based on a spectral simulation program, SpecSyn, developed at GWU for high-resolution infrared spectroscopies. Variations in pressure, temperature, composition, and refractive index through the atmosphere (all functions of latitude, longitude, time of day, altitude, etc.) are modeled using algorithms from the MODTRAN program, developed in part by the US Air Force Research Laboratory. In these calculations the atmosphere is modeled as a series of spherically symmetric shells with boundaries specified at defined altitudes.
Temperature
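
    The shell picture above reduces, for a single wavelength of the laser scan, to Beer-Lambert attenuation through stacked layers: each shell contributes an optical depth and the column transmittance is the exponential of their (negated) sum. The layer values below are made up for illustration.

```python
# Layered-atmosphere transmittance sketch: the column is a stack of shells,
# each with its own optical depth at the scanned wavelength (illustrative
# numbers, not MODTRAN output).

import math

def column_transmittance(layer_optical_depths):
    """Total transmittance of stacked layers: exp(-sum of optical depths)."""
    return math.exp(-sum(layer_optical_depths))

layers = [0.02, 0.05, 0.10, 0.03]        # optical depth per shell
t = column_transmittance(layers)
print(round(t, 4))                       # exp(-0.20)
```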

  2. Greenhouse Gas Concentration Data Recovery Algorithm for a Low Cost, Laser Heterodyne Radiometer

    Science.gov (United States)

    Miller, J. Houston; Melroy, Hilary R.; Ott, Lesley E.; Mclinden, Matthew L.; Holben, Brent; Wilson, Emily L.

    2012-01-01

    The goal of a coordinated effort between groups at GWU and NASA GSFC is the development of a low-cost, global, surface instrument network that continuously monitors three key carbon cycle gases in the atmospheric column: carbon dioxide (CO2), methane (CH4), and carbon monoxide (CO), as well as oxygen (O2) for atmospheric pressure profiles. The network will implement a low-cost, miniaturized, laser heterodyne radiometer (mini-LHR) that has recently been developed at NASA Goddard Space Flight Center. This mini-LHR is designed to operate in tandem with the passive aerosol sensor currently used in AERONET (a well-established network of more than 450 ground aerosol monitoring instruments worldwide), and could be rapidly deployed into this established global network. Laser heterodyne radiometry is a well-established technique for detecting weak signals that was adapted from radio receiver technology. Here, a weak light signal that has undergone absorption by atmospheric components is mixed with light from a distributed feedback (DFB) telecommunications laser on a single-mode optical fiber. The RF component of the signal is detected on a fast photoreceiver. Scanning the laser through an absorption feature in the infrared results in a scanned heterodyne signal in the RF. Deconvolution of this signal through the retrieval algorithm allows for the extraction of altitude contributions to the column signal. The retrieval algorithm is based on a spectral simulation program, SpecSyn, developed at GWU for high-resolution infrared spectroscopies. Variations in pressure, temperature, composition, and refractive index through the atmosphere (all functions of latitude, longitude, time of day, altitude, etc.) are modeled using algorithms from the MODTRAN program, developed in part by the US Air Force Research Laboratory. In these calculations the atmosphere is modeled as a series of spherically symmetric shells with boundaries specified at defined altitudes.
Temperature

  3. The Development of Advanced Processing and Analysis Algorithms for Improved Neutron Multiplicity Measurements

    International Nuclear Information System (INIS)

    Santi, P.; Favalli, A.; Hauck, D.; Henzl, V.; Henzlova, D.; Ianakiev, K.; Iliev, M.; Swinhoe, M.; Croft, S.; Worrall, L.

    2015-01-01

    One of the most distinctive and informative signatures of special nuclear materials is the emission of correlated neutrons from either spontaneous or induced fission. Because the emission of correlated neutrons is a unique and unmistakable signature of nuclear materials, the ability to effectively detect, process, and analyze these emissions will continue to play a vital role in the non-proliferation, safeguards, and security missions. While currently deployed neutron measurement techniques based on 3He proportional counter technology, such as the neutron coincidence and multiplicity counters currently used by the International Atomic Energy Agency, have proven effective over the past several decades for a wide range of measurement needs, a number of technical and practical limitations exist in continuing to apply this technique to future measurement needs. In many cases, those limitations lie within the algorithms used to process and analyze the detected signals from these counters, which were initially developed approximately 20 years ago based on the technology and computing power available at that time. Over the past three years, an effort has been undertaken to address the general shortcomings in these algorithms by developing new algorithms based on fundamental physics principles, which should lead to more sensitive neutron non-destructive assay instrumentation. Through this effort, a number of advancements have been made in correcting incoming data for electronic dead time, in connecting the two main types of analysis techniques used to quantify the data (shift-register analysis and Feynman variance-to-mean analysis), and in the underlying physical model, known as the point model, that is used to interpret the data in terms of the characteristic properties of the item being measured. The current status of the testing and evaluation of these advancements in correlated neutron analysis techniques will be discussed.
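
    The Feynman variance-to-mean statistic mentioned above can be sketched directly: the excess variance Y = var/mean - 1 of neutron counts in fixed time gates is near zero for uncorrelated (Poisson-like) counts and positive when correlated fission neutrons arrive in bursts. The count lists below are synthetic.

```python
# Feynman-Y sketch: variance-to-mean ratio of gated neutron counts, minus one.

from statistics import mean, pvariance

def feynman_y(gate_counts):
    """Excess variance of gated counts: 0 for Poisson, >0 for correlated bursts."""
    m = mean(gate_counts)
    return pvariance(gate_counts, m) / m - 1.0

steady = [2, 3, 2, 3, 2, 3, 2, 3]      # low-variance counts -> negative Y
bursty = [0, 0, 8, 0, 0, 8, 0, 0]      # same mean, correlated bursts -> large Y
y_steady, y_bursty = feynman_y(steady), feynman_y(bursty)
```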

  4. Concept development for the ITER equatorial port visible/infrared wide angle viewing system

    International Nuclear Information System (INIS)

    Reichle, R.; Beaumont, B.; Boilson, D.; Bouhamou, R.; Direz, M.-F.; Encheva, A.; Henderson, M.; Kazarian, F.; Lamalle, Ph.; Lisgo, S.; Mitteau, R.; Patel, K. M.; Pitcher, C. S.; Pitts, R. A.; Prakash, A.; Raffray, R.; Schunke, B.; Snipes, J.; Diaz, A. Suarez; Udintsev, V. S.

    2012-01-01

    The ITER equatorial port visible/infrared wide angle viewing system concept is developed from the measurement requirements. The proposed solution situates 4 viewing systems in the equatorial ports 3, 9, 12, and 17 with 4 views each (looking at the upper target, the inner divertor, and tangentially left and right). This gives sufficient coverage. The spatial resolution of the divertor system is 2 times higher than the other views. For compensation of vacuum-vessel movements, an optical hinge concept is proposed. Compactness and low neutron streaming is achieved by orienting port plug doglegs horizontally. Calibration methods, risks, and R and D topics are outlined.

  5. The Development of Geo-KOMPSAT-2A (GK-2A) Convective Initiation Algorithm over the Korean Peninsula

    Science.gov (United States)

    Kim, H. S.; Chung, S. R.; Lee, B. I.; Baek, S.; Jeon, E.

    2016-12-01

    The rapid development of convection can bring heavy rainfall that causes a great deal of damage to society and threatens human life. Highly accurate forecasting of strong convection is essential to prevent such severe-weather disasters. Since a geostationary satellite is the most suitable instrument for monitoring a single cloud's lifecycle from formation to extinction, attempts have been made to capture the precursor signals of convective clouds by satellite. Keeping pace with the launch of Geo-KOMPSAT-2A (GK-2A) in 2018, we planned to produce a convective initiation (CI) product, defined as an indicator of cloud objects with the potential to bring heavy precipitation within two hours. The CI algorithm for GK-2A is composed of four stages. The first stage removes mature cloud pixels (a form of convective cloud masking) using visible (VIS) albedo and infrared (IR) brightness temperature thresholds. The remaining immature cloud pixels are then clustered into cloud objects by watershed techniques. Each clustered object undergoes 'Interest Fields' tests on IR data that reflect current cloud microphysical properties and their temporal changes: cloud depth, updraft strength, and the production of glaciation. All 'Interest Fields' thresholds were optimized for Korean-type convective clouds. Based on the test scores, it is decided whether the cloud object will develop into a convective cell or not. Here we show the results of a summer case study over the Korean Peninsula using Himawari-8 VIS and IR data, with radar echo data used for validation. This study suggests that the CI product of GK-2A would enhance the accuracy of very-short-range forecasts over the Korean Peninsula.
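
    The 'Interest Fields' stage above is essentially a battery of threshold tests whose passes are tallied into a CI decision. The field names, thresholds, and pass criterion below are illustrative placeholders, not the optimized GK-2A values.

```python
# Interest-field scoring sketch: each IR field of a cloud object is tested
# against a threshold; the object is flagged as CI if enough tests pass.
# Field names and thresholds are hypothetical, not GK-2A's.

INTEREST_FIELDS = {
    # field name: (threshold, pass when value is at or below threshold?)
    "bt_10.4um_K":          (253.0, True),   # cold cloud top -> deep cloud
    "bt_trend_K_per_15min": (-4.0, True),    # rapid cooling -> strong updraft
    "bt_diff_6.2_10.4_K":   (-15.0, False),  # smaller gap -> growing depth
}

def is_convective_initiation(obj_fields, min_passed=2):
    passed = 0
    for name, (threshold, below) in INTEREST_FIELDS.items():
        value = obj_fields[name]
        if (value <= threshold) if below else (value >= threshold):
            passed += 1
    return passed >= min_passed

cloud = {"bt_10.4um_K": 248.0, "bt_trend_K_per_15min": -6.0,
         "bt_diff_6.2_10.4_K": -20.0}
print(is_convective_initiation(cloud))  # -> True (two of three tests pass)
```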

  6. Development of pattern recognition algorithms for particles detection from atmospheric images

    International Nuclear Information System (INIS)

    Khatchadourian, S.

    2010-01-01

    The HESS experiment consists of a system of telescopes designed to observe cosmic rays. Since the project has achieved a high level of performance, a second phase has been initiated. This implies the addition of a new telescope which is more sensitive than its predecessors and capable of collecting a huge number of images. In this context, not all data collected by the telescope can be retained, because of storage limitations. Therefore, a new real-time trigger system must be designed to select interesting events on the fly. The purpose of this thesis was to propose a trigger solution that efficiently discriminates the events (images) captured by the telescope. The first part of this thesis was to develop pattern recognition algorithms to be implemented within the trigger; a processing chain based on neural networks and Zernike moments has been validated. The second part of the thesis focused on the implementation of the proposed algorithms on an FPGA target, taking into account the application constraints in terms of resources and execution time. (author)
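
    One ingredient of the validated chain above is the Zernike moment, whose magnitude is invariant to image rotation, a useful property when classifying shower images regardless of orientation. The sketch below computes one moment of a small binary patch; the order/repetition (n=2, m=2) and the test image are illustrative.

```python
# Zernike moment sketch: complex moment Z_nm over the unit disk inscribed in a
# square binary image. |Z_nm| is unchanged when the image is rotated.

import cmath
from math import atan2, factorial, hypot

def zernike_moment(img, n, m):
    size = len(img)
    c = (size - 1) / 2.0                         # disk centre, pixel coordinates
    z = 0.0 + 0.0j
    for yy in range(size):
        for xx in range(size):
            dx, dy = (xx - c) / c, (yy - c) / c
            rho = hypot(dx, dy)
            if rho > 1 or not img[yy][xx]:
                continue
            theta = atan2(dy, dx)
            radial = sum(                        # radial polynomial R_nm(rho)
                (-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + abs(m)) // 2 - s)
                   * factorial((n - abs(m)) // 2 - s))
                * rho ** (n - 2 * s)
                for s in range((n - abs(m)) // 2 + 1)
            )
            z += radial * cmath.exp(-1j * m * theta)
    return z * (n + 1) / cmath.pi

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
rot = [list(row) for row in zip(*img[::-1])]     # same blob rotated 90 degrees
z1, z2 = zernike_moment(img, 2, 2), zernike_moment(rot, 2, 2)
```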

  7. Bias correction of daily satellite precipitation data using genetic algorithm

    Science.gov (United States)

    Pratama, A. W.; Buono, A.; Hidayat, R.; Harsa, H.

    2018-05-01

    Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) was produced by blending the satellite-only Climate Hazards Group InfraRed Precipitation (CHIRP) with station observation data. The blending process aimed to reduce the bias of CHIRP. However, biases of CHIRPS in statistical moments and quantile values remained high during the wet season over Java Island. This paper presents a bias correction scheme that adjusts the statistical moments of CHIRP using observed precipitation data. The scheme combines a genetic algorithm with a nonlinear power transformation, and the results were evaluated across seasons and elevation levels. The experimental results revealed that the scheme robustly reduced the variance bias (around 100% reduction) and led to reductions in the first- and second-quantile biases. However, the third-quantile bias was only reduced during dry months. Across elevation levels, the performance of the bias correction process differed significantly only in the skewness indicator.
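
    The scheme above can be sketched as a power transformation y = a·x^b whose parameters a toy genetic algorithm tunes so that the transformed satellite values match the gauge moments. The data, GA settings, and fitness function below are illustrative stand-ins, not the paper's configuration.

```python
# GA-tuned nonlinear power transformation sketch: evolve (a, b) in y = a*x**b
# so the transformed satellite series matches the mean and variance of the
# gauge series. All numbers are synthetic.

import random
from statistics import mean, pvariance

random.seed(7)

satellite = [1.0, 2.0, 2.5, 3.0, 4.0, 6.0]       # biased estimates
gauge     = [0.5, 3.0, 4.0, 5.5, 8.0, 15.0]      # reference observations

def moment_error(params):
    a, b = params
    y = [a * x ** b for x in satellite]
    return (mean(y) - mean(gauge)) ** 2 + (pvariance(y) - pvariance(gauge)) ** 2

def evolve(pop_size=30, generations=60):
    pop = [(random.uniform(0.1, 3), random.uniform(0.5, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=moment_error)
        elite = pop[: pop_size // 3]             # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a1, b1 = random.choice(elite)
            a2, b2 = random.choice(elite)
            children.append(((a1 + a2) / 2 + random.gauss(0, 0.05),   # crossover
                             (b1 + b2) / 2 + random.gauss(0, 0.05)))  # + mutation
        pop = elite + children
    return min(pop, key=moment_error)

best = evolve()  # transformed moments now track the gauge moments far better
```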

  8. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    Science.gov (United States)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2010-04-01

    OPTRA has developed an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real-time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomography expertise of the University of North Carolina, Chapel Hill. In this paper, we summarize the design and build of a prototype I-OP-FTIR instrument and detail its system characterization and testing. System characterization includes radiometric performance and spectral resolution. Results from a series of tomographic reconstructions of sulfur hexafluoride plumes in a laboratory setting are also presented.

  9. An empirical line-by-line model for the infrared solar transmittance spectrum from 700 to 5000cm{sup -1}

    Energy Technology Data Exchange (ETDEWEB)

    Hase, F. [Institut fuer Meteorologie und Klimaforschung, Forschungszentrum Karlsruhe, Postfach 3640, D-76021 Karlsruhe (Germany)]. E-mail: frank.hase@imk.fzk.de; Demoulin, P. [Institut d' Astrophysique et de Geophysique, allee du VI aout, 17, batiment B5a, B-4000, Liege (Belgium); Sauval, A.J. [Observatoire Royal de Belgique, avenue circulaire, 3, B-1180, Bruxelles (Belgium); Toon, G.C. [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Bernath, P.F. [Department of Chemistry, University of Waterloo, Waterloo, Ont., Canada N2L3G1 (Canada); Goldman, A. [Department of Physics, University of Denver, Denver, CO 80208 (United States); Hannigan, J.W. [Atmospheric Chemistry Division, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80303 (United States); Rinsland, C.P. [NASA Langley Research Center, Hampton, VA 23681-2199 (United States)

    2006-12-15

    An empirical line-by-line model for the infrared solar transmittance spectrum is presented. The model can be incorporated into radiative transfer codes to allow fast calculation of all relevant emission and absorption features in the solar spectrum in the mid-infrared region from 700 to 5000cm{sup -1}. The transmittance is modelled as a function of the diameter of the field-of-view centered on the solar disk: the line broadening due to solar rotation as well as center-to-limb variations in strength and width are taken into account for stronger lines. Applications of the model presented here are in the fields of terrestrial remote sensing in the mid-infrared spectral region when the sun is used as radiation source or scattered solar radiation contributes to the measured signal and in the fields of atmospheric radiative transfer algorithms which compute the propagation of infrared solar radiation in the terrestrial atmosphere.

  10. Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs

    International Nuclear Information System (INIS)

    Rehm, K.

    1992-01-01

    This dissertation presents a contrast-enhancement algorithm, Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution and with a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost the diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency.
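
    The artifact-suppression idea above, preventing large homogeneous regions from dominating the equalization mapping and amplifying noise, can be sketched as histogram equalization with a clip limit. This is a simplified global version, not the tile-based ASAHE procedure itself, and the clip fraction is illustrative.

```python
# Clipped (contrast-limited) histogram equalization sketch: histogram bins are
# clipped at a limit before the cumulative mapping is built, so a flat
# background cannot stretch noise across the full output range.

def clipped_equalize(pixels, levels=256, clip_fraction=0.05):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    clip = max(1, int(clip_fraction * n))
    excess = sum(max(0, h - clip) for h in hist)
    hist = [min(h, clip) for h in hist]
    hist = [h + excess // levels for h in hist]   # redistribute clipped mass
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (levels - 1) / cdf[-1]
    return [round(cdf[p] * scale) for p in pixels]

flat_dark = [10] * 90 + [200] * 10               # low-contrast test "image"
out = clipped_equalize(flat_dark)                # two well-separated levels
```

Without the clip, the 90% background at level 10 would claim almost the entire output range; with it, both populations stay separated without noise amplification.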

  11. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    Full Text Available This paper proposes two new classifiers that automatically recognise twelve combined analog and digital modulated signals without any a priori knowledge of the modulation schemes or parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, instantaneous phase and the spectrum symmetry of the simulated signals are used as inputs to the artificial neural network employed in developing the classifiers. The two developed classifiers are trained using the scaled conjugate gradient (SCG) and conjugate gradient (CONJGRAD) training algorithms. Sample results of the two classifiers show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR) values of 0 dB and above for both training algorithms, and average overall recognition rates slightly above 99.00% and 96.40% at -5 dB SNR for the SCG and CONJGRAD training algorithms, respectively. The comparative performance evaluation of the two developed classifiers shows that the two training algorithms have different effects on both the response rate and the efficiency of the two artificial neural network classifiers. In addition, a comparison of the overall success recognition rates of the two classifiers developed in this study with a classifier from the surveyed literature that uses a decision-theoretic approach shows that the classifiers developed in this study perform favourably with regard to accuracy and performance probability.

  12. Far-infrared pedestrian detection for advanced driver assistance systems using scene context

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong; Wu, Qingyao

    2016-04-01

    Pedestrian detection is one of the most critical but challenging components in advanced driver assistance systems. Far-infrared (FIR) images are well-suited for pedestrian detection even in a dark environment. However, most current detection approaches just focus on pedestrian patterns themselves, where robust and real-time detection cannot be well achieved. We propose a fast FIR pedestrian detection approach, called MAP-HOGLBP-T, to explicitly exploit the scene context for the driver assistance system. In MAP-HOGLBP-T, three algorithms are developed to exploit the scene contextual information from roads, vehicles, and background objects of high homogeneity, and we employ the Bayesian approach to build a classifier learner which respects the scene contextual information. We also develop a multiframe approval scheme to enhance the detection performance based on spatiotemporal continuity of pedestrians. Our empirical study on real-world datasets has demonstrated the efficiency and effectiveness of the proposed method. The performance is shown to be better than that of state-of-the-art low-level feature-based approaches.
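
    The multiframe approval scheme above can be sketched as a simple temporal filter: a detection in a region is approved only if it has fired in at least k of the last n frames, exploiting the spatiotemporal continuity of real pedestrians. The window sizes are illustrative, not the paper's settings.

```python
# Multiframe approval sketch: confirm a detection only when it persists across
# several recent frames, suppressing single-frame false alarms.

from collections import deque

class MultiFrameApproval:
    def __init__(self, n=5, k=3):
        self.history = deque(maxlen=n)   # rolling window of recent frames
        self.k = k

    def update(self, detected_this_frame):
        self.history.append(1 if detected_this_frame else 0)
        return sum(self.history) >= self.k

approver = MultiFrameApproval()
stream = [True, False, True, True, False]        # flickering per-frame hits
decisions = [approver.update(d) for d in stream]
print(decisions)  # [False, False, False, True, True]
```

Isolated hits are rejected; only once three of the last five frames agree does the detector report a pedestrian.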

  13. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
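
    The DE meta-heuristic above can be sketched in its classic DE/rand/1/bin form. The sketch minimizes a toy continuous objective rather than the nurse-scheduling cost, which would additionally need a shift encoding and constraint penalties; population size and control parameters are illustrative.

```python
# DE/rand/1/bin sketch: mutate with a scaled difference of two random members,
# crossover per dimension, and keep the trial only if it is no worse.

import random

random.seed(1)

def sphere(x):                        # toy objective standing in for the
    return sum(v * v for v in x)      # nurse-scheduling cost function

def differential_evolution(obj, dim=4, pop_size=20, gens=100, F=0.7, CR=0.9):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            if obj(trial) <= obj(pop[i]):        # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=obj)

best = differential_evolution(sphere)            # near the optimum at the origin
```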

  14. AILES: the infrared and THz beamline on SOLEIL synchrotron radiation source

    International Nuclear Information System (INIS)

    Roy, P.; Brubach, J.B.; Rouzieres, M.; Pirali, O.; Kwabia Tchana, F.; Manceron, L.

    2008-01-01

    The development of a new infrared beamline (ligne de lumiere AILES) at the third generation Synchrotron Radiation source SOLEIL is underway. This beamline utilizes infrared synchrotron radiation from both the edge emission and the constant field conventional source. The expected performances including flux, spatial distribution of the photons, spectral range and stability are calculated and discussed. The optical system, spectroscopic stations and workspace are described. The calculation in the near field approach and the simulation by ray tracing show that the source with its adapted optics offers high flux and brilliance for a variety of infrared experiments. We also review the main research themes and the articulation and developments of the infrared sources at SOLEIL. (authors)

  15. New generation of gas infrared point heaters

    Energy Technology Data Exchange (ETDEWEB)

    Schink, Damian [Pintsch Aben B.V., Dinslaken (Germany)

    2011-11-15

    It is more than thirty years since gas infrared heating for points was introduced on the railway network of what is now Deutsche Bahn. These installations have remained in service right through to the present, with virtually no modifications. More stringent requirements as regards availability, maintainability and remote monitoring have, however, led to the development of a new system of gas infrared heating for points - truly a new generation. (orig.)

  16. Infrared image background modeling based on improved Susan filtering

    Science.gov (United States)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the infrared image background, its Gaussian kernel lacks the ability to filter directionally. After filtering, the edge information of the image is poorly preserved, so that many edge singular points remain in the difference image, increasing the difficulty of target detection. To solve these problems, an anisotropy algorithm is introduced in this paper, and an anisotropic Gaussian filter is used in place of the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator is used to calculate each pixel's horizontal and vertical gradients, which determine the long-axis direction of the filter. Second, the smoothness of the pixel's local area and neighbourhood is used to calculate the filter length and the short-axis variance. Next, the first-order norm of the difference between the local-area gray levels and their mean is calculated to determine the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling quality is evaluated by mean squared error (MSE), structural similarity (SSIM) and local signal-to-noise ratio gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves better background modeling: it effectively preserves the edge information in the image, the dim small target is effectively enhanced in the difference image, and the false alarm rate is greatly reduced.
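
    The oriented kernel substituted into the SUSAN operator above can be sketched as a 2-D Gaussian with separate long-axis and short-axis standard deviations, rotated to the locally estimated gradient direction. The kernel size and sigmas below are illustrative.

```python
# Anisotropic Gaussian kernel sketch: an oriented 2-D Gaussian, elongated along
# theta (the local edge direction), normalized to unit sum for filtering.

import math

def anisotropic_gaussian_kernel(size, sigma_long, sigma_short, theta):
    """size x size kernel; theta is the long-axis orientation in radians."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's principal axes
            u = x * math.cos(theta) + y * math.sin(theta)
            v = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(u * u) / (2 * sigma_long ** 2)
                                - (v * v) / (2 * sigma_short ** 2)))
        kernel.append(row)
    total = sum(sum(r) for r in kernel)
    return [[val / total for val in row] for row in kernel]   # unit sum

k = anisotropic_gaussian_kernel(5, 2.0, 0.8, math.pi / 4)     # 45-degree edge
```

Convolving with such a kernel smooths along the edge direction while keeping the cross-edge profile sharp, which is what preserves edge information in the background estimate.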

  17. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    Science.gov (United States)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade the image quality of electro-optical and thermal imaging systems and, in the case of biometric technology, translate to poor matching-algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching-algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera-parameter trade studies for designing the next generation of biometric sensor systems.
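    As a rough illustration of how turbulence degrades imagery before it reaches a matching algorithm, the following sketch applies the three effects named in the abstract (distortion via a smooth random warp, blur, and intensity fluctuation) to an image. This is a crude toy model under assumed parameters, not the turbulence model used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_turbulence(img, blur_sigma=2.0, warp_strength=2.0, seed=0):
    """Crude path-turbulence model: smooth random warp (distortion),
    Gaussian blur, and mild multiplicative gain noise (scintillation)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    # Smooth random displacement fields mimic refractive-index cells
    dy = gaussian_filter(rng.standard_normal((h, w)), 8)
    dx = gaussian_filter(rng.standard_normal((h, w)), 8)
    dy *= warp_strength / (np.abs(dy).max() + 1e-9)
    dx *= warp_strength / (np.abs(dx).max() + 1e-9)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(img, [yy + dy, xx + dx], order=1, mode='nearest')
    blurred = gaussian_filter(warped, blur_sigma)
    gain = 1.0 + 0.05 * gaussian_filter(rng.standard_normal((h, w)), 8)
    return blurred * gain
```

    Running a face chip through such a degradation before matching gives a quick, if crude, sense of how scores fall off with blur and warp strength.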

  18. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  19. Extragalactic infrared astronomy

    International Nuclear Information System (INIS)

    Gondhalekar, P.M.

    1985-05-01

    The paper concerns the field of Extragalactic Infrared Astronomy, discussed at the Fourth RAL Workshop on Astronomy and Astrophysics. Fifteen papers were presented on infrared emission from extragalactic objects. Both ground-(and aircraft-) based and IRAS infrared data were reviewed. The topics covered star formation in galaxies, active galactic nuclei and cosmology. (U.K.)

  20. Recent developments in structure-preserving algorithms for oscillatory differential equations

    CERN Document Server

    Wu, Xinyuan

    2018-01-01

    The main theme of this book is recent progress in structure-preserving algorithms for solving initial value problems of oscillatory differential equations arising in a variety of research areas, such as astronomy, theoretical physics, electronics, quantum mechanics and engineering. It systematically describes the latest advances in the development of structure-preserving integrators for oscillatory differential equations, such as structure-preserving exponential integrators, functionally fitted energy-preserving integrators, exponential Fourier collocation methods, trigonometric collocation methods, and symmetric and arbitrarily high-order time-stepping methods. Most of the material presented here is drawn from the recent literature. Theoretical analysis of the newly developed schemes shows their advantages in the context of structure preservation. All the new methods introduced in this book are proven to be highly effective compared with the well-known codes in the scientific literature. This book also addre...

  1. Infrared thermography

    CERN Document Server

    Meola, Carosena

    2012-01-01

    This e-book covers basic IRT theory, infrared detectors, signal digitization, and applications of infrared thermography in many fields such as medicine, foodstuff conservation, fluid dynamics, architecture, anthropology, condition monitoring, and non-destructive testing and evaluation of materials and structures.

  2. Development of an algorithm for X-ray exposures using the Panasonic UD-802A thermoluminescent dosemeter

    International Nuclear Information System (INIS)

    McKittrick, Leo; Currivan, Lorraine; Pollard, David; Nicholls, Colyn; Romero, A.M.; Palethorpe, Jeffrey

    2008-01-01

    Full text: As part of its continuous quality improvement, the Dosimetry Service of the Radiological Protection Institute of Ireland (RPII), in conjunction with Panasonic Industrial Europe (UK), has further investigated the use of the standard Panasonic algorithm for X-ray exposures with the Panasonic UD-802A TL dosemeter. Originally developed to satisfy the now-obsolete standard ANSI N13.11-1983, the standard Panasonic dose algorithm has undergone several revisions, such as for HPS N13.11-2001. This paper presents a dose algorithm that corrects the dose response at low energies, such as X-ray radiation, for a four-element TL dosemeter by exploiting the differing behaviour of its two independent phosphors. A series of irradiations over a range of energies from N-20 up to Co-60 was carried out, our particular interest being the responses to X-ray irradiations. Irradiations were performed at RRPPS, University Hospital Birmingham NHS Foundation Trust, UK; HPA, UK; and CIEMAT, Madrid, Spain. Different irradiation conditions were employed, including X-rays from the narrow and wide spectra described by ISO 4037-1 (1996), on ISO water slab and PMMA slab phantoms respectively. Using the UD-802A TLD and UD-854AT hanger combination, the response data from the series of irradiations were used to validate and, where necessary, modify the photon/beta branches of the algorithm to: 1. best estimate Hp(10) and Hp(0.07); 2. provide information on irradiation energies; 3. allow verification by performance tests. This work further advances the algorithm developed at CIEMAT, in which a best-fit polynomial trend is applied to the dose-response variations between the independent phosphors. (author)
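    The best-fit polynomial approach mentioned in the final sentence can be illustrated as follows. The calibration numbers here are invented for illustration (they are not the real UD-802A element responses), and `hp10_correction` is a hypothetical name; the idea is that the ratio of two phosphor responses encodes photon energy and so indexes a low-energy dose correction.

```python
import numpy as np

# Hypothetical calibration data: ratio of the responses of two TLD
# elements with different phosphors, measured at known photon energies.
energies_keV = np.array([20, 30, 48, 65, 83, 118, 164, 208, 662, 1250])
ratio = np.array([3.2, 2.9, 2.4, 2.0, 1.7, 1.4, 1.2, 1.1, 1.0, 1.0])

# Best-fit polynomial mapping the observed element ratio to the dose
# correction factor (here the "true" factor is taken as 1/ratio).
coeffs = np.polyfit(ratio, 1.0 / ratio, deg=3)

def hp10_correction(r):
    """Correction factor applied to the photon-branch dose for ratio r."""
    return np.polyval(coeffs, r)
```

    At high energies the two elements respond alike (ratio near 1) and the correction vanishes; at low X-ray energies the ratio grows and the fitted polynomial scales the reported dose down accordingly.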

  3. A pipelined FPGA implementation of an encryption algorithm based on genetic algorithm

    Science.gov (United States)

    Thirer, Nonel

    2013-05-01

    With the evolution of digital data storage and exchange, it is essential to protect confidential information from unauthorized access. High-performance encryption algorithms have been developed and implemented in software and hardware, and many methods of attacking cipher texts have been developed as well. In recent years, the genetic algorithm has gained much interest both in the cryptanalysis of cipher texts and in encryption ciphers. This paper analyses the possibility of using a genetic algorithm as a multiple-key sequence generator for an AES (Advanced Encryption Standard) cryptographic system, and of using a three-stage pipeline (with four main blocks: input data, AES core, key generator, output data) to provide fast encryption and storage/transmission of large amounts of data.
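    A genetic algorithm acting as a key-sequence generator can be sketched as below. The fitness function (bit balance plus transition count, a crude randomness proxy) and all parameters are illustrative assumptions, not the scheme or the FPGA pipeline proposed in the paper.

```python
import random

KEY_BITS = 128  # one AES-128 key per run

def fitness(key_bits):
    """Favor balanced, non-repetitive bit strings (simple randomness proxy)."""
    ones = sum(key_bits)
    balance = 1.0 - abs(ones - KEY_BITS / 2) / (KEY_BITS / 2)
    transitions = sum(a != b for a, b in zip(key_bits, key_bits[1:]))
    return balance + transitions / (KEY_BITS - 1)

def evolve_key(pop_size=20, generations=50, seed=42):
    """Evolve a population of candidate keys; return the fittest as an int."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(KEY_BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, KEY_BITS)     # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:              # occasional mutation
                child[rng.randrange(KEY_BITS)] ^= 1
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return int(''.join(map(str, best)), 2)       # 128-bit key candidate
```

    In a pipelined design, a hardware block running this kind of loop would hand successive keys to the AES core while earlier blocks of data are still being encrypted.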

  4. Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    Directory of Open Access Journals (Sweden)

    Jose R. Celaya

    2013-01-01

    Full Text Available As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize the information they supply becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for the electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision-making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.