WorldWideScience

Sample records for level noisy images

  1. Efficient Filtering of Noisy Fingerprint Images

    Directory of Open Access Journals (Sweden)

    Maria Liliana Costin

    2016-01-01

    Fingerprint identification is an important field within the wide domain of biometrics, with many applications in areas such as forensics, mobile phones, access systems and airports. Many elaborate algorithms exist for fingerprint identification, but none can guarantee results that are always 100% accurate. The first step in analysing a fingerprint image is pre-processing, or filtering; if the result of this step is of poor quality, the subsequent identification process can fail. A major difficulty arises when the images to be identified against a fingerprint database are corrupted by different types of noise. The objectives of the paper are: successful filtering of noisy digital images, a novel and more robust algorithm for identifying the best filtering algorithm, and the classification and ranking of the images. The choice of the best filtered image from the set produced by 9 algorithms is made with a dual fuzzy and aggregation model. We propose in this paper a set of 9 novel filters for processing digital images, based on the following methods: quartiles, medians, averages, thresholds and histogram equalization, applied either over the whole image or locally on small areas. Finally, statistics reveal the classification and ranking of the best algorithms.
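
    A minimal sketch of one family from this kind of filter bank: a local quartile/median filter over small neighborhoods (the paper's full 9-filter bank and its fuzzy ranking and aggregation stages are not reproduced; window size and percentiles below are assumptions).

      import numpy as np
      from scipy.ndimage import generic_filter

      def quartile_filter(image, size=3, q=50):
          """Replace each pixel by the q-th percentile of its size x size window."""
          return generic_filter(image.astype(float),
                                lambda w: np.percentile(w, q), size=size)

      noisy = np.random.rand(64, 64)                         # stand-in for a fingerprint image
      denoised_median = quartile_filter(noisy, size=3, q=50)  # median filter
      denoised_lower  = quartile_filter(noisy, size=3, q=25)  # lower-quartile filter

    With q=50 this is the classic median filter; other percentiles give the quartile variants mentioned in the abstract.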

  2. Estimation of object motion parameters from noisy images.

    Science.gov (United States)

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consists of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one dimensional images of a two dimensional object. Thus, the object moves in one transverse dimension, and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
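
    A simplified stand-in for the recursive estimator described above: a linear Kalman filter tracking 1-D position and velocity from noisy image coordinates. The paper's actual method is an iterated extended Kalman filter over a nonlinear central-projection measurement model; the dynamics, noise values and 5% noise level below are assumptions for illustration.

      import numpy as np

      def kalman_track(z, dt=1.0, q=1e-3, r=0.05**2):
          F = np.array([[1, dt], [0, 1]])        # constant-velocity dynamics
          H = np.array([[1.0, 0.0]])             # we observe position only
          Q, R = q * np.eye(2), np.array([[r]])
          x, P = np.zeros(2), np.eye(2)
          out = []
          for zk in z:
              x, P = F @ x, F @ P @ F.T + Q              # predict
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
              x = x + (K @ (zk - H @ x)).ravel()         # measurement update
              P = (np.eye(2) - K @ H) @ P
              out.append(x.copy())
          return np.array(out)

      truth = 0.1 * np.arange(50)
      measurements = truth + 0.05 * np.random.randn(50)   # roughly the 5% noise level used in the paper
      estimates = kalman_track(measurements)

    Because the estimate is refined recursively at every frame, lengthening the image sequence smooths the parameter estimates, which is the effect the abstract describes.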

  3. Machine printed text and handwriting identification in noisy document images.

    Science.gov (United States)

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of identifying text in noisy document images. We focus in particular on segmenting and distinguishing between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content, and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model it based on selected features. Trained Fisher classifiers are used to separate machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov random field (MRF) based approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.
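
    A minimal sketch of the first stage only: a Fisher linear discriminant separating printed text, handwriting and noise from per-block feature vectors. The paper's feature set and the MRF context post-processing are not reproduced; the random features and labels below are placeholders.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 8))            # 8 assumed features per text block
      y = rng.integers(0, 3, size=300)         # 0=printed, 1=handwriting, 2=noise
      clf = LinearDiscriminantAnalysis().fit(X, y)
      labels = clf.predict(X)                  # per-block class decisions, later refined by the MRF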

  4. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removal step is used only to detect the noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. Yet depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet feature extraction on iris recognition performance. In addition, an effective noise removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves recognition accuracy.

  5. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors either from external data or from the noisy image itself. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may be inaccurate due to the interference of noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.

  6. Fuzzy Logic Based Edge Detection in Smooth and Noisy Clinical Images.

    Directory of Open Access Journals (Sweden)

    Izhar Haq

    Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights the high frequency components in an image, and it is a challenging task that becomes more arduous for noisy images. This study focuses on fuzzy logic based edge detection in smooth and noisy clinical images. The proposed method (for noisy images) employs a 3 × 3 mask guided by a fuzzy rule set. Moreover, for smooth clinical images, an extra contrast adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images, and the results were compared with established edge detection techniques such as Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When the developed technique was applied to a smooth clinical image of size 270 × 290 pixels with 24 dB 'salt and pepper' noise, it detected very few (22) false edge pixels, compared to Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045). It is therefore evident that the developed method offers an improved solution to the edge detection problem in smooth and noisy clinical images.
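
    A minimal sketch of the general idea of fuzzy-rule-guided edge detection: local gradient magnitudes are mapped through a fuzzy membership ramp and defuzzified into an edge decision. The paper's actual 3 × 3 rule set and its contrast-adjustment mask are not reproduced; the membership cut points are assumptions.

      import numpy as np
      from scipy.ndimage import sobel

      def fuzzy_edges(img, low=0.1, high=0.4):
          gx, gy = sobel(img, axis=0), sobel(img, axis=1)
          mag = np.hypot(gx, gy)
          mag = mag / (mag.max() + 1e-12)                      # normalize gradient magnitude
          mu = np.clip((mag - low) / (high - low), 0.0, 1.0)   # ramp membership "is-edge"
          return mu > 0.5                                      # defuzzified decision

      img = np.random.rand(64, 64)        # stand-in clinical image
      edges = fuzzy_edges(img)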

  7. Enhancement of noisy EDX HRSTEM spectrum-images by combination of filtering and PCA.

    Science.gov (United States)

    Potapov, Pavel; Longo, Paolo; Okunishi, Eiji

    2017-05-01

    STEM spectrum-imaging with EDX signal collection is considered with a view to extracting the maximum information from very noisy data. It is emphasized that spectrum-images with a weak EDX signal often suffer from information loss in the course of PCA treatment. The loss occurs when the level of random noise exceeds a certain threshold. Weighted PCA, though potentially helpful in isolating meaningful variations from noise, might provoke a complete loss of information when the EDX signal is weak. Filtering datasets prior to PCA can improve the situation and recover the lost information; in particular, Gaussian kernel filters are found to be efficient. A new filter useful in the case of sparse atomic-resolution EDX spectrum-images is suggested.
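
    A minimal sketch of the reported recipe: smooth the noisy spectrum-image with a spatial Gaussian kernel before PCA so that weak EDX features survive the decomposition. Array shapes, the kernel width and the component count are assumptions, not the authors' settings.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.decomposition import PCA

      si = np.random.poisson(0.1, size=(64, 64, 200)).astype(float)  # (x, y, energy) counts
      si_filt = gaussian_filter(si, sigma=(1.5, 1.5, 0))   # smooth spatially, not spectrally
      pca = PCA(n_components=5)
      scores = pca.fit_transform(si_filt.reshape(-1, si.shape[2]))
      denoised = pca.inverse_transform(scores).reshape(si.shape)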

  8. Using the generalized Radon transform for detection of curves in noisy images

    DEFF Research Database (Denmark)

    Toft, Peter Aundal

    1996-01-01

    In this paper the discrete generalized Radon transform is investigated as a tool for the detection of curves in noisy digital images. The discrete generalized Radon transform maps an image into a parameter domain, where curves following a specific parameterized curve form correspond to a peak...
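
    A minimal sketch for straight lines, the simplest case of the generalized transform: map the image into (angle, offset) space with the Radon transform and read the line parameters off the dominant peak. The noise level and image below are assumptions.

      import numpy as np
      from skimage.transform import radon

      img = np.zeros((64, 64))
      img[32, :] = 1.0                                # a horizontal line
      img += 0.2 * np.random.randn(64, 64)            # additive noise

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(img, theta=theta, circle=False)
      offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
      print("detected angle (deg):", theta[angle_idx])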

  9. A variational ensemble scheme for noisy image data assimilation

    Science.gov (United States)

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-05-01

    Data assimilation techniques aim at recovering a trajectory of the system state variables, denoted X, along time from partially observed, noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective: on one hand, they provide a denoising - or reconstruction - procedure for the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess:

    J(η(x)) = (1/2) ||X_b(x) - X(t_0, x)||²_B + (1/2) ∫_{t_0}^{t_f} ||H(X(t, x)) - Y(t, x)||²_R dt,   (1)

    where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but similarly to ensemble filters, it introduces into its objective function an empirical ensemble-based background-error covariance defined as:

    B ≡ < (X_b - <X_b>) (X_b - <X_b>)^T >.   (2)

    Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. The resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3]. It presents two main advantages: first, it does not require the construction of the adjoint of the dynamics tangent linear operator, which is a considerable advantage with respect to the method's implementation, and second, it enables the handling of a flow...
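
    A direct numpy transcription of the functional (1), under assumed shapes: a model integrator produces the state trajectory X(t), H is a linear observation operator, and B and R are given covariance matrices. This is only the cost being minimized, not the ensemble solver itself.

      import numpy as np

      def cost(eta, Xb0, model, H, Y, Binv, Rinv, dt):
          X = model(Xb0 + eta)                 # trajectory from the corrected initial state
          J = 0.5 * eta @ Binv @ eta           # background term: discrepancy at t0 is eta
          for t in range(X.shape[0]):          # time integral as a Riemann sum
              r = H @ X[t] - Y[t]              # innovation against the observations
              J += 0.5 * (r @ Rinv @ r) * dt
          return J

      model = lambda x0: np.tile(x0, (10, 1))  # trivial "persistence" dynamics, for illustration
      H = np.eye(3); Y = np.zeros((10, 3))
      J = cost(np.zeros(3), np.ones(3), model, H, Y, np.eye(3), np.eye(3), 0.1)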

  10. Detection of electrophysiology catheters in noisy fluoroscopy images.

    Science.gov (United States)

    Franken, Erik; Rongen, Peter; van Almsick, Markus; ter Haar Romeny, Bart

    2006-01-01

    Cardiac catheter ablation is a minimally invasive medical procedure to treat patients with heart rhythm disorders. It is useful to know the positions of the catheters and electrodes during the intervention, e.g. for the automation of cardiac mapping. Our goal is therefore to develop a robust image analysis method that can detect the catheters in X-ray fluoroscopy images. Our method uses steerable tensor voting in combination with a catheter-specific multi-step extraction algorithm. Evaluation on clinical fluoroscopy images shows that extraction of the catheter tip in particular is successful, and that the use of tensor voting accounts for a large increase in performance.

  11. Blind Compressed Image Watermarking for Noisy Communication Channels

    Science.gov (United States)

    2015-10-26

    The Lenna test image [11] is used for the simulations, and gradient projection for sparse reconstruction (GPSR) [12] is used to solve the convex optimization problem.

  12. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite's sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the error of the result relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
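
    A minimal sketch of the two estimation steps named above: take the blur direction as the dominant stripe orientation in the log spectrum via the Radon transform, then deblur with Lucy-Richardson. The GrabCut segmentation and the column-statistics length estimate are omitted; the variance-peak heuristic and the PSF below are assumptions.

      import numpy as np
      from skimage.transform import radon
      from skimage.restoration import richardson_lucy

      img = np.random.rand(128, 128)                     # stand-in blurred image
      spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(spectrum, theta=theta, circle=False)
      angle = theta[np.argmax(sinogram.var(axis=0))]     # angle with strongest stripe contrast
      print("estimated blur direction (deg):", angle)

      psf = np.ones((1, 9)) / 9.0                        # assumed horizontal motion PSF
      restored = richardson_lucy(img, psf)               # iterative deblurring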

  13. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    Science.gov (United States)

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

  14. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  15. A Novel Image Encryption Scheme Based on Clifford Attractor and Noisy Logistic Map for Secure Transferring Images in Navy

    Directory of Open Access Journals (Sweden)

    Mohadeseh Kanafchian

    2017-04-01

    In this paper, we first give a brief introduction to chaotic image encryption and then investigate some important properties and behaviour of the logistic map. For the logistic map, an aperiodic trajectory or random-like fluctuation cannot be obtained for some choices of initial condition; therefore, a noisy logistic map with an additive system noise is introduced. The proposed scheme is based on the extended map of the Clifford strange attractor, where each dimension has a specific role in the encryption process: two dimensions are used for pixel permutation and the third dimension is used for pixel diffusion. In order to enlarge the key space of the Clifford encryption system we use the noisy logistic map, and a novel encryption scheme based on the Clifford attractor and the noisy logistic map for secure image transfer is proposed. The algorithm consists of two parts: the noisy logistic map shuffles the pixel positions and pixel values several times, and the new pixel positions and values are then generated by the Clifford system. To illustrate the efficiency of the proposed scheme, various types of security analysis are performed. It can be concluded that the proposed image encryption system is a suitable choice for practical applications.
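
    A minimal sketch of the keystream generator the scheme builds on: the logistic map with a small additive noise term, used here to derive a pixel-permutation order. The Clifford-attractor diffusion stage is not reproduced, and all parameter values are assumptions.

      import numpy as np

      def noisy_logistic(x0, n, r=3.99, eps=1e-4, seed=1):
          rng = np.random.default_rng(seed)
          x, out = x0, np.empty(n)
          for i in range(n):
              x = r * x * (1 - x) + eps * rng.uniform(-1, 1)  # logistic map + system noise
              x = x % 1.0                                     # keep the orbit inside [0, 1)
              out[i] = x
          return out

      n_pixels = 64 * 64
      keystream = noisy_logistic(0.3141, n_pixels)
      perm = np.argsort(keystream)              # pixel-permutation order from the keystream
      shuffled = np.arange(n_pixels)[perm]      # apply the permutation to pixel indices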

  16. Algebraically approximate and noisy realization of discrete-time systems and digital images

    CERN Document Server

    Hasegawa, Yasumichi

    2009-01-01

    This monograph deals with approximation and noise cancellation for dynamical systems that include linear and nonlinear input/output relationships. It also deals with approximation and noise cancellation for two-dimensional arrays. It will be of special interest to researchers, engineers and graduate students who have specialized in filtering theory, system theory and digital images. The monograph is composed of two parts: Part I and Part II deal with approximation and noise cancellation of dynamical systems and of digital images, respectively. From noiseless or noisy data, reduction will be...

  17. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    International Nuclear Information System (INIS)

    Wisotzky, E.; Fast, M.F.; Nill, S.

    2015-01-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio by between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour, one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.
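
    A minimal sketch of the detection step: normalized cross-correlation template matching, with the correlation peak taken as the marker position. The training phase and the per-frame speed limits described above are not reproduced; the frame, template size and position are placeholders.

      import numpy as np
      from skimage.feature import match_template

      frame = np.random.rand(256, 256)               # stand-in noisy kV image
      template = frame[100:116, 100:116].copy()      # 16x16 marker template
      ncc = match_template(frame, template)          # normalized cross-correlation map
      y, x = np.unravel_index(np.argmax(ncc), ncc.shape)
      print("marker detected at:", (y, x))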

  18. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio by between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour, one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.

  19. Toward a unified view of radiological imaging systems. Part II: Noisy images

    International Nuclear Information System (INIS)

    Wagner, R.F.

    1977-01-01

    "The imaging process is fundamentally a sampling process." This philosophy of Otto Schade, utilizing the concepts of sample number and sampling aperture, is applied to a systems analysis of radiographic imaging, including some aspects of vision. It leads to a simple modification of the Rose statistical model; this results in excellent fits to the Blackwell data on the detectability of disks as a function of contrast and size. It gives a straightforward prescription for calculating a signal-to-noise ratio, which is applicable to the detection of low-contrast detail in screen-film imaging, including the effects of magnification. The model lies between the optimistic extreme of the Rose model and the pessimistic extreme of the Morgan model. For high-contrast detail, the rules for the evaluation of noiseless images are recovered.

  20. Learning from Weak and Noisy Labels for Semantic Segmentation

    KAUST Repository

    Lu, Zhiwu

    2016-04-08

    A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy.

  1. Learning from Weak and Noisy Labels for Semantic Segmentation

    KAUST Repository

    Lu, Zhiwu; Fu, Zhenyong; Xiang, Tao; Han, Peng; Wang, Liwei; Gao, Xin

    2016-01-01

    A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy.

  2. Gravel Image Segmentation in Noisy Background Based on Partial Entropy Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Because of the wide variation in gray levels and particle dimensions, the presence of many small gravel objects in the background, and corruption of the image by noise, it is difficult to segment gravel objects. In this paper, we develop a partial entropy method and succeed in segmenting gravel objects. We present the entropy principles and the calculation methods. Moreover, we use the minimum entropy error to select a segmentation threshold automatically, and we introduce a filtering method using mathematical morphology. Segmentation experiments performed with different window dimensions on a group of gravel images demonstrate that this method has a high segmentation rate and low noise sensitivity.
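
    The abstract does not spell out the partial-entropy criterion, so the sketch below uses a related, classical entropy-based threshold selection (Kapur's maximum-entropy rule): pick the gray level that maximizes the summed entropies of the object and background histograms.

      import numpy as np

      def entropy_threshold(img, bins=256):
          hist, _ = np.histogram(img, bins=bins, range=(0, 1))
          p = hist / hist.sum()
          best_t, best_h = 0, -np.inf
          for t in range(1, bins - 1):
              p0, p1 = p[:t].sum(), p[t:].sum()
              if p0 == 0 or p1 == 0:
                  continue
              q0, q1 = p[:t] / p0, p[t:] / p1          # class-conditional histograms
              h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
                  - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
              if h > best_h:
                  best_t, best_h = t, h
          return best_t / bins

      img = np.random.rand(64, 64)        # stand-in gravel image, values in [0, 1)
      t = entropy_threshold(img)
      mask = img > t                      # segmented gravel candidates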

  3. Higher-Order Statistics for the Detection of Small Objects in a Noisy Background Application on Sonar Imaging

    Directory of Open Access Journals (Sweden)

    M. Amate

    2007-01-01

    An original algorithm for the detection of small objects in a noisy background is proposed, and its application to underwater object detection by sonar imaging is addressed. This new method is based on higher-order statistics (HOS) that are estimated locally on the images. The proposed algorithm is divided into two steps. In the first step, HOS (skewness and kurtosis) are estimated locally using a square sliding computation window. Small deterministic objects have different statistical properties from the background and are thus highlighted. The influence of the signal-to-noise ratio (SNR) on the results is studied in the case of Gaussian noise; mathematical expressions for the estimators and the expected performance are derived and experimentally confirmed. In the second step, the results are focused by a matched filter using a theoretical model, which enables precise localization of the regions of interest. The proposed method generalizes to other statistical distributions, and we derive the theoretical expressions of the HOS estimators for a Weibull distribution (both when only noise is present and when a small deterministic object lies within the filtering window). This enables the application of the proposed technique to the processing of synthetic aperture sonar data containing underwater mines whose echoes have to be detected and located. Results on real data sets are presented and quantitatively evaluated using receiver operating characteristic (ROC) curves.
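
    A minimal sketch of the first detection step: skewness and kurtosis estimated in a square sliding window, computed here from running moments so that small deterministic objects stand out against the noise background. The window size is an assumption, and the matched-filter focusing stage is not reproduced.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_moments(img, size=9):
          m1 = uniform_filter(img, size)
          m2 = uniform_filter(img**2, size)
          m3 = uniform_filter(img**3, size)
          m4 = uniform_filter(img**4, size)
          var = np.maximum(m2 - m1**2, 1e-12)
          skew = (m3 - 3*m1*m2 + 2*m1**3) / var**1.5              # central 3rd moment / sigma^3
          kurt = (m4 - 4*m1*m3 + 6*m1**2*m2 - 3*m1**4) / var**2   # central 4th moment / sigma^4
          return skew, kurt

      img = np.random.randn(128, 128)          # Gaussian noise background
      img[60:64, 60:64] += 3.0                 # a small bright object
      skew_map, kurt_map = local_moments(img)  # object region deviates from Gaussian values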

  4. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and threshold value using proposed criteria that provide the best nonlinear approximations and take quantization errors into consideration. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR and SSIM.
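
    A minimal sketch of one semisoft (firm) thresholding rule applied to a 2-D wavelet decomposition. The paper's fusion of several rules and its quantization-aware threshold choice are not reproduced; the wavelet, level and threshold values are assumptions.

      import numpy as np
      import pywt

      def firm(c, t1, t2):
          """Semisoft rule: zero below t1, linear ramp on (t1, t2], keep above t2."""
          out = np.where(np.abs(c) <= t1, 0.0, c)
          ramp = np.sign(c) * t2 * (np.abs(c) - t1) / (t2 - t1)
          return np.where((np.abs(c) > t1) & (np.abs(c) <= t2), ramp, out)

      img = np.random.rand(64, 64)
      coeffs = pywt.wavedec2(img, 'db4', level=3)
      new_coeffs = [coeffs[0]] + [
          tuple(firm(d, 0.05, 0.15) for d in detail) for detail in coeffs[1:]
      ]
      compressed = pywt.waverec2(new_coeffs, 'db4')   # reconstruction from thresholded coefficients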

  5. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    Science.gov (United States)

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal in SEM images; the noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is not widely utilized, even though it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. To enable wide application of the CHS method in microscopy, the characteristics of CHS, which until now have not been well clarified, are evaluated. Based on this evaluation, the cursor width (CW), the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise Nσ. In addition, the disadvantage that CHS cannot remove noise with excessively large amplitude is remedied by a postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes.

  6. Noisy-or classifier

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2006-01-01

    Vol. 21, No. 3 (2006), pp. 381-389 ISSN 0884-8173 R&D Projects: GA ČR(CZ) GA201/04/0393 Institutional research plan: CEZ:AV0Z10750506 Keywords: automatic classification * probabilistic models * EM algorithm * noisy-or model Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.429, year: 2006
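
    The record above carries only bibliographic metadata. For reference, the noisy-or model named in the title assumes each present parent feature independently fails to trigger the class with its own inhibition probability, giving P(y = 1 | x) = 1 - (1 - leak) * prod_i (1 - p_i)^x_i. A minimal sketch, with all parameter values assumed:

      import numpy as np

      def noisy_or(x, p, leak=0.01):
          # x: binary feature vector; p: per-feature activation probabilities
          return 1.0 - (1.0 - leak) * np.prod((1.0 - p) ** x)

      p = np.array([0.8, 0.5, 0.3])      # assumed activation probabilities
      x = np.array([1, 0, 1])            # observed binary features
      print(noisy_or(x, p))              # class posterior under the noisy-or CPD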

  7. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  8. Classification of cryo electron microscopy images, noisy tomographic images recorded with unknown projection directions, by simultaneously estimating reconstructions and application to an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22

    Science.gov (United States)

    Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.

    2010-08-01

    Cryo electron microscopy is frequently used on biological specimens that show a mixture of different types of object. Because the electron beam rapidly destroys the specimen, the beam current is minimized which leads to noisy images (SNR substantially less than 1) and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class and then use of the reconstructions to label the class of each image is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.

  9. Authentication over Noisy Channels

    OpenAIRE

    Lai, Lifeng; Gamal, Hesham El; Poor, H. Vincent

    2008-01-01

    In this work, message authentication over noisy channels is studied. The model developed in this paper is the authentication theory counterpart of Wyner's wiretap channel model. Two types of opponent attacks, namely impersonation attacks and substitution attacks, are investigated for both single message and multiple message authentication scenarios. For each scenario, information theoretic lower and upper bounds on the opponent's success probability are derived. Remarkably, in both scenarios,...

  10. Using the value of Lin's concordance correlation coefficient as a criterion for efficient estimation of areas of leaves of eelgrass from noisy digital images.

    Science.gov (United States)

    Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar

    2014-01-01

    Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery can produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage inflicted by drag forces or humidity contents induce noise effects, reducing precision. Available techniques for dealing with noise caused by humidity contents on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selecting an interval of tolerance of similarity for efficient measurements requires extended computational routines with tied statistical inferences, making the concomitant tasks complicated and time consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed at dealing with any sort of noise modifying eelgrass leaf images. Moreover, this selection criterion relies on only a single statistic: the maximum value of the Concordance Correlation Coefficient for reproducibility of observed areas of leaves through proxies obtained from digital images. Available data reveal that the present method delivers simplified, consistent estimations of areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal...
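
    A minimal sketch of the selection statistic named above: Lin's concordance correlation coefficient between observed leaf areas and image-derived proxies; the tolerance interval maximizing this value is the one retained. The data values below are hypothetical.

      import numpy as np

      def lins_ccc(x, y):
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          sxy = ((x - mx) * (y - my)).mean()
          return 2 * sxy / (vx + vy + (mx - my) ** 2)   # Lin's concordance correlation

      observed = np.array([10.2, 11.5, 9.8, 12.0, 10.9])   # hypothetical measured areas
      proxy    = np.array([10.0, 11.8, 9.5, 12.3, 11.1])   # hypothetical image-based estimates
      print(lins_ccc(observed, proxy))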

  11. Noisy quantum game

    International Nuclear Information System (INIS)

    Chen Jingling; Kwek, L.C.; Oh, C.H.

    2002-01-01

    In a recent paper [D. A. Meyer, Phys. Rev. Lett. 82, 1052 (1999)], it has been shown that a classical zero-sum strategic game can become a winning quantum game for the player with a quantum device. Nevertheless, it is well known that quantum systems easily decohere in noisy environments. In this paper, we show that if the handicapped player with classical means can delay his action for a sufficiently long time, the quantum version reverts to the classical zero-sum game under decoherence

  12. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    Science.gov (United States)

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images drawn from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  13. Cryptography from noisy storage.

    Science.gov (United States)

    Wehner, Stephanie; Schaffner, Christian; Terhal, Barbara M

    2008-06-06

    We show how to implement cryptographic primitives based on the realistic assumption that quantum storage of qubits is noisy. We thereby consider individual-storage attacks; i.e., the dishonest party attempts to store each incoming qubit separately. Our model is similar to the model of bounded-quantum storage; however, we consider an explicit noise model inspired by present-day technology. To illustrate the power of this new model, we show that a protocol for oblivious transfer is secure for any amount of quantum-storage noise, as long as honest players can perform perfect quantum operations. Our model also allows us to show the security of protocols that cope with noise in the operations of the honest players and achieve more advanced tasks such as secure identification.

  14. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Food image recognition is a key enabler for many smart home applications such as the smart kitchen and the smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights into users' preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern, since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first propose a superpixel based linear distance coding (LDC) framework in which distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework shows superior performance in both accuracy and robustness. In addition, to better model deformable food part distributions, we extend LDC's feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared to other low-level and mid-level approaches in the literature.

  15. Noisy time-dependent spectra

    International Nuclear Information System (INIS)

    Shore, B.W.; Eberly, J.H.

    1983-01-01

    The definition of the time-dependent spectrum registered by an idealized spectrometer responding to a time-varying electromagnetic field, as proposed by Eberly and Wodkiewicz and subsequently applied to the spectrum of laser-induced fluorescence by Eberly, Kunasz, and Wodkiewicz, is here extended to allow a stochastically fluctuating (interruption-model) environment: we provide an algorithm for numerical determination of the time-dependent fluorescence spectrum of an atom subject to excitation by an intense noisy laser and interruptive relaxation.

  16. Smoothing of, and parameter estimation from, noisy biophysical recordings.

    Directory of Open Access Journals (Sweden)

    Quentin J M Huys

    2009-05-01

    Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing noisy biophysical data and for inferring biophysical parameters in the face of recording noise.
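
    A minimal sketch of the core technique named above: a bootstrap particle filter smoothing a noisy 1-D voltage-like trace under an assumed AR(1) state model. The detailed biophysical cell model and the expectation-maximization parameter updates are not reproduced; all model constants are assumptions.

      import numpy as np

      def particle_filter(y, n=500, a=0.95, sq=0.1, sr=0.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.normal(0, 1, n)                     # initial particle cloud
          means = []
          for yt in y:
              x = a * x + rng.normal(0, sq, n)        # propagate assumed AR(1) dynamics
              w = np.exp(-0.5 * ((yt - x) / sr) ** 2) # Gaussian observation likelihood
              w /= w.sum()
              means.append(np.sum(w * x))             # posterior-mean (smoothed) estimate
              x = x[rng.choice(n, n, p=w)]            # resample particles by weight
          return np.array(means)

      truth = np.cumsum(np.random.randn(100)) * 0.1
      smoothed = particle_filter(truth + 0.5 * np.random.randn(100))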

  17. Active learning for noisy oracle via density power divergence.

    Science.gov (United States)

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework built on robust measures based on density power divergence. By minimizing a density power divergence, such as the β-divergence or γ-divergence, one can estimate the model accurately even in the presence of noisy labels within the data. Accordingly, we develop query selection measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods.

  18. Analysis of gene expression levels in individual bacterial cells without image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, In Hae; Son, Minjun [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States); Hagen, Stephen J., E-mail: sjhagen@ufl.edu [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States)

    2012-05-11

    Highlights: ► We present a method for extracting gene expression data from images of bacterial cells. ► The method does not employ cell segmentation and does not require high magnification. ► Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. ► We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.
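
    A minimal sketch of the correlation idea: pool pixel brightnesses from registered phase-contrast and fluorescence images and fit a simple linear model, whose slope estimates the dominant expression level. The authors fit a physical model of phase contrast and resolve multiple expression levels; a single straight line and the synthetic data below are assumed simplifications.

      import numpy as np

      rng = np.random.default_rng(0)
      phase = rng.uniform(0.2, 1.0, size=(64, 64))          # cell-mass proxy per pixel
      fluor = 3.0 * phase + rng.normal(0, 0.1, (64, 64))    # reporter signal per pixel

      slope, intercept = np.polyfit(phase.ravel(), fluor.ravel(), 1)
      print("estimated expression level (slope):", slope)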

  19. Analysis of gene expression levels in individual bacterial cells without image segmentation

    International Nuclear Information System (INIS)

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-01-01

    Highlights: ► We present a method for extracting gene expression data from images of bacterial cells. ► The method does not employ cell segmentation and does not require high magnification. ► Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. ► We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.

  20. Security with noisy data (Extended abstract of invited talk)

    NARCIS (Netherlands)

    Skoric, B.; Böhme, R.; Fong, P.W.L.; Safavi-Naini, R.

    2010-01-01

    An overview was given of security applications where noisy data plays a substantial role. Secure Sketches and Fuzzy Extractors were discussed at tutorial level, and two simple Fuzzy Extractor constructions were shown. One of the latest developments was presented: quantum-readout PUFs.

  1. Quantum communication in noisy environments

    International Nuclear Information System (INIS)

    Aschauer, H.

    2004-01-01

    In this thesis, we investigate how protocols in quantum communication theory are influenced by noise. Specifically, we take into account noise during the transmission of quantum information and noise during the processing of quantum information. We describe three novel quantum communication protocols which can be accomplished efficiently in a noisy environment: (1) Factorization of Eve: we show that it is possible to disentangle transmitted qubits a posteriori from the quantum channel's degrees of freedom. (2) Cluster state purification: we give multi-partite entanglement purification protocols for a large class of entangled quantum states. (3) Entanglement purification protocols from quantum codes: we describe a constructive method to create bipartite entanglement purification protocols from quantum error correcting codes, and investigate the properties of these protocols, which can be operated in two different modes, related to quantum communication and quantum computation protocols, respectively.

  2. Penguins and their noisy world

    Directory of Open Access Journals (Sweden)

    Thierry Aubin

    2004-06-01

    Penguins identify their mate or chick by an acoustic signal, the display call. This identification is achieved in a particularly constraining environment: the noisy world of a colony of thousands of birds. To fully understand how the birds solve this communication problem, we have carried out observations, acoustic analyses, and propagation and playback experiments with 6 species of penguins studied in the field. According to our results, it appears that penguins use a particularly efficient 'anti-confusion' and 'anti-noise' coding system, allowing quick identification and localization of individuals on the move in a noisy crowd.

  3. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Min Beom Lee

    2017-12-01

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), the mobile iris challenge evaluation (MICHE) database, and the institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  4. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    Science.gov (United States)

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  5. Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities.

    Science.gov (United States)

    Rodriguez, Jose A; Xu, Rui; Chen, Chien-Chun; Zou, Yunfei; Miao, Jianwei

    2013-04-01

    Coherent diffraction imaging (CDI) is a high-resolution lensless microscopy technique that has been applied to image a wide range of specimens using synchrotron radiation, X-ray free-electron lasers, high harmonic generation, soft X-ray lasers and electrons. Despite recent rapid advances, it remains a challenge to reconstruct fine features in weakly scattering objects such as biological specimens from noisy data. Here an effective iterative algorithm, termed oversampling smoothness (OSS), for phase retrieval of noisy diffraction intensities is presented. OSS exploits the correlation information among the pixels or voxels in the region outside of a support in real space. By properly applying spatial frequency filters to the pixels or voxels outside the support at different stages of the iterative process (i.e. a smoothness constraint), OSS finds a balance between the hybrid input-output (HIO) and error reduction (ER) algorithms to search for a global minimum in solution space, while reducing the oscillations in the reconstruction. Both numerical simulations with Poisson noise and experimental data from a biological cell indicate that OSS consistently outperforms the HIO, ER-HIO and noise robust (NR)-HIO algorithms at all noise levels in terms of accuracy and consistency of the reconstructions. It is expected that OSS will find application in the rapidly growing CDI field, as well as other disciplines where phase retrieval from noisy Fourier magnitudes is needed. The MATLAB (The MathWorks Inc., Natick, MA, USA) source code of the OSS algorithm is freely available from http://www.physics.ucla.edu/research/imaging.
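
    To make the iteration concrete, the following is a minimal Python sketch of an OSS-style loop: a standard HIO update followed by a Gaussian low-pass filter applied only outside the support, with a filter width that tightens over iterations. The filter schedule and parameter values are illustrative assumptions, not the authors' published settings; the reference implementation is the MATLAB code linked above.

        import numpy as np

        def oss_phase_retrieval(magnitudes, support, n_iter=500, beta=0.9):
            # Random initial phases consistent with the measured magnitudes.
            rng = np.random.default_rng(0)
            phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))
            g = np.fft.ifft2(magnitudes * phase).real

            fy = np.fft.fftfreq(magnitudes.shape[0])[:, None]
            fx = np.fft.fftfreq(magnitudes.shape[1])[None, :]
            f2 = fx**2 + fy**2

            for k in range(n_iter):
                # Enforce the measured Fourier magnitudes.
                G = np.fft.fft2(g)
                g_new = np.fft.ifft2(magnitudes * np.exp(1j * np.angle(G))).real

                # HIO update: keep feasible pixels, push back the rest.
                bad = ~support | (g_new < 0)
                g_next = np.where(bad, g - beta * g_new, g_new)

                # Smoothness constraint: low-pass filter only outside the support;
                # the width alpha shrinks over iterations (assumed schedule).
                alpha = 0.45 * (1.0 - k / n_iter) + 0.05
                W = np.exp(-f2 / (2.0 * alpha**2))
                g_smooth = np.fft.ifft2(np.fft.fft2(g_next) * W).real
                g = np.where(support, g_next, g_smooth)
            return g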

  6. Diagnostic reference levels in medical imaging

    International Nuclear Information System (INIS)

    Rosenstein, M.

    2001-01-01

    The paper proposes additional advice to national or local authorities and the clinical community on the application of diagnostic reference levels as a practical tool to manage radiation doses to patients in diagnostic radiology and nuclear medicine. A survey was made of the various approaches that have been taken by authoritative bodies to establish diagnostic reference levels for medical imaging tasks. There are a variety of ways to implement the idea of diagnostic reference levels, depending on the medical imaging task of interest, the national or local state of practice and the national or local preferences for technical implementation. The existing International Commission on Radiological Protection (ICRP) guidance is reviewed, the survey information is summarized, a set of unifying principles is espoused and a statement of additional advice that has been proposed to ICRP Committee 3 is presented. The proposed advice would meet a need for a unifying set of principles to provide a framework for diagnostic reference levels but would allow flexibility in their selection and use. While some illustrative examples are given, the proposed advice does not specify the specific quantities to be used, the numerical values to be set for the quantities or the technical details of how national or local authorities should implement diagnostic reference levels. (author)

  7. Quantitative myocardial perfusion PET parametric imaging at the voxel-level

    International Nuclear Information System (INIS)

    Mohy-ud-Din, Hassan; Rahmim, Arman; Lodge, Martin A

    2015-01-01

    Quantitative myocardial perfusion (MP) PET has the potential to enhance detection of early stages of atherosclerosis or microvascular dysfunction, characterization of flow-limiting effects of coronary artery disease (CAD), and identification of balanced reduction of flow due to multivessel stenosis. We aim to enable quantitative MP-PET at the individual voxel level, which has the potential to allow enhanced visualization and quantification of myocardial blood flow (MBF) and flow reserve (MFR) as computed from uptake parametric images. This framework is especially challenging for the 82Rb radiotracer. The short half-life enables fast serial imaging and high patient throughput; yet, the acquired dynamic PET images suffer from high noise levels, introducing large variability in uptake parametric images and, therefore, in the estimates of MBF and MFR. Robust estimation requires substantial post-smoothing of noisy data, degrading valuable functional information of physiological and pathological importance. We present a feasible and robust approach to generate parametric images at the voxel-level that substantially reduces noise without significant loss of spatial resolution. The proposed methodology, denoted physiological clustering, makes use of the functional similarity of voxels to penalize deviation of voxel kinetics from physiological partners. The results were validated using extensive simulations (with transmural and non-transmural perfusion defects) and clinical studies. Compared to post-smoothing, physiological clustering depicted enhanced quantitative noise versus bias performance as well as superior recovery of perfusion defects (as quantified by CNR) with minimal increase in bias. Overall, parametric images obtained from the proposed methodology were robust in the presence of high noise levels as manifested in the voxel time-activity curves. (paper)

  8. Weighting of field heights for sharpness and noisiness

    Science.gov (United States)

    Keelan, Brian W.; Jin, Elaine W.

    2009-01-01

    Weighting of field heights is important in cases when a single numerical value needs to be calculated that characterizes an attribute's overall impact on perceived image quality. In this paper we report an observer study to derive the weighting of field heights for sharpness and noisiness. One hundred forty images were selected to represent a typical consumer photo space distribution. Fifty-three sample points were sampled per image, representing field heights of 0, 14, 32, 42, 51, 58, 71, 76, 86 and 100%. Six observers participated in this study. The field weights derived in this report include both the effect of area versus field height (a purely objective, geometric factor) and the effect of the spatial distribution of image content that draws attention to or masks each of these image structure attributes. The results show that relative to the geometrical area weights, sharpness weights were skewed to lower field heights, because sharpness-critical subject matter was often positioned relatively near the center of an image. Conversely, because noise can be masked by signal, noisiness-critical content (such as blue skies, skin tones, walls, etc.) tended to occur farther from the center of an image, causing the weights to be skewed to higher field heights.

  9. Images crossing borders: image and workflow sharing on multiple levels.

    Science.gov (United States)

    Ross, Peeter; Pohjonen, Hanna

    2011-04-01

    Digitalisation of medical data makes it possible to share images and workflows between related parties. In addition to linear data flow where healthcare professionals or patients are the information carriers, a new type of matrix of many-to-many connections is emerging. Implementation of shared workflow brings challenges of interoperability and legal clarity. Sharing images or workflows can be implemented on different levels with different challenges: inside the organisation, between organisations, across country borders, or between healthcare institutions and citizens. Interoperability issues vary according to the level of sharing and are either technical or semantic, including language. Legal uncertainty increases when crossing national borders. Teleradiology is regulated by multiple European Union (EU) directives and legal documents, which makes interpretation of the legal system complex. To achieve wider use of eHealth and teleradiology several strategic documents were published recently by the EU. Despite EU activities, responsibility for organising, providing and funding healthcare systems remains with the Member States. Therefore, the implementation of new solutions requires strong co-operation between radiologists, societies of radiology, healthcare administrators, politicians and relevant EU authorities. The aim of this article is to describe different dimensions of image and workflow sharing and to analyse legal acts concerning teleradiology in the EU.

  10. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
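
    A minimal simulation illustrates the O-U case. The parameter values below are assumptions chosen so that the stationary variance of the quieter unit x drops below its uncoupled value sigma_x^2/2 even though its partner y is noisier; for these values the coupled variance is about 0.39 versus 0.50 uncoupled.

        import numpy as np

        # Euler-Maruyama simulation of two diffusively coupled O-U processes;
        # x is the quieter unit, y the noisier one.
        rng = np.random.default_rng(1)
        dt, n_steps = 1e-3, 400_000
        kappa, sigma_x, sigma_y = 5.0, 1.0, 1.5   # assumed parameters

        x = y = 0.0
        xs = np.empty(n_steps)
        for t in range(n_steps):
            x += (-x + kappa * (y - x)) * dt + sigma_x * rng.normal(0, np.sqrt(dt))
            y += (-y + kappa * (x - y)) * dt + sigma_y * rng.normal(0, np.sqrt(dt))
            xs[t] = x

        print("Var(x), coupled  :", xs.var())         # ~0.39 for these values
        print("Var(x), uncoupled:", sigma_x**2 / 2)   # 0.50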

  11. Robust speaker recognition in noisy environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book discusses speaker recognition methods for dealing with realistic, variable noisy environments. The text covers authentication systems for robust noisy background environments that function in real time and can be incorporated in mobile devices. The book focuses on different approaches to enhance the accuracy of speaker recognition in the presence of varying background environments. The authors examine: (a) Feature compensation using multiple background models, (b) Feature mapping using data-driven stochastic models, (c) Design of super vector-based GMM-SVM framework for robust speaker recognition, (d) Total variability modeling (i-vectors) in a discriminative framework and (e) Boosting method to fuse evidences from multiple SVM models.

  12. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz

    2014-12-15

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  13. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz; Shamma, Jeff S.

    2014-01-01

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  14. Speech Emotion Recognition Based on Power Normalized Cepstral Coefficients in Noisy Conditions

    Directory of Open Access Journals (Sweden)

    M. Bashirpour

    2016-09-01

    Full Text Available Automatic recognition of speech emotional states in noisy conditions has become an important research topic in the emotional speech recognition area in recent years. This paper considers the recognition of emotional states via speech in real environments. For this task, we employ the power normalized cepstral coefficients (PNCC) in a speech emotion recognition system. We investigate its performance in emotion recognition using clean and noisy speech materials and compare it with the performances of the well-known MFCC, LPCC, RASTA-PLP, and also TEMFCC features. Speech samples are extracted from the Berlin emotional speech database (Emo DB) and Persian emotional speech database (Persian ESD), which are corrupted with 4 different noise types under various SNR levels. The experiments are conducted in clean train/noisy test scenarios to simulate practical conditions with noise sources. Simulation results show that higher recognition rates are achieved for PNCC as compared with the conventional features under noisy conditions.

  15. Continuous Variables Quantum Information in Noisy Environments

    DEFF Research Database (Denmark)

    Berni, Adriano

    safe from the detrimental effects of noise and losses. In the present work we investigate continuous variables Gaussian quantum information in noisy environments, studying the effects of various noise sources in the cases of a quantum metrological task, an error correction scheme and discord...

  16. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L^2-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  17. Multiple Equilibria in Noisy Rational Expectations Economies

    DEFF Research Database (Denmark)

    Palvolgyi, Domotor; Venter, Gyuri

    This paper studies equilibrium uniqueness in standard noisy rational expectations economies with asymmetric or differential information a la Grossman and Stiglitz (1980) and Hellwig (1980). We show that the standard linear equilibrium of Grossman and Stiglitz (1980) is the unique equilibrium...

  18. Signal detection by active, noisy hair bundles

    Science.gov (United States)

    O'Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.

    2018-05-01

    Vertebrate ears employ hair bundles to transduce mechanical movements into electrical signals, but their performance is limited by noise. Hair bundles are substantially more sensitive to periodic stimulation when they are mechanically active, however, than when they are passive. We developed a model of active hair-bundle mechanics that predicts the conditions under which a bundle is most sensitive to periodic stimulation. The model relies only on the existence of mechanotransduction channels and an active adaptation mechanism that recloses the channels. For a frequency-detuned stimulus, a noisy hair bundle's phase-locked response and degree of entrainment as well as its detection bandwidth are maximized when the bundle exhibits low-amplitude spontaneous oscillations. The phase-locked response and entrainment of a bundle are predicted to peak as functions of the noise level. We confirmed several of these predictions experimentally by periodically forcing hair bundles held near the onset of self-oscillation. A hair bundle's active process amplifies the stimulus preferentially over the noise, allowing the bundle to detect periodic forces less than 1 pN in amplitude. Moreover, the addition of noise can improve a bundle's ability to detect the stimulus. Although mechanical activity has not yet been observed in mammalian hair bundles, a related model predicts that active but quiescent bundles can oscillate spontaneously when they are loaded by a sufficiently massive object such as the tectorial membrane. Overall, this work indicates that auditory systems rely on active elements, composed of hair cells and their mechanical environment, that operate on the brink of self-oscillation.

  19. Security Inference from Noisy Data

    Science.gov (United States)

    2008-04-08

    and RFID chips, introduce new ways of communication and sharing data. For example, the Nike+iPod Sport Kit is a new wireless accessory for the iPod... Agrawal show: • A wide variety (e.g. different keyboards of the same model, different models, different brands) of keyboards have keys with distinct... grammar level and spelling level in this case) are built into a single model. Algorithms to maximize global joint probability may improve the

  20. Noisy signaling: theory and experiment

    NARCIS (Netherlands)

    de Haan, T.; Offerman, T.; Sloof, R.

    2011-01-01

    We introduce noise in the signaling technology of an otherwise standard wasteful signaling model (Spence, 1973). We theoretically derive the properties of the equilibria under different levels of noise and we experimentally test how behavior changes with noise. We obtain three main insights. First,

  1. Semantics by levels: An example for an image language

    International Nuclear Information System (INIS)

    Fasciano, M.; Levialdi, S.; Tortora, G.

    1984-01-01

    Ambiguities in formal language constructs may decrease both the understanding and the coding efficiency of a program. Within an image language, two semantic levels have been detected, corresponding to the lower level (pixel-based) and to the higher level (image-based). Denotational semantics has been used to define both levels within PIXAL (an image language) in order to enable the reader to visualize a concrete application of the semantic levels and their implications in a programming environment. This paper presents the semantics of different levels of conceptualization in the abstract formal description of an image language. The disambiguation of the meaning of special purpose constructs that imply either the elementary (pixel) level or the high image (array) level is naturally obtained by means of such semantic clauses. Perhaps non-Von architectures on which hierarchical computations may be performed could also benefit from the use of semantic clauses to make explicit the different levels where such computations are executed.

  2. MODIS Level-3 Standard Mapped Image

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA CoastWatch distributes chlorophyll-a concentration data from NASA's Aqua Spacecraft. Measurements are gathered by the Moderate Resolution Imaging...

  3. Symbolic dynamics of noisy chaos

    Energy Technology Data Exchange (ETDEWEB)

    Crutchfield, J P; Packard, N H

    1983-05-01

    One model of randomness observed in physical systems is that low-dimensional deterministic chaotic attractors underly the observations. A phenomenological theory of chaotic dynamics requires an accounting of the information flow from the observed system to the observer, the amount of information available in observations, and just how this information affects predictions of the system's future behavior. In an effort to develop such a description, the information theory of highly discretized observations of random behavior is discussed. Metric entropy and topological entropy are well-defined invariant measures of such an attractor's level of chaos, and are computable using symbolic dynamics. Real physical systems that display low dimensional dynamics are, however, inevitably coupled to high-dimensional randomness, e.g. thermal noise. We investigate the effects of such fluctuations coupled to deterministic chaotic systems, in particular, the metric entropy's response to the fluctuations. It is found that the entropy increases with a power law in the noise level, and that the convergence of the entropy and the effect of fluctuations can be cast as a scaling theory. It is also argued that in addition to the metric entropy, there is a second scaling invariant quantity that characterizes a deterministic system with added fluctuations: I_0, the maximum average information obtainable about the initial condition that produces a particular sequence of measurements (or symbols). 46 references, 14 figures, 1 table.
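
    The entropy computation itself is easy to sketch. The snippet below is an assumed illustration, not the paper's scaling analysis: it iterates the logistic map at r = 3.8 with additive Gaussian noise, symbolizes the orbit with the standard binary partition at x = 0.5, and estimates the entropy rate from block-entropy differences. The estimate rises with the noise level, in the direction the paper describes.

        import numpy as np

        def entropy_rate(symbols, block_len=8):
            # h ~ H(L) - H(L-1), block entropies in bits.
            def block_entropy(L):
                codes = np.zeros(symbols.size - L + 1, dtype=np.int64)
                for i in range(L):
                    codes = codes * 2 + symbols[i:symbols.size - L + 1 + i]
                _, counts = np.unique(codes, return_counts=True)
                p = counts / counts.sum()
                return -(p * np.log2(p)).sum()
            return block_entropy(block_len) - block_entropy(block_len - 1)

        rng = np.random.default_rng(0)
        for noise in [0.0, 1e-6, 1e-4, 1e-2]:
            x, s = 0.3, np.empty(200_000, dtype=np.int64)
            for t in range(s.size):
                x = 3.8 * x * (1.0 - x) + rng.normal(0.0, noise)
                x = min(max(x, 0.0), 1.0)      # keep the orbit in [0, 1]
                s[t] = x > 0.5                 # binary generating partition
            print(f"noise={noise:g}  h ~ {entropy_rate(s):.3f} bits/symbol")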

  4. On covariance structure in noisy, big data

    Science.gov (United States)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
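
    The low-rank-plus-sparse idea the authors build on can be sketched in a few lines. This is a simplified alternating-shrinkage version of principal component pursuit, not the authors' accelerated randomized algorithm; the shrinkage scale mu is an assumed heuristic.

        import numpy as np

        def rpca_sketch(M, n_iter=200):
            # Decompose M ~ L (low rank) + S (sparse corruptions).
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n))     # standard PCP weight
            mu = 0.25 * np.abs(M).mean()       # assumed shrinkage scale
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(n_iter):
                # Singular-value shrinkage for the low-rank part.
                U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U * np.maximum(sig - mu, 0.0)) @ Vt
                # Soft-thresholding for the sparse part.
                R = M - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)
            return L, S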

  5. Generalizations of the noisy-or model

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2015-01-01

    Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf
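
    The record above lists only metadata, but the base model being generalized is compact enough to sketch: under the (leaky) noisy-or, each active parent fails to trigger the effect independently, and a leak term covers unmodeled causes.

        def noisy_or(p, x, leak=0.0):
            # P(Y = 1 | X = x): p[i] is the probability that active parent
            # X_i alone triggers Y; x is the binary parent vector.
            prob_off = 1.0 - leak
            for p_i, x_i in zip(p, x):
                if x_i:
                    prob_off *= 1.0 - p_i
            return 1.0 - prob_off

        # Two causes with trigger probabilities 0.8 and 0.4, 5% leak:
        print(noisy_or([0.8, 0.4], [1, 1], leak=0.05))   # 0.886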

  6. Auditory Modeling for Noisy Speech Recognition.

    Science.gov (United States)

    2000-01-01

    multiple platforms including PCs, workstations, and DSPs. A prototype version of the SOS process was tested on the Japanese Hiragana language with good... judgment among linguists. American English has 48 phonetic sounds in the ARPABET representation. Hiragana, the Japanese phonetic language, has only 20... "Japanese Hiragana," H.L. Pfister, FL 95, 1995. "State Recognition for Noisy Dynamic Systems," H.L. Pfister, Tech 2005, Chicago, 1995. "Experiences

  7. Noise Measurement and Frequency Analysis of Commercially Available Noisy Toys

    Directory of Open Access Journals (Sweden)

    Shohreh Jalaie

    2005-06-01

    Full Text Available Objective: Noise measurement and frequency analysis of commercially available noisy toys were the main purposes of the study. Materials and Methods: 181 noisy toys commonly found in toy stores in different zones of Tehran were selected and categorized into 10 groups. Noise measurements were made at 2, 25, and 50 cm from the toys in dBA. The noisiest toy of each group was frequency analyzed in octave bands. Results: The highest and the lowest intensity levels belonged to the gun (mean = 112 dBA and range of 100-127 dBA) and to the rattle-box (mean = 84 dBA and range of 74-95 dBA), respectively. Noise intensity levels significantly decreased with increasing distance except for two toys. Noise frequency analysis indicated energy in effective hearing frequencies. Most of the toys' energy was in the middle and high frequency region. Conclusion: As the intensity levels of the toys are considerable, mostly more than 90 dBA, and their energy lies in the middle and high frequency region, toys should be considered as a cause of hearing impairment.

  8. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    Science.gov (United States)

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in visual cortex during development. Copyright © 2017 the authors.

  9. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which can speed up the segmentation and produce satisfactory results in cell image processing.
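
    A minimal sketch of the initialization step described above: an Otsu threshold yields a rough foreground mask, which is converted into a signed-distance function to start the level set evolution. The three-phase variational model itself is not reproduced here.

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from skimage.filters import threshold_otsu

        def init_level_set(image):
            # Rough foreground mask from the OTSU algorithm.
            mask = image > threshold_otsu(image)
            # Signed distance: positive inside, negative outside.
            return distance_transform_edt(mask) - distance_transform_edt(~mask)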

  10. Application of morphological associative memories and Fourier descriptors for classification of noisy subsurface signatures

    Science.gov (United States)

    Ortiz, Jorge L.; Parsiani, Hamed; Tolstoy, Leonid

    2004-02-01

    This paper presents a method for recognition of noisy subsurface images using morphological associative memories (MAM). MAMs are a type of associative memory built on a new kind of neural network based on the algebraic system known as a semi-ring. The operations performed in this algebraic system are highly nonlinear, providing additional strength when compared to other transformations. Morphological associative memories are a new kind of neural network that provides robust performance with noisy inputs. Two representations of morphological associative memories, called the M and W matrices, are used. The M associative memory provides a robust association with input patterns corrupted by dilative random noise, while the W associative matrix performs a robust recognition on patterns corrupted with erosive random noise. The robust performance of MAM is used in combination with Fourier descriptors for the recognition of underground objects in ground penetrating radar (GPR) images. Multiple 2-D GPR images of a site were made available by the NASA-SSC center. The buried objects in these images appear in the form of hyperbolas, which result from radar backscatter off the artifacts or objects. The Fourier descriptors of the prototype hyperbola-like shapes and of non-hyperbola shapes in the subsurface images are used to make these shapes scale-, shift-, and rotation-invariant. Typical hyperbola-like and non-hyperbola shapes are used to compute the morphological associative memories. The trained MAMs are used to process other noisy images to detect the presence of these underground objects. The outputs from the MAM using the noisy patterns may be equal to the training prototypes, providing a positive identification of the artifacts. The results are images with recognized hyperbolas, which indicate the presence of buried artifacts. A model using MATLAB has been developed and results are presented.
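
    The two memories are simple min/max constructions over pattern differences, small enough to sketch directly. The toy example below stores two patterns autoassociatively and shows the W memory correcting erosive (downward) corruption and the M memory correcting dilative (upward) corruption; recovery is guaranteed only under the pattern and noise conditions of Ritter's theorems, which this example happens to satisfy.

        import numpy as np

        def train_mam(X, Y):
            # D[k, i, j] = y_i^k - x_j^k for pattern pairs (x^k, y^k).
            D = Y[:, :, None].astype(float) - X[:, None, :]
            return D.min(axis=0), D.max(axis=0)   # W and M memories

        def recall_W(W, x):   # max-plus product; robust to erosive noise
            return (W + x[None, :]).max(axis=1)

        def recall_M(M, x):   # min-plus product; robust to dilative noise
            return (M + x[None, :]).min(axis=1)

        X = np.array([[3.0, 4.0, 1.0], [1.0, 2.0, 5.0]])
        W, M = train_mam(X, X)                          # autoassociative
        print(recall_W(W, np.array([0.0, 4.0, 1.0])))   # eroded x1 -> [3. 4. 1.]
        print(recall_M(M, np.array([3.0, 6.0, 1.0])))   # dilated x1 -> [3. 4. 1.]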

  11. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which can speed up the segmentation and produce satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  12. The Noisiness of Low-Frequency One-Third Octave Bands of Noise. M.S. Thesis - Southampton Univ.

    Science.gov (United States)

    Lawton, B. W.

    1975-01-01

    This study examined the relative noisiness of low frequency one-third octave bands of noise bounded by the bands centered at 25 Hz and 200 Hz, with intensities ranging from 50 dB sound pressure level (SPL) to 95 dB SPL. The thirty-two subjects used a method-of-adjustment technique, producing comparison-band intensities as noisy as standard bands centered at 100 Hz and 200 Hz with intensities of 60 dB SPL and 72 dB SPL. Four contours of equal noisiness were developed for one-third octave bands, extending down to 25 Hz and ranging in intensity from approximately 58 dB SPL to 86 dB SPL. These curves were compared with the contours of equal noisiness of Kryter and Pearsons. In the region of overlap (between 50 Hz and 200 Hz) the agreement was good.

  13. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communications systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to provide an assessment of the ability of a group of imaging professionals to determine whether an image has been compressed. To undertake this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that have undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  14. Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images

    Science.gov (United States)

    Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.

    2012-02-01

    Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate multi-level A/V tree representations of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily designed on arc skeleton computation followed by a tree-construction-based topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.

  15. Quantum simulations with noisy quantum computers

    Science.gov (United States)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  16. Fractional Diffusion in Gaussian Noisy Environment

    Directory of Open Access Journals (Sweden)

    Guannan Hu

    2015-03-01

    Full Text Available We study the fractional diffusion in a Gaussian noisy environment as described by the fractional order stochastic heat equations of the following form: \(D_t^{(\alpha)} u(t,x) = Bu + u \cdot \dot W^H\), where \(D_t^{(\alpha)}\) is the Caputo fractional derivative of order \(\alpha \in (0,1)\) with respect to the time variable \(t\), \(B\) is a second order elliptic operator with respect to the space variable \(x \in \mathbb{R}^d\), and \(\dot W^H\) is a time homogeneous fractional Gaussian noise of Hurst parameter \(H = (H_1, \cdots, H_d)\). We obtain conditions on \(\alpha\) and \(H\) under which the square integrable solution \(u\) exists uniquely.

  17. Improving the precision of noisy oscillators

    Science.gov (United States)

    Moehlis, Jeff

    2014-04-01

    We consider how the period of an oscillator is affected by white noise, with special attention given to the cases of additive noise and parameter fluctuations. Our treatment is based upon the concepts of isochrons, which extend the notion of the phase of a stable periodic orbit to the basin of attraction of the periodic orbit, and phase response curves, which can be used to understand the geometry of isochrons near the periodic orbit. This includes a derivation of the leading-order effect of noise on the statistics of an oscillator’s period. Several examples are considered in detail, which illustrate the use and validity of the theory, and demonstrate how to improve a noisy oscillator’s precision by appropriately tuning system parameters or operating away from a bifurcation point. It is also shown that appropriately timed impulsive kicks can give further improvements to oscillator precision.
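
    The simplest instance of the theory, pure phase diffusion with constant phase sensitivity, can be checked numerically: the period is then a first-passage time with mean 2*pi/omega and variance 2*pi*sigma^2/omega^3 to leading order. The paper's isochron and phase-response-curve treatment generalizes this to full state-space models; the sketch below covers only the constant-sensitivity case, with assumed parameter values.

        import numpy as np

        # First-passage periods of d(theta) = omega dt + sigma dW.
        rng = np.random.default_rng(0)
        omega, sigma, dt = 1.0, 0.3, 1e-3
        periods = []
        for _ in range(500):
            theta, t = 0.0, 0.0
            while theta < 2.0 * np.pi:
                theta += omega * dt + sigma * rng.normal(0.0, np.sqrt(dt))
                t += dt
            periods.append(t)

        print("empirical Var(T):", np.var(periods))
        print("leading order   :", 2.0 * np.pi * sigma**2 / omega**3)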

  18. Collective fluctuations in networks of noisy components

    International Nuclear Information System (INIS)

    Masuda, Naoki; Kawamura, Yoji; Kori, Hiroshi

    2010-01-01

    Collective dynamics result from interactions among noisy dynamical components. Examples include heartbeats, circadian rhythms and various pattern formations. Because of noise in each component, collective dynamics inevitably involve fluctuations, which may crucially affect the functioning of the system. However, the relation between the fluctuations in isolated individual components and those in collective dynamics is not clear. Here, we study a linear dynamical system of networked components subjected to independent Gaussian noise and analytically show that the connectivity of networks determines the intensity of fluctuations in the collective dynamics. Remarkably, in general directed networks including scale-free networks, the fluctuations decrease more slowly with system size than the standard law stated by the central limit theorem. They even remain finite for a large system size when global directionality of the network exists. Moreover, such non-trivial behavior appears even in undirected networks when nonlinear dynamical systems are considered. We demonstrate it with a coupled oscillator system.
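
    For the linear case, the fluctuation of the collective mode can be computed exactly from a Lyapunov equation, which makes the size dependence easy to probe. The sketch below uses an undirected ring with self-decay as an assumed example network, not the directed or scale-free topologies analyzed in the paper.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def collective_variance(A, sigma=1.0):
            # Stationary covariance P of dx = A x dt + sigma dW solves
            # A P + P A^T + sigma^2 I = 0; Var(mean of x) = mean of P.
            P = solve_continuous_lyapunov(A, -sigma**2 * np.eye(A.shape[0]))
            return P.mean()

        for N in [4, 16, 64]:
            ring = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
            A = -2.0 * np.eye(N) + 0.5 * ring      # stable coupled system
            print(N, collective_variance(A))       # decays with system size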

  19. Low level cloud motion vectors from Kalpana-1 visible images

    Indian Academy of Sciences (India)

    In this paper, an attempt has been made to retrieve low-level cloud motion vectors using Kalpana-1 visible (VIS) images at every half an hour. The VIS channel provides better detection of low level clouds, which remain obscure in thermal IR ...

  20. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on UTM/WGS84 reference frame.

  1. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    Full Text Available In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.

  2. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
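
    As an illustration of the chain-code representation mentioned above, the following encodes a contour as 8-direction Freeman codes. This is the textbook scheme, offered only as a sketch of the idea rather than the system's actual firmware.

        # 8-direction Freeman chain code: 0 = east, counted counter-clockwise.
        DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                (0, -1), (1, -1), (1, 0), (1, 1)]

        def chain_code(points):
            # points: successive (row, col) pixels along a contour,
            # each step moving to one of the 8 neighbours.
            return [DIRS.index((r1 - r0, c1 - c0))
                    for (r0, c0), (r1, c1) in zip(points, points[1:])]

        print(chain_code([(5, 5), (5, 6), (4, 7), (4, 8)]))   # [0, 1, 0]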

  3. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
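
    The MAD baseline against which the training-based methods are compared is essentially a one-liner on the diagonal wavelet subband; a minimal sketch using PyWavelets, where 0.6745 is the Gaussian consistency factor:

        import numpy as np
        import pywt

        def mad_sigma(image):
            # Noise standard deviation from the diagonal (HH) subband.
            _, (_, _, cD) = pywt.dwt2(image.astype(float), 'db1')
            return np.median(np.abs(cD)) / 0.6745

        rng = np.random.default_rng(0)
        print(mad_sigma(rng.normal(0.0, 10.0, (256, 256))))   # ~10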

  4. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
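
    The classical binary scheme this paper improves upon is short enough to sketch: distance-transform each slice into a signed-distance map, blend linearly, and threshold at zero. The polygon-based grey-level extension itself is not shown.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def signed_distance(mask):
            # Positive inside the object, negative outside (paper's convention).
            return distance_transform_edt(mask) - distance_transform_edt(~mask)

        def shape_based_slice(mask_a, mask_b, t):
            # Interpolated binary slice at fraction t between two slices.
            d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
            return d > 0.0

        a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
        b = np.zeros((64, 64), bool); b[25:45, 28:48] = True
        mid = shape_based_slice(a, b, 0.5)   # shape morphs halfway toward b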

  5. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  6. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    Science.gov (United States)

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance details and layered structures of a human retina image, we propose a collaborative shock filtering for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and then the denoised image is sharpened by a shock-type filtering for edge and detail enhancement. For dim OCT images, in order to improve image contrast for the detection of tiny lesions, a gamma transformation is first used to enhance the images within proper gray levels. The proposed method, integrating image smoothing and sharpening simultaneously, obtains better visual results in experiments.
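
    The gamma step for dim images is straightforward to sketch; the exponent below is an assumed illustrative value (gamma < 1 stretches dark grey levels), since the paper's exact setting is not given in the abstract.

        import numpy as np

        def gamma_enhance(image, gamma=0.5):
            # Normalize to [0, 1], apply the power law, rescale.
            img = image.astype(float)
            lo, hi = img.min(), img.max()
            norm = (img - lo) / (hi - lo + 1e-12)
            return norm**gamma * (hi - lo) + lo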

  7. Limiting hazardous noise exposure from noisy toys: simple, sticky solutions.

    Science.gov (United States)

    Weinreich, Heather M; Jabbour, Noel; Levine, Samuel; Yueh, Bevan

    2013-09-01

    To assess noise levels of toys from the Sight & Hearing Association (SHA) 2010 Noisy Toys List and evaluate the change in noise of these toys after covering the speakers with tape or glue. One Group Pretest-Posttest Design. SHA 2010 Toys List (n = 18) toys were tested at distances of 0 and 25 cm from the sound source in a soundproof booth using a digital sound-level meter. The dBA level of sound produced by each toy was obtained. Toys with speakers (n = 16) were tested before and after altering the speakers with plastic packing tape or nontoxic glue. Mean noise level for non-taped toys at 0 and 25 cm was 107.6 dBA (SD ± 8.5) and 82.5 dBA (SD ± 8.8), respectively. With tape, there was a statistically significant decrease in noise level at 0 and 25 cm: 84.2 dBA and 68.2 dBA, respectively (P …). However, there was no significant difference between tape and glue. Overall, altering the toy can significantly decrease the sound a child may experience when playing with toys. However, some toys, even after altering, still produce sound levels that may be considered dangerous. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of the additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise at the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the average signal intensity to average noise intensity ratio) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote-sensing and security areas.
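
    The thresholding idea can be illustrated with plain correlation ghost imaging: frames whose bucket reading falls below a chosen level are treated as noise-dominated and discarded before correlating. This is only a sketch of the principle; the paper embeds the threshold in a compressive-sensing reconstruction instead.

        import numpy as np

        def ghost_image(patterns, bucket, threshold=None):
            # patterns: (n_frames, H, W) speckle intensities at the reference arm;
            # bucket: (n_frames,) single-pixel detector values at the signal arm.
            if threshold is not None:
                keep = bucket > threshold          # drop noise-dominated frames
                patterns, bucket = patterns[keep], bucket[keep]
            dB = bucket - bucket.mean()            # correlate <(B - <B>) I(x, y)>
            return np.tensordot(dB, patterns, axes=1) / bucket.size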

  9. SuperPixel based mid-level image description for image recognition

    NARCIS (Netherlands)

    Tasli, H.E.; Sicre, R.; Gevers, T.

    2015-01-01

    This study proposes a mid-level feature descriptor and aims to validate improvement on image classification and retrieval tasks. In this paper, we propose a method to explore the conventional feature extraction techniques in the image classification pipeline from a different perspective where

  10. Noisy non-transitive quantum games

    International Nuclear Information System (INIS)

    Ramzan, M; Khan, Salman; Khan, M Khalid

    2010-01-01

    We study the effect of quantum noise in 3 x 3 entangled quantum games. By taking into account different noisy quantum channels, we analyze how a two-player, three-strategy Rock-Scissor-Paper game is influenced by the quantum noise. We consider the winning non-transitive strategies R, S and P such that R beats S, S beats P and P beats R. The game behaves as a noiseless game for the maximum value of the quantum noise. It is seen that Alice's payoff is heavily influenced by the depolarizing noise as compared to the amplitude damping noise. A depolarizing channel causes a monotonic decrease in players' payoffs as we increase the amount of quantum noise. In the case of the amplitude damping channel, Alice's payoff function reaches its minimum for α = 0.5 and is symmetrical. This means that larger values of quantum noise influence the game weakly. On the other hand, the phase damping channel does not influence the game. Furthermore, the Nash equilibrium and non-transitive character of the game are not affected under the influence of quantum noise.

  11. Noisy non-transitive quantum games

    Energy Technology Data Exchange (ETDEWEB)

    Ramzan, M; Khan, Salman; Khan, M Khalid, E-mail: mramzan@phys.qau.edu.p [Department of Physics Quaid-i-Azam University, Islamabad 45320 (Pakistan)

    2010-07-02

    We study the effect of quantum noise in 3 x 3 entangled quantum games. By taking into account different noisy quantum channels, we analyze how a two-player, three-strategy Rock-Scissor-Paper game is influenced by the quantum noise. We consider the winning non-transitive strategies R, S and P such that R beats S, S beats P and P beats R. The game behaves as a noiseless game for the maximum value of the quantum noise. It is seen that Alice's payoff is heavily influenced by the depolarizing noise as compared to the amplitude damping noise. A depolarizing channel causes a monotonic decrease in players' payoffs as we increase the amount of quantum noise. In the case of the amplitude damping channel, Alice's payoff function reaches its minimum for α = 0.5 and is symmetrical. This means that larger values of quantum noise influence the game weakly. On the other hand, the phase damping channel does not influence the game. Furthermore, the Nash equilibrium and non-transitive character of the game are not affected under the influence of quantum noise.

  12. Quantum steganography with noisy quantum channels

    International Nuclear Information System (INIS)

    Shaw, Bilal A.; Brun, Todd A.

    2011-01-01

    Steganography is the technique of hiding secret information by embedding it in a seemingly "innocent" message. We present protocols for hiding quantum information by disguising it as noise in a codeword of a quantum error-correcting code. The sender (Alice) swaps quantum information into the codeword and applies a random choice of unitary operation, drawing on a secret random key she shares with the receiver (Bob). Using the key, Bob can retrieve the information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We consider two types of protocols: one in which the hidden quantum information is stored locally in the codeword, and another in which it is embedded in the space of error syndromes. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for specific protocols and examples of error channels. We consider both the case where there is no actual noise in the channel (so that all errors in the codeword result from the deliberate actions of Alice), and the case where the channel is noisy and not controlled by Alice and Bob.

  13. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Jagersand, Martin

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, different than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  14. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation

  15. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam

    2015-12-15

    It is known that global games with noisy sharing of information do not admit a certain type of threshold policies [1]. Motivated by this result, we investigate the existence of threshold-type policies on global games with noisy sharing of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a solution to a fixed point equation. Then, we show that for a sufficiently noisy environment, the functional fixed point equation leads to a contraction mapping, and hence, its iterations converge to a unique continuous threshold policy.
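
The contraction-mapping argument above can be illustrated generically: iterating a contraction converges to its unique fixed point. The scalar map below is a toy stand-in for the threshold-function equation, not the paper's model:

```python
import math

def iterate_to_fixed_point(f, x0=0.0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# |d/dx (0.5 cos x)| <= 0.5 < 1, so this map is a contraction on R.
threshold = iterate_to_fixed_point(lambda x: 0.5 * math.cos(x))
print(threshold)  # ~0.4502, the unique fixed point
```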

  16. The Noisiness of Low Frequency Bands of Noise

    Science.gov (United States)

    Lawton, B. W.

    1975-01-01

The relative noisiness of low frequency 1/3-octave bands of noise was examined. The frequency range investigated was bounded by the bands centered at 25 and 200 Hz, with intensities ranging from 50 to 95 dB (SPL). Thirty-two subjects used a method-of-adjustment technique, producing comparison band intensities as noisy as the 100 and 200 Hz standard bands at 60 and 72 dB. The work resulted in contours of equal noisiness for 1/3-octave bands, ranging in intensity from approximately 58 to 86 dB (SPL). These contours were compared with the standard equal noisiness contours; in the region of overlap, between 50 and 200 Hz, the agreement was good.

  17. Development of a wireless intercom for work in excessively noisy places

    International Nuclear Information System (INIS)

    Shiba, Kazuo; Yamashita, Shinichi; Fujita, Tsuneaki; Yamazaki, Katsuyoshi; Sakai, Manabu; Nakanishi, Tomokazu.

    1996-01-01

    Nuclear power stations are often excessively noisy working environments, where conversation and verbal communication are hampered to the extreme. We have developed a small wireless intercom for this and other extremely noisy environments. In the first step of this study, we studied work environment noise and vibration. Results formed the basis of intercom system development. In addition, we have examined the possibilities of optical and microwave intercom systems. (author)

  18. Effect of glucose level on brain FDG-PET images

    Energy Technology Data Exchange (ETDEWEB)

Kim, In Young; Lee, Yong Ki; Ahn, Sung Min [Dept. of Radiological Science, Gachon University, Seongnam (Korea, Republic of)]

    2017-06-15

In addition to tumors, normal tissues such as the brain and myocardium can take up 18F-FDG, and the amount of 18F-FDG uptake by normal tissues can be altered by the surrounding environment. Therefore, a process is necessary by which the contrast between tumor and normal tissues can be enhanced. Thus, this study examines the effects of glucose levels on FDG-PET images of brain tissue, which features high glucose activity at all times, in small animals. Micro-PET scans were performed on fourteen mice after injecting 18F-FDG, and the images were compared in relation to fasting. The findings showed that the mean SUV was 0.84 higher in fasted mice than in non-fasted mice. During observation, the images from non-fasted mice showed high accumulation in organs other than the brain, with increased surrounding noise. In addition, compared to the non-fasted mice, the fasted mice showed higher early uptake and a steeper uptake curve. The findings of this study suggest that fasting is important in assessing brain function in brain PET using 18F-FDG. Additional studies investigating whether caffeine levels and other patient preparation factors affect the acquired images would contribute to reducing radiation exposure in patients.
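
For reference, the SUV figures quoted above follow the standard definition: tissue activity concentration normalized by injected dose per body weight, with the common 1 g ≈ 1 mL tissue assumption. A small sketch with illustrative numbers, not the study's data:

```python
def suv(tissue_activity_kbq_per_ml, injected_dose_mbq, body_weight_g):
    # SUV = tissue activity concentration / (injected dose / body weight)
    dose_kbq = injected_dose_mbq * 1000.0  # MBq -> kBq
    return tissue_activity_kbq_per_ml / (dose_kbq / body_weight_g)

# Example: 150 kBq/mL brain uptake, 7.4 MBq injected into a 25 g mouse.
print(round(suv(150.0, 7.4, 25.0), 2))  # 0.51
```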

  19. Effect of glucose level on brain FDG-PET images

    International Nuclear Information System (INIS)

    Kim, In Young; Lee, Yong Ki; Ahn, Sung Min

    2017-01-01

In addition to tumors, normal tissues such as the brain and myocardium can take up 18F-FDG, and the amount of 18F-FDG uptake by normal tissues can be altered by the surrounding environment. Therefore, a process is necessary by which the contrast between tumor and normal tissues can be enhanced. Thus, this study examines the effects of glucose levels on FDG-PET images of brain tissue, which features high glucose activity at all times, in small animals. Micro-PET scans were performed on fourteen mice after injecting 18F-FDG, and the images were compared in relation to fasting. The findings showed that the mean SUV was 0.84 higher in fasted mice than in non-fasted mice. During observation, the images from non-fasted mice showed high accumulation in organs other than the brain, with increased surrounding noise. In addition, compared to the non-fasted mice, the fasted mice showed higher early uptake and a steeper uptake curve. The findings of this study suggest that fasting is important in assessing brain function in brain PET using 18F-FDG. Additional studies investigating whether caffeine levels and other patient preparation factors affect the acquired images would contribute to reducing radiation exposure in patients.

  20. Sentence comprehension in aphasia: A noisy channel approach

    Directory of Open Access Journals (Sweden)

    Michael Walsh Dickey

    2014-04-01

Full Text Available Probabilistic accounts of language understanding assume that comprehension involves determining the probability of an intended message (m) given an input utterance (u), P(m|u); e.g. Gibson et al., 2013a; Levy et al., 2009. One challenge is that communication occurs within a noisy channel; i.e. the comprehender’s representation of u may have been distorted, e.g., by a typo or by impairment associated with aphasia. Bayes’ rule provides a model of how comprehenders can combine the prior probability of m, P(m), with the probability that m would have been distorted to u, P(u|m), to calculate the probability of m given u: P(m|u) ∝ P(m)P(u|m). This formalism can capture the observation that people with aphasia (PWA) rely more on semantics than syntax during comprehension (e.g., Caramazza & Zurif, 1976): given the high probability that their representation of the input is unreliable, they weigh message likelihood more heavily. Gibson et al. (2013a) showed that unimpaired adults are sensitive to P(m) and P(u|m): they more often chose interpretations that increased message plausibility or involved distortions requiring fewer changes, and/or deletions instead of insertions (see Figure 1a for examples). Gibson et al. (2013b) found PWA were also sensitive to both P(m) and P(u|m) in an act-out task, but relied more heavily than unimpaired controls on P(m). This shows group-level optimization towards the less noisy (semantic) channel in PWA. The current experiment (8 PWA; 7 age-matched controls) investigated noisy channel optimization at the level of individual PWA. It also included active/passive items with a weaker plausibility manipulation to test whether P(m) is higher for implausible than impossible strings. The task was forced-choice sentence-picture matching (Figure 1b). Experimental sentences crossed active versus passive (A-P) structures with plausibility (Set 1) or impossibility (Set 2), and prepositional-object versus double-object (PO-DO) structures (Set 3) with...
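
The posterior comparison described above is easy to make concrete. The toy numbers below are invented for illustration and are not the authors' stimuli or fitted parameters:

```python
def unnormalized_posterior(prior_m, likelihood_u_given_m):
    # P(m|u) is proportional to P(m) * P(u|m)
    return prior_m * likelihood_u_given_m

# One perceived utterance u, two candidate messages m:
# a plausible reading requiring one edit vs. an implausible literal reading.
p_plausible = unnormalized_posterior(prior_m=0.90, likelihood_u_given_m=0.05)
p_literal = unnormalized_posterior(prior_m=0.02, likelihood_u_given_m=0.90)
print(p_plausible > p_literal)  # True: the message prior wins out, and a
# comprehender who assumes a noisier channel (larger distortion likelihoods)
# shifts even further toward the plausible message.
```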

  1. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well handled. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Patient dose with quality image under diagnostic reference levels

    International Nuclear Information System (INIS)

    Akula, Suresh Kumar; Singh, Gurvinder; Chougule, Arun

    2016-01-01

Diagnostic reference levels (DRLs) need to be set locally for all diagnostic procedures and compared against national values. Reviews of DRLs should compare local averages with national or reference averages, noting any significant variances and the justification for them. The aim is to survey and assess radiation doses to patients and to reduce redundancy in patient imaging while staying within DRLs.

  3. Slope Estimation in Noisy Piecewise Linear Functions.

    Science.gov (United States)

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.
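
A dynamic program over quantized slope states, as described above, can be sketched compactly. The following is a simplified stand-in for MAPSlope (a Gaussian data term on successive differences plus a constant slope-change penalty), not the published algorithm:

```python
import numpy as np

def map_slopes(y, slopes, sigma=0.1, change_penalty=4.0):
    diffs = np.diff(y)                        # observed slope at each step
    T, K = len(diffs), len(slopes)
    data = (diffs[:, None] - slopes[None, :]) ** 2 / (2 * sigma ** 2)
    cost = np.empty((T, K)); back = np.zeros((T, K), dtype=int)
    cost[0] = data[0]
    for t in range(1, T):
        stay = cost[t - 1]                           # same slope, no penalty
        switch = cost[t - 1].min() + change_penalty  # best switch from any slope
        back[t] = np.where(stay <= switch, np.arange(K), cost[t - 1].argmin())
        cost[t] = data[t] + np.minimum(stay, switch)
    path = np.empty(T, dtype=int)                    # backtrack the MAP sequence
    path[-1] = cost[-1].argmin()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return slopes[path]

# Demo: two segments with slopes 0.5 and -0.2 plus Gaussian noise.
rng = np.random.default_rng(0)
x = np.arange(100)
y = np.where(x < 50, 0.5 * x, 25.0 - 0.2 * (x - 50)) + rng.normal(0, 0.1, 100)
est = map_slopes(y, slopes=np.linspace(-1, 1, 41), sigma=0.1)
```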

  4. Image denoising by exploring external and internal correlations.

    Science.gov (United States)

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme, which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy and web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph based optimization method to improve patch matching accuracy in external denoising. The internal denoising is frequency truncation on internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering of external and internal cubes, respectively, on transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of filtering parameters. The final denoising image is obtained by fusing the external and internal filtering results. Experimental results show that our method constantly outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves >2 dB gain compared with BM3D at a wide range of noise levels.
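
The external, web-assisted stage is not reproducible without the authors' web-image corpus, but the internal stage, which exploits self-similarity among patches of the noisy image itself, can be approximated with off-the-shelf non-local means. A stand-in sketch assuming scikit-image is installed:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
noisy = img_as_float(data.camera()) + rng.normal(0.0, 0.08, (512, 512))

sigma_est = float(np.mean(estimate_sigma(noisy)))  # rough noise level
denoised = denoise_nl_means(noisy, h=0.8 * sigma_est, sigma=sigma_est,
                            patch_size=7, patch_distance=9, fast_mode=True)
```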

  5. Noisiness of the Surfaces on Low-Speed Roads

    Directory of Open Access Journals (Sweden)

    Wladyslaw Gardziejczyk

    2016-03-01

Full Text Available Traffic noise is a particular threat to the environment in the vicinity of roads. The noise level is influenced by traffic density and traffic composition, as well as vehicle speed and the type of surface. The article presents the results of studies on tire/road noise from vehicles passing at speeds of 40–80 kph, carried out using the statistical pass-by (SPB) method on seven surfaces with different characteristics. It has been shown that increasing the speed from 40 kph to 50 kph increases the maximum A-weighted sound pressure level by about 3 dB, regardless of the type of surface. For larger speed differences (30–40 kph), the increase in noise levels reaches about 10 dB; at higher speeds this increase is slightly lower. In this article, special attention is paid to the noisiness of surfaces made of porous asphalt concrete (PAC), BBTM (thin asphalt layer), and stone mastic asphalt (SMA) with maximum aggregate sizes of 8 mm and 5 mm. It has also been shown that porous asphalt concrete surfaces, within two years of commissioning, contribute significantly to a reduction of the maximum noise level on streets and roads with lower speeds of passing cars. The reduction of the maximum A-weighted sound pressure level of a statistical car traveling at 60 kph reaches up to about 6 dB compared with SMA11. As the road ages, the air voids in the low-noise surface become clogged and the acoustic properties of the road decrease to a level similar to standard asphalt.

  6. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

Full Text Available Feature extraction and matching are at the base of many computer vision problems, such as object recognition or structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented and rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs to its maximum or highest efficiency. The efficiency is measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Current comparative studies assess the performance of the algorithms based on the results obtained under different criteria such as speed, sensitivity, occlusion, and others. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups: feature-based and texture-based. From this broad classification, six of the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast SIFT) and SURF (Speeded Up Robust Features). The performance of the F-SIFT feature detection method is compared for scale changes, rotation, blur, illumination changes and affine transformations. All the experiments use repeatability measurement and the number of correct matches for the evaluation measurements. SIFT shows its stability in most situations although it is slow; F-SIFT is the fastest, with good performance comparable to SURF, while SIFT and PCA-SIFT show their advantages under rotation and illumination changes.
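
Several of the detectors compared above are available off the shelf. A minimal detection-and-matching sketch with SIFT and Lowe's ratio test, assuming an OpenCV build that ships SIFT (version 4.4 or later) and hypothetical image files:

```python
import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio
print(f"{len(good)} candidate correct matches")  # cf. the repeatability counts
```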

  7. Recent progress in low-level gamma imaging

    International Nuclear Information System (INIS)

    Mahe, C.; Girones, Ph.; Lamadie, F.; Le Goaller, C.

    2007-01-01

    The CEA's Aladin gamma imaging system has been operated successfully for several years in nuclear plants and during decommissioning projects with additional tools such as gamma spectrometry detectors and dose rate probes. The radiological information supplied by these devices is becoming increasingly useful for establishing robust and optimized decommissioning scenarios. Recent technical improvements allow this gamma imaging system to be operated in low-level applications and with shorter acquisition times suitable for decommissioning projects. The compact portable system can be used in places inaccessible to operators. It is quick and easy to implement, notably for onsite component characterization. Feasibility trials and in situ measurements were recently carried out under low-level conditions, mainly on waste packages and glove boxes for decommissioning projects. This paper describes recent low-level in situ applications. These characterization campaigns mainly concerned gamma emitters with γ energy < 700 keV. In many cases, the localization of hot spots by gamma camera was confirmed by additional measurements such as dose rate mapping and gamma spectrometry measurements. These complementary techniques associated with advanced calculation codes (MCNP, Mercure 6.2, Visiplan and Siren) offer a mobile and compact tool for specific assessment of waste packages and glove boxes. (authors)

  8. Blood oxygenation level dependent (BOLD). Renal imaging. Concepts and applications

    International Nuclear Information System (INIS)

    Nissen, Johanna C.; Haneder, Stefan; Schoenberg, Stefan O.; Michaely, Henrik J.

    2010-01-01

Many renal diseases, as well as several drugs, cause a change in renal blood flow and/or renal oxygenation. Blood oxygenation level dependent (BOLD) imaging takes advantage of local field inhomogeneities and is based on a T2*-weighted sequence. BOLD is a non-invasive method allowing an estimation of renal, particularly medullary, oxygenation and an indirect measurement of blood flow without the administration of contrast agents. Thus, the effects of different drugs and various renal diseases on the kidney can be monitored and observed. This work provides an overview of the studies carried out so far and identifies ways in which BOLD can be used in clinical studies. (orig.)

  9. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template... as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy 'rate...

  10. Detection of electrophysiology catheters in noisy fluoroscopy images

    NARCIS (Netherlands)

    Franken, E.M.; Rongen, P.M.J.; Almsick, van M.A.; Haar Romenij, ter B.M.

    2006-01-01

Cardiac catheter ablation is a minimally invasive medical procedure to treat patients with heart rhythm disorders. It is useful to know the positions of the catheters and electrodes during the intervention, e.g. for the automation of cardiac mapping. Our goal is therefore to develop a robust...

  11. Simulation of noisy dynamical system by Deep Learning

    Science.gov (United States)

    Yeo, Kyongmin

    2017-11-01

Deep learning has attracted huge attention due to its powerful representation capability. However, most studies on deep learning have focused on visual analytics or language modeling, and the capability of deep learning in modeling dynamical systems is not well understood. In this study, we use a recurrent neural network to model noisy nonlinear dynamical systems. In particular, we use a long short-term memory (LSTM) network, which constructs an internal nonlinear dynamical system. We propose a cross-entropy loss with spatial ridge regularization to learn a non-stationary conditional probability distribution from a noisy nonlinear dynamical system. A Monte Carlo procedure to perform time-marching simulations using the LSTM is presented. The behavior of the LSTM is studied using the noisy, forced Van der Pol oscillator and the Ikeda equation.
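
Training data of the kind described can be generated directly. The sketch below integrates a stochastically forced Van der Pol oscillator with Euler-Maruyama; the values of mu, dt and the noise amplitude are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def van_der_pol_em(mu=2.0, dt=1e-3, steps=50_000, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    v = np.empty(steps)
    x[0], v[0] = 1.0, 0.0
    for t in range(steps - 1):
        x[t + 1] = x[t] + v[t] * dt
        v[t + 1] = (v[t] + (mu * (1.0 - x[t] ** 2) * v[t] - x[t]) * dt
                    + noise * np.sqrt(dt) * rng.standard_normal())
    return x  # the observable sequence an LSTM would be trained to forecast

series = van_der_pol_em()
```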

  12. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimation of the noise level. Next, the final estimation was directly obtained with a nonlinear mapping (rectification) function that was trained on some representative noisy images corrupted with different known noise levels. Compared with the state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
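
The first (PCA) step lends itself to a compact sketch: the smallest eigenvalue of the covariance of random patches approximates the noise variance. The learned nonlinear rectification stage is omitted here, so this reproduces only the preliminary estimate:

```python
import numpy as np

def estimate_noise_std(img, patch=7, n_patches=5000, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    P = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    cov = np.cov(P, rowvar=False)         # covariance of flattened patches
    smallest = np.linalg.eigvalsh(cov)[0]  # eigvalsh returns ascending order
    return np.sqrt(max(smallest, 0.0))     # preliminary sigma estimate

# Sanity check on pure Gaussian noise of known sigma.
img = np.random.default_rng(1).normal(0, 5.0, (256, 256))
print(estimate_noise_std(img))  # ~5, biased slightly low (smallest of many eigenvalues)
```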

  13. Iterative estimation of the background in noisy spectroscopic data

    International Nuclear Information System (INIS)

    Zhu, M.H.; Liu, L.G.; Cheng, Y.S.; Dong, T.K.; You, Z.; Xu, A.A.

    2009-01-01

    In this paper, we present an iterative filtering method to estimate the background of noisy spectroscopic data. The proposed method avoids the calculation of the average full width at half maximum (FWHM) of the whole spectrum and the peak regions, and it can estimate the background efficiently, especially for spectroscopic data with the Compton continuum.
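
One common family of iterative background estimators alternates smoothing with downward clipping, so peaks are progressively eroded while the continuum survives. The sketch below is illustrative of that family, not the paper's specific filter:

```python
import numpy as np

def iterative_background(spectrum, width=15, iterations=40):
    bg = spectrum.astype(float)
    kernel = np.ones(width) / width
    for _ in range(iterations):
        smoothed = np.convolve(bg, kernel, mode="same")
        bg = np.minimum(bg, smoothed)  # keep valleys, erode peaks
    return bg

# Demo: a Gaussian peak on a sloping continuum with Poisson counting noise.
x = np.arange(512)
continuum = 200 - 0.2 * x
peak = 300 * np.exp(-0.5 * ((x - 256) / 6.0) ** 2)
counts = np.random.default_rng(0).poisson(continuum + peak)
net = counts - iterative_background(counts)  # background-subtracted peak
```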

  14. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam; Beirami, Ahmad; Touri, Behrouz; Shamma, Jeff S.

    2015-01-01

...of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a...

  15. Data and Network Science for Noisy Heterogeneous Systems

    Science.gov (United States)

    Rider, Andrew Kent

    2013-01-01

    Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…

  16. Three methods to distill multipartite entanglement over bipartite noisy channels

    International Nuclear Information System (INIS)

    Lee, Soojoon; Park, Jungjoon

    2008-01-01

We first assume that there are only bipartite noisy qubit channels in a given multipartite system, and present three methods to distill the general Greenberger-Horne-Zeilinger state. By investigating these methods, we show that multipartite entanglement distillation built from bipartite entanglement distillation achieves a higher yield than previous multipartite entanglement distillation schemes.

  17. Extortion under uncertainty: Zero-determinant strategies in noisy games

    Science.gov (United States)

    Hao, Dong; Rong, Zhihai; Zhou, Tao

    2015-05-01

Repeated game theory has been one of the most prevailing tools for understanding long-running relationships, which are the foundation of human society. Recent works have revealed a new set of "zero-determinant" (ZD) strategies, an important advance in repeated games. A ZD strategy player can exert unilateral control on two players' payoffs. In particular, he can deterministically set the opponent's payoff or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of payoffs. One limitation of the original ZD strategy, however, is that it does not capture the notion of robustness when the game is subject to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games and find that ZD strategies have high robustness against errors. We further derive the pinning strategy under noise, by which the ZD strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff control ability declines with increasing noise strength. Due to the uncertainty caused by noise, the ZD strategy player cannot ensure his payoff to be permanently higher than the opponent's, which implies that dominant extortions do not exist even under low noise. Nevertheless, we show that the ZD strategy player can still establish a novel kind of extortion, named contingent extortion, where any increase of his own payoff always exceeds that of the opponent's by a fixed percentage; the conditions under which contingent extortions can be realized become more stringent as the noise grows stronger.

  18. Graph state generation with noisy mirror-inverting spin chains

    International Nuclear Information System (INIS)

    Clark, Stephen R; Klein, Alexander; Bruderer, Martin; Jaksch, Dieter

    2007-01-01

We investigate the influence of noise on a graph state generation scheme which exploits a mirror inverting spin chain. Within this scheme the spin chain is used repeatedly as an entanglement bus (EB) to create multi-partite entanglement. The noise model we consider comprises each spin of this EB being exposed to independent local noise which degrades the capabilities of the EB. Here we concentrate on quantifying its performance as a single-qubit channel and as a mediator of a two-qubit entangling gate, since these are basic operations necessary for graph state generation using the EB. In particular, for the single-qubit case we numerically calculate the average channel fidelity and whether the channel becomes entanglement breaking, i.e. expunges any entanglement the transferred qubit may have with other external qubits. We find that neither local decay nor dephasing noise cause entanglement breaking. This is in contrast to local thermal and depolarizing noise where we determine a critical length and critical noise coupling, respectively, at which entanglement breaking occurs. The critical noise coupling for local depolarizing noise is found to exhibit a power-law dependence on the chain length. For two qubits we similarly compute the average gate fidelity and whether the ability for this gate to create entanglement is maintained. The concatenation of these noisy gates for the construction of a five-qubit linear cluster state and a Greenberger-Horne-Zeilinger state indicates that the level of noise that can be tolerated for graph state generation is tightly constrained.

  19. Inferring causality from noisy time series data

    DEFF Research Database (Denmark)

    Mønster, Dan; Fusaroli, Riccardo; Tylén, Kristian

    2016-01-01

Convergent Cross-Mapping (CCM) has shown high potential to perform causal inference in the absence of models. We assess the strengths and weaknesses of the method by varying coupling strength and noise levels in coupled logistic maps. We find that CCM fails to infer accurate coupling strength... and even causality direction in synchronized time-series and in the presence of intermediate coupling. We find that the presence of noise deterministically reduces the level of cross-mapping fidelity, while the convergence rate exhibits higher levels of robustness. Finally, we propose that controlled noise...

  20. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  1. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to the sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to...

  2. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
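
Assuming "Zero Component Analysis" refers to the usual ZCA (zero-phase) whitening transform, the dataset-preparation step can be sketched as follows; the patch data here is a random stand-in:

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (n_samples, n_features) matrix of flattened image patches."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    # ZCA matrix: rotate to PCA basis, scale, rotate back (zero-phase).
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return Xc @ W

patches = np.random.default_rng(0).normal(size=(1000, 64))  # stand-in data
white = zca_whiten(patches)  # decorrelated, edge-enhancing representation
```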

  3. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    Science.gov (United States)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described, for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  4. Noisy Oscillations in the Actin Cytoskeleton of Chemotactic Amoeba

    Science.gov (United States)

    Negrete, Jose; Pumir, Alain; Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Beta, Carsten; Bodenschatz, Eberhard

    2016-09-01

    Biological systems with their complex biochemical networks are known to be intrinsically noisy. Here we investigate the dynamics of actin polymerization of amoeboid cells, which are close to the onset of oscillations. We show that the large phenotypic variability in the polymerization dynamics can be accurately captured by a generic nonlinear oscillator model in the presence of noise. We determine the relative role of the noise with a single dimensionless, experimentally accessible parameter, thus providing a quantitative description of the variability in a population of cells. Our approach, which rests on a generic description of a system close to a Hopf bifurcation and includes the effect of noise, can characterize the dynamics of a large class of noisy systems close to an oscillatory instability.

  5. Population coding in sparsely connected networks of noisy neurons

    OpenAIRE

    Tripp, Bryan P.; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...

  6. A method for extracting chaotic signal from noisy environment

    International Nuclear Information System (INIS)

    Shang, L.-J.; Shyu, K.-K.

    2009-01-01

In this paper, we propose an approach for extracting a chaotic signal from a noisy environment in which the chaotic signal has been contaminated by white Gaussian noise. The traditional type of independent component analysis (ICA) is capable of separating mixed signals and retrieving them independently; however, the separated signal shows unreal amplitude. The results of this study show that, with our method, the real chaotic signal can be effectively recovered.

  7. Indium-111 labeled leukocyte images demonstrating a lung abscess with prominent fluid level

    International Nuclear Information System (INIS)

    Massie, J.D.; Winer-Muram, H.

    1986-01-01

    In-111 labeled leukocyte images show an abscess cavity with a fluid level on 24-hour upright images. Fluid levels, frequently seen on radiographs, are uncommon on nuclear images. This finding demonstrates rapid migration of labeled leukocytes into purulent abscess fluid

  8. Noisy covariance matrices and portfolio optimization II

    Science.gov (United States)

    Pafka, Szilárd; Kondor, Imre

    2003-03-01

Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r = n/T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the...
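
The role of r = n/T can be checked numerically: for i.i.d. data the sample covariance spectrum spreads over the Marchenko-Pastur band [(1 - √r)², (1 + √r)²], and the spread widens as r grows. A quick sketch:

```python
import numpy as np

def eigenvalue_spread(n, T, seed=0):
    R = np.random.default_rng(seed).standard_normal((T, n))  # unit-variance "returns"
    eig = np.linalg.eigvalsh(np.cov(R, rowvar=False))
    return eig.min(), eig.max()

for n, T in [(100, 500), (100, 167)]:  # r ~ 0.2 vs. r ~ 0.6
    lo, hi = eigenvalue_spread(n, T)
    r = n / T
    mp_lo, mp_hi = (1 - r ** 0.5) ** 2, (1 + r ** 0.5) ** 2
    print(f"r={r:.2f}: empirical [{lo:.2f}, {hi:.2f}], MP band [{mp_lo:.2f}, {mp_hi:.2f}]")
```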

  9. The effect of base image window level selection on the dimensional measurement accuracy of resultant three-dimensional image displays

    International Nuclear Information System (INIS)

    Kurmis, A.P.; Hearn, T.C.; Reynolds, K.J.

    2003-01-01

    Purpose: The aim of this study was to determine the effect of base image window level selection on direct linear measurement of knee structures displayed using new magnetic resonance (MR)-based three-dimensional reconstructed computer imaging techniques. Methods: A prospective comparative study was performed using a series of three-dimensional knee images, generated from conventional MR imaging (MRI) sections. Thirty distinct anatomical structural features were identified within the image series of which repeated measurements were compared at 10 different window grey scale levels. Results: Statistical analysis demonstrated an excellent raw correlation between measurements and suggested no significant difference between measurements made at each of the 10 window level settings (P>0.05). Conclusions: The findings of this study suggest that unlike conventional MR or CT applications, grey scale window level selection at the time of imaging does not significantly affect the visual quality of resultant three-dimensional reconstructed images and hence the accuracy of subsequent direct linear measurement. The diagnostic potential of clinical progression from routine two-dimensional to advanced three-dimensional reconstructed imaging techniques may therefore be less likely to be degraded by inappropriate MR technician image windowing during the capturing of image series

  10. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  11. Least-squares methods for identifying biochemical regulatory networks from noisy measurements

    Directory of Open Access Journals (Sweden)

    Heslop-Harrison Pat

    2007-01-01

Full Text Available Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least squares method to solve an overdetermined set of equations whose coefficients are noisy. The CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin (IL)-6 and IL-12b messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, the CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, the CTLS reduces the errors from the measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of IL-6 and IL-12b by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggest that its application will enable...
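
For orientation, plain TLS (without the correlation-aware constraint of CTLS) has a closed-form solution via the SVD of the augmented matrix [A b]. A minimal sketch with synthetic data; the CTLS extension is not reproduced here:

```python
import numpy as np

def tls(A, b):
    """Solve A x ~ b when both A and b carry noise (smallest-singular-vector TLS)."""
    n = A.shape[1]
    Z = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]  # right singular vector for the smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x_est = tls(A + 0.05 * rng.normal(size=A.shape),
            b + 0.05 * rng.normal(size=b.shape))
print(x_est)  # close to x_true despite noise in both A and b
```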

  12. Entanglement-assisted quantum parameter estimation from a noisy qubit pair: A Fisher information analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chapeau-Blondeau, François, E-mail: chapeau@univ-angers.fr

    2017-04-25

The benefit from entanglement in quantum parameter estimation in the presence of noise or decoherence is investigated, with the quantum Fisher information used to assess the performance. When an input probe experiences any (noisy) transformation introducing the parameter dependence, the performance is always maximized by a pure probe. As a generic estimation task, for estimating the phase of a unitary transformation on a qubit affected by depolarizing noise, the optimal separable probe and its performance are characterized as a function of the level of noise. By entangling qubits in pairs, enhancements of performance over that of the optimal separable probe are quantified, in various settings of the entangled pair. In particular, in the presence of the noise, enhancement over the performance of the one-qubit optimal probe can always be obtained with a second entangled qubit, even though it never interacts with the process to be estimated. Also, enhancement over the performance of the two-qubit optimal separable probe can always be achieved by a two-qubit entangled probe, either partially or maximally entangled depending on the level of the depolarizing noise. - Highlights: • Quantum parameter estimation from a noisy qubit pair is investigated. • The quantum Fisher information is used to assess the ultimate best performance. • Theoretical expressions are established and analyzed for the Fisher information. • Enhanced performances are quantified with various entanglements of the pair. • Enhancement is shown even with one entangled qubit noninteracting with the process.

  13. Image of a head of law-enforcement body on micro level (empirical experimentation)

    Directory of Open Access Journals (Sweden)

    D. G. Perednya

    2016-01-01

Full Text Available The article examines the image of the head of a law-enforcement body. The subjects and objects of this image are described, and its inhomogeneity is clarified. The method of examination at the micro level is briefly described; it concerns the image formed in the minds of the members of a law-enforcement body who are subordinate to the object of the image. The current state is illustrated using the data obtained. The hypothesis that subordinates hold a negative image of the head is disproved. A contradiction between the images in the collective mind and in the social mind is shown.

  14. Cerebral Metabolic Rate of Oxygen (CMRO2) Mapping by Combining Quantitative Susceptibility Mapping (QSM) and Quantitative Blood Oxygenation Level-Dependent Imaging (qBOLD).

    Science.gov (United States)

    Cho, Junghun; Kee, Youngwook; Spincemaille, Pascal; Nguyen, Thanh D; Zhang, Jingwei; Gupta, Ajay; Zhang, Shun; Wang, Yi

    2018-03-07

To map the cerebral metabolic rate of oxygen (CMRO2) by estimating the oxygen extraction fraction (OEF) from gradient echo imaging (GRE) using the phase and magnitude of the GRE data. 3D multi-echo gradient echo imaging and perfusion imaging with arterial spin labeling were performed in 11 healthy subjects. CMRO2 and OEF maps were reconstructed by joint quantitative susceptibility mapping (QSM) to process GRE phases and quantitative blood oxygen level-dependent (qBOLD) modeling to process GRE magnitudes. Comparisons with QSM and qBOLD alone were performed using ROI analysis, paired t-tests, and Bland-Altman plots. The average CMRO2 values in cortical gray matter across subjects were 140.4 ± 14.9, 134.1 ± 12.5, and 184.6 ± 17.9 μmol/100 g/min, with corresponding OEFs of 30.9 ± 3.4%, 30.0 ± 1.8%, and 40.9 ± 2.4% for methods based on QSM, qBOLD, and QSM+qBOLD, respectively. QSM+qBOLD provided the highest CMRO2 contrast between gray and white matter, more uniform OEF than QSM, and less noisy OEF than qBOLD. Quantitative CMRO2 mapping that fits the entire complex GRE data is feasible by combining QSM analysis of phase and qBOLD analysis of magnitude. © 2018 International Society for Magnetic Resonance in Medicine.

  15. Minimum decoherence cat-like states in Gaussian noisy channels

    Energy Technology Data Exchange (ETDEWEB)

Serafini, A [Dipartimento di Fisica 'E.R. Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; De Siena, S [Dipartimento di Fisica 'E.R. Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; Illuminati, F [Dipartimento di Fisica 'E.R. Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy)]; Paris, M G A [ISIS 'A. Sorbelli', I-41026 Pavullo nel Frignano, MO (Italy)]

    2004-06-01

    We address the evolution of cat-like states in general Gaussian noisy channels, by considering superpositions of coherent and squeezed coherent states coupled to an arbitrarily squeezed bath. The phase space dynamics is solved and decoherence is studied, keeping track of the purity of the evolving state. The influence of the choice of the state and channel parameters on purity is discussed and optimal working regimes that minimize the decoherence rate are determined. In particular, we show that squeezing the bath to protect a non-squeezed cat state against decoherence is equivalent to orthogonally squeezing the initial cat state while letting the bath be phase insensitive.

  16. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated... to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multi-resolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...

  17. Modeling evolution of crosstalk in noisy signal transduction networks

    Science.gov (United States)

    Tareen, Ammar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-02-01

    Signal transduction networks can form highly interconnected systems within cells due to crosstalk between constituent pathways. To better understand the evolutionary design principles underlying such networks, we study the evolution of crosstalk for two parallel signaling pathways that arise via gene duplication. We use a sequence-based evolutionary algorithm and evolve the network based on two physically motivated fitness functions related to information transmission. We find that one fitness function leads to a high degree of crosstalk while the other leads to pathway specificity. Our results offer insights on the relationship between network architecture and information transmission for noisy biomolecular networks.

  18. Optimal resampling for the noisy OneMax problem

    OpenAIRE

    Liu, Jialin; Fairbank, Michael; Pérez-Liébana, Diego; Lucas, Simon M.

    2016-01-01

    The OneMax problem is a standard benchmark optimisation problem for a binary search space. Recent work on applying a Bandit-Based Random Mutation Hill-Climbing algorithm to the noisy OneMax Problem showed that it is important to choose a good value for the resampling number to make a careful trade off between taking more samples in order to reduce noise, and taking fewer samples to reduce the total computational cost. This paper extends that observation, by deriving an analytical expression f...
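
The trade-off described (more resamples reduce noise but consume evaluation budget) is easy to reproduce. The sketch below applies plain random-mutation hill climbing with a fixed resampling number to a noisy OneMax; the parameter choices are illustrative, and this is not the paper's bandit-based scheme:

```python
import random

def noisy_onemax(bits, sigma=1.0):
    return sum(bits) + random.gauss(0.0, sigma)  # true fitness plus Gaussian noise

def rmhc(n=50, evals=20_000, resamples=5):
    x = [random.randint(0, 1) for _ in range(n)]
    used = 0
    def fitness(b):
        nonlocal used
        used += resamples  # every averaged evaluation spends `resamples` calls
        return sum(noisy_onemax(b) for _ in range(resamples)) / resamples
    fx = fitness(x)
    while used < evals:
        y = x[:]
        y[random.randrange(n)] ^= 1  # flip one random bit
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return sum(x)  # true OneMax value reached under the fixed budget

random.seed(0)
print(rmhc(resamples=1), rmhc(resamples=10))  # noise reduction vs. budget cost
```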

  19. Minimum decoherence cat-like states in Gaussian noisy channels

    International Nuclear Information System (INIS)

    Serafini, A; De Siena, S; Illuminati, F; Paris, M G A

    2004-01-01

    We address the evolution of cat-like states in general Gaussian noisy channels, by considering superpositions of coherent and squeezed coherent states coupled to an arbitrarily squeezed bath. The phase space dynamics is solved and decoherence is studied, keeping track of the purity of the evolving state. The influence of the choice of the state and channel parameters on purity is discussed and optimal working regimes that minimize the decoherence rate are determined. In particular, we show that squeezing the bath to protect a non-squeezed cat state against decoherence is equivalent to orthogonally squeezing the initial cat state while letting the bath be phase insensitive

  20. A Noisy-Channel Approach to Question Answering

    Science.gov (United States)

    2003-01-01

...question “When did Elvis Presley die?” To do this, we build a noisy channel model that makes explicit how answer sentence parse trees are mapped into... in Figure 1, the algorithm above generates the following training example: Q: When did Elvis Presley die? SA: Presley died PP PP in A_DATE, and... engine as a potential candidate for finding the answer to the question “When did Elvis Presley die?” In this case, we don’t know what the answer is...

  1. Noisy Spins and the Richardson-Gaudin Model

    Science.gov (United States)

    Rowlands, Daniel A.; Lamacraft, Austen

    2018-03-01

    We study a system of spins (qubits) coupled to a common noisy environment, each precessing at its own frequency. The correlated noise experienced by the spins implies long-lived correlations that relax only due to the differing frequencies. We use a mapping to a non-Hermitian integrable Richardson-Gaudin model to find the exact spectrum of the quantum master equation in the high-temperature limit and, hence, determine the decay rate. Our solution can be used to evaluate the effect of inhomogeneous splittings on a system of qubits coupled to a common bath.

  2. Selecting optimal monochromatic level with spectral CT imaging for improving imaging quality in hepatic venography

    International Nuclear Information System (INIS)

    Sun Jun; Luo Xianfu; Wang Shou'an; Wang Jun; Sun Jiquan; Wang Zhijun; Wu Jingtao

    2013-01-01

Objective: To investigate the effect of spectral CT monochromatic images on improving image quality in hepatic venography. Methods: Thirty patients underwent spectral CT examination on a GE Discovery CT 750 HD scanner. During the portal phase, 1.25 mm slice thickness polychromatic images and optimal monochromatic images were obtained, and volume rendering and maximum intensity projection images were created to show the hepatic veins. The overall image quality was evaluated on a five-point scale by two radiologists. Inter-observer agreement in subjective image quality grading was assessed by Kappa statistics. Paired-sample t tests were used to compare hepatic vein attenuation, hepatic parenchyma attenuation, the CT value difference between the hepatic vein and the liver parenchyma, image noise, the vein-to-liver contrast-to-noise ratio (CNR), and the image quality score of hepatic venography between the two image data sets. Results: The monochromatic images at 50 keV were found to demonstrate the best CNR for the hepatic vein. The hepatic vein attenuation [(329 ± 47) HU], hepatic parenchyma attenuation [(178 ± 33) HU], CT value difference between the hepatic vein and the liver parenchyma [(151 ± 33) HU], image noise (17.33 ± 4.18), CNR (9.13 ± 2.65), and image quality score (4.2 ± 0.6) of the optimal monochromatic images were significantly higher than those of the polychromatic images [(149 ± 18) HU], [(107 ± 14) HU], [(43 ± 11) HU], 12.55 ± 3.02, 3.53 ± 1.03, 3.1 ± 0.8 (t values were 24.79, 13.95, 18.85, 9.07, 13.25 and 12.04, respectively, P < 0.01). In the comparison of image quality, the Kappa value was 0.81 with optimal monochromatic images and 0.69 with polychromatic images. Conclusion: Monochromatic images from spectral CT could improve the CNR for displaying the hepatic vein and improve the image quality compared with conventional polychromatic images. (authors)

  3. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    OpenAIRE

    S. J. Baillarin; A. Meygret; C. Dechoz; B. Petrucci; S. Lacherade; T. Tremas; C. Isola; P. Martimort; F. Spoto

    2012-01-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 wil...

  4. TRADEMARK IMAGE RETRIEVAL USING LOW LEVEL FEATURE EXTRACTION IN CBIR

    OpenAIRE

    Latika Pinjarkar*, Manisha Sharma, Smita Selot

    2016-01-01

    Trademarks play a significant role in industry and commerce. A trademark is an important component of a company's industrial property, and its violation can carry severe penalties. Designing an efficient trademark retrieval system, and assessing candidate marks for uniqueness, has therefore become a very important task. In a trademark image retrieval system, a new candidate trademark is compared with already registered trademarks to check that there is no possibility of resembl...

  5. Scanning ion images; analysis of pharmaceutical drugs at organelle levels

    Science.gov (United States)

    Larras-Regard, E.; Mony, M.-C.

    1995-05-01

    With the ion analyser IMS 4F used in microprobe mode, it is possible to obtain images of fields of 10 × 10 μm², corresponding to an effective magnification of 7000 with a lateral resolution of 250 nm, technical characteristics that are appropriate for the size of cell organelles. It is possible to characterize organelles by their relative CN⁻, P⁻ and S⁻ intensities when the tissues are prepared by freeze fixation and freeze substitution. The recognition of organelles enables correlation of the tissue distribution of ebselen, a pharmaceutical drug containing selenium. The various metabolites characterized in plasma, bile and urine during biotransformation of ebselen all contain selenium, so the presence of the drug and its metabolites can be followed by images of Se. We were also able to detect the endogenous content of Se in tissue, due to the increased sensitivity of ion analysis in microprobe mode. Our results show a natural occurrence of Se in the border corresponding to the basal lamina of cells of proximal but not distal tubules of the kidney. After treatment of rats with ebselen, an additional site of Se is found in the lysosomes. We suggest that in addition to direct elimination of ebselen and its metabolites by glomerular filtration and urinary elimination, a second process of elimination may occur: Se compounds reaching the epithelial cells via the basal lamina accumulate in lysosomes prior to excretion into the tubular fluid. The technical developments of using the IMS 4F instrument in the microprobe mode and the improvement in preparation of samples by freeze fixation and substitution further extend the limit of ion analysis in biology. Direct imaging of trace elements and molecules marked with a tracer make it possible to determine their targets by comparison with images of subcellular structures. This is a promising advance in the study of pathways of compounds within tissues, cells and the whole organism.

  6. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    Science.gov (United States)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, it is expected that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effect on image characteristics. The results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to determine correctly the weight and classification of tomatoes when the light level was 30 lx to 140 lx.

  7. Imaging network level language recovery after left PCA stroke.

    Science.gov (United States)

    Sebastian, Rajani; Long, Charltien; Purcell, Jeremy J; Faria, Andreia V; Lindquist, Martin; Jarso, Samson; Race, David; Davis, Cameron; Posner, Joseph; Wright, Amy; Hillis, Argye E

    2016-05-11

    The neural mechanisms that support aphasia recovery are not yet fully understood. Our goal was to evaluate longitudinal changes in naming recovery in participants with posterior cerebral artery (PCA) stroke using a case-by-case analysis. Using task-based and resting-state functional magnetic resonance imaging (fMRI) and detailed language testing, we longitudinally studied the recovery of the naming network in four participants with PCA stroke with naming deficits at the acute (0 weeks), subacute (3-5 weeks), and chronic (5-7 months) time points post stroke. Behavioral and imaging analyses (task-related and resting-state functional connectivity) were carried out to elucidate longitudinal changes in naming recovery. These analyses revealed that an improvement in naming accuracy from the acute to the chronic stage was reflected by increased connectivity within and between left and right hemisphere "language" regions. One participant, who had a persistent moderate naming deficit, showed weak and longitudinally decreasing connectivity within and between left and right hemisphere language regions. These findings emphasize a network view of aphasia recovery, and show that a degree of inter- and intra-hemispheric balance between the language-specific regions is necessary for optimal recovery of naming, at least in participants with PCA stroke.

  8. Effect of blood glucose level on 18F-FDG PET/CT imaging

    International Nuclear Information System (INIS)

    Tan Haibo; Lin Xiangtong; Guan Yihui; Zhao Jun; Zuo Chuantao; Hua Fengchun; Tang Wenying

    2008-01-01

    Objective: The aim of this study was to investigate the effect of blood glucose level on the image quality of 18F-fluorodeoxyglucose (FDG) PET/CT imaging. Methods: Eighty patients referred to the authors' department for routine whole-body 18F-FDG PET/CT check-up were recruited into this study. The patients were classified into 9 groups according to their blood glucose level. Image quality scores, image noise, and the average and maximum standardized uptake values (SUVavg and SUVmax) of the liver on different slices were assessed. SPSS 12.0 was used to analyse the data. Results: (1) There were significant differences among the 9 groups in image quality scores and image noise (all P < 0.05). (2) Blood glucose level was correlated with liver SUVavg and SUVmax (0.60 and 0.33, P < 0.05). Conclusions: The higher the blood glucose level, the worse the image quality. When the blood glucose level is greater than or equal to 12.0 mmol/L, the image quality will significantly degrade. (authors)

  9. Reducing surgical levels by paraspinal mapping and diffusion tensor imaging techniques in lumbar spinal stenosis

    OpenAIRE

    Chen, Hua-Biao; Wan, Qi; Xu, Qi-Feng; Chen, Yi; Bai, Bo

    2016-01-01

    Background: Correlating symptoms and physical examination findings with surgical levels based on common imaging results is not reliable. In patients who show no concordance between radiological and clinical symptoms, the surgical levels determined by conventional magnetic resonance imaging (MRI) and neurogenic examination (NE) may lead to a more extensive surgery and significant complications. We aimed to confirm whether the use of diffusion tensor imaging (DTI) and paraspinal mapping (PM...

  10. Effect of weak measurement on entanglement distribution over noisy channels.

    Science.gov (United States)

    Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C H

    2016-03-03

    Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made on its development. Recently, it has been found that the null-result weak measurement (NRWM) can be used to probabilistically enhance the entanglement of a single copy of an amplitude-damped entangled state. This paper investigates remote distributions of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of the EDPs with a certain probability. However, we find that the NRWM does contribute to remote distributions of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation to protect entanglement from decoherence.

  11. Noisy: Identification of problematic columns in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2008-06-01

    Motivation: Sequence-based methods for phylogenetic reconstruction from (nucleic acid) sequence data are notoriously plagued by two effects: homoplasies and alignment errors. Large evolutionary distances imply a large number of homoplastic sites. As most protein-coding genes show dramatic variations in substitution rates that are not uncorrelated across the sequence, this often leads to a patchwork pattern of (i) phylogenetically informative and (ii) effectively randomized regions. In highly variable regions, furthermore, alignment errors accumulate, resulting in sometimes misleading signals in phylogenetic reconstruction. Results: We present here a method that, based on assessing the distribution of character states along a cyclic ordering of the taxa, allows the identification of phylogenetically uninformative homoplastic sites in a multiple sequence alignment. Removal of these sites appears to improve the performance of phylogenetic reconstruction algorithms as measured by various indices of "tree quality". In particular, we obtain more stable trees due to the exclusion of phylogenetically incompatible sites that most likely represent strongly randomized characters. Software: The computer program noisy implements this approach. It can be employed to improve phylogenetic reconstruction capability with a quite considerable success rate whenever (1) the average bootstrap support obtained from the original alignment is low, and (2) there are sufficiently many taxa in the data set – at least, say, 12 to 15 taxa. The software can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/noisy/.
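
    For readers who want to experiment with the remove-and-rerun workflow, the toy filter below drops alignment columns that are mostly gaps or in which almost every taxon shows a different state. This is only a crude stand-in: noisy's actual criterion is based on character-state distributions along a circular ordering of the taxa, and the thresholds here (max_gap, max_states) are invented for illustration.

        def drop_noisy_columns(alignment, max_gap=0.5, max_states=0.8):
            """Toy column filter on a list of equal-length aligned sequences.
            Drops columns that are mostly gaps or almost entirely heterogeneous.
            (noisy itself uses a circular-ordering test, not this heuristic.)"""
            n_seq = len(alignment)
            keep = []
            for col in zip(*alignment):                  # iterate over columns
                gap_frac = col.count('-') / n_seq
                state_frac = len(set(c for c in col if c != '-')) / n_seq
                if gap_frac <= max_gap and state_frac <= max_states:
                    keep.append(col)
            return [''.join(row) for row in zip(*keep)]  # one string per taxon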

  12. Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2016-12-01

    In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image. Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, which is realistic with the Poisson noise model obtained by a CMOS or CCD device in a digital camera.
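
    A minimal sketch of the core idea, assuming a grayscale image stored as a NumPy array: collect overlapping patches, form their covariance matrix, and read the noise variance off the smallest eigenvalue. The published method additionally selects an optimal low-variance subset of patches iteratively, which this sketch omits.

        import numpy as np

        def estimate_noise_sigma(img, patch=7):
            """Crude PCA noise estimate: the smallest eigenvalue of the patch
            covariance matrix approximates the noise variance."""
            h, w = img.shape
            rows = []
            for i in range(h - patch + 1):
                for j in range(w - patch + 1):
                    rows.append(img[i:i + patch, j:j + patch].ravel())
            X = np.asarray(rows, dtype=float)
            X -= X.mean(axis=0)
            cov = X.T @ X / (len(X) - 1)
            eigvals = np.linalg.eigvalsh(cov)            # sorted ascending
            return np.sqrt(max(eigvals[0], 0.0))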

  13. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. This improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show a positive and promising result.

  14. Subpixel level mapping of remotely sensed image using colorimetry

    Directory of Open Access Journals (Sweden)

    M. Suresh

    2018-04-01

    The problem of extracting the proportions of classes present within a pixel has been a long-standing challenge for researchers. Numerous methodologies have been developed, but saturation is still far ahead, since the methods accounting for these mixed classes are not perfect, and they can never be perfect until one-to-one correspondence between each pixel and ground data can be established, which is practically impossible. This paper is a step towards a new method for finding mixed class proportions in a pixel on the basis of the mixing property of colors as described by colorimetry. The methodology involves locating the class color of a mixed pixel on the chromaticity diagram and then using contextual information, mainly the locations of neighboring pixels on the chromaticity diagram, to estimate the proportions of classes in the mixed pixel. The resampling method is also more accurate when accounting for sharp and exact boundaries: using contextual information, a resampled image can be generated containing only the colors which really exist. The process simply computes the fractions and then the number of pixels, multiplying each fraction by the total number of sub-pixels into which one pixel is split, to obtain the number of pixels of each color based on contextual information. Keywords: Subpixel classification, Remote sensing imagery, Colorimetric color space, Sampling and subpixel mapping
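
    The underlying colorimetric idea can be expressed as a small linear unmixing problem: a mixed pixel's chromaticity is modeled as a convex combination of the chromaticities of the pure classes. The sketch below solves for the fractions with a sum-to-one constraint by ordinary least squares; the function and its clipping step are illustrative assumptions, not the paper's exact procedure (which also exploits neighboring pixels).

        import numpy as np

        def unmix_chromaticity(pixel_xy, endmember_xy):
            """Estimate class fractions of a mixed pixel from chromaticity.

            pixel_xy     : (2,) chromaticity of the mixed pixel
            endmember_xy : (k, 2) chromaticities of the k pure classes
            Solves pixel = sum_i f_i * endmember_i with sum(f) = 1, then
            clips negative fractions and renormalises.
            """
            k = endmember_xy.shape[0]
            A = np.vstack([endmember_xy.T, np.ones(k)])  # adds sum-to-one row
            b = np.append(pixel_xy, 1.0)
            f, *_ = np.linalg.lstsq(A, b, rcond=None)
            f = np.clip(f, 0.0, None)
            return f / f.sum()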

  15. Simultaneous reconstruction, segmentation, and edge enhancement of relatively piecewise continuous images with intensity-level information

    International Nuclear Information System (INIS)

    Liang, Z.; Jaszczak, R.; Coleman, R.; Johnson, V.

    1991-01-01

    A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that the intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64 × 64 array was constructed. The phantom had three contiguous regions, each with a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180 degrees, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented; simultaneously, the edges of the regions were sharpened, and noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability procedure with intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.
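
    The measurement setup described above is easy to reproduce in outline. The sketch below builds a three-level phantom, computes 50 parallel-beam projections over 180 degrees, and randomizes them with Poisson noise; the geometry helper is scikit-image's radon (an assumption of this sketch, not the tool used in the paper), and the reconstruction step itself is omitted.

        import numpy as np
        from skimage.transform import radon

        # 64 x 64 phantom with three contiguous intensity levels
        yy, xx = np.mgrid[:64, :64]
        phantom = np.full((64, 64), 10.0)                    # background
        phantom[(xx - 32)**2 + (yy - 32)**2 < 20**2] = 60.0  # outer region
        phantom[(xx - 32)**2 + (yy - 32)**2 < 10**2] = 120.0 # inner region

        # 50 projections over 180 degrees, then Poisson randomization
        theta = np.linspace(0.0, 180.0, 50, endpoint=False)
        sinogram = radon(phantom, theta=theta, circle=False)
        noisy_sinogram = np.random.poisson(sinogram).astype(float)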

  16. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    International Nuclear Information System (INIS)

    Schaefer, Bastian; Goedecker, Stefan; Alireza Ghasemi, S.; Roy, Shantanu

    2015-01-01

    Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations, like global minimum searches or characterizations of chemical reactions, require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows one to obtain significant curvature information of noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.

  17. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Alireza Ghasemi, S. [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, IR-Zanjan (Iran, Islamic Republic of); Roy, Shantanu [Computational and Systems Biology, Biozentrum, University of Basel, CH-4056 Basel (Switzerland)

    2015-01-21

    Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations, like global minimum searches or characterizations of chemical reactions, require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows one to obtain significant curvature information of noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.

  18. Bounds on the dynamics of sink populations with noisy immigration.

    Science.gov (United States)

    Eager, Eric Alan; Guiver, Chris; Hodgson, Dave; Rebarber, Richard; Stott, Iain; Townley, Stuart

    2014-03-01

    Sink populations are doomed to decline to extinction in the absence of immigration. The dynamics of sink populations are not easily modelled using the standard framework of per capita rates of immigration, because numbers of immigrants are determined by extrinsic sources (for example, source populations, or population managers). Here we appeal to a systems and control framework to place upper and lower bounds on both the transient and future dynamics of sink populations that are subject to noisy immigration. Immigration has a number of interpretations and can fit a wide variety of models found in the literature. We apply the results to case studies derived from published models for Chinook salmon (Oncorhynchus tshawytscha) and blowout penstemon (Penstemon haydenii). Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    The problem of optimal power-constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define the transformation function for each sensor to realize the mapping from local decision to transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power-constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method can significantly improve the detection performance of the fusion system at low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.

  20. Sketch of a Noisy Channel Model for the Translation Process

    DEFF Research Database (Denmark)

    Carl, Michael

    default rendering" procedure, later conscious processes are triggered by a monitor who interferes when something goes wrong. An attempt is made to explain monitor activities with relevance theoretic concepts according to which a translator needs to ensure the similarity of explicatures and implicatures......The paper develops a Noisy Channel Model for the translation process that is based on actual user activity data. It builds on the monitor model and makes a distinction between early, automatic and late, conscious translation processes: while early priming processes are at the basis of a "literal...... of the source and the target texts. It is suggested that events and parameters in the model need be measurable and quantifiable in the user activity data so as to trace back monitoring activities in the translation process data. Michael Carl is a Professor with special responsibilities at the Department...

  1. Trading in markets with noisy information: an evolutionary analysis

    Science.gov (United States)

    Bloembergen, Daan; Hennes, Daniel; McBurney, Peter; Tuyls, Karl

    2015-07-01

    We analyse the value of information in a stock market where information can be noisy and costly, using techniques from empirical game theory. Previous work has shown that the value of information follows a J-curve, where averagely informed traders perform below market average, and only insiders prevail. Here we show that both noise and cost can change this picture, in several cases leading to opposite results where insiders perform below market average, and averagely informed traders prevail. Moreover, we investigate the effect of random explorative actions on the market dynamics, showing how these lead to a mix of traders being sustained in equilibrium. These results provide insight into the complexity of real marketplaces, and show under which conditions a broad mix of different trading strategies might be sustainable.

  2. Information jet: Handling noisy big data from weakly disconnected network

    Science.gov (United States)

    Aurongzeb, Deeder

    Sudden aggregation (an information jet) of a large amount of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security threat attacks, online sales channels, etc. Clustering of information jets based on time series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that pre-clustering based on context can extract the soft network, or network of information, which is critical to minimizing the time to compute results from noisy big data. We show the difference between stochastic gradient boosting and time-series graph clustering. For disconnected higher-dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  3. Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models

    DEFF Research Database (Denmark)

    Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.

    2018-01-01

    ... are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
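
    A minimal version of the model-fitting step can be written with scikit-learn's PoissonRegressor (an assumed substitute for the authors' fitting code): build a design matrix from the stimulus and the last few bins of the cell's own spiking, so that, under the exponential link, past spikes act multiplicatively on the firing rate.

        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        def fit_history_glm(spikes, stim, n_lags=5):
            """Fit a log-linear Poisson GLM with stimulus and spike-history terms.

            spikes : (T,) binned spike counts
            stim   : (T,) input current, same binning
            """
            T = len(spikes)
            hist = np.column_stack([spikes[n_lags - k - 1:T - k - 1]
                                    for k in range(n_lags)])  # spikes[t-1..t-n_lags]
            X = np.column_stack([stim[n_lags:], hist])
            y = spikes[n_lags:]
            # exponential link => history coefficients act multiplicatively
            return PoissonRegressor(alpha=1e-4).fit(X, y)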

  4. Security Analysis of Measurement-Device-Independent Quantum Key Distribution in Collective-Rotation Noisy Environment

    Science.gov (United States)

    Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian

    2018-01-01

    Noise is a problem that communication channels cannot avoid. It is thus beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information theory methods are used to analyze the security of the protocol. The maximum amount of information that Eve can gain by eavesdropping is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% when eavesdropping is present. Moreover, the probability that the relay outputs successful results is higher with eavesdropping than without it. The paper validates that the MDI-QKD protocol has better robustness.

  5. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  6. Bifurcation analysis of a noisy vibro-impact oscillator with two kinds of fractional derivative elements

    Science.gov (United States)

    Yang, YongGe; Xu, Wei; Yang, Guidong

    2018-04-01

    To the best of the authors' knowledge, little work has been devoted to the study of a noisy vibro-impact oscillator with a fractional derivative. Stochastic bifurcations of a vibro-impact oscillator with two kinds of fractional derivative elements driven by Gaussian white noise excitation are explored in this paper. We obtain analytical approximate solutions with the help of a non-smooth transformation and the stochastic averaging method. Numerical results from Monte Carlo simulation of the original system are regarded as the benchmark to verify the accuracy of the developed method. The results demonstrate that the proposed method has a satisfactory level of accuracy. We also discuss the stochastic bifurcation phenomena induced by the fractional coefficients and fractional derivative orders. The important and interesting conclusion of this paper is that the effect of the first fractional derivative order on the system is completely contrary to that of the second fractional derivative order.

  7. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the gray-level distribution of the local image. The means in this objective function carry a multiplicative factor that estimates the bias field in the transformed domain, and the bias field prior is fully used; therefore, our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  8. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    A low-level-light (LLL) image contains rich information on environment details, but is easily affected by the weather. In the case of smoke, rain, cloud or fog, much target information will be lost. Infrared imaging, which relies on the radiation emitted by the object itself, can "actively" obtain target information in the scene. However, the image contrast and resolution are poor, the ability to acquire target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor and exploit the advantages of each. First, we show the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and the background (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, while the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  9. Solving for the capacity of a noisy lossy bosonic channel via the master equation

    International Nuclear Information System (INIS)

    Qin Tao; Zhao Meisheng; Zhang Yongde

    2006-01-01

    We discuss the noisy lossy bosonic channel by exploiting master equations. The capacity of the noisy lossy bosonic channel and the criterion for the optimal capacities are derived. Consequently, we verify that master equations can be a tool to study bosonic channels.

  10. Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images

    Science.gov (United States)

    Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan

    2017-08-01

    Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney in ultrasound (US) images. Three ultrasound machines with different specifications were used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle, ensuring the image preserves the pixels in a region of interest (ROI) for further extraction. A Gaussian low-pass filter was chosen as the filtering method in this work. 150 enhanced images were then segmented by creating a foreground and background of the image, where a mask is created to eliminate unwanted intensity values. Statistics-based texture feature methods were used, namely the Intensity Histogram (IH), Gray-Level Co-occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three features (Contrast, Difference Variance and Inverse Difference Moment Normalized) from the GLCM are not statistically significant; this suggests that these three features describe healthy kidney characteristics regardless of ultrasound image quality.
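
    For reference, GLCM features like those named above are straightforward to reproduce with scikit-image (assuming a reasonably recent version, where the functions are spelled graycomatrix/graycoprops). skimage does not expose Difference Variance or IDM Normalized directly, so this sketch reports the related built-in properties instead.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(roi_u8):
            """GLCM texture statistics for an 8-bit ROI, averaged over
            four directions at distance 1."""
            glcm = graycomatrix(roi_u8, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=256, symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity",
                                 "energy", "correlation")}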

  11. Bit-level plane image encryption based on coupled map lattice with time-varying delay

    Science.gov (United States)

    Lv, Xiupin; Liao, Xiaofeng; Yang, Bo

    2018-04-01

    Most existing image encryption algorithms have two basic properties, confusion and diffusion, applied in a pixel-level plane based on various chaotic systems. However, permutation in a pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, which means that the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly with finite precision in computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level plane encryption of color images to solve the above issues. A spatiotemporal chaotic system with a much longer period under digitalization and much better cryptographic performance is recommended. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level plane image encryption algorithm greatly reduces the statistical characteristics of an image through the scrambling processing. The R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out, and all the experimental results illustrate that the proposed image encryption algorithm is highly secure and, at the same time, demonstrates superior performance.
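
    The bit-level part of such a scheme is easy to illustrate. The sketch below decomposes an 8-bit channel into bit planes, scrambles all bits with a keyed permutation, and reassembles the image; in the paper the scrambling sequence comes from the time-varying-delay coupled map lattice rather than from NumPy's generator, so this is a structural illustration only. Descrambling applies the inverse permutation, np.argsort(perm).

        import numpy as np

        def scramble_bitplanes(channel_u8, key=42):
            """Bit-level scrambling of one 8-bit image channel."""
            rng = np.random.default_rng(key)       # stand-in for a CML keystream
            planes = np.unpackbits(channel_u8[..., np.newaxis], axis=-1)  # (H, W, 8)
            flat = planes.reshape(-1)
            perm = rng.permutation(flat.size)      # keyed bit permutation
            scrambled = flat[perm].reshape(planes.shape)
            return np.packbits(scrambled, axis=-1).squeeze(-1)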

  12. Behavioural changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish

    Directory of Open Access Journals (Sweden)

    Yik Yaw (Errol) Neo

    2015-02-01

    Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioural patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behaviour of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds already at relatively low sound levels and adjust their behaviour to them. Relatively high sound levels were disturbing at least at onset, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are not able to avoid noisy areas or just not bothered. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behaviour in any captive condition.

  13. Behavioral changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish.

    Science.gov (United States)

    Neo, Yik Yaw; Parie, Lisa; Bakker, Frederique; Snelderwaard, Peter; Tudorache, Christian; Schaaf, Marcel; Slabbekoorn, Hans

    2015-01-01

    Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioral patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behavior of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds already at relatively low sound levels and adjust their behavior to it. Relatively high sound levels were at least at the on-set disturbing, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are not able to avoid noisy areas or just not bothered. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behavior in any captive condition.

  14. County-Level Population Economic Status and Medicare Imaging Resource Consumption.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Hughes, Danny R; Prabhakar, Anand M; Duszak, Richard

    2017-06-01

    The aim of this study was to assess relationships between county-level variation in Medicare beneficiary imaging resource consumption and measures of population economic status. The 2013 CMS Geographic Variation Public Use File was used to identify county-level per capita Medicare fee-for-service imaging utilization and nationally standardized costs to the Medicare program. The County Health Rankings public data set was used to identify county-level measures of population economic status. Regional variation was assessed, and multivariate regressions were performed. Imaging events per 1,000 Medicare beneficiaries varied 1.8-fold (range, 2,723-4,843) at the state level and 5.3-fold (range, 1,228-6,455) at the county level. Per capita nationally standardized imaging costs to Medicare varied 4.2-fold (range, $84-$353) at the state level and 14.1-fold (range, $33-$471) at the county level. Within individual states, county-level utilization varied on average 2.0-fold (range, 1.1- to 3.1-fold), and costs varied 2.8-fold (range, 1.1- to 6.4-fold). For both large urban populations and small rural states, Medicare imaging resource consumption was heterogeneously variable at the county level. Adjusting for county-level gender, ethnicity, rural status, and population density, countywide unemployment rates showed strong independent positive associations with Medicare imaging events (β = 26.96) and costs (β = 4.37), whereas uninsured rates showed strong independent positive associations with Medicare imaging costs (β = 2.68). Medicare imaging utilization and costs both vary far more at the county than at the state level. Unfavorable measures of county-level population economic status in the non-Medicare population are independently associated with greater Medicare imaging resource consumption. Future efforts to optimize Medicare imaging use should consider the influence of local indigenous socioeconomic factors outside the scope of traditional beneficiary-focused policy

  15. 3D shape recovery from image focus using gray level co-occurrence matrix

    Science.gov (United States)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture, converting the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the Gray-Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray-Level Co-occurrence Matrix quantifies the texture present in the image using statistical features, applying the joint probability distribution of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is thoroughly investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.

  16. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues that are closely related here, because the temperature fields being processed are unavoidably noisy. We focus only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields as faithfully as possible. The influence of both the size and the level of a localised heat source is discussed. The results obtained are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from the measured temperature fields are compared with the imposed heat sources. The results obtained illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
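
    The diffusion term discussed above can be estimated in a couple of lines with SciPy, since a Gaussian filter with order=2 along one axis convolves directly with the second derivative of a Gaussian. The sketch assumes an isotropic material (plain Laplacian) and a hand-picked sigma; the paper's contribution is precisely the optimisation of this filter, which is not reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smoothed_laplacian(theta, sigma=2.0):
            """Laplacian of a noisy temperature field, estimated by convolution
            with second derivatives of a Gaussian (smoothing and
            differentiation in one step)."""
            d2x = gaussian_filter(theta, sigma=sigma, order=(0, 2))  # d2/dx2
            d2y = gaussian_filter(theta, sigma=sigma, order=(2, 0))  # d2/dy2
            return d2x + d2y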

  17. Underwater Image Enhancement by Adaptive Gray World and Differential Gray-Levels Histogram Equalization

    Directory of Open Access Journals (Sweden)

    WONG, S.-L.

    2018-05-01

    Most underwater images tend to be dominated by a single color cast. This paper presents a solution to remove the color cast and improve the contrast in underwater images. However, after removal of the color cast using the Gray World (GW) method, the resulting image is not visually pleasing. Hence, we propose an integrated approach using Adaptive GW (AGW) and Differential Gray-Levels Histogram Equalization (DHE) that operate in parallel: AGW is applied to remove the color cast while DHE is used to improve the contrast of the underwater image. The chromaticity components from AGW and the intensity component from DHE are combined to form the enhanced image. The results of the proposed method are compared with three existing methods using qualitative and quantitative measures. The proposed method increases the visibility of underwater images and in most cases produces better quantitative scores than the three existing methods.
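
    The GW step that the method builds on is compact enough to show in full; the AGW adaptation and the DHE contrast stage are not reproduced here. A minimal sketch for an 8-bit RGB image:

        import numpy as np

        def gray_world(img_rgb):
            """Classic Gray World white balance: scale each channel so its
            mean matches the mean over all channels."""
            img = img_rgb.astype(float)
            channel_means = img.reshape(-1, 3).mean(axis=0)
            gains = channel_means.mean() / channel_means
            return np.clip(img * gains, 0, 255).astype(np.uint8)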

  18. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    Science.gov (United States)

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities, such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
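
    The transformation itself is a sigmoid applied to normalised gray levels. The sketch below is a global version with assumed gain and midpoint parameters; the paper's point is to apply such a curve locally, window by window, which sharpens edges without saturating the whole image.

        import numpy as np

        def s_curve(img_u8, gain=8.0, mid=0.5):
            """Sigmoid (S-curve) gray-level transform of an 8-bit image."""
            x = img_u8.astype(float) / 255.0
            y = 1.0 / (1.0 + np.exp(-gain * (x - mid)))
            y = (y - y.min()) / (y.max() - y.min())   # re-span the display range
            return np.round(255.0 * y).astype(np.uint8)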

  19. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  20. Communication in a noisy environment: Perception of one's own voice and speech enhancement

    Science.gov (United States)

    Le Cocq, Cecile

    Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. As a consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. In fact this communication problem is a double one: first, the hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect which is produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, low-frequency physiological noises are perceived more strongly; second, the perception of one's own voice is modified. In order to better understand this phenomenon, the literature results are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave which excites the buccal cavity does not directly excite the external ear or the rest of the body. The measurement of the hearing threshold with open and occluded ear was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the inner ear. The speech intelligibility of others is altered by both the high sound levels of noisy industrial environments and the speech signal attenuation due to hearing

  1. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
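
    The fuzzy C-means core that both FCMS and the proposed model build on fits in a few lines. The sketch below runs plain FCM on gray levels only, with no spatial constraint and no level set term, so it illustrates the baseline the paper improves upon rather than the proposed model itself.

        import numpy as np

        def fcm_gray(img, k=3, m=2.0, n_iter=50):
            """Plain fuzzy C-means on pixel gray levels.
            Returns cluster centres and soft memberships per pixel."""
            x = img.reshape(-1, 1).astype(float)
            rng = np.random.default_rng(0)
            centres = rng.choice(x.ravel(), size=k, replace=False)[:, None]
            for _ in range(n_iter):
                d = np.abs(x - centres.T) + 1e-9          # (N, k) distances
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=1, keepdims=True)         # membership update
                w = u ** m
                centres = (w.T @ x) / w.sum(axis=0)[:, None]  # centre update
            return centres.ravel(), u.reshape(*img.shape, k)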

  2. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  3. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images, and will contain the information of both. Fusion images can help observers to understand multichannel images comprehensively. However, simple fusion may lose target information due to inconspicuous targets in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of scene information will be seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. Infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  4. Automatic adjustment of display window (gray-level condition) for MR images using neural networks

    International Nuclear Information System (INIS)

    Ohhashi, Akinami; Nambu, Kyojiro.

    1992-01-01

    We have developed a system to automatically adjust the display window width and level (WWL) for MR images using neural networks. There were three main points in the development of our system: 1) We defined an index for the clarity of a displayed image, called 'EW'. EW is a quantitative measure of the clarity of an image displayed with a certain WWL, and is derived from the difference between the gray levels obtained with a WWL adjusted by a human expert and those obtained with the WWL under consideration. 2) We extracted a group of six features from the gray-level histogram of a displayed image. We designed two neural networks which are able to learn the relationship between these features and the desired output (teaching signal), 'EQ', which is EW normalized to the range 0 to 1.0. Two neural networks were used to share the patterns to be learned: one learns a variety of patterns with less accuracy, and the other learns similar patterns with higher accuracy. Learning was performed using the back-propagation method. As a result, the neural networks after learning are able to provide a quantitative measure, 'Q', of the clarity of images displayed with the designated WWL. 3) Using the hill climbing method, we are able to determine the best possible WWL for displaying an image. We have tested this technique on MR brain images. The results show that this system can adjust the WWL to values comparable to those chosen by a human expert for the majority of test images. The neural network is effective for the automatic adjustment of the display window for MR images. We are now studying the application of this method to MR images of other regions. (author)
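
    The WWL search in step 3 can be illustrated with a small hill-climbing loop. The sketch below assumes a trained quality predictor quality_fn(image, ww, wl) standing in for the networks' output Q; step sizes and starting values are illustrative.

    ```python
    import numpy as np

    def hill_climb_wwl(image, quality_fn, ww0=400.0, wl0=40.0,
                       steps=((50, 0), (-50, 0), (0, 10), (0, -10)), n_iter=100):
        """Greedy hill climbing over display window width/level (WWL).

        `quality_fn` is assumed to be the trained network's clarity score Q for
        the image rendered at a given (ww, wl); all step sizes here are
        illustrative, not the paper's settings.
        """
        ww, wl = ww0, wl0
        best = quality_fn(image, ww, wl)
        for _ in range(n_iter):
            candidates = [(ww + dw, wl + dl) for dw, dl in steps]
            scores = [quality_fn(image, w, l) for w, l in candidates]
            i = int(np.argmax(scores))
            if scores[i] <= best:          # local optimum reached
                break
            best, (ww, wl) = scores[i], candidates[i]
        return ww, wl, best
    ```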

  5. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use and land cover investigation. A large number of approaches exploiting different kinds of low-level features have been proposed in the literature, but engineers are often frustrated by their inconsistent conclusions, and a systematic assessment of various low-level features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features, including HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images, for the classification of three publicly available datasets. Second, we propose to transfer ground-level scene attributes to remote sensing images. Third, we combine both low-level features and mid-level visual attributes to further improve the classification performance. Experimental results demonstrate that i) dense SIFT and HOG features are more robust than the other features for VHR scene image description; ii) visual attributes compete with a combination of low-level features; iii) multiple feature combination achieves the best performance under different settings.
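
    The "multiple feature combination" setting can be illustrated by simple early fusion: concatenating low-level and attribute descriptors before training a linear classifier. The sketch below uses synthetic stand-ins for the descriptors; shapes and names are illustrative, not the paper's exact setup.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    # Synthetic stand-ins for per-image descriptors: a low-level feature
    # (e.g., a HOG or dense-SIFT bag-of-words histogram) and mid-level
    # attribute scores.
    rng = np.random.default_rng(0)
    n_images = 200
    low_level = rng.random((n_images, 512))     # e.g., BoW histogram
    attributes = rng.random((n_images, 40))     # e.g., scene-attribute scores
    labels = rng.integers(0, 5, n_images)       # five scene classes

    # Early fusion: concatenate both representations, then train a linear SVM.
    X = np.hstack([low_level, attributes])
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
    clf.fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```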

  6. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    Science.gov (United States)

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations are manually checked by curators, and the others are electronically inferred. Although quality control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, however, how to identify noisy annotations is an important yet seldom studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA first applies sparse representation on the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of the sparse representation coefficients to measure the semantic similarity between genes. It then preliminarily predicts noisy annotations of a gene based on aggregated votes from the semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived in different periods, weights the entries of the association matrix via the estimated ratios, and propagates the weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and the aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Code and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA.
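
    The first step, measuring gene-gene similarity through sparse representation coefficients, can be sketched generically as follows (this is not NoGOA's exact formulation): each gene's annotation vector is reconstructed as a sparse non-negative combination of the other genes' vectors, and the coefficients serve as similarity scores.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_similarity(A, alpha=0.05):
        """Sparse-representation similarity between genes.

        A is the (n_genes, n_terms) gene-term association matrix. Each gene's
        row is reconstructed as a sparse non-negative combination of the other
        rows; the coefficients give an asymmetric similarity matrix. A generic
        sketch, not NoGOA's exact optimization problem.
        """
        n = A.shape[0]
        S = np.zeros((n, n))
        for i in range(n):
            others = np.delete(np.arange(n), i)
            # Solve min ||A[i] - A[others]^T c||^2 + alpha * ||c||_1, c >= 0
            model = Lasso(alpha=alpha, positive=True, max_iter=5000)
            model.fit(A[others].T, A[i])
            S[i, others] = model.coef_
        return S
    ```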

  7. Tile-Level Annotation of Satellite Images Using Multi-Level Max-Margin Discriminative Random Field

    Directory of Open Access Journals (Sweden)

    Hong Sun

    2013-05-01

    Full Text Available This paper proposes a multi-level max-margin discriminative analysis (M3DA) framework, which takes both coarse and fine semantics into consideration, for the annotation of high-resolution satellite images. In order to generate more discriminative topic-level features, M3DA uses the maximum entropy discrimination latent Dirichlet allocation (MedLDA) model. Moreover, to improve the spatial coherence of visual words neglected by M3DA, a conditional random field (CRF) is employed to optimize the soft label field composed of multiple label posteriors. The M3DA framework makes it possible to combine word-level features (generated by support vector machines) and topic-level features (generated by MedLDA) via the bag-of-words representation. The experimental results on high-resolution satellite images have demonstrated that the proposed method can not only obtain suitable semantic interpretations, but also improve the annotation performance by taking into account the multi-level semantics and the contextual information.

  8. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, the neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by a masker and an amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by the masker and the test stimulus in the peripheral and central auditory systems. The inter-hemispheric differences of the N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  9. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a loss of power supply to large numbers of customers. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters of the models of generating units, including the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) into the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
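
    The estimation principle, minimizing the mean square error between measured and simulated waveforms, can be sketched as follows; `simulate` stands in for the nonlinear generator model, and a plain least-squares solver replaces the paper's hybrid algorithm.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def estimate_parameters(t, measured, simulate, theta0, bounds):
        """Fit model parameters by minimizing the mean square error between
        measured transient waveforms and model-simulated ones.

        `simulate(theta, t)` is assumed to run the nonlinear generator model
        for a parameter vector theta and return the corresponding waveform;
        the paper uses a hybrid (global + local) optimizer instead of this
        plain local least-squares solve.
        """
        def residuals(theta):
            return simulate(theta, t) - measured

        fit = least_squares(residuals, theta0, bounds=bounds)
        return fit.x, np.mean(fit.fun ** 2)   # estimates and final MSE
    ```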

  10. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio; Sawlan, Zaid A; Scavino, Marco; Tempone, Raul

    2016-01-01

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and a known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. This function is then analytically marginalized using the linearity of the equation. Gaussian priors are assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximate Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.
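
    A toy version of this inference problem is sketched below: a grid posterior over the thermal diffusivity for the 1-D heat equation, with a lognormal prior and Gaussian observation noise. The paper's hierarchical treatment of the unknown boundary values and its analytical marginalization are omitted; all numbers are illustrative.

    ```python
    import numpy as np

    def heat_solve(kappa, u0, dx, dt, n_steps, left_bc, right_bc):
        """Explicit finite-difference solve of u_t = kappa * u_xx on [0, 1]
        with Dirichlet boundary values given per time step."""
        u = u0.copy()
        r = kappa * dt / dx**2                  # stability needs r <= 0.5
        for k in range(n_steps):
            u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
            u[0], u[-1] = left_bc[k], right_bc[k]
        return u

    # Synthetic data at the final time from a "true" diffusivity of 0.8.
    nx, dx, dt, n_steps, sigma = 51, 1 / 50, 5e-5, 400, 0.01
    u0 = np.sin(np.pi * np.linspace(0, 1, nx))
    bc = np.zeros(n_steps)
    rng = np.random.default_rng(0)
    data = heat_solve(0.8, u0, dx, dt, n_steps, bc, bc) + sigma * rng.normal(size=nx)

    # Unnormalized log posterior on a kappa grid, then normalized density.
    kappas = np.linspace(0.1, 2.0, 200)
    log_post = np.empty_like(kappas)
    for i, k in enumerate(kappas):
        pred = heat_solve(k, u0, dx, dt, n_steps, bc, bc)
        log_lik = -0.5 * np.sum((data - pred) ** 2) / sigma**2
        log_prior = -np.log(k) - 0.5 * np.log(k) ** 2    # lognormal(0, 1)
        log_post[i] = log_lik + log_prior
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * (kappas[1] - kappas[0])         # normalize density
    ```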

  11. Multiengine Speech Processing Using SNR Estimator in Variable Noisy Environments

    Directory of Open Access Journals (Sweden)

    Ahmad R. Abu-El-Quran

    2012-01-01

    Full Text Available We introduce a multiengine speech processing system that can detect the location and the type of audio signal in variable noisy environments. The system detects the location of the audio source using a microphone array; it first examines the audio, determines whether it is speech or nonspeech, and then estimates the signal-to-noise ratio (SNR) using a Discrete-Valued SNR Estimator. Using this SNR value, instead of trying to adapt the speech signal to the speech processing system, we adapt the speech processing system to the surrounding environment of the captured speech signal. In this paper, we introduce the Discrete-Valued SNR Estimator and a multiengine classifier, using Multiengine Selection or Multiengine Weighted Fusion, and we use SI as an example speech processing task. The Discrete-Valued SNR Estimator achieves an accuracy of 98.4% in characterizing the environment's SNR. Compared to a conventional single-engine SI system, the improvement in accuracy was as high as 9.0% and 10.0% for Multiengine Selection and Multiengine Weighted Fusion, respectively.

  12. Population coding in sparsely connected networks of noisy neurons.

    Science.gov (United States)

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  13. Population Coding in Sparsely Connected Networks of Noisy Neurons

    Directory of Open Access Journals (Sweden)

    Bryan Patrick Tripp

    2012-05-01

    Full Text Available This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behaviour. However, population coding theory has often ignored network structure, or assumed discrete, fully-connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we model a sheet of cortical neurons with sparse, primarily local connections, and find that a network with this structure can encode multiple internal state variables with high signal-to-noise ratio. However, in our model, although connection probability varies with the distance between neurons, we find that the connections cannot be instantiated at random according to these probabilities, but must have additional structure if information is to be encoded with high fidelity.

  14. On robust signal reconstruction in noisy filter banks

    CERN Document Server

    Vikalo, H; Hassibi, B; Kailath, T; 10.1016/j.sigpro.2004.08.011

    2005-01-01

    We study the design of synthesis filters in noisy filter bank systems from an H∞ estimation point of view. The H∞ approach is most promising in situations where the statistical properties of the disturbances (arising from quantization, compression, etc.) in each subband of the filter bank are unknown, or are too difficult to model and analyze. For the important special case of unitary analysis polyphase matrices we obtain an explicit expression for the minimum achievable disturbance attenuation. For arbitrary analysis polyphase matrices, standard state-space H∞ techniques can be employed to obtain numerical solutions. When the synthesis filters are restricted to being FIR, as is often the case in practice, the design can be cast as a finite-dimensional semi-definite program. In this case, we can effectively exploit the inherent non-uniqueness of the H∞ solution to optimize for an additional criterion. By optimizing for average performance in addition to th...

  15. Stochastic perturbations in open chaotic systems: random versus noisy maps.

    Science.gov (United States)

    Bódai, Tamás; Altmann, Eduardo G; Endler, Antonio

    2013-04-01

    We investigate the effects of random perturbations on fully chaotic open systems. Perturbations can be applied to each trajectory independently (white noise) or simultaneously to all trajectories (random map). We compare these two scenarios by generalizing the theory of open chaotic systems and introducing a time-dependent conditionally-map-invariant measure. For the same perturbation strength we show that the escape rate of the random map is always larger than that of the noisy map. In random maps we show that the escape rate κ and the dimensions D of the relevant fractal sets often depend nonmonotonically on the intensity of the random perturbation. We discuss the accuracy (bias) and precision (variance) of finite-size estimators of κ and D, and show that the improvement of the precision of the estimations with the number of trajectories N is extremely slow (∝ 1/ln N). We also argue that the finite-size D estimators are typically biased. General theoretical results are combined with analytical calculations and numerical simulations in area-preserving baker maps.
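
    The noisy-map versus random-map distinction can be made concrete with a small simulation. The sketch below estimates the escape rate κ of an open tent map (an illustrative choice, not the paper's baker map) from the exponential decay of the survivor count, with the perturbation drawn either per trajectory (noisy map) or shared by all trajectories (random map).

    ```python
    import numpy as np

    def escape_rate(perturb, n_traj=100_000, n_steps=25, eps=0.05, seed=0):
        """Escape-rate estimate for a stochastically perturbed open tent map
        on [0, 1] (slope 3, so roughly a third of phase space escapes per step).

        perturb="noisy" draws an independent perturbation per trajectory;
        perturb="random" applies one shared perturbation to all trajectories.
        """
        rng = np.random.default_rng(seed)
        x = rng.random(n_traj)
        alive = np.ones(n_traj, dtype=bool)
        survivors = []
        for _ in range(n_steps):
            x = np.where(x < 0.5, 3.0 * x, 3.0 * (1.0 - x))   # open tent map
            if perturb == "noisy":
                x = x + eps * rng.normal(size=n_traj)         # i.i.d. per trajectory
            else:
                x = x + eps * rng.normal()                    # shared per time step
            alive &= (x >= 0.0) & (x <= 1.0)
            survivors.append(alive.sum())
        t = np.arange(1, n_steps + 1)
        s = np.asarray(survivors, dtype=float)
        ok = s > 100                       # fit only where counts are reliable
        # N(t) ~ exp(-kappa * t): kappa is minus the slope of log N(t)
        return -np.polyfit(t[ok], np.log(s[ok]), 1)[0]

    print("noisy map :", escape_rate("noisy"))
    print("random map:", escape_rate("random"))
    ```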

  16. Use of global context for handling noisy names in discussion texts of a homeopathy discussion forum

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-03-01

    Full Text Available The task of identifying named entities from the discussion texts in Web forums faces the challenge of noisy names. As the names are often misspelled or abbreviated, the conventional techniques have failed to detect the noisy names properly. In this paper we propose a global context based framework for handling the noisy names. The framework is tested on a named entity recognition system designed to identify the names from the discussion texts in a homeopathy diagnosis discussion forum. The proposed global context-based framework is found to be effective in improving the accuracy of the named entity recognition system.

  17. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  1. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission errors, which keep the final classification accuracy low. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, principal component analysis transform and Brovey transform) and then select the best fused image for the classification experiments. In the classification process, we choose four image classification algorithms (minimum distance, Mahalanobis distance, support vector machine and ISODATA) for a comparative experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and analyse the four classification results of the fused image. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to support vector machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
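
    Of the three fusion algorithms compared, the Brovey transform is the simplest to state; the sketch below is a generic implementation, with the array layout chosen for illustration.

    ```python
    import numpy as np

    def brovey_fusion(ms, pan, eps=1e-6):
        """Brovey-transform fusion of a multispectral image `ms`
        (H x W x B, already resampled to the panchromatic grid) with a
        high-resolution panchromatic band `pan` (H x W).

        Each band is scaled by pan / (sum of bands), injecting the
        high-resolution spatial detail into the spectral bands.
        """
        intensity = ms.sum(axis=2) + eps
        return ms * (pan / intensity)[..., None]
    ```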

  2. Postprocessing method to clean up streaks due to noisy detectors

    International Nuclear Information System (INIS)

    Tuy, H.K.; Mattson, R.A.

    1990-01-01

    This paper reports that, occasionally, one of the thousands of detectors in a CT scanner will intermittently produce erroneous data, creating streaks in the reconstructed image. The authors propose a method to identify and clean up the streaks automatically. To find the rays along which the data values are bad, a binary image registering the edges of the original image is created. Forward projection is applied to the binary image to single out the edges along each ray. Data along the views containing the identified bad rays are estimated by forward projecting the original image. Back projection of the negative of the estimated convolved data along these views onto the streaky image removes the streaks from the image. Image enhancement is achieved by back projecting the convolved data, estimated from the image after streak removal, along the views of the bad rays.

  3. Storage phosphor radiography of wrist fractures: a subjective comparison of image quality at varying exposure levels

    Energy Technology Data Exchange (ETDEWEB)

    Peer, Regina; Giacomuzzi, Salvatore M.; Bodner, Gerd; Jaschke, Werner; Peer, Siegfried [Innsbruck Univ. (Austria). Inst. fuer Radiologie; Lanser, Anton [Academy of Radiology Technicians, Innsbruck (Austria); Pechlaner, Sigurd [Department of Traumatology, University Hospital, Innsbruck (Austria); Kuenzel, Karl Heinz; Gaber, O. [Department of Anatomy and Histology, University Hospital, Innsbruck (Austria)

    2002-06-01

    Image quality of storage phosphor radiographs acquired at different exposure levels was compared to define the minimal radiation dose needed to achieve images that allow reliable detection of wrist fractures. In a study on 33 fractured anatomical wrist specimens, the image quality of storage phosphor radiographs was assessed on a diagnostic PACS workstation by three observers. Images were acquired at exposure levels corresponding to speed classes 100, 200, 400 and 800. Cortical bone surface, trabecular bone, soft tissues and fracture delineation were judged on a subjective basis. Image quality was rated according to a standard protocol, and statistical evaluation was performed based on an analysis of variance (ANOVA). Images at a dose reduction of 37% were rated as being of sufficient quality, without loss of diagnostic accuracy. Sufficient trabecular and cortical bone presentation was still achieved at a dose reduction of 62%; the latter images, however, were considered unacceptable for fracture detection. To achieve high-quality storage phosphor radiographs which allow a reliable evaluation of wrist fractures, a minimum exposure dose equivalent to a speed class of 200 is needed. For general-purpose skeletal radiography, however, a dose reduction of up to 62% can be achieved. A choice of exposure settings according to the clinical situation (ALARA principle) is recommended to achieve possible dose reductions. (orig.)

  4. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    Science.gov (United States)

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated the Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that the Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, the attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3): faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
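
    The central image statistic, the slope of the radially averaged Fourier power spectrum in log-log coordinates, can be computed along the following lines; the binning details are illustrative.

    ```python
    import numpy as np

    def fourier_slope(image, n_bins=50):
        """Slope of the radially averaged Fourier power spectrum in log-log
        coordinates for a 2-D grayscale array."""
        f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        power = np.abs(f) ** 2
        h, w = image.shape
        y, x = np.indices((h, w))
        r = np.hypot(x - w / 2, y - h / 2)                # radial frequency
        bins = np.logspace(0, np.log10(min(h, w) / 2), n_bins + 1)
        idx = np.digitize(r.ravel(), bins)
        p = power.ravel()
        radial = np.array([p[idx == i].mean() if np.any(idx == i) else np.nan
                           for i in range(1, n_bins + 1)])
        centers = np.sqrt(bins[:-1] * bins[1:])           # geometric bin centers
        ok = np.isfinite(radial) & (radial > 0)
        slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(radial[ok]), 1)
        return slope
    ```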

  5. Identifying FRBR Work-Level Data in MARC Bibliographic Records for Manifestations of Moving Images

    Directory of Open Access Journals (Sweden)

    Lynne Bisko

    2008-12-01

    Full Text Available The library metadata community is dealing with the challenge of implementing the conceptual model Functional Requirements for Bibliographic Records (FRBR). In response, the Online Audiovisual Catalogers (OLAC) created a task force to study the issues related to creating and using FRBR-based work-level records for moving images. This article presents one part of the task force's work: it looks at the feasibility of creating provisional FRBR work-level records for moving images by extracting data from existing manifestation-level bibliographic records. Using a sample of 941 MARC records, a subgroup of the task force conducted a pilot project to look at five characteristics of moving image works. Here they discuss their methodology, analysis, and selected results for two elements, original date (year) and director name, and conclude with some suggested changes to MARC coding and current cataloging policy.

  6. Noisy mean field game model for malware propagation in opportunistic networks

    KAUST Repository

    Tembine, Hamidou; Vilanova, Pedro; Debbah, Mérouane

    2012-01-01

    nodes is examined with a noisy mean field limit and compared to a deterministic one. The stochastic nature of the wireless environment makes stochastic approaches more realistic for such types of networks. By introducing control strategies, we show

  7. Methodology to estimate the relative pressure field from noisy experimental velocity data

    International Nuclear Information System (INIS)

    Bolin, C D; Raguin, L G

    2008-01-01

    The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and that includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent in the velocity measurements, three filters - a median filter and two physics-based filters - are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field at realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies three constraints simultaneously in a least-squares sense: consistency between the measured and filtered velocity fields, a divergence-free condition, and an additional smoothness condition. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared with no noise filtering, under conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 x 20 discretized flow domain (25 x 25 computational domain).
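
    The pressure-calculation stage, solving the Poisson pressure equation from a velocity field, can be sketched for a 2-D steady incompressible flow as follows. The noise filtering and the embedding of irregular domains described above are omitted, and the boundary treatment is simplified.

    ```python
    import numpy as np

    def relative_pressure(u, v, dx, rho=1.0, n_iter=5000):
        """Relative pressure from a 2-D steady incompressible velocity field by
        Jacobi iteration on the pressure Poisson equation
            lap(p) = -rho * (u_x**2 + 2*u_y*v_x + v_y**2),
        with homogeneous Neumann boundaries. A bare-bones sketch of the first
        stage only; viscous terms and irregular-domain embedding are omitted.
        """
        u_y, u_x = np.gradient(u, dx)          # axis 0 ~ y, axis 1 ~ x
        v_y, v_x = np.gradient(v, dx)
        rhs = -rho * (u_x**2 + 2 * u_y * v_x + v_y**2)
        p = np.zeros_like(u)
        for _ in range(n_iter):
            p_new = p.copy()
            p_new[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                        p[1:-1, 2:] + p[1:-1, :-2] -
                                        dx**2 * rhs[1:-1, 1:-1])
            # zero-gradient (Neumann) boundaries
            p_new[0, :], p_new[-1, :] = p_new[1, :], p_new[-2, :]
            p_new[:, 0], p_new[:, -1] = p_new[:, 1], p_new[:, -2]
            p = p_new
        return p - p.mean()    # pressure is relative: fix the mean to zero
    ```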

  8. Pandemics and immune memory in the noisy Penna model

    Science.gov (United States)

    Cebrat, Stanisław; Bonkowska, Katarzyna; Biecek, Przemysław

    2007-06-01

    In the noisy Penna model of ageing, instead of counting the number of defective loci which eventually kill an individual, a noise term describing the health status of individuals is introduced. This white noise is composed of two components: an environmental one and a personal one. If the sum of both trespasses the limit set for the individual's homeodynamics, the individual dies. The energy of the personal fluctuations depends on the number of defective loci expressed in the individual's genome. The environmental fluctuations, the same for all individuals, can include signals corresponding to exposure to pathogens which could be dangerous for a fraction of the organisms. The personal noise and the random environmental fluctuations, when superimposed on such a signal, can be life threatening if together they are stronger than the limit set for the individual's homeodynamics. Nevertheless, some organisms survive the period of the dangerous signal and may remember it in the future, just as antigens are remembered by our immune systems. Unfortunately, this memory weakens with time and, even worse, some additional defective genes are switched on during ageing. If the same pathogens (signals) emerge again during the lifespan of the population, a fraction of the population can remember them and respond with increased resistance. Again, unfortunately, some individuals whose memory is too weak and whose health status has worsened due to accumulated mutations have to die. Thus a fraction of individuals can survive a pandemic thanks to immune memory, but another fraction of the population has no such memory, because they were born after the last pandemic or did not register it. Our simple model, by implementing noise instead of a deterministic threshold of genetic defects, describes how the impact of pandemics on populations depends on the time which elapsed between the two incidents and how the different age groups of

  9. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    International Nuclear Information System (INIS)

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different "realities" lead to identical images, a single image cannot reveal which particular reality is the truth. MSI techniques can be divided into three categories based on the relative information content of the original images and that of the desired representation: (1) "detail enhancement," wherein the relative information content of the original images is less rich than the desired representation; (2) "data enhancement," wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) "conceptual enhancement," wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms, due to their typical reliance on global or large-neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel-level rather than global-level features

  10. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  11. Varying ultrasound power level to distinguish surgical instruments and tissue.

    Science.gov (United States)

    Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E

    2018-03-01

    We investigate a new framework for surgical instrument detection based on power-varying ultrasound images and simple, efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identify the instrument at an estimated optimal power level by comparing pixel values across images acquired at varying transducer power levels. The proposed framework exploits the physics of the ultrasound imaging system, varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image guidance is motivated by our observation that ultrasound imaging at different power levels exhibits different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus can be utilized together with images at normal transducer power levels to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments in phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images by power adjustments.
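
    The pixel-wise idea can be sketched with two co-registered frames: tissue echoes fade at low transmit power while strong metal reflections persist. The thresholds below are illustrative, not the paper's values.

    ```python
    import numpy as np

    def instrument_mask(img_low, img_normal, floor=0.3, persistence=0.6):
        """Pixel-wise instrument/tissue separation from two co-registered
        ultrasound frames acquired at a low and a normal transducer power
        level (both scaled to [0, 1]).

        Rationale following the paper: lowering the transmit power suppresses
        tissue echoes far more than the strong reflections of metal, so pixels
        that remain bright at low power are flagged as instrument.
        """
        bright_at_low_power = img_low > floor
        intensity_persists = img_low > persistence * img_normal
        return bright_at_low_power & intensity_persists
    ```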

  12. Fluid-fluid level on MR image: significance in Musculoskeletal diseases

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hye Won; Lee, Kyung Won [Seoul Naitonal University, Seoul (Korea, Republic of). Coll. of Medicine; Song, Chi Sung [Seoul City Boramae Hospital, Seoul (Korea, Republic of); Han, Sang Wook; Kang, Heung Sik [Seoul Naitonal University, Seoul (Korea, Republic of). Coll. of Medicine

    1998-01-01

    To evaluate the frequency, number and signal intensity of fluid-fluid levels of musculoskeletal diseases on MR images, and to determine the usefulness of this information for the differentiation of musculoskeletal diseases. MR images revealed fluid-fluid levels in the following diseases: giant cell tumor(6), telangiectatic osteosarcoma(4), aneurysmal bone cyst(3), synovial sarcoma(3), chondroblastoma(2), soft tissue tuberculous abscess(2), hematoma(2), hemangioma(1), neurilemmoma(1), metastasis(1), malignant fibrous histiocytoma(1), bursitis(1), pyogenic abscess(1), and epidermoid inclusion cyst(1). Fourteen benign tumors, ten malignant tumors, three abscesses, and the epidermoid inclusion cyst showed only one fluid-fluid level in a unilocular cyst. On T1-weighted images, the signal intensities of the fluid varied, but on T2-weighted images, the superior layers were in most cases more hyperintense than the inferior layers. Because fluid-fluid levels are a nonspecific finding, it is difficult to diagnose each disease specifically according to the number of fluid-fluid levels or the signal intensity of the fluid. In spite of this nonspecificity, fluid-fluid levels were frequently seen in cases of giant cell tumor, telangiectatic osteosarcoma, aneurysmal bone cyst, and synovial sarcoma. Nontumorous diseases such as abscesses and hematomas also demonstrated this finding. (author). 11 refs., 1 tab., 4 figs.

  13. Acute cervical spine injuries: prospective MR imaging assessment at a level 1 trauma center.

    Science.gov (United States)

    Katzberg, R W; Benedetti, P F; Drake, C M; Ivanovic, M; Levine, R A; Beatty, C S; Nemzek, W R; McFall, R A; Ontell, F K; Bishop, D M; Poirier, V C; Chong, B W

    1999-10-01

    To determine the weighted average sensitivity of magnetic resonance (MR) imaging in the prospective detection of acute neck injury and to compare these findings with those of a comprehensive conventional radiographic assessment. Conventional radiography and MR imaging were performed in 199 patients presenting to a level 1 trauma center with suspected cervical spine injury. Weighted sensitivities and specificities were calculated, and a weighted average across eight vertebral levels from C1 to T1 was formed. Fourteen parameters indicative of acute injury were tabulated. Fifty-eight patients had 172 acute cervical injuries. MR imaging depicted 136 (79%) acute abnormalities and conventional radiography depicted 39 (23%). For assessment of acute fractures, MR images (weighted average sensitivity, 43%; CI: 21%, 66%) were comparable to conventional radiographs (weighted average sensitivity, 48%; CI: 30%, 65%). MR imaging was superior to conventional radiography in the evaluation of pre- or paravertebral hemorrhage or edema, anterior or posterior longitudinal ligament injury, traumatic disk herniation, cord edema, and cord compression. Cord injuries were associated with cervical spine spondylosis (P < .05), acute fracture (P < .001), and canal stenosis (P < .001). MR imaging is more accurate than radiography in the detection of a wide spectrum of neck injuries, and further study is warranted of its potential effect on medical decision making, clinical outcome, and cost-effectiveness.

  14. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. First, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, then specific area and line features are used to refine the result, and after that sub-pixel matching is applied using a KNN graph. Second, an iterative strategy involving adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Third, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into spectral point matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  15. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    International Nuclear Information System (INIS)

    Rybovic, Michala; Halkett, Georgia K.; Banati, Richard B.; Cox, Jennifer

    2008-01-01

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour

  16. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform results in a remarkable peak at the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. The other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication, in which both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  17. [Influence of human body target's spectral characteristics on visual range of low light level image intensifiers].

    Science.gov (United States)

    Zhang, Jun-Ju; Yang, Wen-Bin; Xu, Hui; Liu, Lei; Tao, Yuan-Yaun

    2013-11-01

    To study the effect of different human targets' spectral reflectance characteristics on the visual range of low light level (LLL) image intensifiers, we established an equation for the spectral reflectance distribution of a human body target, based on the spectral characteristics of night-sky radiation and the spectral reflectance coefficients of common clothes. We analyzed the spectral reflectance characteristics of human targets wearing clothes of different colors and materials and, starting from the actual detection-range equation of the LLL image intensifier, discussed the detection capability of LLL image intensifiers for different human targets. The study shows that the effect of a human target's spectral reflectance characteristics on the LLL image intensifier's range is mainly reflected in the average reflectivity ρ̄ and the initial contrast between target and background C0. The reflectance coefficient and spectral reflection intensity of cotton clothes are higher than those of polyester clothes, and the detection capability of an LLL image intensifier is stronger for a human target wearing cotton clothes. Experimental results show that LLL image intensifiers have longer visual ranges for targets wearing cotton clothes than for targets wearing polyester clothes of the same color, and longer visual ranges for targets wearing light-colored clothes than for targets wearing dark-colored clothes. Moreover, under full-moon illumination conditions, LLL image intensifiers are more sensitive to the clothes' material.

  18. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Full Text Available Abstract Background Accurate analysis of CT brain scans is vital for the diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of the ventricles provides quantitative measures of the changes of the ventricles in the brain, which form vital diagnostic information. Methods First, all CT slices are aligned by detecting the ideal midlines in all images. The initial estimate of the ideal midline of the brain is found based on skull symmetry and is then further refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level segmentation is applied to each pixel of the CT images; for this step, both Iterated Conditional Modes (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results Experiments show that the acceptance rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate the ventricle recognition results: the first is a sensitivity-like measure and the second a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion The experiments show the reliability of the proposed algorithms. The

  19. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the Chan-Vese (CV) level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as the ground truth to evaluate the proposed method. The method is tested on 28 samples and achieves a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
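
    A minimal pipeline in the spirit of this method (linear contrast stretching followed by a region-based Chan-Vese evolution, here via scikit-image's morphological variant) might look as follows; the paper's modified global/local energies and shape-based stopping factor are omitted, and the synthetic input is a stand-in for a real microscopy image.

    ```python
    import numpy as np
    from skimage import exposure
    from skimage.segmentation import morphological_chan_vese

    # Synthetic stand-in for a stained bone-marrow field: dark elliptical
    # blobs (parasite-like) on a brighter, noisy background.
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    img = 0.8 + 0.05 * rng.normal(size=(128, 128))
    for cy, cx in [(40, 40), (80, 90), (100, 30)]:
        img[((yy - cy) ** 2 + (xx - cx) ** 2) < 90] = 0.3

    # Linear contrast stretching between the 2nd and 98th percentiles.
    p2, p98 = np.percentile(img, (2, 98))
    img = exposure.rescale_intensity(img, in_range=(p2, p98), out_range=(0.0, 1.0))

    # Region-based (Chan-Vese) evolution; the iteration count is passed
    # positionally because its keyword name differs across scikit-image versions.
    mask = morphological_chan_vese(img, 80, init_level_set="checkerboard",
                                   smoothing=2)
    ```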

  1. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general ... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.
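
    Template-based conditioning can be illustrated by estimating, for each pixel, a probability conditioned on a 10-pixel causal template and charging the ideal arithmetic-coding cost of -log2 p bits. The template layout and the +1 smoothing below are illustrative, not the JBIG specification.

    ```python
    import numpy as np

    def template_code_length(img):
        """Estimated lossless code length (in bits) of a 0/1 bi-level image
        under an adaptive template context model, JBIG-style.

        Each pixel is coded with a probability conditioned on 10 causal
        neighbours; an ideal arithmetic coder spends -log2 p bits per symbol.
        """
        h, w = img.shape
        pad = np.pad(img, 2)                      # causal neighbourhood access
        offsets = [(-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
                   (-2, -1), (-2, 0), (-2, 1), (0, -2), (0, -1)]
        counts = np.ones((2 ** 10, 2))            # +1 (Laplace) smoothing
        bits = 0.0
        for y in range(h):
            for x in range(w):
                ctx = 0
                for dy, dx in offsets:
                    ctx = (ctx << 1) | int(pad[y + 2 + dy, x + 2 + dx])
                pix = int(img[y, x])
                p = counts[ctx, pix] / counts[ctx].sum()
                bits += -np.log2(p)               # ideal arithmetic-coder cost
                counts[ctx, pix] += 1
        return bits

    rng = np.random.default_rng(0)
    img = (rng.random((64, 64)) < 0.1).astype(int)   # sparse synthetic image
    print("bits/pixel:", template_code_length(img) / img.size)
    ```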

  2. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general ... to the specialized soft pattern matching techniques which work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.
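
    Both records revolve around the same mechanism: each pixel is coded by an arithmetic coder driven by a conditional probability looked up from a template of already-coded neighbors. A toy estimate of the resulting code length, assuming an ideal arithmetic coder and a deliberately small 4-pixel template (JBIG templates are larger):

```python
import numpy as np

def template_code_length(bitmap):
    """Estimate the coded size (in bits) of a bi-level (0/1) image when
    each pixel is coded with an adaptive probability conditioned on a
    4-pixel causal template (W, NW, N, NE)."""
    img = np.pad(bitmap.astype(np.uint8), 1, constant_values=0)
    counts = np.ones((16, 2))   # Laplace-smoothed (zero, one) counts per context
    bits = 0.0
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            ctx = (img[y, x-1] | img[y-1, x-1] << 1 |
                   img[y-1, x] << 2 | img[y-1, x+1] << 3)
            b = img[y, x]
            bits += -np.log2(counts[ctx, b] / counts[ctx].sum())
            counts[ctx, b] += 1   # adapt the context model as we code
    return bits
```

    Comparing the returned value with the raw pixel count gives a feel for how much template conditioning buys on a given bitmap.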

  3. Effect of administered radioactive dose level on image quality of brain perfusion imaging with 99mTc-HMPAO

    Directory of Open Access Journals (Sweden)

    I.Armeniakos

    2008-01-01

    Full Text Available Brain perfusion imaging by means of 99mTc-labeled hexamethyl propylene amine oxime (HMPAO) is a well-established Nuclear Medicine diagnostic procedure. The administered dose range recommended by the supplying company and reported in the bibliography is rather wide (approximately 9.5-27 mCi). This fact necessitates further quantitative analysis of the technique, so as to minimise patient absorbed dose without compromising the examination's diagnostic value. In this study, a quantitative evaluation of the radiopharmaceutical performance for different values of administered dose (10, 15, 20 mCi) was carried out. Subsequently, a generic image quality index was correlated with the administered dose to produce an overall performance indicator. Through this cost-to-benefit type analysis, the necessity of administering higher radioactive dose levels in order to perform the specific diagnostic procedure was examined. Materials & methods: The study was based on a sample of 78 patients (56 administered with 10 mCi, 10 with 15 mCi and 12 with 20 mCi). Some patients were classified as normal, while others presented various forms of pathology. Evaluation of image quality was based on contrast, noise and contrast-to-noise ratio indicators, denoted CI, NI and CNR respectively. Calculation of all indicators was based on the wavelet transform. An overall performance indicator (denoted PI), produced as the ratio of CNR to administered dose, was also calculated. Results: Calculation of the skewness parameter revealed the normality of the CI, NI populations and the non-normality of the CNR, PI populations. Application of appropriate statistical tests (analysis of variance for normal and Kruskal-Wallis test for non-normal populations) showed a statistically significant difference in CI (p < 0.05) values. Application of the Tukey test for the normal populations CI, NI led to the conclusion that CI(10 mCi) = CI(20 mCi) [...] NI(20 mCi), while NI(15 mCi) could not be characterised. Finally, application of non ...
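
    The CI/NI/CNR indicators here are computed on the wavelet transform, and the paper's exact definitions are not given in the abstract; a generic ROI-based version of the same three quantities, plus the per-dose performance index, looks like this (the ROI layout and these particular definitions are assumptions for illustration):

```python
import numpy as np

def image_quality_indices(img, signal_roi, background_roi, dose_mci):
    """Generic contrast (CI), noise (NI), contrast-to-noise (CNR) and
    per-dose performance (PI) indicators from two rectangular ROIs,
    each given as (row_start, row_stop, col_start, col_stop)."""
    sy0, sy1, sx0, sx1 = signal_roi
    by0, by1, bx0, bx1 = background_roi
    sig = img[sy0:sy1, sx0:sx1].astype(float)
    bkg = img[by0:by1, bx0:bx1].astype(float)
    ci = abs(sig.mean() - bkg.mean()) / (sig.mean() + bkg.mean())
    ni = bkg.std()
    cnr = abs(sig.mean() - bkg.mean()) / bkg.std()
    pi = cnr / dose_mci   # benefit per administered mCi
    return ci, ni, cnr, pi
```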

  4. Dissecting random and systematic differences between noisy composite data sets.

    Science.gov (United States)

    Diederichs, Kay

    2017-04-01

    Composite data sets measured on different objects are usually affected by random errors, but may also be influenced by systematic (genuine) differences in the objects themselves, or the experimental conditions. If the individual measurements forming each data set are quantitative and approximately normally distributed, a correlation coefficient is often used to compare data sets. However, the relations between data sets are not obvious from the matrix of pairwise correlations since the numerical value of the correlation coefficient is lowered by both random and systematic differences between the data sets. This work presents a multidimensional scaling analysis of the pairwise correlation coefficients which places data sets into a unit sphere within low-dimensional space, at a position given by their CC* values [as defined by Karplus & Diederichs (2012), Science, 336, 1030-1033] in the radial direction and by their systematic differences in one or more angular directions. This dimensionality reduction can not only be used for classification purposes, but also to derive data-set relations on a continuous scale. Projecting the arrangement of data sets onto the subspace spanned by systematic differences (the surface of a unit sphere) allows, irrespective of the random-error levels, the identification of clusters of closely related data sets. The method gains power with increasing numbers of data sets. It is illustrated with an example from low signal-to-noise ratio image processing, and an application in macromolecular crystallography is shown, but the approach is completely general and thus should be widely applicable.
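
    As a rough illustration of the dimensionality-reduction step (a generic embedding of data sets from their pairwise correlations, not the paper's exact radial CC*/angular construction), metric multidimensional scaling on a correlation-derived dissimilarity already groups related data sets; the square-root conversion below is one common choice and is assumed here:

```python
import numpy as np
from sklearn.manifold import MDS

def embed_datasets(cc, n_components=2):
    """Embed data sets as points from their pairwise correlation matrix
    `cc` (symmetric, ones on the diagonal) via metric MDS."""
    dissimilarity = np.sqrt(np.clip(2.0 * (1.0 - cc), 0.0, None))
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=0)
    return mds.fit_transform(dissimilarity)   # one row per data set
```

    Clusters in the returned coordinates then correspond to data sets whose differences are systematic rather than purely random.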

  5. Examination of image diagnosis system at high level emergency medical service

    International Nuclear Information System (INIS)

    Hirose, Masaharu; Endo, Toshio; Aoki, Tomio

    1983-01-01

    This report describes the basic concept of an imaging system, focusing on the X-ray system required for high-level emergency care, developed in connection with the establishment of an independent emergency medical institute specializing in tertiary lifesaving and emergency medicine, and reviews the satisfactory results obtained over approximately three years of use. (author)

  6. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    ... facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...

  7. Exploiting street-level panoramic images for large-scale automated surveying of traffic sign

    NARCIS (Netherlands)

    Hazelhoff, L.; Creusen, I.M.; With, de P.H.N.

    2014-01-01

    Accurate and up-to-date inventories of traffic signs contribute to efficient road maintenance and high road safety. This paper describes a system for the automated surveying of road signs from street-level images. This is an extremely challenging task, as the involved capturings are non-densely ...

  8. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts ...

  9. Multi-domain, higher order level set scheme for 3D image segmentation on the GPU

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Zhang, Qin; Anton, François

    2010-01-01

    ... to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...

  10. Diagnostic accuracy at several reduced radiation dose levels for CT imaging in the diagnosis of appendicitis

    Science.gov (United States)

    Zhang, Di; Khatonabadi, Maryam; Kim, Hyun; Jude, Matilda; Zaragoza, Edward; Lee, Margaret; Patel, Maitraya; Poon, Cheryce; Douek, Michael; Andrews-Tang, Denise; Doepke, Laura; McNitt-Gray, Shawn; Cagnon, Chris; DeMarco, John; McNitt-Gray, Michael

    2012-03-01

    Purpose: While several studies have investigated the tradeoffs between radiation dose and image quality (noise) in CT imaging, the purpose of this study was to take this analysis a step further by investigating the tradeoffs between patient radiation dose (including organ dose) and diagnostic accuracy in the diagnosis of appendicitis using CT. Methods: This study was IRB approved and utilized data from 20 patients who underwent clinical CT exams for indications of appendicitis. Medical record review established the true diagnosis of appendicitis, with 10 positives and 10 negatives. A validated software tool used the raw projection data from each scan to create simulated images at lower dose levels (70%, 50%, 30%, 20% of the original). An observer study was performed with 6 radiologists reviewing each case at each dose level in random order over several sessions. Readers assessed image quality and provided confidence in their diagnosis of appendicitis, each on a 5-point scale. Liver doses for each case and each dose level were estimated using Monte Carlo simulation based methods. Results: Overall diagnostic accuracy varied across dose levels: 92%, 93%, 91%, 90% and 90% at the 100%, 70%, 50%, 30% and 20% dose levels, respectively, and 93%, 95%, 88%, 90% and 90% across the 13.5-22 mGy, 9.6-13.5 mGy, 6.4-9.6 mGy, 4-6.4 mGy, and 2-4 mGy liver dose ranges, respectively. Only 4 out of 600 observations were rated "unacceptable" for image quality. Conclusion: The results from this pilot study indicate that diagnostic accuracy does not change dramatically even at significantly reduced radiation dose.

  11. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  12. Analysis of gene expression levels in individual bacterial cells without image segmentation.

    Science.gov (United States)

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J

    2012-05-11

    Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly. Copyright © 2012 Elsevier Inc. All rights reserved.
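
    The segmentation-free idea reduces to treating every pixel as a sample and fitting the relation between phase-contrast and fluorescence brightness; in the simplest case of a single expression level (an assumption of this sketch, simpler than the paper's physical model), an ordinary straight-line fit already yields the estimate:

```python
import numpy as np

def expression_from_correlation(phase, fluor, mask):
    """Fit fluorescence vs. phase-contrast intensity over the pixels in a
    rough foreground mask; the slope serves as an expression-level
    estimate without delineating individual cell boundaries."""
    x = phase[mask].astype(float).ravel()
    y = fluor[mask].astype(float).ravel()
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept
```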

  13. Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image Recuperation Approach

    Directory of Open Access Journals (Sweden)

    K. Somasundaram

    2015-01-01

    Full Text Available Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudates detection, respectively, but they did not incorporate a more detailed feature selection technique into the system for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain features based on histogram values using a Group Sparsity Non-overlapping Function. Using a support vector model in the second phase, the DFIR method based on a Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of the disease level on each candidate set provides a promising basis for developing a practical automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method evaluates factors such as sensitivity, ranking efficiency, and feature selection time.

  14. Diagnosing and ranking retinopathy disease level using diabetic fundus image recuperation approach.

    Science.gov (United States)

    Somasundaram, K; Rajendran, P Alli

    2015-01-01

    Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudates detection, respectively, but they did not incorporate a more detailed feature selection technique into the system for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain features based on histogram values using a Group Sparsity Non-overlapping Function. Using a support vector model in the second phase, the DFIR method based on a Spiral Basis Function effectively ranks the diabetic retinopathy disease level. The ranking of the disease level on each candidate set provides a promising basis for developing a practical automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method evaluates factors such as sensitivity, ranking efficiency, and feature selection time.

  15. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling and noise filtering, and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
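
    Bags of visual words summarize an image as a histogram over a learned codebook of local patches, and the record's key speed trick is sampling with large regions. A generic sketch (patch size, stride, codebook size and the raw-pixel descriptor are illustrative assumptions, not the paper's choices):

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_histograms(images, n_words=64, patch=32, stride=32, seed=0):
    """Learn a codebook from large, sparsely sampled patches and encode
    each grayscale image as a normalized bag-of-visual-words histogram."""
    def patches(img):
        H, W = img.shape
        return [img[y:y+patch, x:x+patch].ravel().astype(float)
                for y in range(0, H - patch + 1, stride)
                for x in range(0, W - patch + 1, stride)]
    codebook = KMeans(n_clusters=n_words, random_state=seed)
    codebook.fit(np.vstack([p for img in images for p in patches(img)]))
    hists = []
    for img in images:
        words = codebook.predict(np.vstack(patches(img)))
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / h.sum())
    return np.array(hists)   # feed these to any off-the-shelf classifier
```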

  16. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use a variational level set method and transition region extraction techniques to perform image segmentation. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. We then integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Thanks to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
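
    The morphological gradient at the heart of the first step is just dilation minus erosion; thresholding it yields a band of pixels straddling object boundaries that can seed the level set. A minimal sketch with scikit-image (the Otsu threshold rule and the disk radius are assumptions):

```python
from skimage.filters import threshold_otsu
from skimage.morphology import dilation, disk, erosion

def transition_region(gray, radius=2):
    """Morphological gradient (dilation minus erosion), thresholded to a
    boundary band suitable for initializing an active contour."""
    se = disk(radius)
    grad = dilation(gray, se) - erosion(gray, se)
    return grad > threshold_otsu(grad)
```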

  17. Effect of blood glucose levels on image quality in 18F fluorodeoxyglucose scanning - a case report

    International Nuclear Information System (INIS)

    Szeto, E.; Keane, J.

    2000-01-01

    Full text: In December last year, a 71-year-old gentleman presented to the Nuclear Medicine Department at St Vincent's Hospital, Sydney for an FDG coincidence detection positron emission scan. The patient had cancer of the lung with a large lesion in the left upper lobe and a small lesion in the right middle lobe. On initial investigation, this patient had a blood sugar level of 17 mmol/L, which was eventually reduced to 6.7 mmol/L just prior to scanning. The patient was then asked to return to be rescanned without his blood sugar levels being adjusted. Just prior to his second scan, his blood sugar level was 15.4 mmol/L. The aim of repeating the initial scan was to see just how important a role blood sugar levels play in the quality of a coincidence detection PET scan. The first scan showed excellent image quality, while the repeated scan showed markedly inferior image quality due to unwanted soft tissue FDG uptake. In conclusion, blood sugar levels play a significant role in output image quality in FDG coincidence detection positron emission scanning. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  18. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  19. InGaAs focal plane arrays for low-light-level SWIR imaging

    Science.gov (United States)

    MacDougal, Michael; Hood, Andrew; Geske, Jon; Wang, Jim; Patel, Falgun; Follman, David; Manzo, Juan; Getty, Jonathan

    2011-06-01

    Aerius Photonics will present their latest developments in large InGaAs focal plane arrays, which are used for low-light-level imaging in the short wavelength infrared (SWIR) regime. Aerius will present imaging in both 1280x1024 and 640x512 formats, along with characterization of the FPAs, including dark current measurements. Aerius will also show the results of the development of SWIR FPAs for high temperatures, including imagery and dark current data. Finally, Aerius will show results of using the SWIR camera with Aerius' SWIR illuminators based on VCSEL technology.

  20. Raised Anxiety Levels Among Outpatients Preparing to Undergo a Medical Imaging Procedure: Prevalence and Correlates.

    Science.gov (United States)

    Forshaw, Kristy L; Boyes, Allison W; Carey, Mariko L; Hall, Alix E; Symonds, Michael; Brown, Sandy; Sanson-Fisher, Rob W

    2018-04-01

    To examine the percentage of patients with raised state anxiety levels before undergoing a medical imaging procedure; their attribution of procedural-related anxiety or worry; and sociodemographic, health, and procedural characteristics associated with raised state anxiety levels. This prospective cross-sectional study was undertaken in the outpatient medical imaging department at a major public hospital in Australia, with institutional board approval. Adult outpatients undergoing a medical imaging procedure (CT, x-ray, MRI, ultrasound, angiography, or fluoroscopy) completed a preprocedural survey. Anxiety was measured by the short-form state scale of the six-item State-Trait Anxiety Inventory (STAI: Y-6). The number and percentage of participants who reported raised anxiety levels (defined as a STAI: Y-6 score ≥ 33.16) and their attribution of procedural-related anxiety or worry were calculated. Characteristics associated with raised anxiety were examined using multiple logistic regression analysis. Of the 548 (86%) patients who consented to participate, 488 (77%) completed all STAI: Y-6 items. Half of the participants (n = 240; 49%) experienced raised anxiety, and of these, 48% (n = 114) reported feeling most anxious or worried about the possible results. Female gender, imaging modality, medical condition, first time having the procedure, and lower patient-perceived health status were statistically significantly associated with raised anxiety levels. Raised anxiety is common before medical imaging procedures and is mostly attributed to the possible results. Providing increased psychological preparation, particularly to patients with circulatory conditions or neoplasms or those that do not know their medical condition, may help reduce preprocedural anxiety among these subgroups. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but that the contour interior rarely included pixels judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early. Conclusion ...
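
    All of the record's accuracy metrics are computable from two point-sampled contours; a sketch of MAD, RMSD and the symmetric Hausdorff distance with SciPy, assuming each contour is an N x 2 array of boundary coordinates in millimetres:

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def contour_metrics(auto_pts, truth_pts):
    """MAD, RMSD and Hausdorff distance between two sampled contours."""
    d = cdist(auto_pts, truth_pts).min(axis=1)   # nearest truth point per point
    mad = d.mean()
    rmsd = np.sqrt((d ** 2).mean())
    hd = max(directed_hausdorff(auto_pts, truth_pts)[0],
             directed_hausdorff(truth_pts, auto_pts)[0])
    return mad, rmsd, hd
```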

  2. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  3. An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data

    Science.gov (United States)

    Ghorbani, Esmaeil; Cha, Young-Jin

    2018-04-01

    Structural and mechanical system identification under dynamic loading has been an important research topic over the last three or four decades. Many Kalman-filtering-based approaches have been developed for linear and nonlinear systems. For example, to predict nonlinear systems, an unscented Kalman filter was applied. However, extensive literature reviews show that the unscented Kalman filter still performs weakly on systems with large degrees of freedom. In this research, a modified unscented Kalman filter is proposed by integrating a cubature Kalman filter to improve the system identification performance for systems with large degrees of freedom. The novelty of this work lies in conjugating the unscented transform with the cubature integration concept to find a more accurate output from the transformation of the state vector and its related covariance matrix. To evaluate the proposed method, three different numerical models (i.e., the single degree-of-freedom Bouc-Wen model, the linear 3-degrees-of-freedom system, and the 10-degrees-of-freedom system) are investigated. To evaluate the robustness of the proposed method, high levels of noise in the measured response data are considered. The results show that the proposed method is significantly superior to the traditional UKF for noisy measured data in systems with large degrees of freedom.

  4. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in the CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of the signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • The proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
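
    A common way to realize an inequality-constrained Kalman filter is to run the usual predict/update steps and then project the updated state onto the constraint set; for simple box constraints the projection is a clipping operation. A scalar sketch under those assumptions (random-walk signal model, fixed noise variances and the bounds lo/hi are all illustrative, not the paper's model):

```python
import numpy as np

def constrained_kalman_1d(z, q=1e-4, r=1e-2, lo=0.0, hi=1.0):
    """Scalar random-walk Kalman filter whose updated state is projected
    onto [lo, hi] after every step."""
    x, p = float(z[0]), 1.0
    out = []
    for zi in z:
        p += q                      # predict: random-walk process noise
        k = p / (p + r)             # Kalman gain
        x += k * (zi - x)           # measurement update
        p *= 1.0 - k
        x = min(max(x, lo), hi)     # project onto the constraint set
        out.append(x)
    return np.array(out)
```

    More elaborate constraint sets (e.g., monotone degradation signals) replace the clipping line with a small quadratic program, but the filtering skeleton stays the same.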

  5. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Full Text Available Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  6. NOISY DISPERSION CURVE PICKING (NDCP): a Matlab friendly suite package for fully control dispersion curve picking

    Science.gov (United States)

    Granados, I.; Calo, M.; Ramos, V.

    2017-12-01

    We developed a Matlab suite package (NDCP, Noisy Dispersion Curve Picking) that allows full control over the parameters needed to correctly identify group velocity dispersion curves in two types of datasets: correlograms between two stations and surface wave records from earthquakes. Using frequency-time analysis (FTAN), the procedure to obtain dispersion curves from records with a high noise level becomes difficult, and the picked curve can sometimes be misinterpreted. For correlogram functions, obtained by cross-correlation of noise records or earthquake codas, a non-homogeneous distribution of noise sources yields a non-symmetric Green's function (GF); to retrieve the complete information contained there, NDCP allows picking the dispersion curve in the time domain in both the causal and non-causal parts of the GF. The picked dispersion curve is then displayed on the FTAN diagram to check that it matches the maximum of the signal energy, avoiding confusion with overtones or noise spikes. To illustrate how NDCP performs, we show examples using: i) local correlogram functions obtained from sensors deployed in a volcanic caldera (Los Humeros, in Puebla, Mexico), ii) regional correlogram functions between two stations of the National Seismological Service (SSN, Servicio Sismológico Nacional in Spanish), and iii) a surface wave seismic record of an earthquake located off the Pacific coast of Mexico and recorded by the SSN. This work is supported by the GEMEX project (Geothermal Europe-Mexico consortium).
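
    The FTAN step underneath such picking is compact: filter the record through a bank of narrow Gaussian bandpass filters, take the envelope via the analytic signal, and read the group arrival off the envelope peak at each period. A rudimentary Python sketch of that idea (not NDCP itself; the filter width alpha is an assumed tuning parameter):

```python
import numpy as np
from scipy.signal import hilbert

def ftan_group_velocity(trace, dt, distance_km, periods, alpha=20.0):
    """Rudimentary FTAN: Gaussian narrow-band filtering around each
    period, envelope from the analytic signal, group velocity from the
    envelope-peak arrival time."""
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)
    t = np.arange(1, n) * dt          # skip t = 0 to avoid dividing by zero
    velocities = []
    for T in periods:
        f0 = 1.0 / T
        gauss = np.exp(-alpha * ((freqs - f0) / f0) ** 2)
        narrow = np.fft.irfft(spec * gauss, n)
        env = np.abs(hilbert(narrow))[1:]
        velocities.append(distance_km / t[np.argmax(env)])
    return np.array(velocities)      # group velocity (km/s) per period
```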

  7. Relationship between levels of serum creatinine and perirenal hyperintensity on heavily T2-weighted MR images

    Energy Technology Data Exchange (ETDEWEB)

    Erden, Ayse, E-mail: ayse.erden@medicine.ankara.edu.tr [Ankara University, School of Medicine, Department of Radiology, Talatpasa Bulvari, Sihhiye, 06100 Ankara (Turkey); Sahin, Burcu Savran, E-mail: bsavrans@hotmail.com [Ankara University, School of Medicine, Department of Radiology, Talatpasa Bulvari, Sihhiye, 06100 Ankara (Turkey); Orgodol, Horolsuren, E-mail: hoogii99@yahoo.com [Ankara University, School of Medicine, Department of Radiology, Talatpasa Bulvari, Sihhiye, 06100 Ankara (Turkey); Erden, Ilhan, E-mail: erden@medicine.ankara.edu.tr [Ankara University, School of Medicine, Department of Radiology, Talatpasa Bulvari, Sihhiye, 06100 Ankara (Turkey); Biyikli, Zeynep, E-mail: zeynep.biyikli@gmail.com [Ankara University, School of Medicine, Department of Biostatistics, Talatpasa Bulvari, Sihhiye, 06100 Ankara (Turkey)

    2011-11-15

    Objective: To determine the frequency of perirenal hyperintensity on heavily T2-weighted images and to evaluate its relationship with serum creatinine levels. Subjects and methods: Axial and coronal single-shot fast spin-echo images originally obtained for MR cholangiopancreatography in 150 subjects were examined individually by two observers for the presence of perirenal hyperintensity. The morphologic properties of perirenal hyperintensity (peripheral rim-like, discontinuous, polar) were recorded. The chi-square test was used to test whether the frequencies of bilateral perirenal hyperintensity differ significantly between subjects with high serum creatinine levels and those with normal creatinine levels. This test was also used to compare the frequencies of perirenal hyperintensity in patients with and without renal cysts and in patients with and without corticomedullary differentiation. A p value of less than 0.05 was considered statistically significant. Results: Perirenal hyperintensity was identified in 40 of 150 cases (26.6%) on heavily T2-weighted images. Serum creatinine levels were high in 18 of 150 cases (12%). Perirenal hyperintensity was present in 11 of 18 subjects (61%) with high serum creatinine levels and in 26 of 132 subjects (19.7%) with normal creatinine levels. The difference in rates between the two groups was statistically significant; the odds ratio was 6.407 (95% confidence interval 2.264-18.129). The frequency of perirenal hyperintensity was also significantly higher in subjects with renal cyst or cysts in whom serum creatinine levels were normal (p < 0.05) (37.5% vs. 11.8%). Conclusion: Perirenal hyperintensities are more frequent in patients with high serum creatinine levels. They are also more common in patients with simple renal cysts.

  8. Relationship between levels of serum creatinine and perirenal hyperintensity on heavily T2-weighted MR images

    International Nuclear Information System (INIS)

    Erden, Ayse; Sahin, Burcu Savran; Orgodol, Horolsuren; Erden, Ilhan; Biyikli, Zeynep

    2011-01-01

    Objective: To determine the frequency of perirenal hyperintensity on heavily T2-weighted images and to evaluate its relationship with serum creatinine levels. Subjects and methods: Axial and coronal single-shot fast spin-echo images originally obtained for MR cholangiopancreatography in 150 subjects were examined individually by two observers for the presence of perirenal hyperintensity. The morphologic properties of perirenal hyperintensity (peripheral rim-like, discontinuous, polar) were recorded. The chi-square test was used to test whether the frequencies of bilateral perirenal hyperintensity differ significantly between subjects with high serum creatinine levels and those with normal creatinine levels. This test was also used to compare the frequencies of perirenal hyperintensity in patients with and without renal cysts and in patients with and without corticomedullary differentiation. A p value of less than 0.05 was considered statistically significant. Results: Perirenal hyperintensity was identified in 40 of 150 cases (26.6%) on heavily T2-weighted images. Serum creatinine levels were high in 18 of 150 cases (12%). Perirenal hyperintensity was present in 11 of 18 subjects (61%) with high serum creatinine levels and in 26 of 132 subjects (19.7%) with normal creatinine levels. The difference in rates between the two groups was statistically significant; the odds ratio was 6.407 (95% confidence interval 2.264-18.129). The frequency of perirenal hyperintensity was also significantly higher in subjects with renal cyst or cysts in whom serum creatinine levels were normal (p < 0.05) (37.5% vs. 11.8%). Conclusion: Perirenal hyperintensities are more frequent in patients with high serum creatinine levels. They are also more common in patients with simple renal cysts.

  9. Intravital imaging of cardiac function at the single-cell level.

    Science.gov (United States)

    Aguirre, Aaron D; Vinegoni, Claudio; Sebas, Matt; Weissleder, Ralph

    2014-08-05

    Knowledge of cardiomyocyte biology is limited by the lack of methods to interrogate single-cell physiology in vivo. Here we show that contracting myocytes can indeed be imaged with optical microscopy at high temporal and spatial resolution in the beating murine heart, allowing visualization of individual sarcomeres and measurement of the single cardiomyocyte contractile cycle. Collectively, this has been enabled by efficient tissue stabilization, a prospective real-time cardiac gating approach, an image processing algorithm for motion-artifact-free imaging throughout the cardiac cycle, and a fluorescent membrane staining protocol. Quantification of cardiomyocyte contractile function in vivo opens many possibilities for investigating myocardial disease and therapeutic intervention at the cellular level.

  10. Reconstructing the CT number array from gray-level images and its application in PACS

    Science.gov (United States)

    Chen, Xu; Zhuang, Tian-ge; Wu, Wei

    2001-08-01

    Although DICOM-compliant computed tomography now prevails in the medical field, there are still non-compliant scanners, from which we can hardly obtain the raw data or make an appropriate interpretation due to their proprietary image formats. Under such conditions, frame grabbers are usually used to capture CT images, and the resulting gray-level images cannot be freely adjusted by radiologists the way the original CT number array can. To alleviate this inflexibility, a new method is presented in this paper to reconstruct the array of CT numbers from several gray-level images acquired under different window settings. Its feasibility is investigated, and a few tips are put forward to correct the errors caused by the 'Border Effect' and by certain hardware problems, respectively. The accuracy analysis proves it a good substitute for original CT number array acquisition. This method has already been successfully used in our newly developed PACS and has been accepted by radiologists in clinical use.
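
    The core of such a reconstruction is inverting the window/level display mapping: each gray-level rendering clips CT numbers outside its window, so several renderings whose windows jointly cover the HU range can be merged back into one array. A sketch assuming the common linear mapping (the paper's exact mapping and its 'Border Effect' corrections are not reproduced):

```python
import numpy as np

def invert_window(gray, center, width):
    """Invert a linear window/level mapping for one 8-bit rendering.
    Pixels at 0 or 255 were clipped by the window and are returned NaN."""
    hu = (gray.astype(float) / 255.0 - 0.5) * width + center
    hu[(gray == 0) | (gray == 255)] = np.nan
    return hu

def merge_windows(grays, centers, widths):
    """Combine the unclipped estimates from several window settings."""
    stack = np.stack([invert_window(g, c, w)
                      for g, c, w in zip(grays, centers, widths)])
    return np.nanmean(stack, axis=0)   # per-pixel NaN-aware average
```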

  11. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with nonsmall cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphical processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.

  12. Do Quiet Areas Afford Greater Health-Related Quality of Life than Noisy Areas?

    Directory of Open Access Journals (Sweden)

    Kim N. Dirks

    2013-03-01

    Full Text Available People typically choose to live in quiet areas in order to safeguard their health and wellbeing. However, the benefits of living in quiet areas are relatively understudied compared to the burdens associated with living in noisy areas. Additionally, research is increasingly focusing on the relationship between the human response to noise and measures of health and wellbeing, complementing traditional dose-response approaches, and further elucidating the impact of noise and health by incorporating human factors as mediators and moderators. To further explore the benefits of living in quiet areas, we compared the results of health-related quality of life (HRQOL questionnaire datasets collected from households in localities differentiated by their soundscapes and population density: noisy city, quiet city, quiet rural, and noisy rural. The dose-response relationships between noise annoyance and HRQOL measures indicated an inverse relationship between the two. Additionally, quiet areas were found to have higher mean HRQOL domain scores than noisy areas. This research further supports the protection of quiet locales and ongoing noise abatement in noisy areas.

  13. Evaluation of the autoregression time-series model for analysis of a noisy signal

    International Nuclear Information System (INIS)

    Allen, J.W.

    1977-01-01

    The autoregression (AR) time-series model of a continuous noisy signal was statistically evaluated to determine quantitatively the uncertainties of the model order, the model parameters, and the model's power spectral density (PSD). The result of such a statistical evaluation enables an experimenter to decide whether an AR model can adequately represent a continuous noisy signal and be consistent with the signal's frequency spectrum, and whether it can be used for on-line monitoring. Although evaluations of other types of signals have been reported in the literature, no direct reference has been found to the AR model's uncertainties for continuous noisy signals; yet this evaluation is necessary to decide the usefulness of AR models of typical reactor signals (e.g., neutron detector output or thermocouple output) and the potential of AR models for on-line monitoring applications. AR and other time-series models for noisy data representation are being investigated by others, since such models require fewer parameters than the traditional PSD model. For this study, the AR model was selected for its simplicity and conduciveness to uncertainty analysis, and controlled laboratory bench signals were used as the continuous noisy data. (author)
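
    An AR(p) model writes the signal as a weighted sum of its own past values plus white noise, and its PSD follows in closed form from the fitted coefficients; the choice of order p is precisely one of the uncertainties the record evaluates. A minimal Yule-Walker fit (illustrative, not the study's estimation procedure):

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Fit AR(p) coefficients a[1..p] and the innovation variance from
    the sample autocovariance via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:p + 1])
    sigma2 = r[0] - a @ r[1:p + 1]
    return a, sigma2

def ar_psd(a, sigma2, freqs, fs=1.0):
    """AR spectrum: sigma^2 / (fs * |1 - sum_k a_k exp(-2i pi f k / fs)|^2)."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(1, len(a) + 1)) / fs)
    return sigma2 / fs / np.abs(1.0 - z @ a) ** 2
```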

  14. Image Denoising Using Singular Value Difference in the Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Min Wang

    2018-01-01

    Full Text Available The singular value (SV) difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform, and the noise variance of a new image is estimated from its diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.
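
    The pipeline is easy to prototype: one DWT level, an SVD per detail subband, shrinkage of the singular values, and reconstruction. In the sketch below a uniform energy-keeping rule stands in for the paper's learned SV-difference model, which the abstract does not specify:

```python
import numpy as np
import pywt

def svd_wavelet_denoise(img, wavelet="db1", keep=0.6):
    """Denoise by shrinking singular values of the DWT detail subbands;
    `keep` is the fraction of singular values retained per subband."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)

    def shrink(band):
        U, s, Vt = np.linalg.svd(band, full_matrices=False)
        s[max(1, int(keep * len(s))):] = 0.0   # drop the noise-dominated tail
        return (U * s) @ Vt

    return pywt.idwt2((cA, (shrink(cH), shrink(cV), shrink(cD))), wavelet)
```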

  15. Document authentication at molecular levels using desorption atmospheric pressure chemical ionization mass spectrometry imaging.

    Science.gov (United States)

    Li, Ming; Jia, Bin; Ding, Liying; Hong, Feng; Ouyang, Yongzhong; Chen, Rui; Zhou, Shumin; Chen, Huanwen; Fang, Xiang

    2013-09-01

    Molecular images of documents were obtained by sequentially scanning the surface of the document using desorption atmospheric pressure chemical ionization mass spectrometry (DAPCI-MS), which was operated in either a gasless, solvent-free or methanol-vapor-assisted mode. The decay process of the ink used for handwriting was monitored by following the signal intensities recorded by DAPCI-MS. Handwriting made using four types of inks on four kinds of paper surfaces was tested. By studying the dynamic decay of the inks, DAPCI-MS imaging differentiated a 10-minute-old sample from two 4-hour-old samples. Non-destructive forensic analysis of forged signatures, either handwritten or computer-assisted, was achieved according to differences in the contours of the DAPCI images, which were attributed to the writing strength personalized by different writers. Distinction of the order of writing/stamping on documents and detection of illegal printings were accomplished with a spatial resolution of about 140 µm. A program written in Matlab® was developed to facilitate the visualization of the similarity between signature images obtained by DAPCI-MS. The experimental results show that DAPCI-MS imaging provides rich information at the molecular level and thus can be used for reliable document analysis in forensic applications. © 2013 The Authors. Journal of Mass Spectrometry published by John Wiley & Sons, Ltd.

  16. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.

  17. Combined phase and X-Ray fluorescence imaging at the sub-cellular level

    International Nuclear Information System (INIS)

    Kosior, Ewelina

    2013-01-01

    This work presents some recent developments in the field of hard X-ray imaging applied to biomedical research. As the discipline evolves quickly, new questions appear and the list of needs keeps growing; some of them are dealt with in this manuscript. It has been shown that the ID22NI beamline of the ESRF can serve as a proper experimental setup to investigate diverse aspects of cellular research. Together with its high spatial resolution, high flux and high energy range, the experimental setup provides a bigger field of view, is less sensitive to radiation damage (while taking phase contrast images) and suits chemical analysis well, with an emphasis on endogenous metals (Zn, Fe, Mn) but also with a possibility for exogenous ones like those found in nanoparticles (Au, Pt, Ag). Two synchrotron-based imaging techniques, fluorescence and phase contrast imaging, were used in this research project. They were correlated with each other on a number of biological cases, from the bacterium E. coli to various cells (HEK 293, PC12, MRC5VA, red blood cells). The explorations made in chapter 5 allowed the preparation of a more established and detailed analysis, described in the next chapter, where both techniques, X-ray fluorescence and phase contrast imaging, were exploited in order to access the absolute projected metal mass fraction in a whole cell. The final image presents for the first time true quantitative information at the sub-cellular level, not biased by the cell thickness. Thus, for the first time, a fluorescence map serves as a complete quantitative image of a cell without any risk of misinterpretation. Once both maps are divided by each other pixel by pixel (the fluorescence map divided by the phase map), they present a complete and final result for the metal (Zn in this work) projected mass fraction in ppm of dry weight. For the purpose of this calculation, the analysis was extended to calibration (non-biological) samples: polystyrene spheres of a known diameter and known ...

  18. Medical physics personnel for medical imaging: requirements, conditions of involvement and staffing levels-French recommendations

    International Nuclear Information System (INIS)

    Isambert, Aurelie; Valero, Marc; Rousse, Carole; Blanchard, Vincent; Le Du, Dominique; Guilhem, Marie-Therese; Dieudonne, Arnaud; Pierrat, Noelle; Salvat, Cecile

    2015-01-01

    The French regulations concerning the involvement of medical physicists in medical imaging procedures are relatively vague. In May 2013, the ASN and the SFPM issued recommendations regarding Medical Physics Personnel for Medical Imaging: Requirements, Conditions of Involvement and Staffing Levels. In these recommendations, the various areas of activity of medical physicists in radiology and nuclear medicine have been identified and described, and the time required to perform each task has been evaluated. Criteria for defining medical physics staffing levels are thus proposed. These criteria are defined according to the technical platform, the procedures and techniques practised on it, the number of patients treated and the number of persons in the medical and paramedical teams requiring periodic training. The result of this work is an aid available to each medical establishment to determine their own needs in terms of medical physics. (authors)

  19. Medical physics personnel for medical imaging: requirements, conditions of involvement and staffing levels-French recommendations.

    Science.gov (United States)

    Isambert, Aurélie; Le Du, Dominique; Valéro, Marc; Guilhem, Marie-Thérèse; Rousse, Carole; Dieudonné, Arnaud; Blanchard, Vincent; Pierrat, Noëlle; Salvat, Cécile

    2015-04-01

    The French regulations concerning the involvement of medical physicists in medical imaging procedures are relatively vague. In May 2013, the ASN and the SFPM issued recommendations regarding Medical Physics Personnel for Medical Imaging: Requirements, Conditions of Involvement and Staffing Levels. In these recommendations, the various areas of activity of medical physicists in radiology and nuclear medicine have been identified and described, and the time required to perform each task has been evaluated. Criteria for defining medical physics staffing levels are thus proposed. These criteria are defined according to the technical platform, the procedures and techniques practised on it, the number of patients treated and the number of persons in the medical and paramedical teams requiring periodic training. The result of this work is an aid available to each medical establishment to determine their own needs in terms of medical physics. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. MPEG-7 low level image descriptors for modeling users' web pages visual appeal opinion

    OpenAIRE

    Uribe Mayoral, Silvia; Alvarez Garcia, Federico; Menendez Garcia, Jose Manuel

    2015-01-01

    The study of the users' web pages first impression is an important factor for interface designers, due to its influence over the final opinion about a site. In this regard, the analysis of web aesthetics can be considered as an interesting tool for evaluating this early impression, and the use of low level image descriptors for modeling it in an objective way represents an innovative research field. According to this, in this paper we present a new model for website aesthetics evaluation and ...

  1. Proton therapy for prostate cancer treatment employing online image guidance and an action level threshold.

    Science.gov (United States)

    Vargas, Carlos; Falchook, Aaron; Indelicato, Daniel; Yeung, Anamaria; Henderson, Randall; Olivier, Kenneth; Keole, Sameer; Williams, Christopher; Li, Zuofeng; Palta, Jatinder

    2009-04-01

    The ability to determine the accuracy of the final prostate position within a determined action level threshold for image-guided proton therapy is unclear. Three thousand one hundred and ten images for 20 consecutive patients treated in 1 of our 3 proton prostate protocols from February to May of 2007 were analyzed. Daily kV images and patient repositioning were performed employing an action level threshold (ALT) of ≥ 2.5 mm for each beam. Isocentric orthogonal x-rays were obtained, and prostate position was defined via 3 gold markers for each patient in the 3 axes. To achieve and confirm our action level threshold, an average of 2 x-ray sets (median 2; range, 0-4) was taken daily for each patient. Based on our ALT, we made no corrections in 8.7% (range, 0%-54%), 1 correction in 82% (41%-98%), and 2 to 3 corrections in 9% (0-27%). No patient needed 4 or more corrections. All patients were treated with a confirmed error of < 2.5 mm for every beam delivered. After all corrections, the means and standard deviations were: anterior-posterior (z): 0.003 ± 0.094 cm; superior-inferior (y): 0.028 ± 0.073 cm; and right-left (x): -0.013 ± 0.08 cm. It is feasible to limit all final prostate positions to less than 2.5 mm employing an action level image-guided radiation therapy (IGRT) process. The residual errors after corrections were very small.

  2. Discrimination of nitrogen fertilizer levels of tea plant (Camellia sinensis) based on hyperspectral imaging.

    Science.gov (United States)

    Wang, Yujie; Hu, Xin; Hou, Zhiwei; Ning, Jingming; Zhang, Zhengzhu

    2018-04-01

    Nitrogen (N) fertilizer plays an important role in tea plantation management, with significant impacts on the photosynthetic capacity, productivity and nutrition status of tea plants. The present study aimed to establish a method for the discrimination of N fertilizer levels using the hyperspectral imaging technique. Spectral data were extracted from the region of interest, followed by the first derivative to reduce background noise. Five optimal wavelengths were selected by principal component analysis. Texture features were extracted from the images at the optimal wavelengths by the gray-level gradient co-occurrence matrix. Support vector machine (SVM) and extreme learning machine were used to build classification models based on spectral data, optimal wavelengths, texture features and data fusion, respectively. The SVM model using fused data gave the best performance, with the highest correct classification rate of 100% for the prediction set. The overall results indicated that visible and near-infrared hyperspectral imaging combined with SVM was effective in discriminating N fertilizer levels of tea plants.
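
    As an editorial illustration of the fused-data SVM step described in this record: a minimal sketch assuming scikit-learn, where the spectral and texture arrays are random stand-ins for the study's features, not its data.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      spectral = rng.normal(size=(120, 5))   # stand-in for 5 optimal wavelengths
      texture = rng.normal(size=(120, 8))    # stand-in for GLGCM texture features
      y = rng.integers(0, 4, size=120)       # four hypothetical N-fertilizer levels

      X = np.hstack([spectral, texture])     # data-level fusion of both feature sets
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
      clf.fit(X_tr, y_tr)
      print("correct classification rate:", clf.score(X_te, y_te))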

  3. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from the hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important tests such as key space analysis, key sensitivity analysis, plaintext sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.
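
    For illustration, a minimal sketch of chaos-driven bit-level permutation in the spirit of this record; a logistic map stands in for the authors' 3D cat map, and the key schedule and diffusion stage are omitted.

      import numpy as np

      def logistic_sequence(x0, r, n):
          # iterate the logistic map x -> r*x*(1-x); (x0, r) act as the key
          xs = np.empty(n)
          for i in range(n):
              x0 = r * x0 * (1.0 - x0)
              xs[i] = x0
          return xs

      img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
      bits = np.unpackbits(img)                       # every bit of every bit-plane
      perm = np.argsort(logistic_sequence(0.3141, 3.99, bits.size))
      cipher = np.packbits(bits[perm]).reshape(img.shape)  # bits shuffled across planes

      inv = np.argsort(perm)                          # decryption inverts the permutation
      assert np.array_equal(np.packbits(bits[perm][inv]).reshape(img.shape), img)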

  4. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    Science.gov (United States)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity for low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise for low-light-level CIS. Therefore a digital CDS is adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB (RMS) with digital CDS under dark conditions, a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination down to 0.1 lux.
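
    A minimal sketch of the off-chip digital CDS subtraction described above: the digitized reset level is subtracted from the digitized signal level so the correlated reset (kTC) noise and column offsets cancel. The frames below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      reset_frame = rng.normal(100.0, 5.0, (256, 256))    # sampled at pixel reset
      photo_signal = rng.normal(40.0, 1.0, (256, 256))    # integrated photo-signal
      signal_frame = reset_frame + photo_signal           # shares the reset noise
      cds_frame = signal_frame - reset_frame              # correlated reset noise cancels
      print("residual noise:", cds_frame.std())           # ~1.0 here, not ~5.0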

  5. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), using a quadratic gray-level function. Meanwhile, a quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, the enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  6. Global optimization based on noisy evaluations: An empirical study of two statistical approaches

    International Nuclear Information System (INIS)

    Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric

    2008-01-01

    The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement algorithm (EI) or the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
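
    A minimal sketch of the Expected Improvement criterion compared in this record, for a Gaussian-process posterior with mean mu and standard deviation sigma at candidate points (minimization convention; NumPy and SciPy assumed).

      import numpy as np
      from scipy.stats import norm

      def expected_improvement(mu, sigma, f_best):
          # EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma
          sigma = np.maximum(sigma, 1e-12)
          z = (f_best - mu) / sigma
          return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

      mu = np.array([0.20, 0.00, -0.10])       # posterior means at candidate points
      sigma = np.array([0.10, 0.30, 0.05])     # posterior standard deviations
      print(expected_improvement(mu, sigma, f_best=0.05))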

  7. Multiwavelength Absolute Phase Retrieval from Noisy Diffractive Patterns: Wavelength Multiplexing Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Katkovnik

    2018-05-01

    We study the problem of multiwavelength absolute phase retrieval from noisy diffraction patterns. The system is lensless, with multiwavelength coherent input light beams and random phase masks applied for wavefront modulation. The light beams are formed by light sources radiating all wavelengths simultaneously. A sensor equipped with a Color Filter Array (CFA) is used for spectral measurement registration. The developed algorithm, targeting optimal phase retrieval from noisy observations, is based on the maximum likelihood technique. The algorithm is specified for Poissonian and Gaussian noise distributions. One of the key elements of the algorithm is an original sparse modeling of the multiwavelength complex-valued wavefronts based on complex-domain block-matching 3D filtering. The presented numerical experiments are restricted to noisy Poissonian observations. They demonstrate that the developed algorithm leads to effective solutions, explicitly using the sparsity for noise suppression and enabling accurate reconstruction of high-dynamic-range absolute phase.

  8. Implications of a ''Noisy'' observer to data processing techniques

    International Nuclear Information System (INIS)

    Goodenough, D.J.; Metz, C.E.

    1975-01-01

    The paper attempts to show how an internal noise source (dark light and threshold jitter) would tend to explain experimental data concerning the visual detection of noise-limited signals in diagnostic imaging. The interesting conclusion can be drawn that internal noise sets the upper limit on the utility of data processing techniques designed to reduce image noise. Moreover, there should be instances where contrast enhancement techniques may be far more useful to the human observer than corresponding reductions in noise amplitude, especially at high count rates (σ_p ≤ σ_D). Furthermore, the limitations imposed on the human observer by an internal noise source may point towards the need for additional methods (e.g., computer/microdensitometer) of interpreting images of high photon density, so that the highest possible signal-to-noise ratio might be obtained.

  9. Differentiating benign from malignant bone tumors using fluid-fluid level features on magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Hong; Cui, Jian Ling; Cui, Sheng Jie; Sun, Ying Cal; Cui, Feng Zhen [Dept. of Radiology, The Third Hospital of Hebei Medical University, Hebei Province Biomechanical Key Laboratory of Orthopedics, Shijiazhuang, Hebei (China)]

    2014-12-15

    To analyze different fluid-fluid level features between benign and malignant bone tumors on magnetic resonance imaging (MRI). This study was approved by the hospital ethics committee. We retrospectively analyzed 47 patients diagnosed with benign (n = 29) or malignant (n = 18) bone tumors demonstrated by biopsy/surgical resection and who showed the intratumoral fluid-fluid level on pre-surgical MRI. The maximum length of the largest fluid-fluid level and the ratio of the maximum length of the largest fluid-fluid level to the maximum length of a bone tumor in the sagittal plane were investigated for use in distinguishing benign from malignant tumors using the Mann-Whitney U-test and a receiver operating characteristic (ROC) analysis. Fluid-fluid level was categorized by quantity (multiple vs. single fluid-fluid level) and by T1-weighted image signal pattern (high/low, low/high, and undifferentiated), and the findings were compared between the benign and malignant groups using the chi-squared test. The ratio of the maximum length of the largest fluid-fluid level to the maximum length of bone tumors in the sagittal plane that allowed statistically significant differentiation between benign and malignant bone tumors had an area under the ROC curve of 0.758 (95% confidence interval, 0.616-0.899). A cutoff value of 41.5% (higher value suggests a benign tumor) had sensitivity of 73% and specificity of 83%. The ratio of the maximum length of the largest fluid-fluid level to the maximum length of a bone tumor in the sagittal plane may be useful to differentiate benign from malignant bone tumors.

  10. Differentiating benign from malignant bone tumors using fluid-fluid level features on magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yu, Hong; Cui, Jian Ling; Cui, Sheng Jie; Sun, Ying Cal; Cui, Feng Zhen

    2014-01-01

    To analyze different fluid-fluid level features between benign and malignant bone tumors on magnetic resonance imaging (MRI). This study was approved by the hospital ethics committee. We retrospectively analyzed 47 patients diagnosed with benign (n = 29) or malignant (n = 18) bone tumors demonstrated by biopsy/surgical resection and who showed the intratumoral fluid-fluid level on pre-surgical MRI. The maximum length of the largest fluid-fluid level and the ratio of the maximum length of the largest fluid-fluid level to the maximum length of a bone tumor in the sagittal plane were investigated for use in distinguishing benign from malignant tumors using the Mann-Whitney U-test and a receiver operating characteristic (ROC) analysis. Fluid-fluid level was categorized by quantity (multiple vs. single fluid-fluid level) and by T1-weighted image signal pattern (high/low, low/high, and undifferentiated), and the findings were compared between the benign and malignant groups using the chi-squared test. The ratio of the maximum length of the largest fluid-fluid level to the maximum length of bone tumors in the sagittal plane that allowed statistically significant differentiation between benign and malignant bone tumors had an area under the ROC curve of 0.758 (95% confidence interval, 0.616-0.899). A cutoff value of 41.5% (higher value suggests a benign tumor) had sensitivity of 73% and specificity of 83%. The ratio of the maximum length of the largest fluid-fluid level to the maximum length of a bone tumor in the sagittal plane may be useful to differentiate benign from malignant bone tumors.
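
    A minimal sketch of the ROC analysis used in these two records: compute the area under the curve for the length-ratio feature and choose the cutoff maximizing sensitivity plus specificity (the Youden index). scikit-learn is assumed, and the numbers are stand-ins, not the study's data.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      ratio = np.array([0.55, 0.62, 0.48, 0.30, 0.71, 0.25, 0.44, 0.80])  # length ratio
      benign = np.array([1, 1, 1, 0, 1, 0, 0, 1])   # 1 = benign (higher ratio expected)

      fpr, tpr, thresholds = roc_curve(benign, ratio)
      cutoff = thresholds[np.argmax(tpr - fpr)]     # Youden index J = sens + spec - 1
      print("AUC:", roc_auc_score(benign, ratio), "cutoff:", cutoff)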

  11. Security bound of continuous-variable quantum key distribution with noisy coherent states and channel

    International Nuclear Information System (INIS)

    Shen Yong; Yang Jian; Guo Hong

    2009-01-01

    Security of a continuous-variable quantum key distribution protocol based on noisy coherent states and a noisy channel is analysed. Assuming that the noise of the coherent states is induced by Fred, a neutral party relative to the others, we prove that the prepare-and-measure scheme (P&M) and the entanglement-based scheme (E-B) are equivalent. Then, we show that this protocol is secure against Gaussian collective attacks even if the channel is lossy and noisy, and, further, a lower bound on the secure key rate is derived.

  12. Eavesdropping on the Bostroem-Felbinger Communication Protocol in Noisy Quantum Channel

    OpenAIRE

    Cai, Qing-yu

    2004-01-01

    We show an eavesdropping scheme on the Boström-Felbinger communication protocol (called the ping-pong protocol) [Phys. Rev. Lett. 89, 187902 (2002)] in an ideal quantum channel. A measurement attack can be perfectly used to eavesdrop on Alice's information instead of the most general quantum operation attack. In a noisy quantum channel, the direct communication is forbidden. We present a quantum key distribution protocol based on the ping-pong protocol, which can be used in a low noisy quantu...

  13. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics of the sources.
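
    A minimal sketch of BIC-style model-order selection as described above: score each candidate number of sources by its exact log-likelihood penalized by the number of free parameters (all values below are hypothetical).

      import numpy as np

      def bic(log_likelihood, n_params, n_obs):
          # larger is better: log L minus the BIC complexity penalty
          return log_likelihood - 0.5 * n_params * np.log(n_obs)

      n_obs = 10000
      log_liks = {1: -52000.0, 2: -48500.0, 3: -48400.0}   # hypothetical model fits
      n_params = {1: 40, 2: 80, 3: 120}                    # hypothetical parameter counts
      scores = {k: bic(log_liks[k], n_params[k], n_obs) for k in log_liks}
      print("selected number of sources:", max(scores, key=scores.get))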

  14. Security bound of continuous-variable quantum key distribution with noisy coherent states and channel

    Energy Technology Data Exchange (ETDEWEB)

    Shen Yong; Yang Jian; Guo Hong, E-mail: hongguo@pku.edu.c [CREAM Group, State Key Laboratory of Advanced Optical Communication Systems and Networks (Peking University) and Institute of Quantum Electronics, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871 (China)

    2009-12-14

    Security of a continuous-variable quantum key distribution protocol based on noisy coherent states and a noisy channel is analysed. Assuming that the noise of the coherent states is induced by Fred, a neutral party relative to the others, we prove that the prepare-and-measure scheme (P&M) and the entanglement-based scheme (E-B) are equivalent. Then, we show that this protocol is secure against Gaussian collective attacks even if the channel is lossy and noisy, and, further, a lower bound on the secure key rate is derived.

  15. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi Script. The image preprocessing technique covered in this paper is thresholding. We also analyze the results obtained by the pixel-level processing algorithms.
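
    A minimal sketch of Otsu-style global thresholding, one common pixel-level binarization step of the kind this record surveys (illustrative, not the paper's code).

      import numpy as np

      def otsu_threshold(img):
          # pick the gray level maximizing the between-class variance
          prob = np.bincount(img.ravel(), minlength=256) / img.size
          omega = np.cumsum(prob)                    # class-0 probability
          mu = np.cumsum(prob * np.arange(256))      # cumulative mean
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b2 = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
          return int(np.nanargmax(sigma_b2))

      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      binary = img > otsu_threshold(img)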

  16. A methodology for the extraction of quantitative information from electron microscopy images at the atomic level

    International Nuclear Information System (INIS)

    Galindo, P L; Pizarro, J; Guerrero, E; Guerrero-Lebrero, M P; Scavello, G; Yáñez, A; Sales, D L; Herrera, M; Molina, S I; Núñez-Moraleda, B M; Maestre, J M

    2014-01-01

    In this paper we describe a methodology developed at the University of Cadiz (Spain) over the past few years for the extraction of quantitative information from electron microscopy images at the atomic level. This work is based on the coordinated and synergic activity of several research groups that have been working together over the last decade in two different and complementary fields: Materials Science and Computer Science. The aim of our joint research has been to develop innovative high-performance computing techniques and simulation methods in order to address computationally challenging problems in the analysis, modelling and simulation of materials at the atomic scale, providing significant advances with respect to existing techniques. The methodology involves several fundamental areas of research, including the analysis of high resolution electron microscopy images, materials modelling, image simulation and 3D reconstruction using quantitative information from experimental images. These techniques for analysis, modelling and simulation allow the control and functionality of devices developed using the materials under study to be optimized, and they have been tested using data obtained from experimental samples.

  17. Natural products in Glycyrrhiza glabra (licorice) rhizome imaged at the cellular level by atmospheric pressure matrix-assisted laser desorption/ionization tandem mass spectrometry imaging

    DEFF Research Database (Denmark)

    Li, Bin; Bhandari, Dhaka Ram; Janfelt, Christian

    2014-01-01

    The rhizome of Glycyrrhiza glabra (licorice) was analyzed by high-resolution mass spectrometry imaging and tandem mass spectrometry imaging. An atmospheric pressure matrix-assisted laser desorption/ionization imaging ion source was combined with an orbital trapping mass spectrometer in order to o... and saponins in legume species, combining the spatially resolved chemical information with morphological details at the microscopic level. Furthermore, the technique offers a scheme capable of high-throughput profiling of metabolites in plant tissues.

  18. Prediction of myelopathic level in cervical spondylotic myelopathy using diffusion tensor imaging.

    Science.gov (United States)

    Wang, Shu-Qiang; Li, Xiang; Cui, Jiao-Long; Li, Han-Xiong; Luk, Keith D K; Hu, Yong

    2015-06-01

    To investigate the use of a newly designed machine learning-based classifier in the automatic identification of myelopathic levels in cervical spondylotic myelopathy (CSM). In all, 58 normal volunteers and 16 subjects with CSM were recruited for diffusion tensor imaging (DTI) acquisition. The eigenvalues were extracted as the selected features from the DTI images. Three classifiers, naive Bayesian, support vector machine, and support tensor machine, as well as fractional anisotropy (FA), were employed to identify myelopathic levels. The results were compared with clinical level diagnosis results, and accuracy, sensitivity, and specificity were calculated to evaluate the performance of the developed classifiers. The accuracy of the support tensor machine was the highest (93.62%) among the three classifiers. The support tensor machine also showed excellent capacity to identify true positives (sensitivity: 84.62%) and true negatives (specificity: 97.06%). The accuracy of the FA value was the lowest (76%) among all the methods. The classifier-based method using eigenvalues had better performance in identifying the levels of CSM than diagnosis using FA values. The support tensor machine was the best among the three classifiers.

  19. Geophysical Imaging of Sea-level Proxies in Beach-Ridge Deposits

    Science.gov (United States)

    Nielsen, L.; Emerich Souza, P.; Meldgaard, A.; Bendixen, M.; Kroon, A.; Clemmensen, L. B.

    2017-12-01

    We show ground-penetrating radar (GPR) reflection data collected over modern and fossil beach deposits from different localities along coastlines in meso-tidal regimes of Greenland and micro-tidal regimes of Denmark. The acquired reflection GPR sections show several similar characteristics but also some differences. A similar characteristic is the presence of downlapping reflections, where the downlap point is interpreted to mark the transition from upper shoreface to beachface deposits and, thus, be a marker of a level close to or at sea-level at the time of deposition. Differences in grain size of the investigated beach ridge system result in different scattering characteristics of the acquired GPR data. These differences call for tailored, careful processing of the GPR data for optimal imaging of internal beach ridge architecture. We outline elements of the GPR data processing of particular importance for optimal imaging. Moreover, we discuss advantages and challenges related to using GPR-based proxies of sea-level as compared to other methods traditionally used for establishment of curves of past sea-level variation.

  20. Multimodal imaging of the human knee down to the cellular level

    Science.gov (United States)

    Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.

    2017-06-01

    Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aims of the present study are, first, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, second, multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After the visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.

  1. Hand Vein Images Enhancement Based on Local Gray-level Information Histogram

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2015-06-01

    Based on histogram equalization theory, this paper presents a novel concept of histogram to realize the contrast enhancement of hand vein images, avoiding the loss of topological vein structure or the introduction of fake vein information. Firstly, we propose the concept of the gray-level information histogram, whose fundamental characteristic is that the amplitudes of its components objectively reflect the contribution of the gray levels and information to the representation of image information. Then, we propose a histogram equalization method composed of an automatic histogram separation module and an intensity transformation module. The histogram separation module combines the proposed prompt multiple threshold procedure with an optimum peak signal-to-noise ratio (PSNR) calculation to separate the histogram into small-scale detail; the intensity transformation module enhances the vein images while preserving vein topological structure and gray information for each generated sub-histogram. Experimental results show that the proposed method achieves an extremely good contrast enhancement effect.
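
    A minimal sketch of plain CDF-based histogram equalization, the baseline on which this record's separation-plus-transformation scheme builds (illustrative only).

      import numpy as np

      def equalize(img):
          hist = np.bincount(img.ravel(), minlength=256)
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
          lut = np.round(255.0 * cdf).astype(np.uint8)        # gray-level mapping
          return lut[img]

      vein = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
      enhanced = equalize(vein)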

  2. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  3. Reducing surgical levels by paraspinal mapping and diffusion tensor imaging techniques in lumbar spinal stenosis.

    Science.gov (United States)

    Chen, Hua-Biao; Wan, Qi; Xu, Qi-Feng; Chen, Yi; Bai, Bo

    2016-04-25

    Correlating symptoms and physical examination findings with surgical levels based on common imaging results is not reliable. In patients who have no concordance between radiological and clinical symptoms, the surgical levels determined by conventional magnetic resonance imaging (MRI) and neurogenic examination (NE) may lead to more extensive surgery and significant complications. We aimed to confirm whether the use of diffusion tensor imaging (DTI) and paraspinal mapping (PM) techniques can further prevent the occurrence of false positives with conventional MRI, distinguish clinically relevant levels of cauda equina and/or nerve root lesions identified on MRI, and determine and reduce the decompression levels of lumbar spinal stenosis compared with MRI + NE, while ensuring or improving surgical outcomes. We compared the data between patients who underwent MRI + (PM or DTI) and patients who underwent conventional MRI + NE to determine levels of decompression for the treatment of lumbar spinal stenosis. Outcome measures were assessed at 2 weeks, 3 months, 6 months, and 12 months postoperatively. One hundred fourteen patients (59 in the control group, 54 in the experimental group) underwent decompression. The levels of decompression determined by MRI + (PM or DTI) in the experimental group were significantly fewer than those determined by MRI + NE in the control group (p = 0.000). The surgical time, blood loss, and surgical transfusion were significantly less in the experimental group (p = 0.001, p = 0.011, p = 0.001, respectively). There were no differences in improvement of the visual analog scale back and leg pain (VAS-BP, VAS-LP) scores and Oswestry Disability Index (ODI) scores at 2 weeks, 3 months, 6 months, and 12 months after operation between the experimental and control groups. MRI + (PM or DTI) showed clear benefits in determining decompression levels of lumbar spinal stenosis compared with MRI + NE. In patients with lumbar spinal

  4. Self-imaging of partially coherent light in graded-index media.

    Science.gov (United States)

    Ponomarenko, Sergey A

    2015-02-15

    We demonstrate that partially coherent light beams of arbitrary intensity and spectral degree of coherence profiles can self-image in linear graded-index media. The results may be applicable to imaging with noisy spatial or temporal light sources.

  5. High-level core sample x-ray imaging at the Hanford Site

    International Nuclear Information System (INIS)

    Weber, J.R.; Keve, J.K.

    1995-10-01

    Waste tank sampling of radioactive high-level waste is required for continued operations, waste characterization, and site safety. Hanford Site tank farms consist of 28 double-shell and 149 single-shell underground storage tanks. The single-shell tanks are out of service and no longer receive liquid waste. Core samples of salt cake and sludge waste are remotely obtained using truck-mounted core drill platforms. Samples are recovered from tanks through a 2.25 inch (in.) drill pipe in 26-in. steel tubes, 1.5 in. in diameter. Drilling parameters vary with different waste types. Because sample recovery has been marginal and inadequate at times, a system was needed to provide drill truck operators with real-time feedback about the physical condition of the sample and the percent recovery, prior to making nuclear assay measurements and characterizations at the analytical laboratory. The Westinghouse Hanford Company conducted proof-of-principle radiographic testing to verify the feasibility of a proposed imaging system. Tests were conducted using an iridium-192 radiography source to determine the effects of high radiation on image quality. The tests concluded that samplers with a dose rate in excess of 5000 R/hr could be imaged with only a slight loss of image quality, and that samples below 1000 R/hr have virtually no effect on image quality. The Mobile Core Sample X-Ray Examination System, a portable vendor-engineered assembly, has components uniquely configured to produce a real-time radiographic system suitable for safely examining radioactive tank core segments collected at the Hanford Site. The radiographic region of interest extends from the bottom (valve) of the sampler upward 19 to 20 in. The purpose of the Mobile Core Sample X-Ray Examination System is to examine the physical contents of core samples after removal from the tank and prior to placement in an onsite transfer cask.

  6. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth intensities for segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
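
    A minimal sketch of the statistical region idea described above: fit mixture models to user-labelled foreground and background intensities and classify pixels by the log-likelihood ratio. scikit-learn is assumed, and the contour evolution and graph-based contextual constraint are omitted.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(1)
      fg = np.concatenate([rng.normal(90, 8, 300), rng.normal(160, 10, 200)])
      bg = rng.normal(40, 12, 500)                      # stand-in labelled samples

      gmm_fg = GaussianMixture(n_components=2, random_state=0).fit(fg[:, None])
      gmm_bg = GaussianMixture(n_components=2, random_state=0).fit(bg[:, None])

      img = rng.normal(100, 40, (64, 64))
      llr = (gmm_fg.score_samples(img.reshape(-1, 1))
             - gmm_bg.score_samples(img.reshape(-1, 1))).reshape(img.shape)
      mask = llr > 0    # positive log-likelihood ratio -> foreground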

  7. Prediction of Intelligibility of Noisy and Time-Frequency Weighted Speech based on Mutual Information Between Amplitude Envelopes

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, C.H.

    2013-01-01

    An intelligibility predictor is presented based on the amount of Shannon information the critical-band amplitude envelopes of the noisy/processed signal convey about the corresponding clean signal envelopes. The resulting intelligibility predictor turns out to be a simple function of the correlation between noisy/processed and clean amplitude envelopes. The proposed...

  8. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to the radiation with a high exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure time, which gave rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different values of SNR) and equipped with gold beads to help in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: the soft and hard wavelet-thresholding methods; the bilateral filter, a non-linear technique able to preserve edges neatly; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that we used the appropriate wavelet family at the appropriate level, choosing the 'sym8' wavelet at level 3 as the most appropriate parameter. For the bilateral filtering, many tests were done to determine the proper filter parameters, represented by the size of the filter, the range parameter and the
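
    A minimal sketch of wavelet soft thresholding with the 'sym8' wavelet at level 3, as selected in this record; the universal threshold below is an assumption, and PyWavelets is assumed available.

      import numpy as np
      import pywt

      def soft_denoise(img, wavelet="sym8", level=3):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # noise sigma estimated from the finest diagonal subband
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          t = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
          out = [coeffs[0]] + [
              tuple(pywt.threshold(c, t, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(out, wavelet)

      noisy = 100.0 + np.random.normal(0.0, 5.0, (128, 128))
      denoised = soft_denoise(noisy)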

  9. Level of conus medullaris termination in adult population analyzed by kinetic magnetic resonance imaging.

    Science.gov (United States)

    Liu, An; Yang, Kaixiang; Wang, Daling; Li, Changqing; Ren, Zhiwei; Yan, Shigui; Buser, Zorica; Wang, Jeffrey C

    2017-07-01

    To investigate the change of conus medullaris termination (CMT) level in neutral, flexion and extension positions and to analyze the effects of age and gender on the CMT level. The midline sagittal T2-weighted kinetic magnetic resonance imaging (kMRI) studies of 585 patients were retrospectively reviewed to identify the level of CMT. All patients were in an upright position. A straight line perpendicular to the long axis of the cord was drawn from the tip of the cord and then subtended to the adjacent vertebra or disk space. The CMT level was labeled in relation to the upper, middle and lower segments of the adjacent vertebra or disk space and assigned values from 0 to 12 [0 = upper third of T12 (T12U), and 12 = upper third of L3 (L3U)]. All parameters were collected for neutral, flexion and extension positions. The level of CMT had the highest incidence (17.61%) at L1 lower (L1L) in the neutral position, 17.44% at L1 upper (L1U) in flexion, and 16.92% at L1 middle (L1M) in extension, with no significant differences among the three positions (p > 0.05) in weight-bearing status. Moreover, the level of CMT was not correlated with age (p > 0.05). In terms of gender, the level of CMT was lower in women than in men in the neutral position, flexion, and extension (p < 0.05). The level of CMT in the neutral position was in accordance with previous cadaveric and supine-position MRI studies, and it did not change with flexion and extension. Women had a lower CMT level than men, especially in the older population. This information can be very valuable when performing spinal anesthesia and spinal punctures.

  10. A steady-State Genetic Algorithm with Resampling for Noisy Inventory Control

    NARCIS (Netherlands)

    Prestwich, S.; Tarim, S.A.; Rossi, R.; Hnich, B.

    2008-01-01

    Noisy fitness functions occur in many practical applications of evolutionary computation. A standard technique for solving these problems is fitness resampling but this may be inefficient or need a large population, and combined with elitism it may overvalue chromosomes or reduce genetic diversity.

  11. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.

    Science.gov (United States)

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T

    2013-05-14

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.

  12. Generation of Werner states and preservation of entanglement in a noisy environment

    Energy Technology Data Exchange (ETDEWEB)

    Jakobczyk, Lech [Institute of Theoretical Physics, University of Wroclaw, Pl. M. Borna 9, 50-204 Wroclaw (Poland)]. E-mail: ljak@ift.uni.wroc.pl; Jamroz, Anna [Institute of Theoretical Physics, University of Wroclaw, Pl. M. Borna 9, 50-204 Wroclaw (Poland)

    2005-12-05

    We study the influence of noisy environment on the evolution of two-atomic system in the presence of collective damping. Generation of Werner states as asymptotic stationary states of evolution is described. We also show that for some initial states the amount of entanglement is preserved during the evolution.

  13. FALSE DETERMINATIONS OF CHAOS IN SHORT NOISY TIME SERIES. (R828745)

    Science.gov (United States)

    A method (NEMG) proposed in 1992 for diagnosing chaos in noisy time series with 50 or fewer observations entails fitting the time series with an empirical function which predicts an observation in the series from previous observations, and then estimating the rate of divergenc...

  14. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of a dielectric cylinder, based on combining a level set technique with the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, localisation and permittivity of the object. The forward problem is solved by the moment method, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  15. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    Science.gov (United States)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformations) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
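
    A minimal sketch of the deformation-error metric quoted above: the voxel-wise magnitude of the vector difference between the known and predicted deformation fields (the fields below are hypothetical).

      import numpy as np

      rng = np.random.default_rng(0)
      known = rng.normal(0.0, 2.0, (64, 64, 64, 3))            # known warp, mm
      predicted = known + rng.normal(0.0, 0.5, known.shape)    # algorithm output
      error = np.linalg.norm(predicted - known, axis=-1)       # per-voxel error, mm
      print("mean deformation error (mm):", error.mean())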

  16. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used for the study of epilepsy in neonates applying electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model is constituted of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. Thus it is important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull between the ossified flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  17. Long-term in vivo imaging of multiple organs at the single cell level.

    Directory of Open Access Journals (Sweden)

    Benny J Chen

    Two-photon microscopy has enabled the study of individual cell behavior in live animals. Many organs and tissues cannot be studied, especially longitudinally, because they are located too deep, behind bony structures, or too close to the lung and heart. Here we report a novel mouse model that allows long-term single cell imaging of many organs. A wide variety of live tissues were successfully engrafted in the pinna of the mouse ear. Many of these engrafted tissues maintained the normal tissue histology. Using the heart and thymus as models, we further demonstrated that the engrafted tissues functioned as would be expected. Combining two-photon microscopy with fluorescent tracers, we successfully visualized the engrafted tissues at the single cell level in live mice over several months. Four-dimensional (three-dimensional (3D) plus time) information on individual cells was obtained from this imaging. This model makes long-term high resolution 4D imaging of multiple organs possible.

  18. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  19. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm
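
    A minimal sketch of the volume-overlap metrics reported in these two records, volume difference (VD) and Dice similarity coefficient (DSC), for two binary segmentations; the absolute-difference form of VD and the voxel volume are assumptions.

      import numpy as np

      def vd_and_dsc(seg, ref, voxel_mm3=0.125):
          vd = abs(int(seg.sum()) - int(ref.sum())) * voxel_mm3      # VD in mm^3
          dsc = 200.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
          return vd, dsc                                             # DSC in percent

      seg = np.zeros((50, 50, 50), bool); seg[10:30, 10:30, 10:30] = True
      ref = np.zeros((50, 50, 50), bool); ref[12:32, 10:30, 10:30] = True
      print(vd_and_dsc(seg, ref))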

  20. FDG-PET imaging of lower extremity muscular activity during level walking

    International Nuclear Information System (INIS)

    Oi, Naoyuki; Iwaya, Tsutomu; Tobimatsu, Yoshiko; Fujimoto, Toshihiko; Itoh, Masatoshi; Yamaguchi, Keiichiro

    2003-01-01

    We analyzed muscular activity of the lower extremities during level walking using positron emission tomography (PET) with 18F-fluorodeoxyglucose (18F-FDG). We examined 17 healthy male subjects; 11 were assigned to a walking group and 6 to a resting group. After 18F-FDG injection, the walking group subjects walked at a free speed for 15 min. A whole-body image was then obtained by a PET camera, and the standardized uptake ratio (SUR) was computed for each muscle. The SUR for each muscle of the walking group was compared with that for the corresponding muscle in the resting group. The levels of muscular activity of all the muscles we examined were higher during level walking than at rest. The activity of the lower leg muscles was higher than that of the thigh muscles during level walking. The muscular activity of the soleus was the highest among all the muscles examined. Among the gluteal muscles, the muscular activity of the gluteus minimus was higher than that of the gluteus maximus and gluteus medius. The concurrent validity of measuring muscular activity of the lower extremity during level walking by the PET method using 18F-FDG was demonstrated. (author)

  1. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    Science.gov (United States)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes

  2. Methods of Blood Oxygen Level-Dependent Magnetic Resonance Imaging Analysis for Evaluating Renal Oxygenation

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Blood oxygen level-dependent magnetic resonance imaging (BOLD MRI) has recently been utilized as a noninvasive tool for evaluating renal oxygenation. Several methods have been proposed for analyzing BOLD images. Regional ROI selection is the earliest and most widely used method for BOLD analysis. In the last 20 years, many investigators have used this method to evaluate cortical and medullary oxygenation in patients with ischemic nephropathy, hypertensive nephropathy, diabetic nephropathy, chronic kidney disease (CKD), acute kidney injury and renal allograft rejection. However, clinical trials of BOLD MRI using regional ROI selection revealed that it was difficult to distinguish the renal cortico-medullary zones with this method, and that it was susceptible to observer variability. To overcome these deficiencies, several new methods were proposed for analyzing BOLD images, including the compartmental approach, the fractional hypoxia method, the concentric objects (CO) method and the twelve-layer concentric objects (TLCO) method. The compartmental approach provides an algorithm to judge whether a pixel belongs to the cortex or the medulla. Fractional kidney hypoxia, measured by BOLD MRI, was negatively correlated with renal blood flow, tissue perfusion and glomerular filtration rate (GFR) in patients with atherosclerotic renal artery stenosis. The CO method divides the renal parenchyma into six or twelve layers of thickness in each coronal slice of BOLD images and provides an R2* radial profile curve. The slope of the R2* curve was positively associated with eGFR in CKD patients. Each method invariably has advantages and disadvantages, and there is generally no consensus method so far. Undoubtedly, analytic approaches for BOLD MRI with better reproducibility would assist clinicians in monitoring the degree of kidney hypoxia and thus facilitate timely reversal of tissue hypoxia.

  3. GABA levels in the ventromedial prefrontal cortex during the viewing of appetitive and disgusting food images.

    Science.gov (United States)

    Padulo, Caterina; Delli Pizzi, Stefano; Bonanni, Laura; Edden, Richard A E; Ferretti, Antonio; Marzoli, Daniele; Franciotti, Raffaella; Manippa, Valerio; Onofrj, Marco; Sepede, Gianna; Tartaro, Armando; Tommasi, Luca; Puglisi-Allegra, Stefano; Brancucci, Alfredo

    2016-10-01

    Characterizing how the brain appraises the psychological dimensions of reward is one of the central topics of neuroscience. It has become clear that dopamine neurons are implicated in the transmission of both rewarding information and aversive and alerting events, through two different neuronal populations involved in encoding the motivational value and the motivational salience of stimuli, respectively. Nonetheless, there is less agreement on the role of the ventromedial prefrontal cortex (vmPFC) and the related neurotransmitter release during the processing of biologically relevant stimuli. To address this issue, we employed magnetic resonance spectroscopy (MRS), a non-invasive methodology that allows detection of some metabolites in the human brain in vivo, in order to assess the role of the vmPFC in encoding stimulus value rather than stimulus salience. Specifically, we measured gamma-aminobutyric acid (GABA) and, for control purposes, Glx levels in healthy subjects during the observation of appetitive and disgusting food images. We observed a decrease in GABA and no change in Glx concentration in the vmPFC in both conditions. Furthermore, a comparatively smaller GABA reduction during the observation of appetitive food images than during the observation of disgusting food images was positively correlated with scores on the body image concerns subscale of the Body Uneasiness Test (BUT). These results are consistent with the idea that the vmPFC plays a crucial role in processing both rewarding and aversive stimuli, possibly by encoding stimulus salience through glutamatergic and/or noradrenergic projections to deeper mesencephalic and limbic areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Qiegen Liu

    2014-01-01

    Full Text Available Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model, with the fidelity term constrained by the partial measurements. By incorporating an iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) efficiently solves the model by pursuing an approximate lp-norm penalty, formulated as iteratively reweighted l1 or l2 minimization. The algorithms converge after a relatively small number of iterations. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
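
    The reweighting device at the heart of such schemes can be illustrated in a much simpler setting, without the dictionary learning or Bregman layers: plain iteratively reweighted least squares for an lp-penalized problem. Parameter values below are illustrative.

    ```python
    # Minimal IRLS sketch for min_x ||Ax - b||^2 + lam * sum_i |x_i|^p,
    # solved through a sequence of reweighted l2 problems.
    import numpy as np

    def irls_lp(A, b, p=0.5, lam=1e-2, eps=1e-8, iters=50):
        x = np.linalg.lstsq(A, b, rcond=None)[0]     # l2 initialization
        for _ in range(iters):
            w = (x**2 + eps) ** (p / 2 - 1)          # lp reweighting (constants absorbed)
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
        return x
    ```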

  5. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim R.

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
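
    Of the three segmentation approaches, the temperature-thresholding branch is the simplest to sketch. The code below is illustrative only (the threshold and the largest-blob heuristic are assumptions, not details of the USGS routine): it segments the hottest connected region and reports the image row of its upper margin as a relative lava level.

    ```python
    # Illustrative thresholding-based lake-margin tracker for one thermal frame.
    import numpy as np
    from scipy import ndimage

    def lake_level_row(thermal_frame, temp_threshold=200.0):
        hot = thermal_frame > temp_threshold          # hot-pixel mask
        labels, n = ndimage.label(hot)                # connected components
        if n == 0:
            return None
        sizes = ndimage.sum(hot, labels, index=range(1, n + 1))
        lake = labels == (1 + int(np.argmax(sizes)))  # keep the largest blob
        return int(np.flatnonzero(lake.any(axis=1)).min())  # top row of the lake
    ```

    As the abstract notes, such per-image pixel levels only become elevations after periodic calibration against the laser rangefinder measurements.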

  6. A noisy spring: the impact of globally rising underwater sound levels on fish.

    Science.gov (United States)

    Slabbekoorn, Hans; Bouton, Niels; van Opzeeland, Ilse; Coers, Aukje; ten Cate, Carel; Popper, Arthur N

    2010-07-01

    The underwater environment is filled with biotic and abiotic sounds, many of which can be important for the survival and reproduction of fish. Over the last century, human activities in and near the water have increasingly added artificial sounds to this environment. Very loud sounds of relatively short exposure, such as those produced during pile driving, can harm nearby fish. However, more moderate underwater noises of longer duration, such as those produced by vessels, could potentially impact much larger areas, and involve much larger numbers of fish. Here we call attention to the urgent need to study the role of sound in the lives of fish and to develop a better understanding of the ecological impact of anthropogenic noise. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. The robustness of two tomography reconstructing techniques with heavily noisy dynamical experimental data from a high speed gamma-ray tomograph

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Melo, Silvio de Barros; Dantas, Carlos Costa; Moreira, Icaro Malta; Johansen, Geira; Maad, Rachid

    2013-01-01

    The PSIRT (Particle Systems Iterative Reconstructive Technique) is, like ART, an iterative tomographic reconstruction technique, recommended for reconstructing the catalytic density distribution in FCC-type risers in the oil refining process. The PSIRT is based upon computer graphics' particle systems, where the reconstructing material is initially represented as composed of particles subject to a force field emanating from the beams, whose intensities are parameterized by the differences between the experimental readings of a given beam trajectory and the values corresponding to the current amount of particles landed in this trajectory. A dynamical process is set up as the beams' fields of attracting forces dispute the particles. At the end, with equilibrium established, the particles are replaced by the corresponding regions of pixels. The High Speed Gamma-ray Tomograph is a 5-source fan-beam device with a 17-detector deck per source, capable of producing up to a thousand complete sinograms per second. Around 70,000 experimental sinograms from this tomograph were produced by simulating the movement of gas bubbles at different angular speeds immersed in oil within the vessel, through the use of a two-hole polypropylene phantom. The sinogram frames were acquired with several different detector integration times. This article studies and compares the robustness of the ART and PSIRT methods in this heavily noisy scenario, where the noise comes not only from limitations in the dynamical sampling but also from the underlying apparatus that produces the counting in the tomograph. Visual inspection of the resulting images suggests that PSIRT is a more robust method than ART for noisy data, since it almost never presents globally scattered noise. (author)
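
    For reference, the ART baseline against which PSIRT is compared is essentially Kaczmarz's row-action scheme; a minimal dense-matrix sketch follows (the PSIRT particle dynamics themselves are not reproduced here, and the relaxation factor is an illustrative choice).

    ```python
    # Classical ART (Kaczmarz) reconstruction sketch.
    import numpy as np

    def art(A, p, iters=20, relax=0.5):
        """A: system matrix (rays x pixels); p: measured sinogram values."""
        x = np.zeros(A.shape[1])
        row_norms = (A**2).sum(axis=1)
        for _ in range(iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0.0:
                    continue
                r = p[i] - A[i] @ x                     # residual of ray i
                x += relax * (r / row_norms[i]) * A[i]  # project onto the ray's hyperplane
            np.clip(x, 0.0, None, out=x)                # densities are nonnegative
        return x
    ```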

  8. Restoring a smooth function from its noisy integrals

    Science.gov (United States)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
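
    The bin hierarchy method itself is more elaborate, but the basic constraint it enforces - that the fitted function's integrals over the bins reproduce the sampled data - can be sketched with a fixed-degree polynomial least-squares fit:

    ```python
    # Simplified stand-in (not the authors' bin hierarchy method): fit a
    # polynomial whose bin integrals best match the sampled integrals.
    import numpy as np

    def fit_from_bin_integrals(edges, integrals, degree=5):
        a, b = edges[:-1], edges[1:]
        k = np.arange(degree + 1)
        # M[j, k] = integral of t^k over bin j = (b^(k+1) - a^(k+1)) / (k+1)
        M = (b[:, None] ** (k + 1) - a[:, None] ** (k + 1)) / (k + 1)
        coef, *_ = np.linalg.lstsq(M, integrals, rcond=None)
        return np.polynomial.Polynomial(coef)  # smooth reconstruction
    ```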

  9. Single-level dynamic spiral CT of hepatocellular carcinoma: correlation between imaging features and tumor angiogenesis

    International Nuclear Information System (INIS)

    Chen Weixia; Min Pengqiu; Song Bin; Xiao Bangliang; Liu Yan; Wang Wendong; Chen Xian; Xu Jianying

    2001-01-01

    Objective: To investigate the correlation of the enhancement imaging features of hepatocellular carcinoma (HCC) and relevant parameters revealed by single-level dynamic spiral CT scanning with tumor microvessel counting (MVC). Methods: The study included 26 histopathologically proven HCC patients. Target-slice dynamic scanning and portal venous phase scanning were performed for all patients. Time-density curves were generated with measurement of relevant parameters, including peak value (PV) and contrast enhancement ratio (CER), and the gross enhancement morphology was analyzed. Histopathological slides were prepared for standard F8RA and VEGF immunohistochemical staining, tumor microvessel counting, and calculation of the VEGF expression percentage of tumor cells. The enhancement imaging features of HCC lesions were correlated with tumor MVC and VEGF expression. Results: Peak values of HCC lesions were 7.9 to 75.2 HU, CER was 3.8% to 36.0%, MVC was 6 to 91, and the VEGF expression percentage was 32.1% to 78.3%. PV and CER were significantly correlated with tumor tissue MVC (r = 0.508 and 0.423, P < 0.01 and P < 0.05, respectively). Neither PV nor CER correlated with the VEGF expression percentage. Both the pattern of the time-density curve and the gross enhancement morphology of HCC lesions were also correlated with tumor MVC, and reflected the distribution characteristics of tumor microvessels within HCC lesions. A close association was found between the likelihood of intrahepatic metastasis of HCC lesions with densely enhanced pseudocapsules and the presence of rich tumor microvessels within these pseudocapsules. Conclusion: The parameters and the enhancement imaging features of HCC lesions on target-slice dynamic scanning are correlated with tumor MVC, and can reflect the distribution characteristics of tumor microvessels within HCC lesions. Dynamic spiral CT scanning is a valuable means to assess the angiogenic activity and

  10. Determining the Level of the Dural Sac Tip: Magnetic Resonance Imaging in an Adult Population

    International Nuclear Information System (INIS)

    Binokay, F.; Akgul, E.; Bicakci, K.; Soyupak, S.; Aksungur, E.; Sertdemir, Y.

    2006-01-01

    Purpose: To determine the variation in the location of the dural sac (DS) tip in a living adult population and to correlate this position with age and sex. Material and Methods: T2-weighted, midline, sagittal, spin-echo magnetic resonance imaging (MRI) studies of 743 patients were assessed to identify the tip of the DS. This location was recorded in relation to the upper, middle, or lower third of the adjacent vertebral body or the adjacent intervertebral disk. Results: The frequency distribution of the levels of termination of the DS on MRI demonstrated that the end of the DS was usually located at the upper one-third of S2 (25.2%). The mean level in females was also the upper one-third of S2 (26.5%), and in males the lower one-third of S2 (24.1%). The overall mean DS position was at the upper one-third of S2. No significant differences in DS position were seen between male and female patients or with increasing age. Conclusion: It is important to know the possible range of the termination level of the DS when performing caudal anesthesia and craniospinal irradiation in some clinical situations. The distribution of DS location in a large adult population was shown to range from the L5-S1 intervertebral disk to the upper third of the S3 vertebra.

  11. [Adaptation of self-image level and defense mechanisms in elderly patients with complicated stoma].

    Science.gov (United States)

    Ortiz-Rivas, Miriam Karina; Moreno-Pérez, Norma Elvira; Vega-Macías, Héctor Daniel; Jiménez-González, María de Jesús; Navarro-Elías, María de Guadalupe

    2014-01-01

    Ostomy patients face a number of problems that impact negatively on their personal welfare. The aim of this research was to determine the nature and intensity of the relationship between the level of adaptation in the self-concept mode and the consistent use of coping strategies in older adults with a stoma. The study was quantitative, correlational and cross-sectional. The VIVEROS 03 and CAPS surveys were administered in 3 hospitals in the city of Durango, México. The study included 90 older adults with an intestinal elimination stoma with complications. Kendall's tau-b coefficient was the non-parametric test used to measure the association. Most of the older adults analyzed (between 61.3% and 79.9%) were not completely adapted to the condition of living with an intestinal stoma. There was also a moderate positive correlation (0.569) between the level of adaptation of the older adults with a stoma and the conscious use of coping strategies. The presence of an intestinal stoma represents a physical and psychological health problem that is reflected in the level of adaptation of the self-image. Elderly people with a stoma use only a small part of the available defense mechanisms as part of the coping process. This limits their ability to face the adversities related to their condition, potentially causing major health complications. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  12. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    Science.gov (United States)

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, the noise power spectrum (NPS), the contrast-to-noise ratio (CNR) and the modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. With increasing blending level of reconstruction, noise decreased non-linearly by up to 50% and CNR increased by up to 100%. ASIR was also shown to modify the shape of the NPS curve. The MTF of ASIR-reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP in terms of spatial resolution, for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
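
    Two of the quality indices used here are straightforward to compute from phantom regions of interest; the sketch below uses illustrative ROI coordinates (NPS and MTF estimation are more involved and omitted):

    ```python
    # Noise and CNR from user-chosen ROIs; the slice coordinates are illustrative.
    import numpy as np

    def noise_std(image, bg):
        return image[bg].std()                   # noise: std in a uniform region

    def cnr(image, obj, bg):
        return abs(image[obj].mean() - image[bg].mean()) / image[bg].std()

    # hypothetical usage on a Catphan slice:
    # bg, obj = np.s_[10:40, 10:40], np.s_[120:140, 120:140]
    # print(noise_std(ct_slice, bg), cnr(ct_slice, obj, bg))
    ```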

  13. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
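
    A minimal sketch of the web-service approach is shown below, assuming a raw disk image and The Sleuth Kit's pytsk3 bindings (the project described also uses libewf for EWF images; the route and image path are hypothetical):

    ```python
    # Hypothetical Flask endpoint listing directory contents inside a disk
    # image without mounting it; IMAGE_PATH and the route are placeholders.
    import pytsk3
    from flask import Flask, jsonify

    app = Flask(__name__)
    IMAGE_PATH = "/data/images/sample.dd"  # raw (dd) disk image

    @app.route("/list/<path:dirpath>")
    def list_dir(dirpath):
        img = pytsk3.Img_Info(IMAGE_PATH)
        fs = pytsk3.FS_Info(img)               # parse the filesystem in place
        entries = []
        for f in fs.open_dir(path="/" + dirpath):
            name = f.info.name.name.decode("utf-8", "replace")
            if name not in (".", ".."):
                entries.append(name)
        return jsonify(entries)

    if __name__ == "__main__":
        app.run()
    ```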

  14. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, an overall 36-72 times dose reduction is estimated to have been achieved with this fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
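
    The optimization model (data fidelity plus total variation, solved with forward-backward splitting) can be sketched on the CPU as follows. Here `A` and `At` stand for a forward projector and its adjoint, which are assumed to be supplied, and the TV proximal step is approximated by an off-the-shelf TV denoiser; step sizes and weights are illustrative.

    ```python
    # Simplified forward-backward splitting sketch for TV-regularized
    # reconstruction (not the paper's GPU implementation).
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def fbs_tv(A, At, b, shape, step=1e-3, tv_weight=0.05, iters=100):
        x = np.zeros(shape)
        for _ in range(iters):
            x = x - step * At(A(x) - b)                    # forward (fidelity) step
            x = denoise_tv_chambolle(x, weight=tv_weight)  # backward (TV prox) step
        return x
    ```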

  15. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, an overall 36-72 times dose reduction is estimated to have been achieved with this fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  16. Integrable equation of state for noisy cosmic string

    International Nuclear Information System (INIS)

    Carter, B.

    1990-01-01

    It is argued that, independently of the detailed (thermal or more general) noise spectrum of the microscopic extrinsic excitations that can be expected on an ordinary cosmic string, their effect can be taken into account at a macroscopic level by replacing the standard isotropic Goto-Nambu-type string model by the nondegenerate string model characterized by an equation of state of the nondispersive "fixed determinant" type, with the effective surface stress-energy tensor satisfying $(T^{\nu}{}_{\nu})^{2} - T^{\mu}{}_{\nu} T^{\nu}{}_{\mu} = 2T_{0}^{2}$, where $T_{0}$ is a constant representing the null-state limit of the string tension $T$, whose product with the energy density $U$ of the string is thereby held fixed: $TU = T_{0}^{2}$. It is shown that this equation of state has the special property of giving rise (in a flat background) to explicitly integrable dynamical equations.
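
    The "fixed determinant" label follows from the standard two-dimensional identity relating trace and determinant. For a worldsheet stress-energy tensor whose eigenvalues are (up to sign conventions) the energy density $U$ and the tension $T$, a short check gives:

    ```latex
    % 2D identity: trace squared minus the trace of the square equals twice
    % the determinant, applied to the surface stress-energy tensor.
    \[
      \bigl(T^{\nu}{}_{\nu}\bigr)^{2} - T^{\mu}{}_{\nu}\,T^{\nu}{}_{\mu} = 2\,UT ,
    \]
    % so the stated equation of state is exactly the fixed-determinant condition
    \[
      UT = T_{0}^{2}.
    \]
    ```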

  17. Cost-Efficient Wafer-Level Capping for MEMS and Imaging Sensors by Adhesive Wafer Bonding

    Directory of Open Access Journals (Sweden)

    Simon J. Bleiker

    2016-10-01

    Full Text Available Device encapsulation and packaging often constitutes a substantial part of the fabrication cost of micro electro-mechanical systems (MEMS) transducers and imaging sensor devices. In this paper, we propose a simple and cost-effective wafer-level capping method that utilizes a limited number of highly standardized process steps as well as low-cost materials. The proposed capping process is based on low-temperature adhesive wafer bonding, which ensures full complementary metal-oxide-semiconductor (CMOS) compatibility. All necessary fabrication steps for the wafer bonding, such as cavity formation and deposition of the adhesive, are performed on the capping substrate. The polymer adhesive is deposited by spray-coating on the capping wafer containing the cavities. Thus, no lithographic patterning of the polymer adhesive is needed, and material waste is minimized. Furthermore, this process does not require any additional fabrication steps on the device wafer, which lowers the process complexity and fabrication costs. We demonstrate the proposed capping method by packaging two different MEMS devices. The two MEMS devices include a vibration sensor and an acceleration switch, which employ two different electrical interconnection schemes. The experimental results show wafer-level capping with excellent bond quality due to the re-flow behavior of the polymer adhesive. No impediment to the functionality of the MEMS devices was observed, which indicates that the encapsulation does not introduce significant tensile or compressive stresses. Thus, we present a highly versatile, robust, and cost-efficient capping method for components such as MEMS and imaging sensors.

  18. Correlation between imaging findings and autoantibody levels and prognosis in patients with neuropsychiatric systemic lupus erythematosus

    International Nuclear Information System (INIS)

    Zhou Guangyu; Han Xuemei; Jin Ling

    2013-01-01

    Objective: To investigate the correlation between cranial magnetic resonance imaging (MRI) findings and autoantibody levels in patients with neuropsychiatric systemic lupus erythematosus (NPSLE), and to elucidate the role of MRI findings in predicting the prognosis of NPSLE. Methods: In total, 36 well-documented cases with a definite diagnosis of NPSLE were selected. The patients were divided into a survival group (n=27) and a deceased group (n=9). Anti-nuclear antibodies and anti-dsDNA antibodies in serum were detected by indirect immunofluorescence and the colloidal gold spot penetration method, respectively. Immunoblotting was used to detect anti-Sm, anti-SSA, anti-SSB, anti-U1-RNP, and anti-ribosomal P protein antibodies. Anti-cardiolipin antibody (ACL) was detected by ELISA. The correlation between MRI findings of cerebral lesions and autoantibodies, and prognosis, was analyzed. Results: Cranial MRI scans on admission were abnormal in 32 patients (88.9%), among whom 21 cases showed diffuse manifestations, 10 cases showed focal lesions in the brain, and 1 case showed brain atrophy. Diffuse lesions on MRI showed multiple spotty or patchy normal-intensity signals on T1-weighted images (T1WI) and high-intensity signal changes on T2-weighted images (T2WI). Focal lesions showed a single spotty or patchy normal-intensity signal on T1WI and high-intensity signal changes on T2WI. The lesion sites included the basal ganglia, subcortical white matter, anterior and posterior horns of the lateral ventricles, centrum semiovale, cerebral cortex, brainstem and cerebellum. Of the 9 patients in the deceased group, 6 presented with focal lesions, and the percentage of focal-lesion cases in the deceased group was significantly higher than that in the survival group (P<0.01). The percentages of lesions in the brainstem and basal ganglia in the deceased group were 11.5% and 26.9%, respectively, which were significantly higher than those in the survival group (P<0.05 or P<0.01). The positive rate of ACL in cases

  19. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720 (United States)]

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is given in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
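
    The bit-partitioning idea can be shown with a toy search; the two model functions below are illustrative placeholders, not the paper's rate-distortion or channel models.

    ```python
    # Toy joint source/channel bit allocation: pick the split of a fixed bit
    # budget that minimizes expected distortion under made-up models.
    def expected_distortion(src_bits, chan_bits, p_loss=0.1):
        d_source = 2.0 ** (-0.1 * src_bits)           # toy rate-distortion curve
        p_fail = p_loss * 2.0 ** (-0.05 * chan_bits)  # toy residual channel error
        return (1.0 - p_fail) * d_source + p_fail     # failure costs full distortion

    def best_split(total_bits):
        splits = ((s, total_bits - s) for s in range(total_bits + 1))
        return min(splits, key=lambda sc: expected_distortion(*sc))

    print(best_split(256))  # -> (source_bits, channel_bits)
    ```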

  20. Security of modified Ping-Pong protocol in noisy and lossy channel.

    Science.gov (United States)

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-05-12

    The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.

  1. Advanced topics in control and estimation of state-multiplicative noisy systems

    CERN Document Server

    Gershon, Eli

    2013-01-01

    Advanced Topics in Control and Estimation of State-Multiplicative Noisy Systems begins with an introduction and extensive literature survey. The text proceeds to cover solutions of measurement-feedback control and state problems and the formulation of the Bounded Real Lemma for both continuous- and discrete-time systems. The continuous-time reduced-order and stochastic-tracking control problems for delayed systems are then treated. Ideas of nonlinear stability are introduced for infinite-horizon systems, again in both the continuous- and discrete-time cases. The reader is introduced to six practical examples of noisy state-multiplicative control and filtering associated with various fields of control engineering. The book is rounded out by a three-part appendix containing stochastic tools necessary for a proper appreciation of the text: a basic introduction to nonlinear stochastic differential equations and aspects of switched systems and peak-to-peak optimal control and filtering. Advanced Topics in Contr...

  2. Noisy transcription factor NF-κB oscillations stabilize and sensitize cytokine signaling in space

    DEFF Research Database (Denmark)

    Gangstad, S.W.; Feldager, C.W.; Juul, Jeppe Søgaard

    2013-01-01

    NF-κB is a major transcription factor mediating inflammatory response. In response to a pro-inflammatory stimulus, it exhibits a characteristic response - a pulse followed by noisy oscillations in concentrations of considerably smaller amplitude. NF-κB is an important mediator of cellular... amplitude has not been addressed. We use a cellular automaton model to address these issues in the context of spatially distributed communicating cells. We find that noisy secondary oscillations stabilize concentric wave patterns, thus improving signal quality. Furthermore, both lower secondary amplitude... as well as noise in the oscillation period might be working against chronic inflammation, the state of self-sustained and stimulus-independent excitations. Our findings suggest that the characteristic irregular secondary oscillations of lower amplitude are not accidental. On the contrary, they might have...

  3. Complex Lyapunov exponents from short and noisy sets of data. Application to stability analysis of BWRs

    International Nuclear Information System (INIS)

    Verdu, G.; Ginestar, D.; Bovea, M.D.; Jimenez, P.; Pena, J.; Munoz-Cobo, J.L.

    1997-01-01

    Dynamics reconstruction techniques have been applied to systems such as BWRs, whose signals contain a large amount of noise. The success of this methodology was limited by the noise in the signals. Recently, new techniques have been introduced for short and noisy data sets, based on a global fit of the signal by means of orthonormal polynomials. In this paper, we revisit these ideas in order to adapt them to the analysis of neutronic power signals for characterizing the stability regime of BWR reactors. To check the performance of the methodology, we have analyzed simulated noisy signals, observing that the method works well even with a large amount of noise. We have also analyzed experimental signals from the Ringhals 1 BWR. In this case, the reconstructed phase space for the system is not very good, so a modal decomposition treatment of the signals is proposed, producing signals with better behaviour. (author)

  4. Smartphone-Based Hearing Screening in Noisy Environments

    Directory of Open Access Journals (Sweden)

    Youngmin Na

    2014-06-01

    Full Text Available It is important and recommended to detect hearing loss as soon as possible. If it is found early, proper treatment may help improve hearing and reduce the negative consequences of hearing loss. In this study, we developed smartphone-based hearing screening methods that can test hearing ubiquitously. However, environmental noise generally results in a loss of ear sensitivity, which causes a hearing threshold shift (HTS). To overcome this limitation on the screening location, we developed a correction algorithm to reduce the HTS effect. The built-in microphone and headphone were calibrated to provide standard units of measure. The HTSs in the presence of either white or babble noise were systematically investigated to determine the mean HTS as a function of noise level. When the hearing screening application runs, the smartphone automatically measures the environmental noise and provides the HTS value to correct the hearing threshold. A comparison with pure tone audiometry shows that this hearing screening method in the presence of noise can closely estimate the hearing threshold. We expect that the proposed ubiquitous hearing test method could be used as a simple hearing screening tool and could alert users if they suffer from hearing loss.
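
    The correction step itself is simple to sketch; the table values below are invented for illustration and would come from the calibration measurements described in the abstract.

    ```python
    # Hypothetical HTS correction: interpolate the mean threshold shift for the
    # measured ambient level and subtract it from the raw hearing threshold.
    import numpy as np

    NOISE_DB = np.array([30, 40, 50, 60, 70])  # ambient noise level (dB)
    MEAN_HTS = np.array([0, 2, 5, 10, 17])     # made-up mean threshold shifts (dB)

    def corrected_threshold(raw_threshold_db, ambient_db):
        hts = np.interp(ambient_db, NOISE_DB, MEAN_HTS)
        return raw_threshold_db - hts          # remove the noise-induced shift
    ```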

  5. Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2016-08-01

    Full Text Available This paper develops a method to improve the estimation of jump variation using high frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure with detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points where the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and taking the difference between the two averages: for each jump location detected in Step 1, we compute one average from the observed noisy price processes before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to $O_P(n^{-4/9})$, which is better than the convergence rate $O_P(n^{-1/4})$ for the procedure based on the original noisy process, where $n$ is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price processes and the average realized volatility processes.
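
    The two-step procedure can be sketched with Haar-type coefficients (differences of local averages); the window length and threshold rule are illustrative choices, and adjacent detections around one jump are not merged in this sketch.

    ```python
    # Detect jump locations by thresholded Haar-like coefficients, then
    # estimate each jump size as the difference of one-sided averages.
    import numpy as np

    def detect_and_estimate_jumps(prices, win=20, k=4.0):
        n = len(prices)
        coef = np.array([prices[t:t + win].mean() - prices[t - win:t].mean()
                         for t in range(win, n - win)])  # Haar-like coefficients
        thr = k * np.median(np.abs(coef)) / 0.6745       # robust threshold
        jumps = []
        for i in np.flatnonzero(np.abs(coef) > thr):
            loc = i + win
            size = prices[loc:loc + win].mean() - prices[loc - win:loc].mean()
            jumps.append((loc, size))                    # (index, estimated jump)
        return jumps
    ```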

  6. Pitch Tracking and Voiced/Unvoiced Detection in Noisy Environment using Optimal Sequence Estimation

    OpenAIRE

    Wasserblat, Moshe; Gainza, Mikel; Dorran, David; Domb, Yuval

    2008-01-01

    This paper addresses the problem of pitch tracking and voiced/unvoiced detection in noisy speech environments. An algorithm is presented which uses a number of variable thresholds to track the pitch contour with minimal error. This is achieved by modeling the pitch tracking problem in a way that allows the use of optimal estimation methods, such as MLSE. The performance of the algorithm is evaluated using the Keele pitch detection database with realistic background noise. Results show best perf...

  7. Multifaceted Effects of Noisy Galvanic Vestibular Stimulation on Manual Tracking Behavior in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Soojin Lee

    2015-02-01

    Full Text Available Parkinson’s disease (PD) is a neurodegenerative movement disorder that is characterized clinically by slowness of movement, rigidity, tremor, postural instability, and often cognitive impairments. Recent studies have demonstrated altered cortico-basal ganglia rhythms in PD, which raises the possibility of a role for non-invasive stimulation therapies such as noisy galvanic vestibular stimulation (GVS). We applied noisy GVS to 12 mild-moderately affected PD subjects (Hoehn & Yahr 1.5-2.5) off medication while they performed a sinusoidal visuomotor joystick tracking task, which alternated between 2 task conditions depending on whether the displayed cursor position underestimated the actual error by 30% (‘Better’) or overestimated it by 200% (‘Worse’). Either sham or subthreshold noisy GVS (0.1-10 Hz, 1/f-type power spectrum) was applied in pseudorandom order. We used exploratory (linear discriminant analysis with bootstrapping) and confirmatory (robust multivariate linear regression) methods to determine if the presence of GVS significantly affected our ability to predict cursor position based on target variables. Variables related to displayed error were robustly seen to discriminate GVS in all subjects, particularly in the Worse condition. If we considered higher frequency components of the cursor trajectory as noise, the signal-to-noise ratio of the cursor trajectory was significantly increased during the GVS stimulation. The results suggest that noisy GVS influenced the motor performance of the PD subjects, and we speculate that the effects were elicited through a combination of mechanisms: enhanced cingulate activity resulting in modulation of frontal midline theta rhythms, improved signal processing in the neuromotor system via stochastic facilitation, and/or enhanced vigor known to be deficient in PD subjects. Further work is required to determine if GVS has a selective effect on corrective submovements that could not be detected by the current analyses.

  8. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment

    OpenAIRE

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2012-01-01

    Background In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether "flashlight" guidance influences CPR performance in a simulated noisy setting. Materials and methods We recruited 30 senior medical students with...

  9. Model-free prediction of noisy chaotic time series by deep learning

    OpenAIRE

    Yeo, Kyongmin

    2017-01-01

    We present a deep neural network for a model-free prediction of a chaotic dynamical system from noisy observations. The proposed deep learning model aims to predict the conditional probability distribution of a state variable. The Long Short-Term Memory network (LSTM) is employed to model the nonlinear dynamics and a softmax layer is used to approximate a probability distribution. The LSTM model is trained by minimizing a regularized cross-entropy function. The LSTM model is validated against...
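
    The architecture described (an LSTM over past observations with a softmax head over discretized states) can be sketched in a few lines; sizes are illustrative and the original model's hyperparameters are not reproduced.

    ```python
    # Minimal PyTorch sketch of an LSTM + softmax forecaster for a noisy
    # chaotic series, trained with cross-entropy as described.
    import torch
    import torch.nn as nn

    class NoisyChaosForecaster(nn.Module):
        def __init__(self, n_bins=100, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_bins)  # logits over discretized states

        def forward(self, x):                      # x: (batch, time, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])           # distribution of the next value

    model = NoisyChaosForecaster()
    loss_fn = nn.CrossEntropyLoss()                # regularization terms omitted
    ```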

  10. Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

    OpenAIRE

    Bhargava, Preeti; Spasojevic, Nemanja; Hu, Guoning

    2017-01-01

    In this paper, we describe the Lithium Natural Language Processing (NLP) system - a resource-constrained, high-throughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and acad...

  11. An upper bound for codes for the noisy two-access binary adder channel

    NARCIS (Netherlands)

    Tilborg, van H.C.A.

    1986-01-01

    Using earlier methods, a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C,D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_1 = R_2 \leq \frac{3}{2} + e \log_2 e - (\frac{1}{2} + e) \log_2 (1 + 2e) = \frac{1}{2} - e + \dots$

  12. Multifaceted effects of noisy galvanic vestibular stimulation on manual tracking behavior in Parkinson’s disease

    Science.gov (United States)

    Lee, Soojin; Kim, Diana J.; Svenkeson, Daniel; Parras, Gabriel; Oishi, Meeko Mitsuko K.; McKeown, Martin J.

    2015-01-01

    Parkinson’s disease (PD) is a neurodegenerative movement disorder that is characterized clinically by slowness of movement, rigidity, tremor, postural instability, and often cognitive impairments. Recent studies have demonstrated altered cortico-basal ganglia rhythms in PD, which raises the possibility of a role for non-invasive stimulation therapies such as noisy galvanic vestibular stimulation (GVS). We applied noisy GVS to 12 mild-moderately affected PD subjects (Hoehn and Yahr 1.5–2.5) off medication while they performed a sinusoidal visuomotor joystick tracking task, which alternated between 2 task conditions depending on whether the displayed cursor position underestimated the actual error by 30% (‘Better’) or overestimated by 200% (‘Worse’). Either sham or subthreshold noisy GVS (0.1–10 Hz, 1/f-type power spectrum) was applied in pseudorandom order. We used exploratory (linear discriminant analysis with bootstrapping) and confirmatory (robust multivariate linear regression) methods to determine if the presence of GVS significantly affected our ability to predict cursor position based on target variables. Variables related to displayed error were robustly seen to discriminate GVS in all subjects, particularly in the Worse condition. If we considered higher frequency components of the cursor trajectory as “noise,” the signal-to-noise ratio of the cursor trajectory was significantly increased during the GVS stimulation. The results suggest that noisy GVS influenced the motor performance of the PD subjects, and we speculate that the effects were elicited through a combination of mechanisms: enhanced cingulate activity resulting in modulation of frontal midline theta rhythms, improved signal processing in the neuromotor system via stochastic facilitation, and/or enhanced “vigor” known to be deficient in PD subjects. Further work is required to determine if GVS has a selective effect on corrective submovements that could not be detected by the current analyses.

  13. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan

    2012-08-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea of this method is to interpret a signal as the potential of a Schrödinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in the discrete case, considering noisy signals. © 2012 IEEE.
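
    The core construction is easy to sketch: discretize a Schrödinger operator with the signal as (negative) potential and collect the negative eigenvalues. The grid spacing and the semi-classical parameter h below are illustrative choices.

    ```python
    # Sketch of the signal-as-potential idea: eigenvalues of a discrete 1D
    # Schrodinger operator H = -h^2 d^2/dx^2 - y(x).
    import numpy as np

    def schrodinger_spectrum(signal, h=0.1):
        n = len(signal)
        dx = 1.0 / n
        lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
               + np.diag(np.ones(n - 1), -1)) / dx**2   # discrete Laplacian
        H = -h**2 * lap - np.diag(signal)               # signal enters as potential
        eigvals = np.linalg.eigvalsh(H)
        return eigvals[eigvals < 0]                     # discrete (bound-state) spectrum
    ```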

  14. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan; Laleg-Kirati, Taous-Meriem

    2012-01-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea of this method is to interpret a signal as the potential of a Schrödinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in the discrete case, considering noisy signals. © 2012 IEEE.

  15. Indicators to assess the environmental performances of an innovative subway station: example of Noisy-Champs

    Science.gov (United States)

    Schertzer, D. J. M.; Charbonnier, L.; Versini, P. A.; Tchiguirinskaia, I.

    2017-12-01

    Noisy-Champs is a train station located in Noisy-le-Grand and Champs-sur-Marne, in the Paris urban area (France). Integrated into the Grand Paris Express project (a huge development project to modernise the transport network around Paris), this station is going to be radically transformed and become a major hub. Designed by the architectural office Duthilleul, the new Noisy-Champs station aspires to be an example of an innovative and sustainable infrastructure. Its architectural precepts are indeed meant to improve its environmental performances, especially those related to storm water management, water consumption and users' thermal and hygrometric comfort. In order to assess and monitor these performances, objectives and associated indicators have been developed. They aim to be adapted to a specific infrastructure such as a public transport station. Analyses of pre-existing comfort simulations, blueprints and regulatory documents have led to the identification of the main issues for the Noisy-Champs station, focusing on its resilience to extreme events like droughts, heatwaves and heavy rainfall. Both objectives and indicators have been proposed by studying the space-time variability of physical fluxes (heat, pollutants, radiation, wind and water) and passenger flows, and their interactions. Each indicator is linked to an environmental performance and has been determined after consultation with the different stakeholders involved in the rebuilding of the station. The result is a monitoring program to assess the environmental performances of the station, composed of both the indicator grid with its related objectives and a measurement program detailing the nature and location of sensors and the frequency of measurements.

  16. A virtual speaker in noisy classroom conditions: supporting or disrupting children's listening comprehension?

    Science.gov (United States)

    Nirme, Jens; Haake, Magnus; Lyberg Åhlander, Viveka; Brännström, Jonas; Sahlén, Birgitta

    2018-04-05

    Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is more sparse. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in a noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function only predicted more correct answers in quiet settings. Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations for our results and contribute to understanding the listening conditions children face in a typical classroom.

  17. Associative memory for online learning in noisy environments using self-organizing incremental neural network.

    Science.gov (United States)

    Sudo, Akihito; Sato, Akihiro; Hasegawa, Osamu

    2009-06-01

    Associative memory operating in a real environment must perform well in online incremental learning and be robust to noisy data because noisy associative patterns are presented sequentially in a real environment. We propose a novel associative memory that satisfies these requirements. Using the proposed method, new associative pairs that are presented sequentially can be learned accurately without forgetting previously learned patterns. The memory size of the proposed method increases adaptively with learning patterns. Therefore, it suffers neither redundancy nor insufficiency of memory size, even in an environment in which the maximum number of associative pairs to be presented is unknown before learning. Noisy inputs in real environments are classifiable into two types: noise-added original patterns and faultily presented random patterns. The proposed method deals with two types of noise. To our knowledge, no conventional associative memory addresses noise of both types. The proposed associative memory performs as a bidirectional one-to-many or many-to-one associative memory and deals not only with bipolar data, but also with real-valued data. Results demonstrate that the proposed method's features are important for application to an intelligent robot operating in a real environment. The originality of our work consists of two points: employing a growing self-organizing network for an associative memory, and discussing what features are necessary for an associative memory for an intelligent robot and proposing an associative memory that satisfies those requirements.

  18. Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform.

    Science.gov (United States)

    Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan

    2016-01-01

    Inspired by track-before-detect technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT achieves the most concentrated energy ridge in the time-frequency plane for the target component. The component can then be separated in the polynomial chirping Fourier domain with a narrow-band filter and reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method performs better in component extraction from noisy multicomponent signals and provides more time-frequency detail about the analyzed signal than conventional methods.
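
    The transform's core step - dechirping by a candidate polynomial frequency law, then taking an FFT so that energy along the matching chirping curve collapses into a narrow band - can be sketched as follows. The PSO parameter search and the filtering/recovery iteration are omitted; `poly_coeffs` holds the instantaneous-frequency polynomial coefficients, highest order first.

    ```python
    # Sketch of a polynomial chirping Fourier transform core and a simple
    # energy-concentration score for rating candidate parameters.
    import numpy as np

    def pcft(signal, poly_coeffs, fs=1.0):
        t = np.arange(len(signal)) / fs
        phase = np.polyval(np.polyint(poly_coeffs), t)   # integrate frequency law
        return np.fft.fft(signal * np.exp(-2j * np.pi * phase))

    def concentration(spectrum):
        p = np.abs(spectrum) ** 2
        p /= p.sum()
        return (p**2).sum()   # larger = more concentrated energy ridge
    ```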

  19. Homocysteine plasma levels in patients with suspected coronary artery disease: Relation to myocardial perfusion imaging

    International Nuclear Information System (INIS)

    Yao, Z.Y.; He, Q.; Qu, W.

    2002-01-01

    Purpose: Although there is considerable epidemiologic evidence for a relationship between plasma homocysteine (Hcy) levels and coronary artery disease (CAD), not all studies, especially prospective ones, have shown such a relationship. The purpose of this study was to investigate a possible association between Hcy plasma levels and myocardial perfusion defects on SPECT in patients with suspected CAD. Methods and Materials: A cohort of 238 patients with suspected CAD (age: 60.65±10.43; male to female ratio: 172:66) was examined for Hcy, tetrahydrofolic acid (FH4) and vitamin B12 levels and underwent coronary angiography (CAG). Furthermore, 42 patients also underwent 99mTc-MIBI myocardial perfusion imaging (MPI) to assess myocardial perfusion. Results: There were 69 patients with normal CAG, and 63, 60, 42 and 4 patients with one-vessel, two-vessel, three-vessel and left main coronary stenosis, respectively. The plasma Hcy of this group was significantly increased (p < 0.05). Among patients with ≥3 segments of myocardial perfusion defect, 10 had normal Hcy and 7 had hyperhomocysteinemia (p > 0.05). Conclusion: Our data may indicate that hyperhomocysteinemia represents an independent risk factor in patients with a high probability of CAD rather than a marker of myocardial ischemia or coronary stenosis.

  20. Facilitation of listening comprehension by visual information under noisy listening condition

    Science.gov (United States)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence was measured under a wide range of delay conditions between auditory and visual stimuli, in an environment with low auditory clarity (pink noise at levels of -10 dB and -15 dB). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less; that the image was not helpful for comprehension when the delay was 8 frames (= 264 ms) or more; and that in some cases at the largest delay (32 frames), the video image interfered with comprehension.

  1. Precise Automatic Image Coregistration Tools to Enable Pixel-Level Change Detection, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Automated detection of land cover changes between multitemporal images (i.e., images captured at different times) has long been a goal of the remote sensing...

  2. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability

  3. Comparative study between the radiopacity levels of high viscosity and of flowable composite resins, using digital imaging.

    Science.gov (United States)

    Arita, Emiko S; Silveira, Gilson P; Cortes, Arthur R; Brucoli, Henrique C

    2012-01-01

    The development of countless types and brands of high viscosity and flowable composite resins, with different physical and chemical properties applicable to their broad use in dental clinics, calls for further studies regarding their radiopacity levels. The aim of this study was to evaluate the radiopacity levels of high viscosity and flowable composite resins using digital imaging. Ninety-six composite resin discs, 5 mm in diameter and 3 mm thick, were radiographed and analyzed. The image acquisition system used was the Digora® Phosphor Storage System and the images were analyzed with the Digora software for Windows. The exposure conditions were 70 kVp, 8 mA, and 0.2 s, and the focal distance was 40 cm. Image densities were obtained from the pixel values of the materials in the digital images. Most of the high viscosity composite resins presented higher radiopacity levels than the flowable composite resins, with statistically significant differences between the brands and groups analyzed (P < 0.05). Among the high viscosity composite resins, Tetric®Ceram presented the highest radiopacity levels and Glacier® presented the lowest. Among the flowable composite resins, Tetric®Flow presented the highest radiopacity levels and Wave® presented the lowest.

  4. Characterization of mammographic masses based on level set segmentation with new image features and patient information

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan Heangping; Ge Jun; Hadjiiski, Lubomir; Helvie, Mark A.; Nees, Alexis; Wu Yita; Wei Jun; Zhou Chuan; Zhang Yiheng; Cui Jing

    2008-01-01

    Computer-aided diagnosis (CAD) for characterization of mammographic masses as malignant or benign has the potential to assist radiologists in reducing the biopsy rate without increasing false negatives. The purpose of this study was to develop an automated method for mammographic mass segmentation and explore new image based features in combination with patient information in order to improve the performance of mass characterization. The authors' previous CAD system, which used the active contour segmentation, and morphological, textural, and spiculation features, has achieved promising results in mass characterization. The new CAD system is based on the level set method and includes two new types of image features related to the presence of microcalcifications within the mass and abruptness of the mass margin, and patient age. A linear discriminant analysis (LDA) classifier with stepwise feature selection was used to merge the extracted features into a classification score. The classification accuracy was evaluated using the area under the receiver operating characteristic curve. The authors' primary data set consisted of 427 biopsy-proven masses (200 malignant and 227 benign) in 909 regions of interest (ROIs) (451 malignant and 458 benign) from multiple mammographic views. Leave-one-case-out resampling was used for training and testing. The new CAD system based on the level set segmentation and the new mammographic feature space achieved a view-based Az value of 0.83±0.01. The improvement compared to the previous CAD system was statistically significant (p=0.02). When patient age was included in the new CAD system, view-based and case-based Az values were 0.85±0.01 and 0.87±0.02, respectively. The study also demonstrated the consistency of the newly developed CAD system by evaluating the statistics of the weights of the LDA classifiers in leave-one-case-out classification. Finally, an independent test was performed on the publicly available Digital Database for Screening Mammography (DDSM).

  5. Numeric ultrasonic image processing method: application to non-destructive testing of stainless austenitic steel welds

    International Nuclear Information System (INIS)

    Corneloup, G.

    1988-09-01

    A bibliographic survey of the means used to improve the ultrasonic inspection of heterogeneous materials such as austenitic stainless steel welds showed that assembling the signals into a (space, time) image offers an original solution for fault detection in highly noisy environments. A grey-level ultrasonic image processing detection method is proposed, based on the search for a certain determinism in the way the ultrasonic image evolves in space and time in the presence of a defect: the first criterion examines the horizontal stability of the gradients in the image, and the second takes into account the time-transient nature of the defect echo. A very substantial rise in the signal-to-noise ratio is demonstrated for weld inspections with real and artificial defects, using a computerized ultrasonic image processing/management system developed for this application. [fr]

  6. Noisy Neurons

    Indian Academy of Sciences (India)

    IAS Admin

    Hodgkin–Huxley Model and Stochastic Variants. Shruti Paranjape.

  7. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  8. Increased level of extracellular ATP at tumor sites: in vivo imaging with plasma membrane luciferase.

    Directory of Open Access Journals (Sweden)

    Patrizia Pellegatti

    2008-07-01

    There is growing awareness that tumour cells build up a "self-advantageous" microenvironment that reduces the effectiveness of the anti-tumour immune response. While many different immunosuppressive mechanisms are likely to come into play, recent evidence suggests that extracellular adenosine acting at A2A receptors may have a major role in down-modulating the immune response, as cancerous tissues contain elevated levels of adenosine and adenosine break-down products. While there is no doubt that all cells possess plasma membrane adenosine transporters that mediate adenosine uptake and may also allow its release, it is now clear that most extracellularly-generated adenosine originates from the catabolism of extracellular ATP. Measurement of extracellular ATP is generally performed in cell supernatants by HPLC or a soluble luciferin-luciferase assay, and thus tends to be laborious and inaccurate. We have engineered a chimeric plasma membrane-targeted luciferase that allows in vivo real-time imaging of extracellular ATP. With this novel probe we have measured the ATP concentration within the tumour microenvironment of several experimentally-induced tumours. Our results show that ATP in the tumour interstitium is in the hundreds of micromolar range, while it is basically undetectable in healthy tissues. Here we show that a chimeric plasma membrane-targeted luciferase allows in vivo detection of high extracellular ATP concentrations at tumour sites. On the contrary, tumour-free tissues show undetectable extracellular ATP levels. Extracellular ATP may be crucial for the tumour not only as a stimulus for growth but also as a source of an immunosuppressive agent such as adenosine. Our approach offers a new tool for the investigation of the biochemical composition of the tumour milieu and for the development of novel therapies based on the modulation of extracellular purine-based signalling.

  9. Functional neuroanatomy in depressed patients with sexual dysfunction: blood oxygenation level dependent functional MR imaging

    International Nuclear Information System (INIS)

    Yang, Jong Chul

    2004-01-01

    To demonstrate the functional neuroanatomy associated with sexual arousal visually evoked in depressed males who have underlying sexual dysfunction using Blood Oxygenation Level Dependent-based fMRI. Ten healthy volunteers (age range 21-55: mean 32.5 years), and 10 depressed subjects (age range 23-51: mean 34.4 years, mean Beck Depression Inventory score of 39.6 ± 5.9, mean Hamilton Rating Scale Depression (HAMD)-17 score of 33.5 ± 6.0) with sexual arousal dysfunction viewed erotic and neutral video films during functional magnetic resonance imaging (fMRI) with a 1.5 T MR scanner (GE Signa Horizon). The fMRI data were obtained from 7 oblique planes using gradient-echo EPI (flip angle/TR/TE = 90°/6000 ms/50 ms). The visual stimulation paradigm began with 60 sec of black screen, 150 sec of neutral stimulation with a documentary video film, 30 sec of black screen, and 150 sec of sexual stimulation with an erotic video film, followed by 30 sec of black screen. The brain activation maps and their quantification were analyzed with the SPM99 program. There was a significant difference in brain activation between the two groups during visual sexual stimulation. In depressed subjects, the level of activation during the visually evoked sexual arousal was significantly less than that of healthy volunteers, especially in the cerebrocortical areas of the hypothalamus, thalamus, caudate nucleus, and inferior and superior temporal gyri. On the other hand, the cerebral activation patterns during the neutral condition in both groups showed no significant differences (ρ < 0.01). This study is the first demonstration of the functional neuroanatomy of the brain associated with sexual dysfunction in depressed patients using fMRI. In order to validate our physiological neuroscience results, further studies that would include patients with other disorders and sexual dysfunction, and depressed patients without sexual dysfunction and their treatment response, are needed.

  10. Functional neuroanatomy in depressed patients with sexual dysfunction: blood oxygenation level dependent functional MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jong Chul [Chonnam National Univ. Hospital, Kwangju (Korea, Republic of)

    2004-06-15

    To demonstrate the functional neuroanatomy associated with sexual arousal visually evoked in depressed males who have underlying sexual dysfunction using Blood Oxygenation Level Dependent-based fMRI. Ten healthy volunteers (age range 21-55: mean 32.5 years), and 10 depressed subjects (age range 23-51: mean 34.4 years, mean Beck Depression Inventory score of 39.6 ± 5.9, mean Hamilton Rating Scale Depression (HAMD)-17 score of 33.5 ± 6.0) with sexual arousal dysfunction viewed erotic and neutral video films during functional magnetic resonance imaging (fMRI) with 1.5 T MR scanner (GE Signa Horizon). The fMRI data were obtained from 7 oblique planes using gradient-echo EPI (flip angle/TR/TE=90°/6000 ms/50 ms). The visual stimulation paradigm began with 60 sec of black screen, 150 sec of neutral stimulation with a documentary video film, 30 sec of black screen, 150 sec of sexual stimulation with an erotic video film followed by 30 sec of black screen. The brain activation maps and their quantification were analyzed by SPM99 program. There was a significant difference of brain activation between two groups during visual sexual stimulation. In depressed subjects, the level of activation during the visually evoked sexual arousal was significantly less than that of healthy volunteers, especially in the cerebrocortical areas of the hypothalamus, thalamus, caudate nucleus, and inferior and superior temporal gyri. On the other hand, the cerebral activation patterns during the neutral condition in both groups showed no significant differences (ρ < 0.01). This study is the first demonstration of the functional neuroanatomy of the brain associated with sexual dysfunction in depressed patients using fMRI. In order to validate our physiological neuroscience results, further studies that would include patients with other disorders and sexual dysfunction, and depressed patients without sexual dysfunction and their treatment response are needed.

  11. Mutation detection for inventories of traffic signs from street-level panoramic images

    Science.gov (United States)

    Hazelhoff, Lykele; Creusen, Ivo; De With, Peter H. N.

    2014-03-01

    Road safety is positively influenced by both adequate placement and optimal visibility of traffic signs. As their visibility degrades over time due to e.g. aging, vandalism, accidents and vegetation coverage, up-to-date inventories of traffic signs are highly attractive for preserving a high level of road safety. These inventories are performed in a semi-automatic fashion from street-level panoramic images, exploiting object detection and classification techniques. Next to performing inventories from scratch, these systems are also exploited for the efficient retrieval of situation changes by comparing the outcome of the automated system to a baseline inventory (e.g. one performed in a previous year). This allows manual interaction to be limited to the changes found, while skipping all unchanged situations, resulting in a large efficiency gain. This work describes such a mutation detection approach, with special attention to re-identifying previously found signs. Preliminary results on a geographical area containing about 425 km of road show that 91.3% of the unchanged signs are re-identified, while the number of detected differences equals about 35% of the number of baseline signs. Of these differences, about 50% correspond to physically changed traffic signs; the remainder are false detections, misclassifications and missed signs. As a bonus, our approach directly yields the changed situations, which is beneficial for road sign maintenance.

  12. High-level core sample x-ray imaging at the Hanford Site

    International Nuclear Information System (INIS)

    Weber, J.R.; Keye, J.K.

    1995-01-01

    Waste tank sampling of radioactive high-level waste is required for continued operations, waste characterization, and site safety. The Hanford Site tank farms consist of 28 double-shell and 149 single-shell underground storage tanks. The single-shell tanks are out of service and no longer receive liquid waste. Core samples of salt cake and sludge waste are remotely obtained using truck-mounted core drill platforms. Samples are recovered from the tanks through a 2.25 inch (in.) drill pipe, in 26-in. steel tubes of 1.5 in. diameter. Drilling parameters vary with different waste types. Because sample recovery has been marginal and inadequate at times, a system was needed to provide drill truck operators with real-time feedback about the physical condition of the sample and the percent recovery, prior to making nuclear assay measurements and characterizations at the analytical laboratory. Westinghouse Hanford Company conducted proof-of-principle radiographic testing to verify the feasibility of a proposed imaging system.

  13. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni

    2015-08-28

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
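
    The setting above can be reproduced in a few lines: fit a polynomial to noisy evaluations of a target function at random points and measure the mean-square error with respect to the sampling measure. The sketch below is illustrative only (the target, noise level, degree and sample size are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and noisy pointwise evaluations at random points.
f = lambda x: np.exp(x)
m, deg = 200, 8                       # number of samples, polynomial degree
x = rng.uniform(-1.0, 1.0, m)         # uniform sampling measure on [-1, 1]
y = f(x) + 0.05 * rng.normal(size=m)  # bounded-variance noise

# Design matrix of Legendre polynomials evaluated at the sample points.
V = np.polynomial.legendre.legvander(x, deg)

# Discrete least-squares fit; stability requires m to grow faster than
# the dimension of the approximation space (a key theme of the paper).
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Mean-square error with respect to the sampling measure, estimated on
# an independent sample.
xt = rng.uniform(-1.0, 1.0, 10_000)
err = np.sqrt(np.mean((np.polynomial.legendre.legval(xt, coef) - f(xt)) ** 2))
print(f"RMS error of the discrete least-squares fit: {err:.2e}")
```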

  14. Subspace learning from image gradient orientations

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data is typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population.

  15. Multivariate statistical analysis for x-ray photoelectron spectroscopy spectral imaging: Effect of image acquisition time

    International Nuclear Information System (INIS)

    Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.

    2004-01-01

    The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal-to-noise ratio, improved image contrast and intensity uniformity, and improved spatial resolution, which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images.

  16. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    Science.gov (United States)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. A computer vision system based on these principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. This suggests an important paradigm shift in our understanding of the brain, from neural networks to 'cortical software'. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes the region itself; spatial logic and topology are naturally present in such structures. Mid-level vision processes such as clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena such as shape from shading and occlusion are results of this analysis. This approach offers an opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the 'what' and 'where' visual pathways. Such systems can open new horizons for the robotics and computer vision industries.

  17. Optimization of hybrid iterative reconstruction level and evaluation of image quality and radiation dose for pediatric cardiac computed tomography angiography

    International Nuclear Information System (INIS)

    Yang, Lin; Liang, Changhong; Zhuang, Jian; Huang, Meiping; Liu, Hui

    2017-01-01

    Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared the diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. The mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed a 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy. (orig.)

  18. Optimization of hybrid iterative reconstruction level and evaluation of image quality and radiation dose for pediatric cardiac computed tomography angiography

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Lin; Liang, Changhong [Southern Medical University, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Zhuang, Jian [Guangdong Academy of Medical Sciences, Dept. of Cardiac Surgery, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Huang, Meiping [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Catheterization Lab, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Liu, Hui [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China)

    2017-01-15

    Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared the diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. The mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed a 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy. (orig.)

  19. Low-level contrast statistics of natural images can modulate the frequency of event-related potentials (ERP) in humans

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    2016-12-01

    Humans are fast and accurate in categorizing complex natural images. It is, however, unclear which features of the visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERPs) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of the ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERP power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by the low-level contrast statistics of natural images, and highlight their potential role in scene perception.
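
    The "Weibull contrast statistics" of an image can be estimated by fitting a Weibull distribution to its local contrast values. A minimal sketch (a synthetic array stands in for a natural image, and gradient magnitude is used as the contrast measure; both choices are assumptions on our part):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Stand-in for a natural image; in practice this would be a grayscale
# photograph loaded from disk.
img = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)

# Local contrast as gradient magnitude.
gy, gx = np.gradient(img.astype(float))
contrast = np.hypot(gx, gy).ravel()
contrast = contrast[contrast > 0]

# Fit a two-parameter Weibull (location fixed at 0); the shape and scale
# parameters are the kind of low-level contrast statistics the abstract
# relates to ERP amplitude and theta-band power.
shape, loc, scale = weibull_min.fit(contrast, floc=0)
print(f"Weibull shape: {shape:.3f}, scale: {scale:.3f}")
```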

  20. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rybovic, Michala [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: mryb6983@mail.usyd.edu.au; Halkett, Georgia K. [Western Australia Centre for Cancer and Palliative Care, Curtin University of Technology, Health Research Campus, GPO Box U1987, Perth, WA 6845 (Australia)], E-mail: g.halkett@curtin.edu.au; Banati, Richard B. [Faculty of Health Sciences, Brain and Mind Research Institute - Ramaciotti Centre for Brain Imaging, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: r.banati@usyd.edu.au; Cox, Jennifer [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: jenny.cox@usyd.edu.au

    2008-11-15

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour.

  1. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    Science.gov (United States)

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  2. Continuous-variable protocol for oblivious transfer in the noisy-storage model

    DEFF Research Database (Denmark)

    Furrer, Fabian; Gehring, Tobias; Schaffner, Christian

    2018-01-01

    for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations which we derive for continuous-variable systems, that differ from the ones used in quantum key distribution. We experimentally demonstrate in a proof-of-principle experiment the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced...

  3. A Scent of Lemon—Seller Meets Buyer with a Noisy Quality Observation

    Directory of Open Access Journals (Sweden)

    Jörgen W. Weibull

    2011-03-01

    We consider a market for lemons in which the seller is a monopolistic price setter and the buyer receives a private noisy signal of the product’s quality. We model this as a game and analyze perfect Bayesian equilibrium prices, trading probabilities and gains of trade. In particular, we vary the buyer’s signal precision, from being completely uninformative, as in standard models of lemons markets, to being perfectly informative. We show that high quality units are sold with positive probability even in the limit of uninformative signals, and we identify some discontinuities in the equilibrium predictions at the boundaries of completely uninformative and completely informative signals, respectively.

  4. Improving the fidelity of teleportation through noisy channels using weak measurement

    Energy Technology Data Exchange (ETDEWEB)

    Pramanik, T., E-mail: tanu.pram99@bose.res.in; Majumdar, A.S., E-mail: archan@bose.res.in

    2013-12-13

    We employ the technique of weak measurement in order to enable preservation of teleportation fidelity for two-qubit noisy channels. We consider one or both qubits of a maximally entangled state to undergo amplitude damping, and show that the application of weak measurement and a subsequent reverse operation could lead to a fidelity greater than 2/3 for any value of the decoherence parameter. The success probability of the protocol decreases with the strength of weak measurement, and is lower when both the qubits are affected by decoherence. Finally, our protocol is shown to work for the Werner state too.

  5. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
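
    The synthesis side of such a model is easy to sketch: sum short-time sinusoids whose frequencies and phases are drawn at random, then overlap-add the windowed grains. The distribution parameters below are illustrative stand-ins for the three CNSS parameters, not the published ones:

```python
import numpy as np

rng = np.random.default_rng(2)
sr = 16_000                  # sample rate (Hz)
frame = 256                  # short-time frame length
hop = frame // 2
n_frames, n_sin = 200, 30    # illustrative sizes

out = np.zeros(hop * n_frames + frame)
t = np.arange(frame) / sr
win = np.hanning(frame)
for k in range(n_frames):
    # Frequencies drawn from a distribution that fixes the perceived
    # spectral density (a normal law around 2 kHz as a stand-in);
    # phases are uniform random variables.
    freqs = rng.normal(2000.0, 400.0, n_sin)
    phases = rng.uniform(0.0, 2 * np.pi, n_sin)
    grain = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
    out[k * hop : k * hop + frame] += win * grain / n_sin

# 'out' now holds a noise-like texture built purely from short-time
# sinusoids with random frequencies and phases, as in the CNSS model.
```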

  6. Experimental test of the strongly nonclassical character of a noisy squeezed single-photon state

    DEFF Research Database (Denmark)

    Jezek, M.; Tipsmark, A.; Dong, R.

    2012-01-01

    We experimentally verify the quantum non-Gaussian character of a conditionally generated noisy squeezed single-photon state with a positive Wigner function. Employing an optimized witness based on probabilities of squeezed vacuum and squeezed single-photon states, we prove that the state cannot be expressed as a mixture of Gaussian states. In our experiment, the non-Gaussian state is generated by conditional subtraction of a single photon from a squeezed vacuum state. The state is probed with a homodyne detector and the witness is determined by averaging a suitable pattern function over the measured...

  7. The MISO Wiretap Channel with Noisy Main Channel Estimation in the High Power Regime

    KAUST Repository

    Rezki, Zouheir

    2017-02-07

    We improve upon our previous upper bound on the secrecy capacity of the wiretap channel with multiple transmit antennas and single-antenna receivers, with noisy main channel state information (CSI) at the transmitter (CSI-T). Specifically, we show that if the main CSI error does not scale with the power budget at the transmitter P̅, then the secrecy capacity is bounded above essentially by log log P̅, yielding a secure degree of freedom (sdof) equal to zero. However, if the main CSI error scales as O(P̅^-β), for β ∈ [0,1], then the sdof is equal to β.
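
    In formula form, the statement can be read as follows (this rendering of the claim is ours, using the standard definition of secure degrees of freedom):

```latex
% sdof is the prelog of the secrecy capacity at high power:
\[
  \mathrm{sdof} \;=\; \lim_{\bar{P}\to\infty}\frac{C_s(\bar{P})}{\log \bar{P}} .
\]
% Non-vanishing CSI error: C_s grows at most like log log P, so sdof = 0.
\[
  C_s(\bar{P}) \;\lesssim\; \log\log\bar{P}
  \quad\Longrightarrow\quad \mathrm{sdof}=0 .
\]
% CSI error decaying polynomially with the power budget recovers sdof = beta:
\[
  \text{CSI error} \;=\; O\!\left(\bar{P}^{-\beta}\right),\;\; \beta\in[0,1]
  \quad\Longrightarrow\quad \mathrm{sdof}=\beta .
\]
```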

  8. Performance of unbalanced QPSK in the presence of noisy reference and crosstalk

    Science.gov (United States)

    Divsalar, D.; Yuen, J. H.

    1979-01-01

    The problem of transmitting two telemetry data streams having different rates and different powers using unbalanced quadriphase shift keying (UQPSK) signaling is considered. It is noted that the presence of a noisy carrier phase reference causes a degradation in detection performance in coherent communications systems and that imperfect carrier synchronization not only attenuates the main demodulated signal voltage in UQPSK but also produces interchannel interference (crosstalk) which degrades the performance still further. Exact analytical expressions for the symbol error probability of UQPSK in the presence of a noisy phase reference are derived.

  9. Security of modified Ping-Pong protocol in noisy and lossy channel

    OpenAIRE

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove ...

  10. The MISO Wiretap Channel with Noisy Main Channel Estimation in the High Power Regime

    KAUST Repository

    Rezki, Zouheir; Chaaban, Anas; Alomair, Basel; Alouini, Mohamed-Slim

    2017-01-01

    We improve upon our previous upper bound on the secrecy capacity of the wiretap channel with multiple transmit antennas and single-antenna receivers, with noisy main channel state information (CSI) at the transmitter (CSI-T). Specifically, we show that if the main CSI error does not scale with the power budget at the transmitter P̅, then the secrecy capacity is bounded above essentially by log log P̅, yielding a secure degree of freedom (sdof) equal to zero. However, if the main CSI error scales as O(P̅^-β), for β ∈ [0,1], then the sdof is equal to β.

  11. Numerical modeling of optical coherent transient processes with complex configurations-III: Noisy laser source

    International Nuclear Information System (INIS)

    Chang Tiejun; Tian Mingzhen

    2007-01-01

    A previously developed numerical model based on the Maxwell-Bloch equations was modified to simulate optical coherent transient and spectral hole burning processes with noisy laser sources. Random-walk phase noise was simulated using laser-phase sequences generated numerically according to the normal distribution of the phase shift. The noise model was tested by comparing the simulated spectral hole burning effect with the analytical solution. The noise effects on a few typical optical coherent transient processes were investigated using this numerical tool. Flicker and random-walk frequency noises were considered in the accumulation process.

  12. Gaussian Error Correction of Quantum States in a Correlated Noisy Channel

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Berni, Adriano; Madsen, Lars Skovgaard

    2013-01-01

    Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels exhibit pronounced non-Markovian behavior, the proposed protocol will have immediate implications in improving the performance of various quantum information protocols.

  13. Stochastic bounded consensus of second-order multi-agent systems in noisy environment

    International Nuclear Information System (INIS)

    Ren Hong-Wei; Deng Fei-Qi

    2017-01-01

    This paper investigates the stochastic bounded consensus of leader-following second-order multi-agent systems in a noisy environment. It is assumed that each agent received the information of its neighbors corrupted by noises and time delays. Based on the graph theory, stochastic tools, and the Lyapunov function method, we derive the sufficient conditions under which the systems would reach stochastic bounded consensus in mean square with the protocol we designed. Finally, a numerical simulation is illustrated to check the effectiveness of the proposed algorithms. (paper)

  14. An aperiodic phenomenon of the unscented Kalman filter in filtering noisy chaotic signals

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A non-periodic oscillatory behavior of the unscented Kalman filter (UKF), when used to filter noise-contaminated chaotic signals, is reported. We show both theoretically and experimentally that the gain of the UKF may neither converge nor diverge, but instead oscillates aperiodically. More precisely, when a nonlinear system is periodic, the Kalman gain and error covariance of the UKF converge to zero. However, when the system being considered is chaotic, the Kalman gain either converges to a fixed point with a magnitude larger than zero or oscillates aperiodically.
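
    The phenomenon is easy to probe numerically: run a scalar UKF on a noisy logistic-map signal and record the Kalman gain at each step. The sketch below is a minimal, self-contained UKF (all tuning constants are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
a, Q, R = 3.9, 1e-6, 0.01            # chaotic logistic map; noise levels assumed
f = lambda x: a * x * (1 - x)        # state transition

# Unscented weights for a scalar state (alpha = 1, beta = 0, kappa = 2).
n, lam = 1, 2.0                      # lam = alpha^2 * (n + kappa) - n
wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])

x_true, x_est, P, gains = 0.3, 0.5, 0.1, []
for _ in range(500):
    x_true = f(x_true)
    z = x_true + rng.normal(0.0, np.sqrt(R))
    # Predict: propagate sigma points through the map.
    s = np.sqrt((n + lam) * P)
    chi = f(np.array([x_est, x_est + s, x_est - s]))
    x_pred = wm @ chi
    pzz = wm @ (chi - x_pred) ** 2   # h(x) = x, so P_xy = P_zz here
    # Update with the noisy measurement.
    S = pzz + R
    K = pzz / S                      # Kalman gain
    x_est = x_pred + K * (z - x_pred)
    P = pzz + Q - K * S * K
    gains.append(K)

# For the chaotic map the gain typically keeps wandering instead of
# settling to zero, matching the aperiodic oscillation reported above.
print(f"gain over last 100 steps: min {min(gains[-100:]):.3f}, max {max(gains[-100:]):.3f}")
```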

  15. Can we use PCA to detect small signals in noisy data?

    Science.gov (United States)

    Spiegelberg, Jakob; Rusz, Ján

    2017-01-01

    Principal component analysis (PCA) is among the most commonly applied dimension reduction techniques suitable for denoising data. Focusing on its limitations in detecting low-variance signals in noisy data, we discuss how statistical and systematic errors occur in PCA-reconstructed data as a function of the size of the data set, which extends the work of Lichtert and Verbeeck (2013) [16]. Particular attention is directed towards the estimation of the bias introduced by PCA and its influence on experiment design. Aiming at the denoising of large matrices, nullspace based denoising (NBD) is introduced.
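
    The core difficulty can be reproduced with a toy experiment: PCA denoising is a truncated SVD, and a weak component whose variance sits inside the noise bulk cannot be recovered by any choice of truncation rank. All sizes and amplitudes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: noisy mixtures of two strong latent components plus
# one *weak* component that PCA may confuse with noise.
n_obs, n_var = 500, 100
strong = rng.normal(size=(2, n_var))
weak = 0.05 * rng.normal(size=(1, n_var))
scores = rng.normal(size=(n_obs, 3))
data = scores @ np.vstack([strong, weak]) + 0.5 * rng.normal(size=(n_obs, n_var))

# PCA via SVD of the mean-centred matrix; keep k leading components.
mu = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mu, full_matrices=False)
k = 2
denoised = mu + (U[:, :k] * s[:k]) @ Vt[:k]

# The explained-variance spectrum shows why the weak component is hard
# to separate: its singular value sits inside the noise bulk, so
# truncation either keeps noise or discards signal.
print(np.round(s[:6] ** 2 / (s ** 2).sum(), 4))
```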

  16. A quantification of the hazards of fitting sums of exponentials to noisy data

    International Nuclear Information System (INIS)

    Bromage, G.E.

    1983-06-01

    The ill-conditioned nature of sums-of-exponentials analyses is confirmed and quantified, using synthetic noisy data. In particular, the magnification of errors is plotted for various two-exponential models, to illustrate its dependence on the ratio of decay constants, and on the ratios of amplitudes of the contributing terms. On moving from two- to three-exponential models, the condition deteriorates badly. It is also shown that the use of 'direct' Prony-type analyses (rather than general iterative nonlinear optimisation) merely aggravates the condition. (author)
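
    A quick way to see the error magnification is to refit a two-exponential model over many noise realisations and compare the resulting parameter scatter with the 1% data noise that produced it. The model and the values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Two-exponential model; the decay-constant ratio k2/k1 controls how
# ill-conditioned the fit is.
model = lambda t, a1, k1, a2, k2: a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
true = (1.0, 1.0, 1.0, 2.0)          # k2/k1 = 2: already poorly separated
t = np.linspace(0, 5, 200)
clean = model(t, *true)

# Repeat the fit over many noise realisations.
params = []
for _ in range(200):
    y = clean + 0.01 * rng.normal(size=t.size)   # 1% additive noise
    p, _ = curve_fit(model, t, y, p0=(0.8, 0.8, 1.2, 2.5), maxfev=10_000)
    params.append(p)

# Relative scatter of the recovered parameters, typically far larger
# than the 1% noise level that caused it.
spread = np.std(params, axis=0) / np.abs(true)
print("relative parameter scatter (a1, k1, a2, k2):", np.round(spread, 3))
```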

  17. Victims’ language: (noisy) silences and (grave) parodies to talk (unknowingly) about individuals’ forced disappearance

    Directory of Open Access Journals (Sweden)

    Gabriel Gatti

    2011-07-01

    Based on the results of research carried out between 2005 and 2008 on the social universes constructed in Argentina and Uruguay around the figure of the disappeared detainee, this piece aims to systematize several answers to one of the more complex problems this figure of repression poses: the representation of the facts and of their consequences. The work focuses not on all possible answers, but on several of the most innovative and creative ones: those that bet on talking about the impossibility of talking (the noisy silences), and those that bet on forcing language to its limit (the grave parodies).

  18. Teleportation is necessary for faithful quantum state transfer through noisy channels of maximal rank

    International Nuclear Information System (INIS)

    Romano, Raffaele; Loock, Peter van

    2010-01-01

    Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the channel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.

  19. Bit-level quantum color image encryption scheme with quantum cross-exchange operation and hyper-chaotic system

    Science.gov (United States)

    Zhou, Nanrun; Chen, Weiwei; Yan, Xinyu; Wang, Yunqian

    2018-06-01

    In order to obtain higher encryption efficiency, a bit-level quantum color image encryption scheme by exploiting quantum cross-exchange operation and a 5D hyper-chaotic system is designed. Additionally, to enhance the scrambling effect, the quantum channel swapping operation is employed to swap the gray values of corresponding pixels. The proposed color image encryption algorithm has larger key space and higher security since the 5D hyper-chaotic system has more complex dynamic behavior, better randomness and unpredictability than those based on low-dimensional hyper-chaotic systems. Simulations and theoretical analyses demonstrate that the presented bit-level quantum color image encryption scheme outperforms its classical counterparts in efficiency and security.

  20. MR imaging of a malignant schwannoma and an osteoblastoma with fluid-fluid levels. Report of two new cases

    International Nuclear Information System (INIS)

    Vilanova, J.C.; Dolz, J.L.; Aldoma, J.; Capdevila, A.; Maestro de Leon, J.L.; Aparicio, A.

    1998-01-01

    One case of malignant schwannoma of the sacrum and another of occipital osteoblastoma were evaluated by MR imaging. Both tumors showed fluid-fluid levels with different signal intensities in the sequences performed. Pathologic examination revealed hemorrhagic fluid in both tumors. Malignant schwannoma and osteoblastoma should be included in the list of bone and soft-tissue tumors with fluid-fluid levels. Our data confirm the non-specificity of this finding, which only suggests the presence of previous intratumoral hemorrhage. (orig.)

  1. Post-Primary Students' Images of Mathematics: Findings from a Survey of Irish Ordinary Level Mathematics Students

    Science.gov (United States)

    Lane, Ciara; Stynes, Martin; O'Donoghue, John

    2016-01-01

    A questionnaire survey was carried out as part of a PhD research study to investigate the image of mathematics held by post-primary students in Ireland. The study focused on students in fifth year of post-primary education studying ordinary level mathematics for the Irish Leaving Certificate examination--the final examination for students in…

  2. Nutrients and toxin producing phytoplankton control algal blooms - a spatio-temporal study in a noisy environment.

    Science.gov (United States)

    Sarkar, Ram Rup; Malchow, Horst

    2005-12-01

    A phytoplankton-zooplankton prey-predator model has been investigated for temporal, spatial and spatio-temporal dissipative pattern formation in a deterministic and a noisy environment, respectively. The overall carrying capacity for the phytoplankton population depends on the nutrient level. The role of nutrient concentrations and of toxin producing phytoplankton in controlling algal blooms is discussed. The local analysis yields a number of stationary and/or oscillatory regimes and their combinations. Correspondingly interesting is the spatio-temporal behaviour, modelled by stochastic reaction-diffusion equations. The present study also reveals that the rate of toxin production by toxin producing phytoplankton (TPP) plays an important role in controlling oscillations in the plankton system. We also observe that different mortality functions of zooplankton due to TPP have a significant influence on controlling oscillations, coexistence, survival or extinction of the zooplankton population. External noise can enhance the survival and spread of zooplankton that would go extinct in the deterministic system due to a high rate of toxin production.

  3. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    Science.gov (United States)

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets relies either on pixel-wise processing of the full spectral information, on appropriate selection of individual bands, or on the calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, and a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in the classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples.
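
    The two-stage pipeline (LDA band reduction, then Random Forest classification) can be sketched with scikit-learn; the synthetic spectra, class structure and all sizes below are assumptions for illustration, and the paper's texture features from integral images are omitted:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Stand-in for pixel spectra: 3 classes (healthy / infected / severe),
# 200 spectral bands; real data would come from the hyperspectral cube.
X = rng.normal(size=(900, 200))
y = np.repeat([0, 1, 2], 300)
X += y[:, None] * np.linspace(0.0, 0.5, 200)  # class-dependent spectral tilt

# Step 1: LDA compresses the bands to a few descriptive images
# (at most n_classes - 1 components).
lda = LinearDiscriminantAnalysis(n_components=2)
X_low = lda.fit_transform(X, y)

# Step 2: a Random Forest classifies in the reduced space; in the paper,
# texture parameters extracted via integral images would be appended here.
Xtr, Xte, ytr, yte = train_test_split(X_low, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(Xtr, ytr)
print(f"held-out accuracy: {rf.score(Xte, yte):.2f}")
```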

  4. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  5. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.
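
    A rough idea of the method can be given in one dimension, following the published semi-classical reconstruction formula (4h times the sum of kappa_n psi_n^2 over the negative eigenvalues). Treating a full image requires the authors' 2-D formulation, so the sketch below, with an assumed h and a synthetic signal, is only a conceptual illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n, h = 256, 0.3                       # grid size and semi-classical parameter (assumed)
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
clean = 5 * np.exp(-((x - 0.5) / 0.1) ** 2)   # positive "image row"
noisy = clean + 0.2 * rng.normal(size=n)

# Discrete Schrodinger operator H = -h^2 d^2/dx^2 - diag(y): the noisy
# signal plays the role of the potential.
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
H = -h ** 2 * D2 - np.diag(noisy)
lam, psi = np.linalg.eigh(H)
psi = psi / np.sqrt(dx)               # L2-normalise eigenfunctions on the grid

# Reconstruct from the squared eigenfunctions of the discrete spectrum.
neg = lam < 0
kappa = np.sqrt(-lam[neg])
denoised = 4 * h * (kappa * psi[:, neg] ** 2).sum(axis=1)
print(f"RMSE noisy: {np.sqrt(np.mean((noisy - clean) ** 2)):.3f}, "
      f"reconstructed: {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
```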

  6. NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 2+ Cloud Top Pressure (CTP)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Cloud Top Pressure product contains an image with pixel values identifying the atmospheric pressure at the top of a cloud layer. The product is generated in...

  7. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
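
    A stripped-down version of wavelet-domain fusion clarifies where the two fusion rules act: one rule merges the low-frequency approximation, another merges the high-frequency details. Here we substitute simple averaging and a max-absolute rule for the paper's matrix-completion/RPCA and regional-variance machinery, and pywt's standard DWT stands in for the lifting wavelet transform:

```python
import numpy as np
import pywt

def fuse_max_abs(img_a: np.ndarray, img_b: np.ndarray, wavelet="db2", level=2):
    """Toy fusion of two registered images: average the low-frequency
    approximation, keep the larger-magnitude detail coefficient."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # low-pass rule
    for da, db in zip(ca[1:], cb[1:]):              # high-pass rule per subband
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

# Illustrative use, e.g. fusing a defocused and a low-light copy of a scene:
# out = fuse_max_abs(defocused, low_light)
```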

  8. NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 1b Radiances

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Advanced Baseline Imager (ABI) instrument samples the radiance of the Earth in sixteen spectral bands using several arrays of detectors in the instrument’s...

  9. Radiation levels and image quality in patients undergoing chest X-ray examinations

    International Nuclear Information System (INIS)

    Campos de Oliveira, Paulo Márcio; Carmo Santana, Priscila do; Sousa Lacerda, Marco Aurélio de; Silva, Teógenes Augusto da

    2017-01-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended for patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one using conventional film processing and two using digital computed radiography. The x-ray beam parameters were selected and doses (specifically the entrance surface and incident air kerma) were evaluated based on images approved under the European criteria for postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, covering 200 PA and 100 LAT incidences. Results showed that the dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributes to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality. - Highlights: • A methodology for dose evaluation was applied to three diagnostic services. • Doses to patients were evaluated for images judged adequate. • Data were collected from 200 patients. • Dose optimization is possible with digital systems without reducing image quality. • The best relation between dose and image quality was found with chemical film processing.

  10. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing the communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platform used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in a WVSN.
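
    The kind of comparison described can be prototyped with codecs from the Python standard library (used here as stand-ins, since the bi-level-specific schemes the paper compares, such as JBIG-class coders, are not in the stdlib; the synthetic image is also an assumption):

```python
import lzma
import zlib
import numpy as np

rng = np.random.default_rng(8)

# Synthetic bi-level image: mostly white with sparse black pixels,
# a crude stand-in for a scanned page or a segmented camera frame.
img = (rng.random((256, 256)) < 0.05).astype(np.uint8)
raw = np.packbits(img).tobytes()     # 1 bit per pixel, as a VSN would send

# Compare compressed sizes; in a WVSN the figure of merit would also
# include the radio energy per byte and the encoder's CPU energy on
# the node's platform.
candidates = {
    "raw bitmap": raw,
    "zlib (deflate)": zlib.compress(raw, 9),
    "lzma": lzma.compress(raw),
}
for name, blob in candidates.items():
    print(f"{name:>15}: {len(blob):6d} bytes")
```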

  11. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    Science.gov (United States)

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation of facial images has been well studied, research on face-based trait perception under pixelation is underdeveloped. Because the depiction formats used for hiding individual identity in visual media, and the evidential material recorded by surveillance cameras, often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discern facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that discriminability increases with a decrease in the coarseness of pixelation. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.
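
    Pixelation of the kind used in such stimuli is simply block averaging. A minimal sketch (the function name, block count and the synthetic stand-in image are ours):

```python
import numpy as np

def pixelate(img: np.ndarray, blocks_across: int) -> np.ndarray:
    """Pixelate a grayscale image by block averaging; blocks_across=11
    roughly matches the 10-12 pixels per face used in the study."""
    h, w = img.shape
    b = max(1, w // blocks_across)          # block size in pixels
    h2, w2 = h - h % b, w - w % b           # crop to a multiple of b
    small = img[:h2, :w2].reshape(h2 // b, b, w2 // b, b).mean(axis=(1, 3))
    return np.kron(small, np.ones((b, b)))  # blow back up to full size

# Illustrative use on a synthetic array standing in for a face photo.
face = np.random.default_rng(9).random((120, 96))
coarse = pixelate(face, 11)
```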

  12. Quantitative image analysis of intra-tumoral bFGF level as a molecular marker of paclitaxel resistance

    Directory of Open Access Journals (Sweden)

    Wientjes M Guillaume

    2008-01-01

    Full Text Available Abstract Background The role of basic fibroblast growth factor (bFGF) in chemoresistance is controversial; some studies showed a relationship between higher bFGF level and chemoresistance while other studies showed the opposite finding. The goal of the present study was to quantify bFGF levels in archived tumor tissues, and to determine their relationship with chemosensitivity. Methods We established an image analysis-based method to quantify and convert the immunostaining intensity of intra-tumor bFGF to concentrations; this was accomplished by generating standard curves using human xenograft tumors as the renewable tissue source for simultaneous image analysis and ELISA. The relationships between bFGF concentrations and the chemosensitivity of patient tumors (n = 87) to paclitaxel were evaluated using linear regression analysis. Results The image analysis results were compared to our previous results obtained using a conventional, semi-quantitative visual scoring method. While both analyses indicated an inverse relationship between bFGF level and tumor sensitivity to paclitaxel, the image analysis method, by providing bFGF levels in individual tumors and therefore more data points (87 numerical values as opposed to four groups of staining intensities), further enabled the quantitative analysis of the relationship in subgroups of tumors with different pathobiological properties. The results show a significant correlation between bFGF level and tumor sensitivity to the antiproliferation effect, but not the apoptotic effect, of paclitaxel. We further found stronger correlations between bFGF level and paclitaxel sensitivity in four tumor subgroups (high stage, positive p53 staining, negative aFGF staining, and higher-than-median bFGF level) compared to all other groups. These findings suggest that the relationship between intra-tumoral bFGF level and paclitaxel sensitivity is context-dependent, which may explain the previous contradictory findings.

  13. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex

    Directory of Open Access Journals (Sweden)

    Damien J. Mannion

    2015-06-01

    Full Text Available Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex is specialised for visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in the visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation); the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.

  14. The k-Language Classification, a Proposed New Theory for Image Classification and Clustering at Pixel Level

    Directory of Open Access Journals (Sweden)

    Alwi Aslan

    2014-03-01

    Full Text Available This theory explores the further use of regular languages in image analysis, starting from the use of strings to represent regions in an image. We do not attempt to propose yet another way of generating a string from a region, since many such encodings already exist; instead, we propose a way to generate a regular language, or a group of languages, that classifies the sets of strings generated by a number of image regions. We begin by proving that there is always a regular language that accepts the set of strings produced by an image, and then use that language to perform classification. The research is then extended to the pixel level, addressing the question of whether regular languages can be used for clustering the pixels of an image; we propose a systematic solution to this question. Deterministic finite automata are used as the tool for exploring regular languages. In the final part of the paper, before the conclusion, we add a revised version of the theory, presented from another point of view, to make the method more precise and more powerful than before.
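
    The core idea, an acceptor built from the strings a class's regions produce, can be sketched as follows. A prefix trie over complete strings serves as a (trivially regular) acceptor here, and the chain-code strings are hypothetical, so this is only a simplified illustration of the paper's construction, not the published algorithm.

```python
# Minimal sketch: per-class finite acceptors over region strings, with
# classification by acceptance. A prefix trie over complete strings is a
# trivially regular acceptor; the paper's construction is more general.
class TrieAcceptor:
    def __init__(self, strings):
        self.root = {}
        for s in strings:
            node = self.root
            for ch in s:
                node = node.setdefault(ch, {})
            node["<accept>"] = True  # mark final state

    def accepts(self, s: str) -> bool:
        node = self.root
        for ch in s:
            if ch not in node:
                return False
            node = node[ch]
        return "<accept>" in node

# Hypothetical chain-code strings produced by regions of two classes.
acceptors = {
    "square": TrieAcceptor(["RRDDLLUU", "RDLU"]),
    "line":   TrieAcceptor(["RRRR", "DDDD"]),
}

def classify(region_string: str) -> str:
    for label, acc in acceptors.items():
        if acc.accepts(region_string):
            return label
    return "unknown"

print(classify("RDLU"))   # square
print(classify("RRRR"))   # line
```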

  15. Breast ultrasound image segmentation: an optimization approach based on super-pixels and high-level descriptors

    Science.gov (United States)

    Massich, Joan; Lemaître, Guillaume; Martí, Joan; Mériaudeau, Fabrice

    2015-04-01

    Breast cancer is the second most common cancer and the leading cause of cancer death among women. Medical imaging has become an indispensable tool for its diagnosis and follow-up. During the last decade, the medical community has promoted the incorporation of Ultra-Sound (US) screening as part of the standard routine. The main reason for using US imaging is its capability to differentiate benign from malignant masses, when compared to other imaging techniques. The increasing use of US imaging encourages the development of Computer Aided Diagnosis (CAD) systems applied to Breast Ultra-Sound (BUS) images. However, accurate delineation of the lesions and structures of the breast is essential for CAD systems in order to extract the information needed to perform diagnosis. This article proposes a highly modular and flexible framework for segmenting lesions and tissues present in BUS images. The proposal takes advantage of optimization strategies using super-pixels and high-level descriptors, which are analogous to the visual cues used by radiologists. Qualitative and quantitative results are provided, demonstrating performance within the range of the state-of-the-art.

  16. Technical Note: Correcting for signal attenuation from noisy proxy data in climate reconstructions

    KAUST Repository

    Ammann, C. M.

    2010-04-20

    Regression-based climate reconstructions scale one or more noisy proxy records against a (generally) short instrumental data series. Based on that relationship, the indirect information is then used to estimate that particular measure of climate back in time. A well-calibrated proxy record(s), if stationary in its relationship to the target, should faithfully preserve the mean amplitude of the climatic variable. However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise. This issue is known as "Measurement Error" (Fuller, 1987; Carroll et al., 2006). Climate proxies derived from tree-rings, ice cores, lake sediments, etc., are inherently noisy and thus all regression-based reconstructions could suffer from this problem. Some recent applications attempt to ward off amplitude attenuation, but implementations are often complex (Lee et al., 2008) or require additional information, e.g. from climate models (Hegerl et al., 2006, 2007). Here we explain the cause of the problem and propose an easy, generally applicable, data-driven strategy to effectively correct for attenuation (Fuller, 1987; Carroll et al., 2006), even at annual resolution. The impact is illustrated in the context of a Northern Hemisphere mean temperature reconstruction. An inescapable trade-off for achieving an unbiased reconstruction is an increase in variance, but for many climate applications the change in mean is a core interest.
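
    The deattenuation itself can be illustrated with the classical reliability-ratio correction from the measurement-error literature (Fuller, 1987). The sketch below assumes the proxy noise variance is known; the paper's contribution is a data-driven way to estimate it, which is not reproduced here.

```python
# Minimal sketch of regression deattenuation under classical measurement
# error, in the spirit of Fuller (1987). Assumes the proxy noise variance
# is known or estimated separately.
import numpy as np

rng = np.random.default_rng(0)
n = 500
climate = rng.normal(0.0, 1.0, n)            # true climate signal
proxy = climate + rng.normal(0.0, 0.8, n)    # noisy proxy record
noise_var = 0.8 ** 2                         # assumed known noise variance

# Naive OLS slope of climate on proxy is attenuated toward zero.
beta_ols = np.cov(proxy, climate)[0, 1] / np.var(proxy, ddof=1)

# Reliability ratio: share of proxy variance that is true signal.
reliability = (np.var(proxy, ddof=1) - noise_var) / np.var(proxy, ddof=1)
beta_corrected = beta_ols / reliability

print(f"attenuated slope: {beta_ols:.3f}, corrected: {beta_corrected:.3f}")
# The corrected slope is near 1, at the cost of extra variance in the
# reconstruction -- the trade-off noted in the abstract.
```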

  17. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments

    Science.gov (United States)

    Bass, Ellen J.; Baumgart, Leigh A.; Shepley, Kathryn Klein

    2014-01-01

    Displaying both the strategy that information analysis automation employs to make its judgments and the variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing information analysis automation strategy information, task environment information, or both to human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance. PMID:24847184

  18. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments.

    Science.gov (United States)

    Bass, Ellen J; Baumgart, Leigh A; Shepley, Kathryn Klein

    2013-03-01

    Displaying both the strategy that information analysis automation employs to make its judgments and the variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing information analysis automation strategy information, task environment information, or both to human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance.

  19. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    Science.gov (United States)

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subjected to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deals with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines, in handling multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.
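
    As a point of reference for how Gaussian objective noise behaves, the sketch below shows plain resampling (averaging repeated evaluations), a common baseline for noisy optimization. It is not the paper's likelihood-correction feature, and the bi-objective test function is an assumption.

```python
# Minimal sketch of the noise model used in the experiments: Gaussian
# noise added to each objective, mitigated here by simple resampling.
import numpy as np

rng = np.random.default_rng(10)

def noisy_objectives(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Bi-objective toy problem with additive Gaussian noise."""
    f1 = float(np.sum(x ** 2))
    f2 = float(np.sum((x - 1.0) ** 2))
    return np.array([f1, f2]) + rng.normal(0.0, sigma, 2)

def averaged_objectives(x: np.ndarray, n_samples: int = 20) -> np.ndarray:
    """Resampling shrinks the noise standard deviation by sqrt(n)."""
    return np.mean([noisy_objectives(x) for _ in range(n_samples)], axis=0)

x = np.array([0.5, 0.5])
print(noisy_objectives(x), averaged_objectives(x))  # second is far less noisy
```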

  20. Technical Note: Correcting for signal attenuation from noisy proxy data in climate reconstructions

    Directory of Open Access Journals (Sweden)

    C. M. Ammann

    2010-04-01

    Full Text Available Regression-based climate reconstructions scale one or more noisy proxy records against a (generally) short instrumental data series. Based on that relationship, the indirect information is then used to estimate that particular measure of climate back in time. A well-calibrated proxy record(s), if stationary in its relationship to the target, should faithfully preserve the mean amplitude of the climatic variable. However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise. This issue is known as "Measurement Error" (Fuller, 1987; Carroll et al., 2006). Climate proxies derived from tree-rings, ice cores, lake sediments, etc., are inherently noisy and thus all regression-based reconstructions could suffer from this problem. Some recent applications attempt to ward off amplitude attenuation, but implementations are often complex (Lee et al., 2008) or require additional information, e.g. from climate models (Hegerl et al., 2006, 2007). Here we explain the cause of the problem and propose an easy, generally applicable, data-driven strategy to effectively correct for attenuation (Fuller, 1987; Carroll et al., 2006), even at annual resolution. The impact is illustrated in the context of a Northern Hemisphere mean temperature reconstruction. An inescapable trade-off for achieving an unbiased reconstruction is an increase in variance, but for many climate applications the change in mean is a core interest.

  1. Utilizing functional near-infrared spectroscopy for prediction of cognitive workload in noisy work environments.

    Science.gov (United States)

    Gabbard, Ryan; Fendley, Mary; Dar, Irfaan A; Warren, Rik; Kashou, Nasser H

    2017-10-01

    Occupational noise frequently occurs in the work environment in military intelligence, surveillance, and reconnaissance operations. It impacts cognitive performance by acting as a stressor, potentially interfering with the analysts' decision-making process. We investigated the effects of different noise stimuli on analysts' performance and workload in anomaly detection by simulating a noisy work environment. We utilized functional near-infrared spectroscopy (fNIRS) to quantify oxy-hemoglobin (HbO) and deoxy-hemoglobin concentration changes in the prefrontal cortex (PFC), as well as behavioral measures, which included eye tracking, reaction time, and accuracy rate. We hypothesized that noisy environments would have a negative effect on the participants' anomaly detection performance due to the increase in workload, which would be reflected by an increase in PFC activity. We found that HbO for some of the channels analyzed was significantly different across noise types ([Formula: see text]). Our results also indicated that HbO activation in the PFC was greater for short-intermittent noise stimuli than for long-intermittent noise. Using fNIRS in this way, together with an understanding of the impact of noise on human analysts performing anomaly detection, could potentially lead to better performance by optimizing work environments.

  2. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals

    Directory of Open Access Journals (Sweden)

    Nathan Gold

    2018-01-01

    Full Text Available Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  3. Processing of noisy magnetotelluric time series from Koyna-Warna seismic region, India: a systematic approach

    Directory of Open Access Journals (Sweden)

    Ujjal K. Borah

    2015-06-01

    Full Text Available Rolling array pattern broadband magnetotelluric (MT) data were acquired in the Koyna-Warna (Maharashtra, India) seismic zone during the 2012-14 field campaigns. The main objective of this study is to identify the thickness of the Deccan trap in and around the Koyna-Warna seismic zone and to delineate the electrical nature of the sub-basalt. At many sites the MT data were contaminated with high-tension power line noise from the Koyna hydroelectric power project. In the present study an attempt has therefore been made to tackle this 50 Hz noise, its harmonics, and other cultural noise using the commercially available processing software MAPROS. A remote site was run during the entire field period to mitigate the cultural noise problem. This study is based on the Fast Fourier Transform (FFT) and mainly focuses on the behaviour of different processing parameters, their interrelations, and the influence of different processing methods on improving the S/N ratio of noisy data. Our study suggests that no single processing approach can give desirable transfer functions; however, a combination of different processing approaches may be adopted when processing culturally affected noisy data.

  4. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds make it possible to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case; it shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises whose mean value and variance functions are polynomial time-varying. Thanks to this analysis of the design parameters, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system, and numerical simulations illustrate its accuracy and robustness with respect to corrupting noises.

  5. Water level response measurement in a steel cylindrical liquid storage tank using image filter processing under seismic excitation

    Science.gov (United States)

    Kim, Sung-Wan; Choi, Hyoung-Suk; Park, Dong-Uk; Baek, Eun-Rim; Kim, Jae-Min

    2018-02-01

    Sloshing refers to the movement of fluid that occurs when kinetic energy (e.g., from excitation and vibration) is continuously applied to the fluid inside a storage tank. As the frequency of the externally induced movement approaches the resonance frequency of the fluid, the effect of sloshing increases, and this can lead to a serious problem for the structural stability of the system. Thus, it is important to accurately understand the physics of sloshing, and to effectively suppress and reduce it. An economical method for measuring the water level response of a liquid storage tank is also needed for the exact analysis of sloshing. In this study, an image-based method was employed to measure the water level response of a liquid storage tank: the water level response was measured using an image filter processing algorithm that reduces the noise in the fluid induced by lighting and sharpens the structure installed in the liquid storage tank. A shaking table test was performed to verify the validity of this image-based measurement method, and the result was analyzed and compared with the response measured using a water level gauge.
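
    A minimal sketch of the image-based idea, under the assumption of a fixed camera viewing the tank wall: denoise each frame, then take the image row with the strongest horizontal edge response as the waterline. The filter choices here are illustrative, not the study's actual algorithm.

```python
# Minimal sketch: estimate the waterline row in a grayscale frame by
# smoothing (to suppress light-induced noise) and locating the row with
# the strongest vertical intensity gradient.
import numpy as np
import cv2

def water_level_row(frame_gray: np.ndarray) -> int:
    denoised = cv2.GaussianBlur(frame_gray, (5, 5), 0)       # suppress glints
    grad_y = cv2.Sobel(denoised, cv2.CV_64F, 0, 1, ksize=5)  # vertical gradient
    row_strength = np.abs(grad_y).sum(axis=1)                # edge energy per row
    return int(np.argmax(row_strength))                      # waterline row index

# Synthetic test frame: bright "air" above row 80, dark "water" below.
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[80:, :] = 60
print(water_level_row(frame))  # approximately 80
```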

  6. PIRPLE: a penalized-likelihood framework for incorporation of prior images in CT reconstruction

    International Nuclear Information System (INIS)

    Stayman, J Webster; Dang, Hao; Ding, Yifu; Siewerdsen, Jeffrey H

    2013-01-01

    Over the course of diagnosis and treatment, it is common for a number of imaging studies to be acquired. Such imaging sequences can provide substantial patient-specific prior knowledge about the anatomy that can be incorporated into a prior-image-based tomographic reconstruction for improved image quality and better dose utilization. We present a general methodology using a model-based reconstruction approach including formulations of the measurement noise that also integrates prior images. This penalized-likelihood technique adopts a sparsity enforcing penalty that incorporates prior information yet allows for change between the current reconstruction and the prior image. Moreover, since prior images are generally not registered with the current image volume, we present a modified model-based approach that seeks a joint registration of the prior image in addition to the reconstruction of projection data. We demonstrate that the combined prior-image- and model-based technique outperforms methods that ignore the prior data or lack a noise model. Moreover, we demonstrate the importance of registration for prior-image-based reconstruction methods and show that the prior-image-registered penalized-likelihood estimation (PIRPLE) approach can maintain a high level of image quality in the presence of noisy and undersampled projection data. (paper)
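
    A stripped-down sketch of the prior-image penalty follows, with two simplifications: a Gaussian least-squares data term stands in for the full likelihood, and the prior image is assumed already registered, so the joint-registration step of PIRPLE is omitted.

```python
# Minimal sketch of a prior-image penalized objective in the spirit of
# PIRPLE: data fit plus a sparsity (L1) penalty on the difference between
# the reconstruction and the prior image, minimized by (sub)gradient descent.
import numpy as np

rng = np.random.default_rng(1)
n = 64
A = rng.normal(size=(40, n)) / np.sqrt(n)    # undersampled system matrix
x_true = np.zeros(n); x_true[20:30] = 1.0
x_prior = np.zeros(n); x_prior[20:30] = 0.9  # prior image, slightly off
y = A @ x_true + rng.normal(0.0, 0.01, 40)   # noisy projections

beta, step = 0.05, 0.1
x = x_prior.copy()
for _ in range(500):
    grad_data = A.T @ (A @ x - y)             # data-fit gradient
    grad_prior = beta * np.sign(x - x_prior)  # subgradient of L1 penalty
    x -= step * (grad_data + grad_prior)

print(f"rmse vs truth: {np.sqrt(np.mean((x - x_true) ** 2)):.4f}")
```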

  7. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    Science.gov (United States)

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  8. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.
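
    The classification stage can be sketched as follows: boundary-morphology features feed an SVM, as described in the abstract. The features here are computed on synthetic contours, and the stochastic level-set segmentation itself is not reproduced.

```python
# Minimal sketch: classify cells as normal vs. apoptotic from boundary
# morphology (length, mean turning angle) with an SVM on synthetic contours.
import numpy as np
from sklearn.svm import SVC

def contour_features(points: np.ndarray) -> list:
    """Boundary length and mean absolute turning angle of a closed contour."""
    d = np.diff(points, axis=0, append=points[:1])
    length = np.linalg.norm(d, axis=1).sum()
    angles = np.arctan2(d[:, 1], d[:, 0])
    turning = np.abs(np.diff(angles, append=angles[:1]))
    return [length, turning.mean()]

def synthetic_cell(wobble: float, rng) -> np.ndarray:
    """Closed contour: smooth for 'normal', irregular for 'apoptotic'."""
    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    r = 1.0 + wobble * rng.standard_normal(t.size)
    return np.c_[r * np.cos(t), r * np.sin(t)]

rng = np.random.default_rng(2)
X = [contour_features(synthetic_cell(w, rng))
     for w in [0.01] * 30 + [0.15] * 30]
y = [0] * 30 + [1] * 30                      # 0 = normal, 1 = apoptotic

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([contour_features(synthetic_cell(0.12, rng))]))  # expect [1]
```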

  9. Statistical and heuristic image noise extraction (SHINE): a new method for processing Poisson noise in scintigraphic images

    International Nuclear Information System (INIS)

    Hannequin, Pascal; Mas, Jacky

    2002-01-01

    Poisson noise is one of the factors degrading scintigraphic images, especially at low count levels, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in scintigraphic images while preserving the resolution, the contrast and the texture. The SHINE procedure consists of dividing the image into 4 x 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors, which are selected using an original statistical variance test. The SHINE procedure has been validated using a line numerical phantom and a hot spots and cold spots real phantom. The reference images are the noise-free simulated images for the numerical phantom and an extremely high-count image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and clinical data including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE-processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT have shown that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images.
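
    A rough sketch of the block-factorization idea follows. A fixed-rank truncated SVD stands in for SHINE's correspondence analysis and statistical factor-selection test, so this is an analogy to the published procedure, not the procedure itself.

```python
# Minimal sketch of SHINE-like denoising: gather 4x4 blocks, factor the
# block matrix, and rebuild each block from a few dominant factors.
import numpy as np

def shine_like_denoise(img: np.ndarray, rank: int = 3) -> np.ndarray:
    h, w = (s - s % 4 for s in img.shape)
    blocks = (img[:h, :w]
              .reshape(h // 4, 4, w // 4, 4)
              .swapaxes(1, 2)
              .reshape(-1, 16))                 # one row per 4x4 block
    U, s, Vt = np.linalg.svd(blocks, full_matrices=False)
    s[rank:] = 0.0                              # keep dominant factors only
    rebuilt = (U * s) @ Vt
    return (rebuilt.reshape(h // 4, w // 4, 4, 4)
                   .swapaxes(1, 2)
                   .reshape(h, w))

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(0, 50, 64), np.linspace(0, 1, 64))
noisy = rng.poisson(clean).astype(float)        # Poisson counting noise
print(np.abs(shine_like_denoise(noisy) - clean).mean()
      < np.abs(noisy - clean).mean())           # expect True: noise reduced
```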

  10. On-Line Temperature Estimation for Noisy Thermal Sensors Using a Smoothing Filter-Based Kalman Predictor

    Directory of Open Access Journals (Sweden)

    Xin Li

    2018-02-01

    Full Text Available Dynamic thermal management (DTM) mechanisms utilize embedded thermal sensors to collect fine-grained temperature information for monitoring the real-time thermal behavior of multi-core processors. However, embedded thermal sensors are very susceptible to a variety of sources of noise, including environmental uncertainty and process variation. This causes discrepancies between actual temperatures and those observed by on-chip thermal sensors, which seriously affect the efficiency of DTM. In this paper, a smoothing filter-based Kalman prediction technique is proposed to accurately estimate the temperatures from noisy sensor readings. For the multi-sensor estimation scenario, the spatial correlations among different sensor locations are exploited. On this basis, a multi-sensor synergistic calibration algorithm (known as MSSCA) is proposed to improve the simultaneous prediction accuracy of multiple sensors. Moreover, an infrared imaging-based temperature measurement technique is also proposed to capture the thermal traces of an advanced micro devices (AMD) quad-core processor in real time. The acquired real temperature data are used to evaluate our prediction performance. Simulation shows that the proposed synergistic calibration scheme can reduce the root-mean-square error (RMSE) by 1.2 °C and increase the signal-to-noise ratio (SNR) by 15.8 dB (with a very small average runtime overhead) compared with assuming the thermal sensor readings to be ideal. Additionally, the average false alarm rate (FAR) of the corrected sensor temperature readings can be reduced by 28.6%. These results clearly demonstrate that if our approach is used to perform temperature estimation, the response mechanisms of DTM can be triggered to adjust the voltages, frequencies, and cooling fan speeds at more appropriate times.

  11. On-Line Temperature Estimation for Noisy Thermal Sensors Using a Smoothing Filter-Based Kalman Predictor.

    Science.gov (United States)

    Li, Xin; Ou, Xingtao; Li, Zhi; Wei, Henglu; Zhou, Wei; Duan, Zhemin

    2018-02-02

    Dynamic thermal management (DTM) mechanisms utilize embedded thermal sensors to collect fine-grained temperature information for monitoring the real-time thermal behavior of multi-core processors. However, embedded thermal sensors are very susceptible to a variety of sources of noise, including environmental uncertainty and process variation. This causes discrepancies between actual temperatures and those observed by on-chip thermal sensors, which seriously affect the efficiency of DTM. In this paper, a smoothing filter-based Kalman prediction technique is proposed to accurately estimate the temperatures from noisy sensor readings. For the multi-sensor estimation scenario, the spatial correlations among different sensor locations are exploited. On this basis, a multi-sensor synergistic calibration algorithm (known as MSSCA) is proposed to improve the simultaneous prediction accuracy of multiple sensors. Moreover, an infrared imaging-based temperature measurement technique is also proposed to capture the thermal traces of an advanced micro devices (AMD) quad-core processor in real time. The acquired real temperature data are used to evaluate our prediction performance. Simulation shows that the proposed synergistic calibration scheme can reduce the root-mean-square error (RMSE) by 1.2 °C and increase the signal-to-noise ratio (SNR) by 15.8 dB (with a very small average runtime overhead) compared with assuming the thermal sensor readings to be ideal. Additionally, the average false alarm rate (FAR) of the corrected sensor temperature readings can be reduced by 28.6%. These results clearly demonstrate that if our approach is used to perform temperature estimation, the response mechanisms of DTM can be triggered to adjust the voltages, frequencies, and cooling fan speeds at more appropriate times.
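
    For intuition, a single-sensor sketch of Kalman-based temperature estimation with a random-walk state model is given below. The smoothing-filter front end and the multi-sensor MSSCA calibration from the paper are not reproduced, and the noise variances are assumed values.

```python
# Minimal sketch: 1D Kalman filtering of a noisy on-chip thermal sensor.
import numpy as np

def kalman_1d(readings, q=0.01, r=4.0):
    """q: process noise variance, r: sensor noise variance (assumed)."""
    x, p = readings[0], 1.0            # initial state estimate and variance
    estimates = []
    for z in readings:
        p = p + q                      # predict: temperature drifts slowly
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the noisy reading
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(4)
true_temp = 50 + np.cumsum(rng.normal(0, 0.1, 200))   # slowly drifting die temp
readings = true_temp + rng.normal(0, 2.0, 200)        # noisy sensor output
est = kalman_1d(readings)
print(np.sqrt(np.mean((est - true_temp) ** 2)) <
      np.sqrt(np.mean((readings - true_temp) ** 2)))  # expect True: RMSE reduced
```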

  12. Enough positive rate of paraspinal mapping and diffusion tensor imaging with levels which should be decompressed in lumbar spinal stenosis.

    Science.gov (United States)

    Chen, Hua-Biao; Zhong, Zhi-Wei; Li, Chun-Sheng; Bai, Bo

    2016-07-01

    In lumbar spinal stenosis, correlating symptoms and physical examination findings with decompression levels based on common imaging is not reliable. Paraspinal mapping (PM) and diffusion tensor imaging (DTI) may prevent the false positives that occur with MRI and show clear benefits in reducing the number of decompressed levels compared with conventional magnetic resonance imaging (MRI) + neurogenic examination (NE). For this, however, they must first show a sufficient positive rate at the levels that should be decompressed. The study aimed to confirm that the positive rate of DTI and PM is sufficient at levels which should be decompressed in lumbar spinal stenosis. The study analyzed the positive rates of DTI and PM and compared preoperative scores with postoperative scores, assessed preoperatively and at 2 weeks, 3 months, 6 months, and 12 months postoperatively. 96 patients underwent single-level decompression surgery. The positive rates of PM, DTI, and (PM or DTI) were 76%, 98%, and 100%, respectively. All postoperative Oswestry Disability Index (ODI), visual analog scale for back pain (VAS-BP) and visual analog scale for leg pain (VAS-LP) scores at 2 weeks postoperatively showed statistically significant improvement over the preoperative ODI, VAS-BP and VAS-LP scores (p-value = 0.000, p-value = 0.000, p-value = 0.000, respectively). In degenerative lumbar spinal stenosis, the positive rate of (DTI or PM) is sufficient at levels which should be decompressed; therefore, using PM and DTI to determine decompression levels will not miss a level that should be operated on. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  13. Evaluation of image quality and radiation dose by adaptive statistical iterative reconstruction technique level for chest CT examination.

    Science.gov (United States)

    Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol

    2013-12-01

    The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in the chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), the signal and noise, and the degree of distortion expressed by the peak signal-to-noise ratio (PSNR) and the root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was verified using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions in the chest phantom were evaluated and their statistics compiled. The NPS value decreased as the frequency increased. The lowest noise and deviation occurred at the 20 % ASIR level (mean 126.15 ± 22.21). For the degree of distortion, the signal-to-noise ratio and PSNR at the 20 % ASIR level reached their highest values, 31.0 and 41.52, while the maximum absolute error and RMSE showed their lowest values, 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptable allowance of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level proved to be the best in all experiments: noise, distortion evaluation using ImageJ, and the qualitative evaluation of five lesions of a chest phantom. Therefore, optimal images as well as a reduced radiation dose would be acquired when a 20 % ASIR level is applied in thoracic CT.
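
    The distortion figures used in the study, RMSE and PSNR, can be computed as follows for 8-bit images; the test data here are synthetic.

```python
# Minimal sketch: RMSE and PSNR between a reference image and a
# reconstruction, for 8-bit grayscale images.
import numpy as np

def rmse(ref: np.ndarray, img: np.ndarray) -> float:
    return float(np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2)))

def psnr(ref: np.ndarray, img: np.ndarray, peak: float = 255.0) -> float:
    return float(20 * np.log10(peak / rmse(ref, img)))

rng = np.random.default_rng(5)
reference = rng.integers(0, 256, (128, 128), dtype=np.uint8)
noisy = np.clip(reference + rng.normal(0, 5, (128, 128)), 0, 255).astype(np.uint8)
print(f"RMSE = {rmse(reference, noisy):.2f}, PSNR = {psnr(reference, noisy):.2f} dB")
```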

  14. Imaging atomic-level random walk of a point defect in graphene

    Science.gov (United States)

    Kotakoski, Jani; Mangler, Clemens; Meyer, Jannik C.

    2014-05-01

    Deviations from the perfect atomic arrangements in crystals play an important role in affecting their properties. Similarly, diffusion of such deviations is behind many microstructural changes in solids. However, observation of point defect diffusion is hindered both by the difficulties related to direct imaging of non-periodic structures and by the timescales involved in the diffusion process. Here, instead of imaging thermal diffusion, we stimulate and follow the migration of a divacancy through graphene lattice using a scanning transmission electron microscope operated at 60 kV. The beam-activated process happens on a timescale that allows us to capture a significant part of the structural transformations and trajectory of the defect. The low voltage combined with ultra-high vacuum conditions ensure that the defect remains stable over long image sequences, which allows us for the first time to directly follow the diffusion of a point defect in a crystalline material.

  15. Robustness of Input features from Noisy Silhouettes in Human Pose Estimation

    DEFF Research Database (Denmark)

    Gong, Wenjuan; Fihl, Preben; Gonzàlez, Jordi

    2014-01-01

    ... In this paper, we explore this problem. First, we compare the performance of several image features widely used for human pose estimation against each other and select the one with the best performance. Second, the iterative closest point algorithm is introduced for a new quantitative ... of silhouette samples of different noise levels, which we compare with the selected feature on a public dataset: the Human Eva dataset.

  16. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson

    2015-01-01

    Full Text Available Segment-level image fusion involves segmenting a higher spatial resolution (HSR) image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons) from a lower spatial resolution (LSR) image. In past research, an unweighted segment-level fusion (USF) approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types). To mitigate this, a spatially-weighted segment-level fusion (SWSF) method was proposed for extracting descriptors (mean spectral values) of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates).
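
    The spatially-weighted extraction can be sketched as below: pixels near a segment's boundary are down-weighted when computing its mean spectral value. The distance-based weighting used here is an illustrative choice, not necessarily the paper's weighting function.

```python
# Minimal sketch: boundary-aware weighted mean of a segment's pixels,
# using the Euclidean distance to the segment boundary as the weight.
import numpy as np
from scipy import ndimage

def weighted_segment_mean(band: np.ndarray, segment_mask: np.ndarray) -> float:
    # Distance of each in-segment pixel to the segment boundary.
    dist = ndimage.distance_transform_edt(segment_mask)
    weights = dist * segment_mask               # boundary pixels get weight ~0
    return float((band * weights).sum() / weights.sum())

# Toy example: a 10x10 segment whose edge pixels are mixed with a
# neighbouring cover type (value 50) while its interior is pure (value 100).
band = np.full((12, 12), 100.0)
mask = np.zeros((12, 12), dtype=bool); mask[1:11, 1:11] = True
band[1, :] = band[10, :] = band[:, 1] = band[:, 10] = 50.0   # mixed edges

unweighted = band[mask].mean()
print(f"unweighted: {unweighted:.1f}, "
      f"weighted: {weighted_segment_mean(band, mask):.1f}")
# The weighted mean is closer to the pure interior value of 100.
```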

  17. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    Science.gov (United States)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by super-bialkali photo-multiplier tubes (PMTs). Our camera enables the visualization of the positions of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was achieved using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can easily be used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.

  18. Bayesian image restoration, using configurations

    OpenAIRE

    Thorarinsdottir, Thordis

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...

  19. Changes in BOLD and ADC weighted imaging in acute hypoxia during sea-level and altitude adapted states

    DEFF Research Database (Denmark)

    Rostrup, Egill; Larsson, Henrik B.W.; Born, Alfred P.

    2005-01-01

    possible structural changes as measured by diffusion weighted imaging. Eleven healthy sea-level residents were studied after 5 weeks of adaptation to high altitude conditions at Chacaltaya, Bolivia (5260 m). The subjects were studied immediately after return to sea-level in hypoxic and normoxic conditions...... was slightly elevated in high altitude as compared to sea-level adaptation. It is concluded that hypoxia significantly diminishes the BOLD response, and the mechanisms underlying this finding are discussed. Furthermore, altitude adaptation may influence both the magnitude of the activation-related response......, and the examinations repeated 6 months later after re-adaptation to sea-level conditions. The BOLD response, measured at 1.5 T, was severely reduced during acute hypoxia both in the altitude and sea-level adapted states (50% reduction during an average S(a)O(2) of 75%). On average, the BOLD response magnitude was 23...

  20. Random Forests as a tool for estimating uncertainty at pixel-level in SAR image classification

    DEFF Research Database (Denmark)

    Loosvelt, Lien; Peters, Jan; Skriver, Henning

    2012-01-01

    ... we introduce Random Forests for the probabilistic mapping of vegetation from high-dimensional remote sensing data and present a comprehensive methodology to assess and analyze classification uncertainty based on the local probabilities of class membership. We apply this method to SAR image data...

  1. Small-scale anomaly detection in panoramic imaging using neural models of low-level vision

    Science.gov (United States)

    Casey, Matthew C.; Hickman, Duncan L.; Pavlou, Athanasios; Sadler, James R. E.

    2011-06-01

    Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes to a movement, sound or touch that it detects, even when the stimulus is on a small scale; think of a camouflaged movement or the rustle of leaves. This automatic orientation allows us to prioritize the use of our eyes to raise awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of a pre-defined shape and scale. What makes this approach interesting is the ability of the competition between neurons to automatically filter noise, while still generalizing over the desired shape and scale. We present the results of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using background subtraction to highlight potential movement, the technique is able to correctly identify targets as little as 3 pixels wide while filtering small-scale noise.
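
    A minimal sketch of competitive Hebbian learning, the training rule named above: a winner-take-all layer whose winning unit moves toward each input patch, so units specialize to recurring shapes. The map mosaic and panoramic pipeline from the paper are not reproduced, and the patch shapes are hypothetical.

```python
# Minimal sketch: winner-take-all competitive Hebbian learning on 3x3
# image patches, so units come to detect recurring shapes.
import numpy as np

rng = np.random.default_rng(6)
n_units, dim = 8, 9                          # 8 detectors over 3x3 patches
W = rng.normal(0, 0.1, (n_units, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def train_step(patch, lr=0.05):
    x = patch.ravel() / (np.linalg.norm(patch) + 1e-9)
    winner = int(np.argmax(W @ x))           # competition: best-matching unit
    W[winner] += lr * (x - W[winner])        # Hebbian move toward the input
    W[winner] /= np.linalg.norm(W[winner])
    return winner

# Train on two recurring 3x3 shapes: a vertical and a horizontal bar.
v_bar = np.zeros((3, 3)); v_bar[:, 1] = 1.0
h_bar = np.zeros((3, 3)); h_bar[1, :] = 1.0
for _ in range(200):
    train_step(v_bar if rng.random() < 0.5 else h_bar)

# Query with lr=0: units typically specialize, so distinct winners emerge.
print(train_step(v_bar, lr=0.0), train_step(h_bar, lr=0.0))
```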

  2. Improving the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states using a noiseless amplifier

    International Nuclear Information System (INIS)

    Wang, Tianyi; Yu, Song; Zhang, Yi-Chen; Gu, Wanyi; Guo, Hong

    2014-01-01

    By employing a nondeterministic noiseless linear amplifier, we propose to increase the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states. With the covariance matrix transformation, the expression for the secret key rate under reverse reconciliation is derived against collective entangling cloner attacks. We show that the noiseless linear amplifier can compensate for the detrimental effect of the preparation noise, enhancing both the maximum transmission distance and the noise resistance. - Highlights: • A noiseless amplifier is applied in noisy coherent state quantum key distribution. • The negative effect of preparation noise is compensated by noiseless amplification. • The maximum transmission distance and noise resistance are both enhanced.

  3. Analysis of Unmanned Aerial System-Based CIR Images in Forestry—A New Perspective to Monitor Pest Infestation Levels

    Directory of Open Access Journals (Sweden)

    Jan Rudolf Karl Lehmann

    2015-03-01

    Full Text Available The detection of pest infestation is an important aspect of forest management. In the case of oak splendour beetle (Agrilus biguttatus) infestation, the affected oaks (Quercus sp.) show high levels of defoliation and altered canopy reflection signatures. These critical features can be identified in high-resolution colour infrared (CIR) images of the tree crown and branch level captured by Unmanned Aerial Systems (UAS). In this study, we used a small UAS equipped with a compact digital camera which had been calibrated and modified to record not only the visual but also the near infrared reflection (NIR) of possibly infested oaks. The flight campaigns were realized in August 2013, covering two study sites located in a rural area in western Germany. Both locations represent small-scale, privately managed commercial forests in which oaks are economically valuable species. Our workflow includes CIR/NIR image acquisition, mosaicking, georeferencing and pixel-based image enhancement, followed by object-based image classification techniques. A classification derived from a modified Normalized Difference Vegetation Index (NDVImod) was used to distinguish between five vegetation health classes, i.e., infested, healthy or dead branches, other vegetation and canopy gaps. We achieved an overall Kappa Index of Agreement (KIA) of 0.81 and 0.77 for each study site, respectively. This approach offers a low-cost alternative to private forest owners who pursue a sustainable management strategy.
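
    The index-based step can be illustrated as follows; the thresholds are assumptions for the sketch, and the study's five-class rule set and object-based classification are not reproduced.

```python
# Minimal sketch: a modified-NDVI computation from the NIR and red bands
# of a CIR image, followed by illustrative (assumed) health thresholds.
import numpy as np

def ndvi_mod(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float); red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)    # avoid division by zero

rng = np.random.default_rng(7)
nir_band = rng.integers(0, 256, (100, 100)).astype(np.uint8)
red_band = rng.integers(0, 256, (100, 100)).astype(np.uint8)

index = ndvi_mod(nir_band, red_band)
healthy = index > 0.4                          # assumed threshold for vigor
stressed = (index > 0.0) & (index <= 0.4)      # assumed stress interval
print(f"healthy: {healthy.mean():.1%}, stressed: {stressed.mean():.1%}")
```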

  4. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    Science.gov (United States)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide an initial guess for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel level accuracy in cases with small or large translation, deformation, or rotation.
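
    The feature-based seeding can be sketched with OpenCV's ORB: match keypoints between reference and deformed images, then take the median match displacement as a rigid-translation seed. The FG-GMM outlier rejection and RKHS deformation model from the paper are omitted, so this is only an illustration of the idea.

```python
# Minimal sketch: ORB feature matching to seed DIC with a translation guess.
import numpy as np
import cv2

def initial_guess(ref, deformed):
    orb = cv2.ORB_create(nfeatures=500, fastThreshold=10)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(deformed, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:50]
    disp = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                     for m in matches])
    return np.median(disp, axis=0)           # robust (du, dv) translation seed

# Synthetic speckle pattern shifted by dx=5, dy=3 pixels.
rng = np.random.default_rng(8)
speckle = (rng.random((200, 200)) * 255).astype(np.uint8)
ref = cv2.GaussianBlur(speckle, (5, 5), 0)   # smooth speckle, as in DIC targets
deformed = np.roll(ref, shift=(3, 5), axis=(0, 1))
print(initial_guess(ref, deformed))          # approximately [5. 3.]
```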

  5. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  6. Sparse interferometric millimeter-wave array for centimeter-level 100-m standoff imaging

    Science.gov (United States)

    Suen, Jonathan Y.; Lubin, Philip M.; Solomon, Steven L.; Ginn, Robert P.

    2013-05-01

    We present work on the development of a long range standoff concealed weapons detection system capable of imaging under very heavy clothing at distances exceeding 100 m with a cm resolution. The system is based off a combination of phased array technologies used in radio astronomy and SAR radar by using a coherent, multi-frequency reconstruction algorithm which can run at up to 1000 Hz frame rates and high SNR with a multi-tone transceiver. We show the flexible design space of our system as well as algorithm development, predicted system performance and impairments, and simulated reconstructed images. The system can be used for a variety of purposes including portal applications, crowd scanning and tactical situations. Additional uses include seeing through dust and fog.

  7. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise from the image is conducted by applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
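
    A minimal sketch of the described filter, under the usual 1.5×IQR fence convention: flag the centre pixel of each k×k window as noisy if it falls outside the window's interquartile fences, and replace it by the local mean of the inliers. The window size and fence factor are assumptions.

```python
# Minimal sketch: IQR outlier test per k x k window, with local averaging
# of the inlier pixels to replace detected noise.
import numpy as np

def iqr_denoise(img: np.ndarray, k: int = 3) -> np.ndarray:
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = img.astype(float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k].ravel()
            q1, q3 = np.percentile(win, [25, 75])
            lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
            if not lo <= out[i, j] <= hi:            # centre pixel is an outlier
                inliers = win[(win >= lo) & (win <= hi)]
                out[i, j] = inliers.mean()           # local averaging
    return out.astype(img.dtype)

rng = np.random.default_rng(9)
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = clean.copy()
salt = rng.random(clean.shape) < 0.05                # 5% impulse noise
noisy[salt] = 255
print(np.abs(iqr_denoise(noisy).astype(int) - clean.astype(int)).mean())
```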

  8. The impact of air pollution on the level of micronuclei measured by automated image analysis

    Czech Academy of Sciences Publication Activity Database

    Rössnerová, Andrea; Špátová, Milada; Rossner, P.; Solanský, I.; Šrám, Radim

    2009-01-01

    Roč. 669, 1-2 (2009), s. 42-47 ISSN 0027-5107 R&D Projects: GA AV ČR 1QS500390506; GA MŠk 2B06088; GA MŠk 2B08005 Institutional research plan: CEZ:AV0Z50390512 Keywords : micronuclei * binucleated cells * automated image analysis Subject RIV: DN - Health Impact of the Environment Quality Impact factor: 3.556, year: 2009

  9. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    Science.gov (United States)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position significantly deteriorates. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches: the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With the advancement in data acquisition approaches, high definition maps are generated which contain extra information, such as road lanes. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone's GPS receiver, images are taken by the smartphone's camera, and the ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with an average and standard deviation of 6.725 and 5.899 meters.

  10. LANE LEVEL LOCALIZATION; USING IMAGES AND HD MAPS TO MITIGATE THE LATERAL ERROR

    Directory of Open Access Journals (Sweden)

    S. Hosseinyalamdary

    2017-05-01

    Full Text Available In urban canyons where GNSS signals are blocked by buildings, the accuracy of the measured position significantly deteriorates. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches: the measured position is projected onto the road links (centerlines) and the lateral error of the measured position is reduced. With the advancement in data acquisition approaches, high definition maps are generated which contain extra information, such as road lanes. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark. The position is measured by a smartphone’s GPS receiver, images are taken by the smartphone’s camera, and the ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position: the error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with an average and standard deviation of 6.725 and 5.899 meters.
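
    The road-boundary step described in both versions of the paper can be sketched with OpenCV: mask the road-mark colours, then fit lines with the probabilistic Hough transform. The colour bounds and Hough parameters below are illustrative assumptions.

```python
# Minimal sketch: colour-mask the road marks, then fit boundary lines
# with the probabilistic Hough transform.
import numpy as np
import cv2

def detect_lane_lines(bgr: np.ndarray):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))   # white road marks
    edges = cv2.Canny(white, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=30, minLineLength=40, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]   # (x1, y1, x2, y2)

# Synthetic frame: dark road with two bright lane boundaries.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.line(frame, (60, 239), (140, 0), (255, 255, 255), 4)    # left boundary
cv2.line(frame, (260, 239), (180, 0), (255, 255, 255), 4)   # right boundary
print(len(detect_lane_lines(frame)) >= 2)                   # expect True
```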

  11. MR imaging of a malignant schwannoma and an osteoblastoma with fluid-fluid levels. Report of two new cases

    Energy Technology Data Exchange (ETDEWEB)

    Vilanova, J.C.; Dolz, J.L.; Aldoma, J.; Capdevila, A. [Centre Diagnostic Pedralbes, Ressonancia Magnetica, Barcelona (Spain); Maestro de Leon, J.L.; Aparicio, A. [Department of Neurosurgery, Hospital Mutua de Terrassa, Barcelona (Spain)

    1998-10-01

    One case of malignant schwannoma of the sacrum and another of occipital osteoblastoma were evaluated by MR imaging. Both tumors showed fluid-fluid levels with different signal intensities in the sequences performed. Pathologic examination revealed hemorrhagic fluid in both tumors. Malignant schwannoma and osteoblastoma should be included in the list of bone and soft-tissue tumors with fluid-fluid levels. Our data confirm the non-specificity of this finding, which only suggests the presence of previous intratumoral hemorrhage. (orig.) With 2 figs., 2 tabs., 17 refs.

  12. Radiation levels and image quality in patients undergoing chest X-ray examinations

    Science.gov (United States)

    de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto

    2017-11-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended with the aim of patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computerized radiography techniques. The x-ray beam parameters were selected and doses (specifically the entrance surface and incident air kerma) were evaluated based on images approved under the European criteria during postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, covering 200 PA and 100 LAT incidences. Results showed that the dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.

  13. Model-based failure detection for cylindrical shells from noisy vibration measurements.

    Science.gov (United States)

    Candy, J V; Fisher, K A; Guidry, B L; Chambers, D H

    2014-12-01

    Model-based processing is a theoretically sound methodology for addressing difficult objectives in complex physical problems involving multi-channel sensor measurement systems. It incorporates analytical models of both the physical phenomenology (complex vibrating structures, noisy operating environment, etc.) and the measurement processes (sensor networks, including noise) into the processor to extract the desired information. In this paper, a model-based methodology is developed to accomplish the task of online failure monitoring of a vibrating cylindrical shell externally excited by controlled excitations. A model-based processor is formulated to monitor system performance and detect potential failure conditions. The objective of this paper is to develop a real-time, model-based monitoring scheme for online diagnostics in a representative structural vibrational system based on controlled experimental data.

  14. Chaotic annealing with hypothesis test for function optimization in noisy environments

    International Nuclear Information System (INIS)

    Pan Hui; Wang Ling; Liu Bo

    2008-01-01

    As a special mechanism to avoid being trapped in local minima, the ergodicity property of chaos has been used as a novel searching technique for optimization problems, but there has been no work on chaos for optimization in noisy environments. In this paper, the performance of chaotic annealing (CA) for uncertain function optimization is investigated, and a new hybrid approach (namely CAHT) that combines CA and hypothesis testing (HT) is proposed. In CAHT, the merits of CA are applied for effective exploration and exploitation of the search space, while solution quality is identified reliably by the hypothesis test, reducing repeated searches to some extent and reasonably estimating solution performance. Simulation results and comparisons show that chaos is helpful in improving the performance of annealing for uncertain function optimization, and that CAHT can further improve search efficiency, quality and robustness.
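
    To make the combination concrete, the toy sketch below uses a logistic map for chaotic candidate generation and a Welch t-test (via SciPy) that accepts a move only when the improvement is statistically significant under repeated noisy evaluations. The parameter values and the acceptance rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy import stats

def caht_minimize(f_noisy, lo, hi, iters=200, samples=8, alpha=0.05, seed=1):
    """Toy chaotic annealing with a hypothesis test on noisy objective values;
    f_noisy(x) returns one noisy evaluation of the objective at x."""
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.1, 0.9)                  # logistic-map state in (0, 1)
    best_x = lo + z * (hi - lo)
    best_obs = [f_noisy(best_x) for _ in range(samples)]
    radius = (hi - lo) / 2                     # search radius, shrinks like a temperature
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                # ergodic logistic map
        cand = float(np.clip(best_x + radius * (2 * z - 1), lo, hi))
        cand_obs = [f_noisy(cand) for _ in range(samples)]
        # Accept only if the mean improvement is statistically significant.
        _, p = stats.ttest_ind(cand_obs, best_obs, equal_var=False)
        if np.mean(cand_obs) < np.mean(best_obs) and p < alpha:
            best_x, best_obs = cand, cand_obs
        radius *= 0.98                         # annealing schedule
    return best_x
```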

  15. A Novel Approach in Text-Independent Speaker Recognition in Noisy Environment

    Directory of Open Access Journals (Sweden)

    Nona Heydari Esfahani

    2014-10-01

    In this paper, robust text-independent speaker recognition is considered. The proposed method operates on manually silence-removed utterances that are segmented into smaller speech units containing a few phones and at least one vowel. The segments are the basic units for long-term feature extraction. Sub-band entropy is extracted directly in each segment. A robust vowel detection method is then applied to each segment to separate a high-energy vowel that is used as the unit for pitch frequency and formant extraction. By applying a clustering technique, the extracted short-term features, namely MFCC coefficients, are combined with the long-term features. Experiments using an MLP classifier show that the average speaker recognition accuracy is 97.33% for clean speech and 61.33% in a noisy environment at -2 dB SNR, an improvement over other conventional methods.

  16. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    Science.gov (United States)

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially-observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments. ©2013 Published by ISA. All rights reserved.

  17. Effect of noisy stimulation on neurobiological sensitization systems and its role for normal and pathological physiology

    Science.gov (United States)

    Huber, Martin; Braun, Hans; Krieg, Jürgen-Christian

    2004-03-01

    Sensitization is discussed as an important phenomenon in normal physiology, but also with respect to the initiation and progression of a variety of neuropsychiatric disorders such as epilepsy, substance-related disorders and recurrent affective disorders. The relevance of understanding the dynamics of sensitization phenomena is emphasized by recent findings that even single stimulations can induce long-lasting changes in biological systems. To address specific questions associated with sensitization dynamics, we use a computational approach and develop simple but physiologically plausible models. In the present study we examine the effect of noisy stimulation on sensitization development in the model. We consider sub- and suprathreshold stimulations with varying noise intensities and determine as response measures (i) the absolute number of stimulus-induced sensitizations and (ii) the temporal relation of stimulus-sensitization coupling. The findings indicate that stochastic effects, including stochastic resonance, might well contribute to the physiology of sensitization mechanisms under both normal and pathological conditions.

  18. Frequency-Zooming ARMA Modeling for Analysis of Noisy String Instrument Tones

    Directory of Open Access Journals (Sweden)

    Paulo A. A. Esquef

    2003-09-01

    This paper addresses model-based analysis of string instrument sounds. In particular, it reviews the application of autoregressive (AR) modeling for sound analysis/synthesis purposes. Moreover, a frequency-zooming autoregressive moving average (FZ-ARMA) modeling scheme is described. The performance of the FZ-ARMA method in modeling the modal behavior of isolated groups of resonance frequencies is evaluated for both synthetic and real string instrument tones immersed in background noise. We demonstrate that FZ-ARMA modeling is a robust tool for estimating the decay time and frequency of partials of noisy tones. Finally, we discuss the use of the method in the synthesis of string instrument sounds.
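
    As a simplified illustration of the underlying idea (a resonant mode corresponds to a pole of a fitted rational model), the sketch below fits a plain AR(2) model to one noisy decaying sinusoid and reads the decay time constant and frequency off the dominant pole. The actual FZ-ARMA scheme additionally zooms into a narrow frequency band before fitting, which is not reproduced here.

```python
import numpy as np

def partial_decay_and_freq(x, fs):
    """Estimate decay time (s) and frequency (Hz) of a single decaying
    sinusoid by least-squares fitting of an AR(2) model."""
    # AR(2): x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
    X = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])
    p = poles[np.argmax(np.abs(poles))]       # dominant (resonant) pole
    freq = abs(np.angle(p)) * fs / (2 * np.pi)
    tau = -1.0 / (fs * np.log(np.abs(p)))     # since |p| = exp(-1/(fs*tau))
    return tau, freq
```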

  19. Purity of Gaussian states: Measurement schemes and time evolution in noisy channels

    International Nuclear Information System (INIS)

    Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio; De Siena, Silvio

    2003-01-01

    We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states in both thermal and squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state.
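
    The observable connection the abstract refers to can be summarized by the standard purity formula for a single-mode Gaussian state; assuming the convention in which the vacuum covariance matrix has determinant 1/4, it reads

```latex
\mu \;=\; \operatorname{Tr}\rho^{2} \;=\; \frac{1}{2\sqrt{\det\sigma}}
```

    so the two conjugate quadrature variances together with their covariance, which determine det(sigma), suffice to fix the purity at any time, consistent with the joint measurement scheme described above.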

  20. Comparative study of speed estimators with highly noisy measurement signals for Wind Energy Generation Systems

    Energy Technology Data Exchange (ETDEWEB)

    Carranza, O. [Escuela Superior de Computo, Instituto Politecnico Nacional, Av. Juan de Dios Batiz S/N, Col. Lindavista, Del. Gustavo A. Madero 7738, D.F. (Mexico); Figueres, E.; Garcera, G. [Grupo de Sistemas Electronicos Industriales, Departamento de Ingenieria Electronica, Universidad Politecnica de Valencia, Camino de Vera S/N, 7F, 46020 Valencia (Spain); Gonzalez, L.G. [Departamento de Ingenieria Electronica, Universidad de los Andes, Merida (Venezuela)

    2011-03-15

    This paper presents a comparative study of several speed estimators used to implement a sensorless speed control loop in Wind Energy Generation Systems driven by power-factor-correction three-phase boost rectifiers. This rectifier topology reduces the low-frequency harmonic content of the generator currents; consequently, the generator power factor approaches unity and undesired vibrations of the mechanical system decrease. The compared techniques start from measurements of electrical variables such as currents and voltages, which contain low-frequency harmonics of the fundamental frequency of the wind generator as well as switching-frequency components due to the boost rectifier. In this noisy environment, the performance of the following estimation techniques has been analyzed: a synchronous reference frame phase-locked loop, speed reconstruction from the measured dc current and voltage of the rectifier, and speed estimation by means of both an Extended Kalman Filter and a Linear Kalman Filter. (author)
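
    As a generic illustration of the last technique, the sketch below runs a textbook linear Kalman filter on noisy scalar speed measurements with a constant-acceleration state model. The noise covariances and the measurement source are illustrative assumptions, not the tuning used in the paper.

```python
import numpy as np

def kalman_speed(z, dt, q=1e-2, r=5.0):
    """Track [speed, acceleration] from noisy scalar speed measurements z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-acceleration model
    H = np.array([[1.0, 0.0]])              # only speed is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P, out = np.zeros(2), np.eye(2), []
    for zk in z:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                      # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```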

  1. Implementation of two-party protocols in the noisy-storage model

    International Nuclear Information System (INIS)

    Wehner, Stephanie; Curty, Marcos; Schaffner, Christian; Lo, Hoi-Kwong

    2010-01-01

    The noisy-storage model allows the implementation of secure two-party protocols under the sole assumption that no large-scale reliable quantum storage is available to the cheating party. No quantum storage is thereby required for the honest parties. Examples of such protocols include bit commitment, oblivious transfer, and secure identification. Here, we provide a guideline for the practical implementation of such protocols. In particular, we analyze security in a practical setting where the honest parties themselves are unable to perform perfect operations and need to deal with practical problems such as errors during transmission and detector inefficiencies. We provide explicit security parameters for two different experimental setups using weak coherent and parametric down-conversion sources. In addition, we analyze a modification of the protocols based on decoy states.

  2. Quantum Privacy Amplification and the Security of Quantum Cryptography over Noisy Channels

    International Nuclear Information System (INIS)

    Deutsch, D.; Ekert, A.; Jozsa, R.; Macchiavello, C.; Popescu, S.; Sanpera, A.

    1996-01-01

    Existing quantum cryptographic schemes are not, as they stand, operable in the presence of noise on the quantum communication channel. Although they become operable if they are supplemented by classical privacy-amplification techniques, the resulting schemes are difficult to analyze and have not been proved secure. We introduce the concept of quantum privacy amplification and a cryptographic scheme incorporating it which is provably secure over a noisy channel. The scheme uses an "entanglement purification" procedure which, because it requires only a few quantum controlled-not and single-qubit operations, could be implemented using technology that is currently being developed. copyright 1996 The American Physical Society

  3. Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2009-05-01

    Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is the exploration of an unknown environment with the goal of finding one or more targets at unknown locations using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots toward target(s) in high-risk environments. The approach was developed and analyzed on multi-robot single- and multiple-target search, and further extended to multi-robot, multi-target search in noisy environments. The experimental results demonstrate how the availability of the radio frequency signal can significantly affect the time a robot needs to reach a target.
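
    A minimal sketch of the idea follows. The paper's exact adaptive RSS weighting is not specified here, so the weight `alpha` below is a hypothetical placeholder, and `rss` stands for a user-supplied signal-strength field to be maximized.

```python
import numpy as np

def pso_search(rss, n_robots=10, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Distributed-PSO sketch: robots climb a received-signal-strength field
    rss(x, y); an adaptive weight scales the pull toward the swarm best."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 100, size=(n_robots, 2))     # 100 m x 100 m arena
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([rss(*p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_robots, 1))
        # Hypothetical adaptive weight: trust the global best more when its
        # reading is strong relative to the swarm average.
        alpha = pbest_val.max() / (abs(pbest_val.max()) + abs(pbest_val.mean()) + 1e-9)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * alpha * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 100)
        val = np.array([rss(*p) for p in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest
```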

  4. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    Science.gov (United States)

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both situations: the native PT and the PT with a denoising preprocessing step.

  5. Quantum teleportation via noisy bipartite and tripartite accelerating quantum states: beyond the single mode approximation

    Science.gov (United States)

    Zounia, M.; Shamirzaie, M.; Ashouri, A.

    2017-09-01

    In this paper, quantum teleportation of an unknown quantum state via noisy maximally entangled bipartite (Bell) and tripartite (Greenberger-Horne-Zeilinger (GHZ)) states is investigated. We suppose that one of the observers who receives the sent state accelerates uniformly with respect to the sender. The interactions of the quantum system with its environment during the teleportation process impose noises. These (unital and nonunital) noises are: phase damping, phase flip, amplitude damping and bit flip. In expressing the modes of the Dirac field used as qubits in the accelerating frame, the so-called single mode approximation is not imposed. We calculate the fidelities of teleportation and discuss their behaviors using suitable plots. The effects of noise, acceleration and going beyond the single mode approximation are discussed. Although the Bell states yield higher fidelities than the GHZ states, the global behaviors of the two quantum systems with respect to some noise types, and therefore their fidelities, are different.

  6. Delay-enhanced coherence of spiral waves in noisy Hodgkin-Huxley neuronal networks

    International Nuclear Information System (INIS)

    Wang Qingyun; Perc, Matjaz; Duan Zhisheng; Chen Guanrong

    2008-01-01

    We study the spatial dynamics of spiral waves in noisy Hodgkin-Huxley neuronal ensembles evoked by different information transmission delays and network topologies. In classical settings of coherence resonance the intensity of noise is fine-tuned so as to optimize the system's response. Here, we keep the noise intensity constant and instead vary the length of the information transmission delay among coupled neurons. We show that there exists an intermediate transmission delay by which the spiral waves are optimally ordered, indicating the existence of delay-enhanced coherence of spatial dynamics in the examined system. Additionally, we examine the robustness of this phenomenon as the diffusive interaction topology changes towards the small-world type, and discover that shortcut links among distant neurons hinder the emergence of coherent spiral waves irrespective of the transmission delay length. The presented results thus provide insights that could facilitate the understanding of the effects of information transmission delay in realistic neuronal networks.

  7. Pediatric providers and radiology examinations. Knowledge and comfort levels regarding ionizing radiation and potential complications of imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wildman-Tobriner, Benjamin; Maxfield, Charles M. [Duke University Hospital, Department of Radiology, Durham, NC (United States); Parente, Victoria M. [Duke University Hospital, Department of Pediatrics, Durham, NC (United States)

    2017-12-15

    Pediatric providers should understand the basic risks of the diagnostic imaging tests they order and comfortably discuss those risks with parents. Appreciating providers' level of understanding is important to guide discussions and enhance relationships between radiologists and pediatric referrers. To assess pediatric provider knowledge of diagnostic imaging modalities that use ionizing radiation and to understand provider concerns about risks of imaging. A 6-question survey was sent via email to 390 pediatric providers (faculty, trainees and midlevel providers) from a single academic institution. A knowledge-based question asked providers to identify which radiology modalities use ionizing radiation. Subjective questions asked providers about discussions with parents, consultations with radiologists, and complications of imaging studies. One hundred sixty-nine pediatric providers (43.3% response rate) completed the survey. Greater than 90% of responding providers correctly identified computed tomography (CT), fluoroscopy and radiography as modalities that use ionizing radiation, and ultrasound and magnetic resonance imaging (MRI) as modalities that do not. Fewer (66.9% correct, P<0.001) knew that nuclear medicine utilizes ionizing radiation. A majority of providers (82.2%) believed that discussions with radiologists regarding ionizing radiation were helpful, but 39.6% said they rarely had time to do so. Providers were more concerned with complications of sedation and cost than they were with radiation-induced cancer, renal failure or anaphylaxis. Providers at our academic referral center have a high level of basic knowledge regarding modalities that use ionizing radiation, but they are less aware of ionizing radiation use in nuclear medicine studies. They find discussions with radiologists helpful and are concerned about complications of sedation and cost. (orig.)

  8. Cerebral misery perfusion diagnosed using hypercapnic blood-oxygenation-level-dependent contrast functional magnetic resonance imaging: a case report

    Directory of Open Access Journals (Sweden)

    D'Souza Olympio

    2010-02-01

    Introduction: Cerebral misery perfusion represents a failure of cerebral autoregulation. It is an important differential diagnosis in post-stroke patients presenting with collapses in the presence of haemodynamically significant cerebrovascular stenosis, particularly when cortical or internal watershed infarcts are present. When this condition occurs, further investigation should be performed immediately. Case presentation: A 50-year-old Caucasian man presented with a stroke secondary to complete occlusion of his left internal carotid artery. He went on to suffer recurrent seizures. Neuroimaging demonstrated numerous new watershed-territory cerebral infarcts. No source of arterial thromboembolism was demonstrable. Hypercapnic blood-oxygenation-level-dependent contrast functional magnetic resonance imaging was used to measure his cerebrovascular reserve capacity; the findings were suggestive of cerebral misery perfusion. Conclusions: Blood-oxygenation-level-dependent contrast functional magnetic resonance imaging allows the inference of cerebral misery perfusion. This procedure is cheaper and more readily available than positron emission tomography imaging, which is the current gold-standard diagnostic test. The most evaluated treatment for cerebral misery perfusion is extracranial-intracranial bypass. Although previous trials of this have been unfavourable, the results of new studies involving extracranial-intracranial bypass in high-risk patients identified during cerebral perfusion imaging are awaited. Cerebral misery perfusion is an important and under-recognized condition in which emerging imaging and treatment modalities present the possibility of practical and evidence-based management in the near future. Physicians should thus be aware of this disorder and of recent developments in diagnostic tests that allow its detection.

  9. Pediatric providers and radiology examinations: knowledge and comfort levels regarding ionizing radiation and potential complications of imaging.

    Science.gov (United States)

    Wildman-Tobriner, Benjamin; Parente, Victoria M; Maxfield, Charles M

    2017-12-01

    Pediatric providers should understand the basic risks of the diagnostic imaging tests they order and comfortably discuss those risks with parents. Appreciating providers' level of understanding is important to guide discussions and enhance relationships between radiologists and pediatric referrers. To assess pediatric provider knowledge of diagnostic imaging modalities that use ionizing radiation and to understand provider concerns about risks of imaging. A 6-question survey was sent via email to 390 pediatric providers (faculty, trainees and midlevel providers) from a single academic institution. A knowledge-based question asked providers to identify which radiology modalities use ionizing radiation. Subjective questions asked providers about discussions with parents, consultations with radiologists, and complications of imaging studies. One hundred sixty-nine pediatric providers (43.3% response rate) completed the survey. Greater than 90% of responding providers correctly identified computed tomography (CT), fluoroscopy and radiography as modalities that use ionizing radiation, and ultrasound and magnetic resonance imaging (MRI) as modalities that do not. Fewer (66.9% correct, P<0.001) knew that nuclear medicine utilizes ionizing radiation. A majority of providers (82.2%) believed that discussions with radiologists regarding ionizing radiation were helpful, but 39.6% said they rarely had time to do so. Providers were more concerned with complications of sedation and cost than they were with radiation-induced cancer, renal failure or anaphylaxis. Providers at our academic referral center have a high level of basic knowledge regarding modalities that use ionizing radiation, but they are less aware of ionizing radiation use in nuclear medicine studies. They find discussions with radiologists helpful and are concerned about complications of sedation and cost.

  10. Prostate MR imaging for patients with elevated serum PSA levels. The clinical value of diffusion-weighted and dynamic MR imaging in cancer screening

    International Nuclear Information System (INIS)

    Tanimoto, Akihiro; Shinmoto, Hiroshi; Kuribayasi, Sachio; Nakashima, Jun; Kohno, Hidaka; Murai, Masaru

    2006-01-01

    The purpose of this study was to evaluate the clinical value of diffusion-weighted imaging (DWI) and dynamic magnetic resonance imaging (MRI) in combination with T2-weighted imaging (T2W) for the detection of prostate cancer. Eighty-three patients with elevated serum levels of prostate-specific antigen (PSA) (>4.0 ng/mL) were evaluated by T2W, DWI, and dynamic MRI at 1.5T prior to needle biopsy. The results of T2W alone (protocol A), the combination of T2W and DWI (protocol B), and the combination of T2W+DWI and dynamic MRI (protocol C) were entered into a receiver operating characteristic (ROC) analysis. Prostate cancer was detected by pathology in 44 of 83 patients. The sensitivity, specificity, accuracy, and Az (the area under the ROC curve) for the detection of prostate cancer were 73%, 54%, 64%, and 0.71 in protocol A; 84%, 85%, 84%, and 0.90 in protocol B; and 95%, 74%, 86%, and 0.97 in protocol C. The sensitivity, specificity, and accuracy differed significantly among the 3 protocols. The combination of T2W, DWI, and dynamic MRI may be valuable for detecting prostate cancer and avoiding unnecessary biopsy. (author)
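
    For readers unfamiliar with how Az is obtained, the snippet below computes the area under the ROC curve for each protocol from reader scores using scikit-learn. The scores and labels are invented for illustration only and do not reproduce the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # 1 = biopsy-proven cancer (hypothetical)
scores = {                                      # hypothetical 5-point suspicion ratings
    "A: T2W alone":           [3, 2, 4, 3, 3, 1, 4, 2],
    "B: T2W + DWI":           [4, 2, 5, 4, 2, 1, 5, 2],
    "C: T2W + DWI + dynamic": [5, 1, 5, 4, 2, 1, 5, 2],
}
for name, s in scores.items():
    print(name, "Az =", round(roc_auc_score(labels, s), 2))
```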

  11. Classification and Discrimination of Different Fungal Diseases of Three Infection Levels on Peaches Using Hyperspectral Reflectance Imaging Analysis

    Directory of Open Access Journals (Sweden)

    Ye Sun

    2018-04-01

    Peaches are susceptible to infection by several postharvest diseases. In order to control disease and avoid potential health risks, it is important to identify suitable treatments for each disease type. In this study, the spectral and imaging information from hyperspectral reflectance (400-1000 nm) was used to evaluate and classify three kinds of common peach disease. To reduce the large dimensionality of the hyperspectral imaging data, principal component analysis (PCA) was applied to each wavelength image as a whole, and the first principal component was selected to extract the imaging features; a total of 54 parameters were extracted as imaging features for each sample. Three decay stages (slightly, moderately and severely decayed peaches) were considered for classification by a deep belief network (DBN) and partial least squares discriminant analysis (PLSDA). The results showed that the DBN model has better classification accuracy than the PLSDA model. The DBN model based on the integrated information (494 features) showed the highest classification results for the three diseases, with accuracies of 82.5%, 92.5%, and 100% for slightly-decayed, moderately-decayed and severely-decayed samples, respectively. The successive projections algorithm (SPA) was used to select the optimal features from the integrated information; six optimal features were selected from the total of 494 to establish a simple model. The SPA-PLSDA model showed better results which were more feasible for industrial application. The results show that the hyperspectral reflectance imaging technique is feasible for detecting different kinds of diseased peaches, especially at the moderately- and severely-decayed levels.
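
    The dimensionality-reduction step can be illustrated in a few lines of NumPy: project every pixel spectrum onto the first principal component to obtain a single feature image. This is a hedged sketch of the PCA stage only; the 54-parameter feature extraction and the DBN/PLSDA models are beyond this snippet.

```python
import numpy as np

def first_pc_image(cube):
    """Project a hyperspectral cube (rows, cols, bands) onto its first
    principal component, yielding one image for feature extraction."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                      # center each band
    cov = X.T @ X / (X.shape[0] - 1)         # band-by-band covariance
    vals, vecs = np.linalg.eigh(cov)
    pc1 = vecs[:, -1]                        # eigenvector of the largest eigenvalue
    return (X @ pc1).reshape(r, c)
```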

  12. Low gray scale values of computerized images of carotid plaques associated with increased levels of triglyceride-rich lipoproteins and with increased plaque lipid content

    DEFF Research Database (Denmark)

    Grønholdt, Marie-Louise M.; Nordestgaard, Børge; Weibe, Britt M.

    1997-01-01

    Relation between low gray scale values in computerized images of carotid plaques and 1) plasma levels of triglyceride-rich lipoproteins and 2) plaque lipid content.

  13. Atlas-based delineation of lymph node levels in head and neck computed tomography images

    International Nuclear Information System (INIS)

    Commowick, Olivier; Gregoire, Vincent; Malandain, Gregoire

    2008-01-01

    Purpose: Radiotherapy planning requires accurate delineations of the tumor and of the critical structures. Atlas-based segmentation has been shown to be very efficient for automatically delineating brain critical structures. We therefore propose to construct an anatomical atlas of the head and neck region. Methods and materials: Due to the high anatomical variability of this region, an atlas built from a single image, as for the brain, is not adequate. We address this issue by building a symmetric atlas from a database of manually segmented images. First, we develop an atlas construction method and apply it to a database of 45 Computed Tomography (CT) images from patients with node-negative pharyngo-laryngeal squamous cell carcinoma manually delineated for radiotherapy. Then, we qualitatively and quantitatively evaluate the results generated by the built atlas in a leave-one-out framework on the database. Results: The evaluation was performed on a subset of 12 patients among the original CT database of 45 patients. Qualitative results depict visually well delineated structures. The quantitative results are also good, with an error with respect to the best achievable results ranging from 0.196 to 0.404, with a mean of 0.253. Conclusions: These results show the feasibility of using such an atlas for radiotherapy planning. Many perspectives arise from this work, ranging from extensive validation to the construction of several atlases representing sub-populations, to account for large inter-patient variabilities, and populations with node-positive tumors.

  14. Real-time Implementation of Synthetic Aperture Vector Flow Imaging on a Consumer-level Tablet

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; Kjeldsen, Thomas Kim; Villagómez Hoyos, Carlos Armando

    2017-01-01

    In this work, a 2-D vector flow imaging (VFI) method based on synthetic aperture sequential beamforming (SASB) and directional transverse oscillation is implemented on a commercially available tablet. The SASB technique divides the beamforming process into two parts, whereby the required data rate … The processing is executed on the tablet's built-in GPU (Nvidia Tegra K1) through the OpenGL ES 3.1 API. Real-time performance was achieved with rates up to 26 VFI frames per second (38 ms/frame) for concurrent processing and Wi-Fi transmission.

  15. Análise acústica em brinquedos ruidosos (Acoustic analysis of noisy toys)

    Directory of Open Access Journals (Sweden)

    Carla Linhares Taxini

    2012-01-01

    … 10 toys with the seal of Inmetro and 10 without the seal were measured using a digital sound level meter in an acoustically treated room, and the sound analysis was performed using the Praat program. RESULTS: toys with the Inmetro seal placed at 2.5 cm from the equipment had intensities ranging from 61.50 to 91.55 dB(A) and from 69.75 to 95.05 dB(C); positioned at 25 cm, they ranged from 58.3 to 79.85 dB(A) and from 62.50 to 83.65 dB(C). The toys without warranty seals placed at 2.5 cm ranged from 67.45 to 94.30 dB(A) and from 65.4 to 99.50 dB(C), and at a distance of 25 cm from 61.30 to 87.45 dB(A) and from 63.75 to 97.60 dB(C). The findings demonstrate that in both groups, with and without warranty seals, there are noisy toys that exceed the values recommended by the current legislation. CONCLUSION: the toys without the Inmetro seal showed intensity values significantly higher than the other group, posing a greater risk to children's hearing health.

  16. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on the ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates a procedure that tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  17. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method consisting of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved using a statistical level-set approach with topology-preserving criteria, performing segmentation and separation of nuclei at the same time. The proposed method is evaluated using Hematoxylin and Eosin, and fluorescent stained images; qualitative and quantitative analyses show that the method outperforms thresholding and watershed segmentation approaches.
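
    A rough stand-in for the first two modules, assuming scikit-image and SciPy are available, is sketched below: H&E color deconvolution followed by smoothed-peak seeding. Note that `peak_local_max` substitutes for the generalized fast radial symmetry transform, and the statistical level-set evolution of module iii) is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2hed
from skimage.feature import peak_local_max

def nuclei_seeds(rgb):
    """Stain separation plus blob-peak seeding for nucleus candidates."""
    hed = rgb2hed(rgb)                    # H&E color deconvolution
    hema = hed[..., 0]                    # hematoxylin channel marks nuclei
    smoothed = ndi.gaussian_filter(hema, sigma=3)
    # Local maxima of the smoothed nuclear stain act as per-nucleus seeds.
    return peak_local_max(smoothed, min_distance=10)
```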

  18. The image of a research institution as an important element in shaping the level of competitiveness of the organisation

    Directory of Open Access Journals (Sweden)

    Świeczak Witold

    2017-06-01

    The primary objective of this publication is to define the factors and processes affecting the efficient course of actions undertaken in building a positive image of an organisation. The study raises key aspects of this issue. The observation that recipients are guided in their purchasing decisions by the opinions about a given product or service that reach them through all available content distribution channels appears increasingly often in study findings. The same studies have also confirmed that an inherent feature of a knowledge-based economy is the share of intangible assets (brand, reputation) in determining the position of an organisation. The unbridled increase in competition has generated a growing volume of advertising offers for buyers, as a result of which standing out among other market players has become more of an issue. One of the ways to enhance an organisation's presence in a changing environment is to promote the creation of a positive image and to shape favourable (valuable) opinions about it. Adapting the image concept to a constantly changing business environment, in terms of both competition and consumer preferences, requires an effective response to consumers' experiences in the cognitive, emotional and behavioural dimensions. If these are positive, they will contribute to building a favourable mindset among consumers, which in turn will facilitate the formation of a positive image of the organisation.

  19. Fibered confocal fluorescence microscopy for imaging apoptotic DNA fragmentation at the single-cell level in vivo

    International Nuclear Information System (INIS)

    Al-Gubory, Kais H.

    2005-01-01

    The major characteristic of cell death by apoptosis is the loss of nuclear DNA integrity by endonucleases, resulting in the formation of small DNA fragments. The application of confocal imaging to in vivo monitoring of dynamic cellular events, like apoptosis, within internal organs and tissues has been limited by the accessibility to these sites. Therefore, the aim of the present study was to test the feasibility of fibered confocal fluorescence microscopy (FCFM) to image in situ apoptotic DNA fragmentation in surgically exteriorized sheep corpus luteum in the living animal. Following intra-luteal administration of a fluorescent DNA-staining dye, YO-PRO-1, DNA cleavage within nuclei of apoptotic cells was serially imaged at the single-cell level by FCFM. This imaging technology is sufficiently simple and rapid to allow time series in situ detection and visualization of cells undergoing apoptosis in the intact animal. Combined with endoscope, this approach can be used for minimally invasive detection of fluorescent signals and visualization of cellular events within internal organs and tissues and thereby provides the opportunity to study biological processes in the natural physiological environment of the cell in living animals

  20. Dependence of accuracy of ESPRIT estimates on signal eigenvalues: the case of a noisy sum of two real exponentials.

    Science.gov (United States)

    Alexandrov, Theodore; Golyandina, Nina; Timofeyev, Alexey

    2009-02-26

    This paper is devoted to the estimation of parameters for a noisy sum of two real exponential functions. Singular Spectrum Analysis is used to extract the signal subspace, and then the ESPRIT method, which exploits signal subspace features, is applied to obtain estimates of the desired exponential rates. The dependence of estimation quality on the signal eigenvalues is investigated, and a special experimental design to test this relation is elaborated.
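
    A compact sketch of the subspace idea: build a trajectory (Hankel) matrix, take its d-dimensional signal subspace by SVD, and apply the ESPRIT shift-invariance relation to recover the exponential rates. The window length L below is an illustrative choice, not one of the designs studied in the paper.

```python
import numpy as np

def esprit_rates(x, d=2, L=20):
    """Estimate d exponential rates from a noisy series x via the signal
    subspace of its trajectory matrix (SSA + ESPRIT)."""
    N = len(x)
    H = np.column_stack([x[i:i + L] for i in range(N - L + 1)])
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :d]                               # signal subspace basis
    # Shift invariance: rows 0..L-2 map onto rows 1..L-1 by a d x d matrix.
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.log(np.linalg.eigvals(Psi))       # complex rates; real here

# Example: noisy sum of exp(-0.02 n) and exp(-0.1 n).
n = np.arange(100)
rng = np.random.default_rng(0)
x = np.exp(-0.02 * n) + 0.7 * np.exp(-0.1 * n) + 0.01 * rng.standard_normal(100)
print(np.sort(esprit_rates(x).real))            # approximately [-0.1, -0.02]
```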

  1. Controlling transfer of quantum correlations among bi-partitions of a composite quantum system by combining different noisy environments

    International Nuclear Information System (INIS)

    Zhang Xiu-Xing; Li Fu-Li

    2011-01-01

    The correlation dynamics are investigated for various bi-partitions of a composite quantum system consisting of two qubits and two independent and non-identical noisy environments. The two qubits have no direct interaction with each other and locally interact with their environments. Classical and quantum correlations, including the entanglement, are initially prepared only between the two qubits. We find that, contrary to the identical noisy environment case, the quantum correlation transfer direction can be controlled by combining different noisy environments. The amplitude-damping environment determines whether there exists entanglement transfer among the bi-partitions of the system. When one qubit is coupled to an amplitude-damping environment and the other to a bit-flip one, we find the very interesting result that all the quantum and classical correlations, and even the entanglement, originally existing between the qubits can be completely transferred, without any loss, to the qubit coupled to the bit-flip environment and the amplitude-damping environment. We also notice that it is possible to distinguish the quantum correlation from the classical correlation and the entanglement by combining different noisy environments. (general)

  2. Mapping Water Level Dynamics over Central Congo River Using PALSAR Images, Envisat Altimetry, and Landsat NDVI Data

    Science.gov (United States)

    Kim, D.; Lee, H.; Jung, H. C.; Beighley, E.; Laraque, A.; Tshimanga, R.; Alsdorf, D. E.

    2016-12-01

    Rivers and wetlands are very important ecological habitats and play a key role as sources of greenhouse gases (CO2 and CH4). Floodplain ecosystems depend on the interplay between vegetation and flood characteristics, and the water level is a prerequisite to understanding terrestrial water storage and discharge. Despite the lack of in situ data over the Congo Basin, which is the world's third largest in size (~3.7 million km2) and second only to the Amazon River in discharge (~40,500 m3 s-1 annual average between 1902 and 2015 at the main Brazzaville-Kinshasa gauging station), the surface water level dynamics in the wetlands have been successfully estimated using satellite altimetry, backscattering coefficients (σ0) from Synthetic Aperture Radar (SAR) images, and the interferometric SAR technique. However, water level estimation over the Congo River itself remains poorly quantified due to the sparse orbital spacing of radar altimeters; hence, we essentially have information only over the sparsely distributed so-called "virtual stations". Backscattering coefficients from SAR images have been successfully used to distinguish different vegetation types, to monitor flood conditions, and to assess soil moisture over the wetlands. However, σ0 has not been used to measure water level changes over the open river because of the very weak return signal due to specular scattering. In this study, we have discovered that changes in σ0 over the Congo River occur mainly due to water level changes in the river in the presence of water plants (macrophytes, emergent plants, and submersed plants), depending on the rising and falling stage inside the depression of the "Cuvette Centrale". We expand this finding into generating multi-temporal water level maps over the Congo River using PALSAR σ0, Envisat altimetry, and Landsat Normalized Difference Vegetation Index (NDVI) data. We also present preliminary estimates of the river

  3. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields an over-smoothed solution and converges prematurely. To add back more detail, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that the algorithm outperforms the original IST method and several excellent CS techniques.
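
    For orientation, a plain IST iteration (without the backtracking step or the adaptive nonlocal regularizer that define BAIST) looks like the sketch below; `A` is the CS measurement matrix and the step size is the standard 1/||A||^2 choice.

```python
import numpy as np

def ist_recover(y, A, lam=0.1, iters=200):
    """Baseline iterative shrinkage-thresholding for y = A x + noise,
    assuming x is sparse."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # guarantees convergence
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                    # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```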

  4. Mapping very low level occupational exposure in medical imaging: A useful tool in risk communication and decision making

    Energy Technology Data Exchange (ETDEWEB)

    Covens, P., E-mail: pcovens@vub.ac.be [Health Physics Department, Vrije Universiteit Brussel and UZ Brussel, Laarbeeklaan 103, 1090 Brussels (Belgium); Beeldvorming en Fysische Wetenschappen (BEFY), Vrije Universiteit Brussel, Laarbeeklaan 103, 1090 Brussels (Belgium); Berus, D., E-mail: dberus@vub.ac.be [Health Physics Department, Vrije Universiteit Brussel and UZ Brussel, Laarbeeklaan 103, 1090 Brussels (Belgium); Mey, J. de, E-mail: Johan.DeMey@uzbrussel.be [Department of Radiology, UZ Brussel, Laarbeeklaan 101, 1090 Brussels (Belgium); Beeldvorming en Fysische Wetenschappen (BEFY), Vrije Universiteit Brussel, Laarbeeklaan 103, 1090 Brussels (Belgium); Buls, N., E-mail: Nico.Buls@uzbrussel.be [Department of Radiology, UZ Brussel, Laarbeeklaan 101, 1090 Brussels (Belgium); Beeldvorming en Fysische Wetenschappen (BEFY), Vrije Universiteit Brussel, Laarbeeklaan 103, 1090 Brussels (Belgium)

    2012-09-15

    Objectives: The use of ionising radiation in medical imaging is accompanied by occupational exposure, which should be limited by optimised room design and safety instructions. These measures cannot, however, prevent workers from being exposed to instantaneous dose rates, e.g. the residual exposure through shielding or the exposure from discharged nuclear medicine patients. The latter elements are often questioned by workers, and detailed assessment should give more information about the impact on the individual radiation dose. Methods: Cumulated radiation exposure was measured in a university hospital over a period of 6 months by means of thermoluminescent dosimeters. Radiation exposure was measured at background locations and at locations where enhanced exposure levels are expected but where the impact on individual exposure is unclear. Results: The results show a normal distribution of the cumulated background radiation level. No cumulated radiation exposure significantly different from this background level could be found during the operation of intra-oral apparatus, during ultrasonography procedures among nuclear medicine patients, or at the operator consoles of most CT rooms. Conclusions: This 6-month survey offers useful information about occupational low-level exposure in medical imaging, and the findings can be useful in both risk communication and decision making.

  5. Mapping very low level occupational exposure in medical imaging: A useful tool in risk communication and decision making

    International Nuclear Information System (INIS)

    Covens, P.; Berus, D.; Mey, J. de; Buls, N.

    2012-01-01

    Objectives: The use of ionising radiation in medical imaging is accompanied by occupational exposure, which should be limited by optimised room design and safety instructions. These measures cannot, however, prevent workers from being exposed to instantaneous dose rates, e.g. the residual exposure through shielding or the exposure from discharged nuclear medicine patients. The latter elements are often questioned by workers, and detailed assessment should give more information about the impact on the individual radiation dose. Methods: Cumulated radiation exposure was measured in a university hospital over a period of 6 months by means of thermoluminescent dosimeters. Radiation exposure was measured at background locations and at locations where enhanced exposure levels are expected but where the impact on individual exposure is unclear. Results: The results show a normal distribution of the cumulated background radiation level. No cumulated radiation exposure significantly different from this background level could be found during the operation of intra-oral apparatus, during ultrasonography procedures among nuclear medicine patients, or at the operator consoles of most CT rooms. Conclusions: This 6-month survey offers useful information about occupational low-level exposure in medical imaging, and the findings can be useful in both risk communication and decision making.

  6. Development of imaging techniques to study the pathogenesis of biosafety level 2/3 infectious agents.

    Science.gov (United States)

    Rella, Courtney E; Ruel, Nancy; Eugenin, Eliseo A

    2014-12-01

    Despite significant advances in microbiology and molecular biology over the last decades, several infectious diseases remain global concerns, resulting in the death of millions of people worldwide each year. According to the Centers for Disease Control and Prevention (CDC), in 2012 there were 34 million people infected with HIV, 8.7 million new cases of tuberculosis, 500 million cases of hepatitis, and 50-100 million people infected with dengue. Several of these pathogens, despite high incidence, do not have reliable clinical detection methods. New or improved protocols have been generated to enhance detection and quantitation of several pathogens using high-end microscopy (light, confocal, and STORM microscopy) and imaging software. In the current manuscript, we discuss these approaches and the theories behind these methodologies. Thus, advances in imaging techniques will open new possibilities to discover therapeutic interventions to reduce or eliminate the devastating consequences of infectious diseases. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  7. Cardiac magnetic resonance imaging in patients with chest pain, high troponin levels and absence of coronary artery obstruction

    International Nuclear Information System (INIS)

    Avegliano, G.P.; Costabel, J.P.; Kuschnir, P.; Thierer, J.; Alves de Lima, A.; Sanchez, G.; Ronderos, J.; Huguet, M.; Petit, M.; Frangi, A.A.

    2011-01-01

    The prevalence of myocardial infarction with angiographically normal coronary arteries is approximately 7-10%. The etiological diagnosis is sometimes difficult and is important in terms of clinical practice and prognosis. The goal of our study was to present a series of consecutive patients with an initial diagnosis of acute coronary syndrome, high troponin levels and absence of coronary artery obstruction, in which cardiac magnetic resonance imaging (CMRI) gave a description of the myocardial lesion, orienting towards the etiological diagnosis. From January 2005 to December 2009, 720 consecutive patients with an initial diagnosis of acute coronary syndrome and elevated troponins were included; 64 of these patients did not present angiographically significant coronary artery stenosis. Within 72 ± 24 h after coronary angiography, these patients underwent CMRI using b-SSFP sequences for cine imaging in short-axis, 2-, 3- and 4-chamber views for the evaluation of segmental wall motion, with T2-weighted and delayed enhancement (DE) images of the myocardium acquired with an inversion-recovery sequence. The following diagnoses were made: myocarditis (39 patients); myocardial infarction (12 patients); Tako-Tsubo syndrome (8 patients); apical hypertrophic cardiomyopathy (2 patients); 3 patients remained without diagnosis. These findings demonstrate the usefulness of CMRI in the clinical scenario of patients with chest pain, inconclusive ECG findings and high troponin levels with angiographically normal coronary arteries. The presence and distribution pattern of DE make it possible to define the etiological diagnosis and interpret the physiopathological process. (authors)

  8. Blood oxygenation level dependent (BOLD). Renal imaging. Concepts and applications; Blood Oxygenation Level Dependent (BOLD). Bildgebung der Nieren. Konzepte und Anwendungen

    Energy Technology Data Exchange (ETDEWEB)

    Nissen, Johanna C.; Haneder, Stefan; Schoenberg, Stefan O.; Michaely, Henrik J. [Heidelberg Univ. Medizinische Fakultaet Mannheim (Germany). Inst. fuer Klinische Radiologie und Nuklearmedizin; Mie, Moritz B.; Zoellner, Frank G. [Heidelberg Univ. Medizinische Fakultaet Mannheim (DE). Inst. fuer Computerunterstuetzte Klinische Medizin (CKM)

    2010-07-01

    Many renal diseases, as well as several pharmacons, cause a change in renal blood flow and/or renal oxygenation. Blood oxygenation level dependent (BOLD) imaging takes advantage of local field inhomogeneities and is based on a T2*-weighted sequence. BOLD is a non-invasive method allowing an estimation of the renal, particularly the medullary, oxygenation and an indirect measurement of blood flow without the administration of contrast agents. Thus, the effects of different drugs on the kidney and of various renal diseases can be monitored and observed. This work provides an overview of the studies carried out so far and identifies ways in which BOLD can be used in clinical studies. (orig.)

  9. Use of high resolution images of orbital surface of waterproofing with different levels of urban land: case Irati-PR

    Directory of Open Access Journals (Sweden)

    Paulo Costa de Oliveira Filho

    2012-11-01

    The objective of this research was to present a detailed diagnosis of the use and occupation of urban land aimed at different levels of soil sealing, in a downtown Irati area of 14 blocks totaling 0.23 km2, from Quickbird satellite images with a spatial resolution of 0.60 m, by the method of on-screen interpretation and vectorization followed by classification. The area occupied by the classes representing the highest level of imperviousness is 33.218% of the total study area, while the area occupied by classes representing a lower impermeable level is 22.488% of the same area. The results show that the study area is highly sealed.

  10. Efficient globally optimal segmentation of cells in fluorescence microscopy images using level sets and convex energy functionals.

    Science.gov (United States)

    Bergeest, Jan-Philip; Rohr, Karl

    2012-10-01

    In high-throughput applications, accurate and efficient segmentation of cells in fluorescence microscopy images is of central importance for the quantification of protein expression and the understanding of cell function. We propose an approach for segmenting cell nuclei which is based on active contours using level sets and convex energy functionals. Compared to previous work, our approach determines the global solution. Thus, the approach does not suffer from local minima and the segmentation result does not depend on the initialization. We consider three different well-known energy functionals for active contour-based segmentation and introduce convex formulations of these functionals. We also suggest a numeric approach for efficiently computing the solution. The performance of our approach has been evaluated using fluorescence microscopy images from different experiments comprising different cell types. We have also performed a quantitative comparison with previous segmentation approaches. Copyright © 2012 Elsevier B.V. All rights reserved.
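
    As a usage-level illustration, scikit-image ships a Chan-Vese implementation from the same family of active-contour energy functionals; the snippet below applies it to a synthetic noisy blob. This is the classical, non-convex formulation, not the authors' globally optimal convex variant.

```python
import numpy as np
from skimage.segmentation import chan_vese

# Synthetic fluorescence-like image: one bright square plus noise.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)

mask = chan_vese(img, mu=0.1)      # level-set evolution of the Chan-Vese energy
print(mask.sum(), "pixels segmented as foreground")
```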

  11. Mass Spectrometry Imaging Can Distinguish on a Proteomic Level Between Proliferative Nodules Within a Benign Congenital Nevus and Malignant Melanoma.

    Science.gov (United States)

    Lazova, Rossitza; Yang, Zhe; El Habr, Constantin; Lim, Young; Choate, Keith Adam; Seeley, Erin H; Caprioli, Richard M; Yangqun, Li

    2017-09-01

    Histopathological interpretation of proliferative nodules occurring in association with congenital melanocytic nevi can be very challenging due to their similarities with congenital malignant melanoma and malignant melanoma arising in association with congenital nevi. We hereby report a diagnostically challenging case of congenital melanocytic nevus with proliferative nodules and ulcerations, which was originally misdiagnosed as congenital malignant melanoma. Subsequent histopathological examination in consultation by one of the authors (R.L.) and mass spectrometry imaging analysis rendered a diagnosis of congenital melanocytic nevus with proliferative nodules. In this case, mass spectrometry imaging, a novel method capable of distinguishing benign from malignant melanocytic lesions on a proteomic level, was instrumental in making the diagnosis of a benign nevus. We emphasize the importance of this method as an ancillary tool in the diagnosis of difficult melanocytic lesions.

  12. Advantages of GSO Scintillator in Imaging and Low Level Gamma-ray Spectroscopy

    CERN Document Server

    Sharaf, J

    2002-01-01

    The single GSO crystal is an excellent scintillation material featuring a high light yield and short decay time for gamma-ray detection. Its performance characteristics were investigated and directly compared to those of BGO. For this purpose, the two scintillators were cut into small crystals of approximately 4 x 4 x 10 mm{sup 3} and mounted on a PMT. Energy resolution, detection efficiency and counting precision were measured for various photon energies. In addition to this spectroscopic characterization, the imaging performance of GSO was studied using a scanning rig. The modulation transfer function was calculated and the spatial resolution evaluated by measurements of the detector's point spread function. It is shown that there exists some source intensity for which the two scintillators yield identical precision for identical count time. Below this intensity, GSO is superior to the BGO detector. The presented properties of GSO suggest potential applications of this scintillator in gamma-ray spectrosc...
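
    The modulation transfer function mentioned above is conventionally obtained as the normalized Fourier magnitude of the measured spread function. A minimal sketch (the sampling pitch and the 1-D profile extraction are placeholders for the scanning-rig measurement):

        import numpy as np

        def mtf_from_psf(psf_profile, pixel_pitch_mm):
            """MTF as the normalized magnitude of the Fourier transform of a
            1-D profile through the detector's point spread function."""
            psf = psf_profile / psf_profile.sum()                # unit area
            mtf = np.abs(np.fft.rfft(psf))
            mtf /= mtf[0]                                        # MTF(0) = 1
            freqs = np.fft.rfftfreq(len(psf), d=pixel_pitch_mm)  # cycles/mm
            return freqs, mtf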

  13. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data are typically noisy, and the noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data set.
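
    A minimal sketch of the idea as we read it (not the authors' code): each image is represented by the cosines and sines of its gradient orientation field, and ordinary PCA is applied to that representation instead of to raw pixel intensities, which damps the influence of non-Gaussian noise on the estimated subspace.

        import numpy as np

        def orientation_features(images):
            """Map each image to [cos(phi), sin(phi)] of its gradient
            orientations phi, the domain on which PCA is performed."""
            feats = []
            for img in images:
                gy, gx = np.gradient(img.astype(float))
                phi = np.arctan2(gy, gx)
                feats.append(np.concatenate([np.cos(phi).ravel(),
                                             np.sin(phi).ravel()]))
            return np.array(feats)

        def pca_directions(X, k):
            """Top-k principal directions via SVD of the centered data."""
            Xc = X - X.mean(axis=0)
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            return vt[:k]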

  14. Atomic force microscopy for cellular level manipulation: imaging intracellular structures and DNA delivery through a membrane hole.

    Science.gov (United States)

    Afrin, Rehana; Zohora, Umme Salma; Uehara, Hironori; Watanabe-Nakayama, Takahiro; Ikai, Atsushi

    2009-01-01

    The atomic force microscope (AFM) is a versatile tool for imaging, force measurement and manipulation of proteins, DNA, and living cells, essentially at the single-molecule level. At the cellular level, manipulations such as the extraction and identification of mRNAs from defined loci of a cell, insertion of plasmid DNA, and pulling of membrane proteins have been reported. In this study, AFM was used to create holes at defined loci on the cell membrane in order to investigate cell viability after hole creation, to visualize intracellular structure through the hole, and to attempt targeted gene delivery into living cells. To create large holes with an approximate diameter of 5-10 microm, a phospholipase A(2)-coated bead was attached to the AFM cantilever and allowed to touch the cell surface for approximately 5-10 min. The evidence of hole creation was obtained mainly from fluorescence images of Vybrant DiO-labeled cells before and after contact with the bead, and from AFM imaging of the contact area. In parallel, cells with a hole were imaged by AFM to reveal intracellular structures such as filamentous structures, presumably actin fibers, and mitochondria, which were identified by fluorescent labeling with rhodamine 123. Targeted gene delivery was also attempted by inserting an AFM probe coated with the Monster Green Fluorescent Protein phMGFP Vector for transfection of the cell. Following targeted transfection, the expression of green fluorescent protein (GFP) was observed and confirmed by fluorescence microscopy. Copyright (c) 2009 John Wiley & Sons, Ltd.

  15. Simultaneous low level treadmill exercise and intravenous dipyridamole stress thallium imaging

    International Nuclear Information System (INIS)

    Casale, P.N.; Guiney, T.E.; Strauss, H.W.; Boucher, C.A.

    1988-01-01

    Intravenous dipyridamole-thallium imaging unmasks ischemia in patients unable to exercise adequately. However, some of these patients can perform limited exercise, which, if added, may provide useful information. Treadmill exercise combined with dipyridamole-thallium imaging was performed in 100 patients and the results compared with those of 100 other patients, blindly matched for age and sex, who received dipyridamole alone. Exercise began after completion of the dipyridamole infusion. Mean ± 1 standard deviation peak heart rate (109 ± 19 vs 83 ± 12 beats/min, p less than 0.0001) and peak systolic and diastolic blood pressure (146 ± 28/77 ± 14 vs 125 ± 24/68 ± 11 mm Hg, p less than 0.0001) were higher in the exercise group than in the nonexercise group. There was no difference in the occurrence of chest pain, but more patients in the exercise group developed ST-segment depression (26 vs 12%, p less than 0.0001). The exercise group had fewer noncardiac side effects (4 vs 12%, p less than 0.01) and a higher target (heart) to background (liver) count ratio (2.1 ± 0.7 vs 1.2 ± 0.3; p less than 0.01), due to fewer liver counts. There were no deaths, myocardial infarctions or sustained arrhythmias in either group. Combined treadmill exercise and dipyridamole testing is safe and is associated with fewer noncardiac side effects, a higher target to background ratio and a higher incidence of clinical electrocardiographic ischemia than dipyridamole alone. Therefore, it is recommended whenever possible.

  16. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Energy Technology Data Exchange (ETDEWEB)

    Santana, Priscila do Carmo; Oliveira, Paulo Marcio Campos de; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila, E-mail: pridili@gmail.com [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil); Silva, Teogenes Augusto da [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-01-15

    Objective: to evaluate the level of ambient radiation in a PET/CT center. Materials and methods: previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed at several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period the dosimeters were collected and processed to determine the radiation level. Results: at none of the selected measurement points did the values exceed the radiation dose threshold for a controlled area (5 mSv/year) or a free area (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion: the present study demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. (author)
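
    The compliance check described here reduces to annualizing each TLD reading and comparing it with the regulatory thresholds quoted above (5 mSv/year for controlled areas, 0.5 mSv/year for free areas). A sketch with entirely hypothetical readings:

        MEASUREMENT_DAYS = 32
        LIMITS = {"controlled": 5.0, "free": 0.5}   # mSv/year, Brazilian regulation

        def annualized_dose(reading_msv, days=MEASUREMENT_DAYS):
            """Scale a dose accumulated over the exposure period to one year."""
            return reading_msv * 365.0 / days

        # Hypothetical readings (mSv over 32 days) and area classifications:
        points = {"control room": (0.20, "controlled"), "waiting room": (0.03, "free")}
        for name, (reading, area) in points.items():
            dose = annualized_dose(reading)
            status = "OK" if dose < LIMITS[area] else "ABOVE LIMIT"
            print(f"{name}: {dose:.2f} mSv/year ({status})")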

  17. Ambient radiation levels in positron emission tomography/computed tomography (PET/CT) imaging center

    Science.gov (United States)

    Santana, Priscila do Carmo; de Oliveira, Paulo Marcio Campos; Mamede, Marcelo; Silveira, Mariana de Castro; Aguiar, Polyanna; Real, Raphaela Vila; da Silva, Teógenes Augusto

    2015-01-01

    Objective To evaluate the level of ambient radiation in a PET/CT center. Materials and Methods Previously selected and calibrated TLD-100H thermoluminescent dosimeters were utilized to measure room radiation levels. During 32 days, the detectors were placed at several strategically selected points inside the PET/CT center and in adjacent buildings. After the exposure period the dosimeters were collected and processed to determine the radiation level. Results At none of the selected measurement points did the values exceed the radiation dose threshold for a controlled area (5 mSv/year) or a free area (0.5 mSv/year) as recommended by the Brazilian regulations. Conclusion In the present study the authors demonstrated that the whole shielding system is appropriate and, consequently, the workers are exposed to doses below the threshold established by Brazilian standards, provided the radiation protection standards are followed. PMID:25798004

  18. Terahertz imaging of Landau levels in HgTe-based topological insulators

    Energy Technology Data Exchange (ETDEWEB)

    Kadykov, Aleksandr M.; Krishtopenko, Sergey S. [Laboratoire Charles Coulomb (L2C), UMR 5221 CNRS–Université de Montpellier, Montpellier (France); Institute for Physics of Microstructures, Russian Academy of Sciences, GSP-105, 603950 Nizhny Novgorod (Russian Federation); Torres, Jeremie [Institut d'Electronique et des Systèmes (IES), UMR 5214 CNRS–Université de Montpellier, Montpellier (France); Consejo, Christophe; Ruffenach, Sandra; Marcinkiewicz, Michal; But, Dmytro; Teppe, Frederic, E-mail: frederic.teppe@umontpellier.fr [Laboratoire Charles Coulomb (L2C), UMR 5221 CNRS–Université de Montpellier, Montpellier (France); Knap, Wojciech [Laboratoire Charles Coulomb (L2C), UMR 5221 CNRS–Université de Montpellier, Montpellier (France); Institute of High Pressure Physics, Polish Academy of Sciences, 01-447 Warsaw (Poland); Morozov, Sergey V.; Gavrilenko, Vladimir I. [Institute for Physics of Microstructures, Russian Academy of Sciences, GSP-105, 603950 Nizhny Novgorod (Russian Federation); Lobachevsky State University of Nizhny Novgorod, 603950 Nizhny Novgorod (Russian Federation); Mikhailov, Nikolai N. [Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences, pr. Akademika Lavrent'eva 13, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Dvoretsky, Sergey A. [Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences, pr. Akademika Lavrent'eva 13, 630090 Novosibirsk (Russian Federation)

    2016-06-27

    We report on sub-terahertz photoconductivity under magnetic field of a two-dimensional topological insulator based on HgTe quantum wells. We perform a detailed visualization of Landau levels by means of photoconductivity measured at different gate voltages. This technique allows one to determine the critical magnetic field corresponding to the topological phase transition from an inverted to a normal band structure, even in almost gapless samples. The comparison with realistic calculations of Landau levels reveals a smaller role of bulk inversion asymmetry in HgTe quantum wells than was previously assumed.

  19. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate {sup 99m}Tc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB as compared to the PH collimator, due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using NCO makes a very small improvement in resolution over CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a PH collimator.
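
    For reference, the OS-EM algorithm used above applies the multiplicative EM update one projection subset at a time, which accelerates convergence over plain ML-EM by roughly the number of subsets. A compact sketch (a dense system matrix and contiguous row splitting are simplifications; real implementations interleave projection angles and fold attenuation compensation into A):

        import numpy as np

        def os_em(A, g, n_subsets=8, n_iters=5, eps=1e-12):
            """Ordered-subsets EM reconstruction.
            A : (M, N) system matrix mapping image (N,) to projections (M,)
            g : (M,) measured projection counts"""
            f = np.ones(A.shape[1])
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iters):
                for idx in subsets:
                    As, gs = A[idx], g[idx]
                    ratio = gs / (As @ f + eps)            # measured / predicted
                    f *= (As.T @ ratio) / (As.T @ np.ones(len(idx)) + eps)
            return f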

  20. Gold nanocrystal labeling allows low-density lipoprotein imaging from the subcellular to macroscopic level

    NARCIS (Netherlands)

    Allijn, Iris E.; Leong, Wei; Tang, Jun; Gianella, Anita; Mieszawska, Aneta J.; Fay, Francois; Ma, Ge; Russell, Stewart; Callo, Catherine B.; Gordon, Ronald E.; Korkmaz, Emine; Post, Jan Andries; Zhao, Yiming; Gerritsen, Hans C.; Thran, Axel; Proksa, Roland; Daerr, Heiner; Storm, Gert; Fuster, Valentin; Fisher, Edward A.; Fayad, Zahi A.; Mulder, Willem J. M.; Cormode, David P.

    2013-01-01

    Low-density lipoprotein (LDL) plays a critical role in cholesterol transport and is closely linked to the progression of several diseases. This motivates the development of methods to study LDL behavior from the microscopic to whole-body level. We have developed an approach to efficiently load LDL

  1. Evaluation of Renal Oxygenation Level Changes after Water Loading Using Susceptibility-Weighted Imaging and T2* Mapping.

    Science.gov (United States)

    Ding, Jiule; Xing, Wei; Wu, Dongmei; Chen, Jie; Pan, Liang; Sun, Jun; Xing, Shijun; Dai, Yongming

    2015-01-01

    To assess the feasibility of susceptibility-weighted imaging (SWI) for monitoring changes in renal oxygenation level after water loading. Thirty-two volunteers (age, 28.0 ± 2.2 years) were enrolled in this study. SWI and multi-echo gradient echo sequence-based T2(*) mapping were used to cover the kidney before and after water loading. Cortical and medullary parameters were measured using small regions of interest, and their relative changes due to water loading were calculated based on baseline and post-water loading data. An intraclass correlation coefficient analysis was used to assess inter-observer reliability of each parameter. A receiver operating characteristic curve analysis was conducted to compare the performance of the two methods for detecting renal oxygenation changes due to water loading. Both medullary phase and medullary T2(*) values increased after water loading (p < 0.05), whereas no significant differences were found in the cortical phase or cortical T2(*) changes (p > 0.05). Interobserver reliability was excellent for the T2(*) values, good for the SWI cortical phase values, and moderate for the SWI medullary phase values. The area under the receiver operating characteristic curve of the SWI medullary phase values was 0.85 and was not different from that of the medullary T2(*) value (0.84). Susceptibility-weighted imaging enabled monitoring of changes in medullary oxygenation level after water loading, and may offer feasibility comparable to that of T2(*) mapping for detecting renal oxygenation level changes due to water loading.

  2. Application of Data Mining and Knowledge Discovery Techniques to Enhance Binary Target Detection and Decision-Making for Compromised Visual Images

    National Research Council Canada - National Science Library

    Repperger, D. W; Phillips, C. A; Schrider, C. D; Smith, E. A

    2004-01-01

    In an effort to improve decision-making on the identity of unknown objects appearing in visual images when the surrounding environment may be noisy and cluttered, a highly sensitive target detection...

  3. Correlation between serum VEGF level and CT perfusion imaging in patients with primary liver cancer pre-and post TACE

    International Nuclear Information System (INIS)

    Jia Zhongzhi; Huang Yuanquan; Feng Yaoliang; Shi Haibin

    2010-01-01

    Objective: To investigate the correlation between serum vascular endothelial growth factor (VEGF) level and CT perfusion parameters in patients with primary liver cancer (PLC) before and after transcatheter arterial chemoembolization (TACE) treatment. Methods: Serum VEGF level was measured and CT perfusion imaging was performed 1 day before and 6-8 and 32-40 days after TACE in 18 patients with PLC. The serum VEGF level, the tumor's arterial liver perfusion (ALP), the portal vein perfusion (PVP) and the hepatic artery perfusion index (HPI) were measured pre- and post-TACE; the pre-TACE and post-TACE results were compared and statistically analyzed. Results: Based on the therapeutic results, the patients were divided into a complete response (CR) group and a partial response or stable disease (PR+SD) group. Although no significant difference in serum VEGF level, tumor ALP, PVP or HPI existed between the two groups pre-TACE, there were significant differences in ALP and HPI 6-8 days after TACE (P<0.05). A significant difference in serum VEGF level also existed in the CR group (P<0.05), but not in the PR+SD group (P=0.221), 32-40 days post-TACE. The serum VEGF level showed a positive correlation with the tumor's ALP and HPI. Conclusion: The serum VEGF level can indirectly reflect the neovascularization of the tumor, while CTPI can directly and quantitatively reflect the hemodynamic changes of the tumor post-TACE. Moreover, a positive correlation exists between serum VEGF level and ALP and HPI. Therefore, the determination of serum VEGF level together with CTPI is very useful both in evaluating TACE efficacy and in making a therapeutic schedule. (authors)

  4. Venus: cloud level circulation during 1982 as determined from Pioneer cloud photopolarimeter images. II. Solar longitude dependent circulation

    International Nuclear Information System (INIS)

    Limaye, S.S.

    1988-01-01

    Pioneer Venus Orbiter images obtained in 1982 indicate a marked solar-locked dependence of cloud level circulation in both averaged cloud motions and cloud layer UV reflectivity. An apparent relationship is noted between horizontal divergence and UV reflectivity: the highest reflectivities are associated with regions of convergence at high latitudes, while lower values are associated with equatorial latitude regions where the motions are divergent. In solar-locked coordinates, the rms deviation of normalized UV brightness is higher at 45-deg latitudes than in equatorial regions. 37 references

  5. SD LMS L-Filters for Filtration of Gray Level Images in Time-Spatial Domain Based on GLCM Features

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2008-01-01

    In this paper, a new kind of adaptive signal-dependent LMS L-filter for the suppression of mixed noise in greyscale images is developed. It is based on texture parameter measurement as a modification of the spatial impulse detector structure. Moreover, one of the GLCM (Gray Level Co-occurrence Matrix) features, namely the contrast (inertia), adjusted by a threshold, is utilised as a switch between partial filters. Finally, at the positions assigned to the partial filters, adaptive LMS versions of L-filters are chosen.
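
    A rough sketch of the switching idea, with plain median and averaging filters standing in for the adaptive LMS L-filters and a hypothetical contrast threshold: local GLCM contrast is computed per window and decides which partial filter supplies each output pixel (median where contrast is high, suggesting impulses or edges; averaging where it is low).

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter

        def glcm_contrast(patch, levels=16):
            """Contrast (inertia) of the GLCM of horizontally adjacent pixels."""
            q = np.clip((patch / 256.0 * levels).astype(int), 0, levels - 1)
            glcm = np.zeros((levels, levels))
            for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
                glcm[i, j] += 1
            glcm /= max(glcm.sum(), 1.0)
            ii, jj = np.indices((levels, levels))
            return float((((ii - jj) ** 2) * glcm).sum())

        def switched_filter(img, win=5, thresh=20.0):
            """Per-pixel switch between two partial filters by GLCM contrast."""
            med = median_filter(img, size=win)
            avg = uniform_filter(img.astype(float), size=win)
            out = np.empty(img.shape, dtype=float)
            h = win // 2
            pad = np.pad(img, h, mode='reflect')
            for r in range(img.shape[0]):
                for c in range(img.shape[1]):
                    patch = pad[r:r + win, c:c + win]
                    out[r, c] = med[r, c] if glcm_contrast(patch) > thresh else avg[r, c]
            return out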

  6. Megalophallus as a sequela of priapism in sickle cell anemia: use of blood oxygen level-dependent magnetic resonance imaging.

    Science.gov (United States)

    Kassim, A A; Umans, H; Nagel, R L; Fabry, M E

    2000-09-01

    Priapism is a common complication of sickle cell anemia. We report a little-known sequela of priapism: painless megalophallus with significant penile enlargement. The patient had had an intense episode of priapism 9 years previously, and his penis had remained enlarged. Blood oxygen level-dependent magnetic resonance imaging revealed enlarged, hypoxic corpora cavernosa. The megalophallus probably resulted from permanent loss of elasticity of the tunica albuginea due to severe engorgement during the episode of priapism. This sequela needs to be recognized by physicians because no intervention is necessary and sexual function seems to remain intact.

  7. Levels of dose in head exams by TAC in Cuba. Interrelation with parameters of image quality

    International Nuclear Information System (INIS)

    Fleitas Estevez, Ileana; Mora Machado, Roxana de la; Guevara Ferrer, Carmen R.

    2001-01-01

    In recent years an increase in the use of computed tomography (CT) as an efficient diagnostic method has been observed. On the other hand, this type of study involves high dose levels imparted to the patient. In Cuba, there are no reported values of those dose levels for typical CT studies. The National Control Center for Medical Devices has for several years been carrying out investigations to determine guidance dose levels for different examinations in radiodiagnosis, taking into account the recommendations of the International Basic Safety Standards, which have been adopted in Cuba. This paper presents a study of imparted doses for typical head examinations in 10 CT scanners (6 of helical and 3 of axial technology). Values of CTDI, CTDIw (weighted) and nCTDIw (weighted and normalized) were calculated. The relation between the CTDI at the surface and the CTDI at the center of the phantom for the largest slice width was calculated for each CT scanner. Other parameters, such as CT number and its spatial uniformity, noise, contrast scale, sensitivity profiles, MTF evaluation, table movement and alignment between lasers and radiation field, were also obtained. These parameters were evaluated by means of the manufacturer's phantom (Siemens). (author)
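
    For reference, the weighted and normalized dose indices named above are standard combinations of the phantom measurements: CTDIw weights the central and (averaged) peripheral chamber readings by 1/3 and 2/3, and nCTDIw divides by the tube current-time product. A sketch with hypothetical head-phantom readings:

        def ctdi_w(ctdi_center, ctdi_periphery_mean):
            """Weighted CTDI (mGy): 1/3 central + 2/3 mean peripheral reading."""
            return ctdi_center / 3.0 + 2.0 * ctdi_periphery_mean / 3.0

        def n_ctdi_w(ctdi_w_mgy, mas):
            """Normalized weighted CTDI (mGy/mAs)."""
            return ctdi_w_mgy / mas

        # Hypothetical readings for one scanner:
        ctdiw = ctdi_w(ctdi_center=38.0, ctdi_periphery_mean=42.0)   # mGy
        print(f"CTDIw = {ctdiw:.1f} mGy, nCTDIw = {n_ctdi_w(ctdiw, 200):.3f} mGy/mAs")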

  8. Scaling law of diffusivity generated by a noisy telegraph signal with fractal intermittency

    International Nuclear Information System (INIS)

    Paradisi, Paolo; Allegrini, Paolo

    2015-01-01

    In many complex systems the non-linear cooperative dynamics determine the emergence of self-organized, metastable structures that are associated with a birth–death process of cooperation. This is found to be described by a renewal point process, i.e., a sequence of crucial birth–death events corresponding to transitions among states that are faster than the typical long lifetime of the metastable states. Metastable states are highly correlated, but the occurrence of crucial events is typically associated with a fast memory drop, which is the reason for the renewal condition. Consequently, these complex systems display a power-law decay and, thus, long-range or scale-free behavior, in both time correlations and the distribution of inter-event times, i.e., fractal intermittency. The emergence of fractal intermittency is then a signature of complexity. However, the scaling features of complex systems are, in general, affected by the presence of added white or short-term noise. This has also been found for fractal intermittency. In this work, after a brief review of metastability and noise in complex systems, we discuss the emerging paradigm of Temporal Complexity. Then, we propose a model of noisy fractal intermittency, where noise is interpreted as a renewal Poisson process with event rate r_p. We show that the presence of Poisson noise causes the emergence of a normal diffusion scaling in the long-time range of diffusion generated by a telegraph signal driven by noisy fractal intermittency. We analytically derive the scaling law of the long-time normal diffusivity coefficient. We find the surprising result that this long-time normal diffusivity depends not only on the Poisson event rate, but also on the parameters of the complex component of the signal: the power exponent μ of the inter-event time distribution, denoted as the complexity index, and the time scale T needed to reach the asymptotic power-law behavior marking the emergence of complexity. In particular
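
    The model lends itself to direct simulation: a ±1 telegraph signal renews at the union of 'complex' events with power-law waiting times (complexity index μ, time scale T) and Poisson noise events with rate r_p, and the long-time normal diffusivity is estimated from the ensemble variance of the integrated signal. A sketch, with every numeric value an arbitrary illustration:

        import numpy as np

        def noisy_telegraph_walk(mu=2.2, T=1.0, r_p=0.5, t_max=1e3, rng=None):
            """Displacement x(t_max) of a walker driven by a +/-1 telegraph signal.
            Waiting times are the minimum of a power-law ('complex') draw with
            survival (1 + tau/T)**(1 - mu) and an exponential (Poisson) draw;
            the state is redrawn at every event (renewal condition)."""
            rng = rng or np.random.default_rng()
            t, x, state = 0.0, 0.0, 1.0
            while t < t_max:
                tau_complex = T * (rng.random() ** (-1.0 / (mu - 1.0)) - 1.0)
                tau_poisson = rng.exponential(1.0 / r_p)
                tau = min(tau_complex, tau_poisson, t_max - t)
                x += state * tau                  # ballistic flight between events
                t += tau
                state = 1.0 if rng.random() < 0.5 else -1.0
            return x

        # Long-time normal diffusivity from the ensemble variance, Var[x] ~ 2 D t:
        rng = np.random.default_rng(0)
        t_max = 1e3
        xs = np.array([noisy_telegraph_walk(t_max=t_max, rng=rng) for _ in range(500)])
        print(f"estimated D = {xs.var() / (2.0 * t_max):.3f}")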

  9. Ship Detection in Optical Remote Sensing Images Based on Wavelet Transform and Multi-Level False Alarm Identification

    Directory of Open Access Journals (Sweden)

    Fang Xu

    2017-09-01

    Ship detection by Unmanned Airborne Vehicles (UAVs) and satellites plays an important role in a spectrum of related military and civil applications. To improve detection efficiency, accuracy, and speed, a novel coarse-to-fine ship detection method is presented. Ship targets are viewed as uncommon regions in the sea background caused by differences in colors, textures, shapes, or other factors. Inspired by this fact, a global saliency model is constructed based on the high-frequency coefficients of a multi-scale and multi-direction wavelet decomposition, which can characterize different feature information, from edges to texture, of the input image. To further reduce false alarms, a new and effective multi-level discrimination method is designed based on improved entropy and pixel distribution, which is robust against the interference introduced by islands, coastlines, clouds, and shadows. The experimental results on optical remote sensing images validate that the presented saliency model outperforms the comparative models in terms of the area under the receiver operating characteristic curve score and accuracy on images of different sizes. After target identification, the locations and number of ships of various sizes and colors can be detected accurately and quickly, with high robustness.
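
    A minimal sketch of the coarse saliency stage, using a separable DWT from PyWavelets as a stand-in for the paper's multi-scale, multi-direction decomposition: suppress the low-frequency approximation, reconstruct from the high-frequency detail bands alone, and smooth the magnitude into a saliency map in which ship-like anomalies stand out from the sea background.

        import numpy as np
        import pywt
        from scipy.ndimage import gaussian_filter

        def wavelet_saliency(img, wavelet='db2', level=3):
            """Saliency from high-frequency wavelet coefficients only."""
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            coeffs[0] = np.zeros_like(coeffs[0])    # drop the smooth sea background
            detail = pywt.waverec2(coeffs, wavelet)
            sal = gaussian_filter(np.abs(detail), sigma=2)
            return (sal - sal.min()) / (np.ptp(sal) + 1e-9)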

  10. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    Science.gov (United States)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender determination is an important step in human-computer interaction and identification processes. The human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features extracted from the face, eye and lip regions by a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features have been extracted from automatically obtained face, eye and lip regions. All of the extracted features have been combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor) for gender classification. The Nottingham Scan face database, which consists of frontal face images of 100 people (50 male and 50 female), is used for this purpose. As the result of the experimental studies, the highest success rate, 98%, has been achieved by the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
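
    A sketch of the hybrid feature extraction for one region crop (scikit-image and scikit-learn stand-ins; the parameter choices, quantization and classifier settings are our assumptions, and older scikit-image releases spell the GLCM functions greycomatrix/greycoprops):

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
        from sklearn.svm import SVC

        def region_features(gray_region, P=8, R=1, levels=32):
            """LBP histogram + GLCM statistics for a face, eye or lip crop."""
            lbp = local_binary_pattern(gray_region, P, R, method='uniform')
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            q = (gray_region / 256.0 * levels).astype(np.uint8)  # quantize gray levels
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            stats = [graycoprops(glcm, p).ravel()
                     for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
            return np.concatenate([hist] + stats)

        # Rows of X concatenate the face, eye and lip region features per image:
        # clf = SVC(kernel='rbf').fit(X_train, y_train); y_pred = clf.predict(X_test)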

  11. Segmentation of neuroanatomy in magnetic resonance images

    Science.gov (United States)

    Simmons, Andrew; Arridge, Simon R.; Barker, G. J.; Tofts, Paul S.

    1992-06-01

    Segmentation in neurological magnetic resonance imaging (MRI) is necessary for feature extraction, volume measurement and the three-dimensional display of neuroanatomy. Automated and semi-automated methods offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they give. We have used dual-echo multi-slice spin-echo data sets, which take advantage of the intrinsically multispectral nature of MRI. As a pre-processing step, an RF non-uniformity correction is applied, and if the data are noisy the images are smoothed using a non-isotropic blurring method. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge-focusing has been used to significantly simplify edge images and thus allow simple post-processing to pick out the brain contour in each slice of the data set. Edge-focusing is a technique which locates significant edges using a high degree of smoothing at a coarse level and tracks these edges to a fine level where the edges can be determined with high positional accuracy. Both 2-D and 3-D edge-detection methods have been compared. Once isolated, the brain is further processed to identify CSF and, depending upon the MR pulse sequence used, the brain itself may be subdivided into gray matter and white matter using semi-automatic contrast enhancement and clustering methods.
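
    The edge-focusing step can be sketched as coarse-to-fine edge tracking, here with Canny detection at decreasing smoothing scales standing in for the original operator (the scale ladder and tracking distance are arbitrary):

        import numpy as np
        from scipy.ndimage import binary_dilation
        from skimage.feature import canny

        def edge_focus(img, sigmas=(8, 4, 2, 1), track_dist=2):
            """Keep only fine-scale edges lying near edges found at the
            previous, coarser smoothing scale (edge focusing)."""
            edges = canny(img, sigma=sigmas[0])
            for s in sigmas[1:]:
                near = binary_dilation(edges, iterations=track_dist)
                edges = canny(img, sigma=s) & near
            return edges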

  12. Estimation of urban surface water at subpixel level from neighborhood pixels using multispectral remote sensing image (Conference Presentation)

    Science.gov (United States)

    Xie, Huan; Luo, Xin; Xu, Xiong; Wang, Chen; Pan, Haiyan; Tong, Xiaohua; Liu, Shijie

    2016-10-01

    Water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. While remote sensing has increasingly been used for water mapping in rural areas, applying this spatially explicit approach in urban areas remains challenging, because urban water bodies are mostly small and spectral confusion between water and the complex features of the urban environment is widespread. The water index is the most common method for water extraction at the pixel level, and spectral mixture analysis (SMA) has recently been widely employed in analyzing the urban environment at the subpixel level. In this paper, we introduce an automatic subpixel water mapping method for urban areas using multispectral remote sensing data. The objectives of this research are: (1) developing an automatic technique for extracting land-water mixed pixels by means of a water index; (2) deriving the most representative endmembers of water and land by utilizing neighboring water pixels and an adaptive, iteratively optimized neighboring land pixel, respectively; (3) applying a linear unmixing model for subpixel water fraction estimation. Specifically, to automatically extract land-water pixels, locally weighted scatter-plot smoothing is first applied to the original histogram curve of the WI image. The Otsu threshold is then derived as the starting point for selecting land-water pixels based on the histogram of the WI image, with the land threshold and water threshold determined from the slopes of the histogram curve. Based on this pixel-level processing, the image is divided into three parts: water pixels, land pixels, and mixed land-water pixels. Spectral mixture analysis (SMA) is then applied to the mixed land-water pixels for water fraction estimation at the subpixel level. With the assumption that the endmember signature of a target pixel should be more similar to adjacent pixels due to spatial dependence, the endmembers of water and land are determined
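
    Once the water and land endmembers are fixed, the subpixel water fraction of each mixed pixel follows in closed form from the two-endmember linear mixture model with the sum-to-one constraint. A minimal sketch (array shapes are assumptions):

        import numpy as np

        def water_fraction(pixels, e_water, e_land):
            """Water abundance f per pixel for x = f*e_water + (1-f)*e_land,
            solved by projecting onto the line joining the endmembers.
            pixels : (N, B) mixed-pixel spectra; e_water, e_land : (B,)."""
            d = e_water - e_land
            f = (pixels - e_land) @ d / (d @ d)
            return np.clip(f, 0.0, 1.0)        # enforce physical abundances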

  13. Image processing and data reduction of Apollo low light level photographs

    Science.gov (United States)

    Alvord, G. C.

    1975-01-01

    The removal of the lens-induced vignetting from a selected sample of the Apollo low light level photographs is discussed. The methods used were developed earlier. A study of the effect of noise on vignetting removal and of the comparability of the Apollo 35mm Nikon lens vignetting was also undertaken. The vignetting removal was successful to about 10% photometric accuracy, and noise has a severe effect on the useful photometric output data. Separate vignetting functions must be used for different flights, since the vignetting function varies from camera to camera in size and shape.
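
    Vignetting removal of this kind is, at its core, a flat-field style division by the lens response (a generic sketch, not the report's exact procedure; the gain floor reflects the noted sensitivity of the corrected photometry to noise in the dark corners):

        import numpy as np

        def devignette(frame, vignette, floor=0.1):
            """Divide by the vignetting function (normalized to 1 on-axis);
            clipping the gain keeps corner corrections from amplifying noise."""
            return frame / np.clip(vignette, floor, None)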

  14. Diffusion Imaging of Cerebral White Matter in Persons Who Stutter: Evidence for Network-Level Anomalies

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    2014-02-01

    Deficits in brain white matter have been a main focus of recent neuroimaging studies on stuttering. However, no prior study has examined brain connectivity at the global level of the cerebral cortex in persons who stutter (PWS). In the current study, we analyzed the results from probabilistic tractography between regions comprising the cortical speech network. An anatomical parcellation scheme was used to define 28 speech production-related ROIs in each hemisphere. We used network-based statistics (NBS) and graph theory to analyze the connectivity patterns obtained from tractography. At the network level, the probabilistic corticocortical connectivities from the PWS group were significantly weaker than those from persons with fluent speech (PFS). NBS analysis revealed significant components in the bilateral speech networks with negative correlations with stuttering severity. To facilitate comparison with previous studies, we also performed tract-based spatial statistics (TBSS) and regional fractional anisotropy (FA) averaging. Results from tractography, TBSS and regional FA averaging jointly highlight the importance of several regions in the left peri-Rolandic sensorimotor and premotor areas, most notably the left ventral premotor cortex and middle primary motor cortex, in the neuroanatomical basis of stuttering.

  15. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first describes location and shape, and the other describes connectivity and length. The speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-squares cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  16. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method for denoising such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the remaining neighborhood pixels to restore the current pixel. In the framework, the strength of the chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several other representative denoising methods in terms of both objective measures and visual evaluation.
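
    The common filtering framework can be sketched directly (the window size, the MAD-based deviation test and the factor k are our assumptions; for the chrominance channels the paper additionally modulates the strength, here k, by local brightness):

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def exclude_and_restore(channel, win=3, k=1.5):
            """For each pixel: exclude neighbours deviating from the window
            median by more than k * MAD, then restore the pixel as the mean
            of the surviving neighbours."""
            h = win // 2
            pad = np.pad(channel.astype(float), h, mode='reflect')
            windows = sliding_window_view(pad, (win, win)).reshape(*channel.shape, -1)
            med = np.median(windows, axis=-1, keepdims=True)
            mad = np.median(np.abs(windows - med), axis=-1, keepdims=True) + 1e-6
            keep = np.abs(windows - med) <= k * mad
            return (windows * keep).sum(-1) / keep.sum(-1)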

  17. Woodland Mapping at Single-Tree Levels Using Object-Oriented Classification of Unmanned Aerial Vehicle (uav) Images

    Science.gov (United States)

    Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.

    2017-09-01

    Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for the delineation of tree crowns and the recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using the object-oriented classification method to map a part of the wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE=8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from the observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we concluded that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at single-tree levels, and is therefore suitable for the assessment and monitoring of open woodlands.

  18. Cone beam computed tomography and its image guidance technology during percutaneous nucleoplasty procedures at L5/S1 lumbar level

    Energy Technology Data Exchange (ETDEWEB)

    Ierardi, Anna Maria; Piacentino, Filippo; Giorlando, Francesca [University of Insubria, Unit of Interventional Radiology, Department of Radiology, Varese (Italy); Magenta Biasina, Alberto; Carrafiello, Gianpaolo [University of Milan, San Paolo Hospital, Department of Diagnostic and Interventional Radiology, Milan (Italy); Bacuzzi, Alessandro [University of Insubria, Anaesthesia and Palliative Care, Varese (Italy); Novario, Raffaele [University of Insubria, Medical Physics Department, Varese (Italy)

    2016-12-15

    To demonstrate the feasibility of percutaneous nucleoplasty procedures at L5/S1 level using cone beam CT (CBCT) and its associated image guidance technology for the treatment of lumbar disc herniation (LDH). We retrospectively reviewed 25 cases (20 men, 5 women) of LDH at L5/S1 levels. CBCT as guidance imaging was chosen after a first unsuccessful fluoroscopy attempt that was related to complex anatomy (n = 15), rapid pathological changes due to degenerative diseases (n = 7) or both (n = 3). Technical success, defined as correct needle positioning in the target LDH, and safety were evaluated; overall procedure time and radiation dose were registered. A visual analog scale (VAS) was used to evaluate pain and discomfort pre-intervention after 1 week and 1, 3, and 6 months after the procedure. Technical success was 100 %; using CBCT as guidance imaging the needle was correctly positioned at the first attempt in 20 out of 25 patients. Neither major nor minor complications were registered during or after the procedure. The average procedure time was 11 min and 56 s (range, 9-15 min), whereas mean procedural radiation dose was 46.25 Gy.cm{sup 2} (range 38.10-52.84 Gy.cm{sup 2}), and mean fluoroscopy time was 5 min 34 s (range 3 min 40 s to 6 min 55 s). The VAS pain score decreased significantly from 7.6 preoperatively to 3.9 at 1 week, 2.8 at 1 month, 2.1 at 3 months, and 1.6 at 6 months postoperatively. CBCT-guided percutaneous nucleoplasty is a highly effective technique for LDH with acceptable procedure time and radiation dose. (orig.)

  19. WOODLAND MAPPING AT SINGLE-TREE LEVELS USING OBJECT-ORIENTED CLASSIFICATION OF UNMANNED AERIAL VEHICLE (UAV) IMAGES

    Directory of Open Access Journals (Sweden)

    A. Chenari

    2017-09-01

    Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for the delineation of tree crowns and the recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using the object-oriented classification method to map a part of the wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE=8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from the observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we concluded that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at single-tree levels, and is therefore suitable for the assessment and monitoring of open woodlands.

  20. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    Science.gov (United States)

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraint is a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and the computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation, which uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
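
    The semi-Lagrangian building block is what permits large, stable time steps: each grid point traces its characteristic backwards through the stationary velocity field and interpolates the transported quantity at the departure point. A minimal sketch (first-order Euler backtrace and cubic interpolation; the paper's scheme is second-order accurate):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def semi_lagrangian_step(m, v, dt):
            """One step of dm/dt + v . grad(m) = 0 on a regular grid.
            m : (H, W) transported scalar; v : (2, H, W) velocity (grid units)."""
            h, w = m.shape
            ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
            # departure points of the characteristics
            coords = np.stack([ys - dt * v[0], xs - dt * v[1]])
            return map_coordinates(m, coords, order=3, mode='nearest')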