WorldWideScience

Sample records for driven imaging methods

  1. TH-EF-BRA-03: Assessment of Data-Driven Respiratory Motion-Compensation Methods for 4D-CBCT Image Registration and Reconstruction Using Clinical Datasets

    Energy Technology Data Exchange (ETDEWEB)

Riblett, MJ; Weiss, E; Hugo, GD [Virginia Commonwealth University, Richmond, VA (United States)]; Christensen, GE [University of Iowa, Iowa City, IA (United States)]

    2016-06-15

method. Conclusion: Data-driven groupwise registration and motion-compensated reconstruction have the potential to improve the quality of 4D-CBCT images acquired under clinical conditions. For clinical image datasets, the addition of motion compensation after groupwise registration visibly reduced artifact impact. This work was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA166119. Hugo and Weiss hold a research agreement with Philips Healthcare and a license agreement with Varian Medical Systems. Weiss receives royalties from UpToDate. Christensen receives funds from Roger Koch to support research.

  3. Radiosity methods driven by human perception

    International Nuclear Information System (INIS)

    Prikryl, J.

    2001-05-01

Despite its popularity among researchers, the radiosity method still suffers some disadvantages compared with other global illumination methods. Usual implementations of the radiosity method use criteria based on radiometric values to drive the computation, to decide whether mesh quality is sufficient, to estimate the error of the simulation process, and to decide when the simulation can be safely terminated. This is entirely correct for radiometric simulation, when the user is interested in actual values of radiometric quantities. On the other hand, the radiosity method is very often used just to generate pictures for a human observer, and those pictures are not required to be the results of physically correct simulations; they just have to look the same. Under some circumstances, the results of research on human visual performance and visual signal processing can be built into the image synthesis algorithm itself, guaranteeing that no effort is spent on computing changes that are only marginally important to the human observer. In the area of image processing, perceptual error metrics are used for image comparison and image coding; they predict the differences between two images better than the perceptually inappropriate but widely used mean-squared error metric. Tone reproduction operators known from image synthesis make it possible to map the wide scale of image luminances onto the narrow scale of CRT luminances in such a way that the perceived CRT image produces the same mental image as the original. Perceptually driven radiosity algorithms exist, which use various methods to control the optimum density of the finite-element mesh defining the scene being rendered, to include only visible discontinuity lines in this mesh, and to predict the convergence of the method.
We will describe a hierarchical extension to Monte Carlo radiosity that keeps the accuracy of the solution high only in the area immediately visible from

  4. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
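The prediction step described above can be illustrated with a plain Gaussian-process regressor. This is only a sketch of the general idea: it regresses intensity on pixel coordinates with scikit-learn's `GaussianProcessRegressor`, and the helper `gp_upsample` is my own illustrative name; the paper's energy-driven weighting is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_upsample(lowres, factor=2):
    """Predict an (h*factor, w*factor) image by GP regression on coordinates."""
    h, w = lowres.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # known pixels sit on a coarse grid in high-resolution coordinates
    X = np.column_stack([rows.ravel(), cols.ravel()]) * factor
    y = lowres.ravel().astype(float)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-4)
    gp.fit(X, y)
    hr_rows, hr_cols = np.mgrid[0:h * factor, 0:w * factor]
    Xq = np.column_stack([hr_rows.ravel(), hr_cols.ravel()])
    return gp.predict(Xq).reshape(h * factor, w * factor)

small = np.outer(np.arange(4.0), np.arange(4.0))  # smooth toy "image"
big = gp_upsample(small)
print(big.shape)  # (8, 8)
```

The GP posterior mean interpolates smoothly between the known coarse-grid pixels, which is the statistical-model half of the two information sources the abstract describes.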

  5. Terahertz composite imaging method

    Institute of Scientific and Technical Information of China (English)

    QIAO Xiaoli; REN Jiaojiao; ZHANG Dandan; CAO Guohua; LI Lijuan; ZHANG Xinming

    2017-01-01

In order to improve the imaging quality of terahertz (THz) spectroscopy, a Terahertz Composite Imaging Method (TCIM) is proposed. Traditional methods of improving THz spectroscopy image quality work mainly through de-noising and image enhancement; TCIM breaks through this limitation. A set of images reconstructed from a single data collection can be utilized to construct two kinds of composite images. One algorithm, called the Function Superposition Imaging Algorithm (FSIA), constructs a new gray image from multiple gray images through a chosen function. The features of the region of interest (ROI) are more obvious after this operation, and the algorithm can merge ROIs from multiple images. The other, called the Multi-characteristics Pseudo-color Imaging Algorithm (McPcIA), constructs a pseudo-color image by combining multiple reconstructed gray images from a single data collection. The features of the ROI are enhanced by color differences. The two algorithms not only improve the contrast of ROIs but also increase the amount of information available for analysis. The experimental results show that TCIM is a simple and effective tool for THz spectroscopy image analysis.
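The abstract does not give the exact FSIA and McPcIA formulas, so the sketch below uses illustrative choices of my own: a pixel-wise maximum as the superposition function, and one gray image per RGB channel for the pseudo-color map. The function names `fsia` and `mcpcia` are hypothetical helpers, not the authors' code.

```python
import numpy as np

def fsia(images, func=np.maximum.reduce):
    # Function Superposition: combine several gray images with one function
    # (pixel-wise maximum here; the paper's actual function is not specified).
    return func(list(images))

def mcpcia(img_r, img_g, img_b):
    # Multi-characteristics pseudo-color: one gray image per RGB channel,
    # jointly rescaled to 8-bit so differences show up as color contrast.
    stack = np.stack([img_r, img_g, img_b], axis=-1).astype(float)
    lo, hi = stack.min(), stack.max()
    return (255 * (stack - lo) / (hi - lo + 1e-12)).astype(np.uint8)

a, b, c = (np.random.rand(8, 8) for _ in range(3))
print(fsia([a, b]).shape, mcpcia(a, b, c).shape)  # (8, 8) (8, 8, 3)
```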

  6. Data-driven imaging in anisotropic media

    Energy Technology Data Exchange (ETDEWEB)

Volker, Arno; Hunter, Alan [TNO Stieltjes weg 1, 2600 AD, Delft (Netherlands)]

    2012-05-17

    Anisotropic materials are being used increasingly in high performance industrial applications, particularly in the aeronautical and nuclear industries. Some important examples of these materials are composites, single-crystal and heavy-grained metals. Ultrasonic array imaging in these materials requires exact knowledge of the anisotropic material properties. Without this information, the images can be adversely affected, causing a reduction in defect detection and characterization performance. The imaging operation can be formulated in two consecutive and reciprocal focusing steps, i.e., focusing the sources and then focusing the receivers. Applying just one of these focusing steps yields an interesting intermediate domain. The resulting common focus point gather (CFP-gather) can be interpreted to determine the propagation operator. After focusing the sources, the observed travel-time in the CFP-gather describes the propagation from the focus point to the receivers. If the correct propagation operator is used, the measured travel-times should be the same as the time-reversed focusing operator due to reciprocity. This makes it possible to iteratively update the focusing operator using the data only and allows the material to be imaged without explicit knowledge of the anisotropic material parameters. Furthermore, the determined propagation operator can also be used to invert for the anisotropic medium parameters. This paper details the proposed technique and demonstrates its use on simulated array data from a specimen of Inconel single-crystal alloy commonly used in the aeronautical and nuclear industries.

  7. Educational Accountability: A Qualitatively Driven Mixed-Methods Approach

    Science.gov (United States)

    Hall, Jori N.; Ryan, Katherine E.

    2011-01-01

    This article discusses the importance of mixed-methods research, in particular the value of qualitatively driven mixed-methods research for quantitatively driven domains like educational accountability. The article demonstrates the merits of qualitative thinking by describing a mixed-methods study that focuses on a middle school's system of…

  8. Rapid flow imaging method

    International Nuclear Information System (INIS)

    Pelc, N.J.; Spritzer, C.E.; Lee, J.N.

    1988-01-01

A rapid, phase-contrast, MR imaging method of imaging flow has been implemented. The method, called VIGRE (velocity imaging with gradient-recalled echoes), consists of two interleaved, narrow-flip-angle, gradient-recalled acquisitions. One is flow compensated, while the second has a specified flow encoding (both peak velocity and direction) that causes signals to contain additional phase in proportion to velocity in the specified direction. Complex image data from the first acquisition are used as a phase reference for the second, yielding immunity from phase accumulation due to causes other than motion. Images are produced with pixel values equal to MΔΘ, where M is the magnitude of the flow-compensated image and ΔΘ is the phase difference at the pixel. The magnitude weighting provides additional vessel contrast, suppresses background noise, maintains the flow direction information, and still allows quantitative data to be retrieved. The method has been validated with phantoms and is undergoing initial clinical evaluation. Early results are extremely encouraging.
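The MΔΘ pixel computation described above takes only a few lines. This numpy sketch assumes the two acquisitions are available as complex images (`s_comp` flow-compensated, `s_enc` flow-encoded); the helper name `vigre_pixels` is mine, not from the paper.

```python
import numpy as np

def vigre_pixels(s_comp, s_enc):
    magnitude = np.abs(s_comp)
    # the angle of s_enc * conj(s_comp) is the phase difference, so phase
    # accumulated identically in both acquisitions cancels out
    dtheta = np.angle(s_enc * np.conj(s_comp))
    return magnitude * dtheta

# toy example: unit magnitude, shared background phase, encoded phase
# proportional to a synthetic "velocity" profile
vel = np.linspace(-1, 1, 5)
s0 = np.ones(5) * np.exp(1j * 0.3)   # flow-compensated, background phase 0.3
s1 = s0 * np.exp(1j * 0.5 * vel)     # flow-encoded: extra phase 0.5 * velocity
print(np.round(vigre_pixels(s0, s1), 3))  # proportional to 0.5 * vel
```

Note how the shared background phase of 0.3 rad drops out entirely, which is the immunity property the abstract claims.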

  9. Data-Driven Methods to Diversify Knowledge of Human Psychology

    OpenAIRE

    Jack, Rachael E.; Crivelli, Carlos; Wheatley, Thalia

    2017-01-01

Psychology aims to understand real human behavior. However, cultural biases in the scientific process can constrain knowledge. We describe here how data-driven methods can relax these constraints to reveal new insights that theories can overlook. To advance knowledge, we advocate a symbiotic approach that better combines data-driven methods with theory.

  10. Magnetic imager and method

    Science.gov (United States)

    Powell, James; Reich, Morris; Danby, Gordon

    1997-07-22

A magnetic imager 10 includes a generator 18 for practicing a method of applying a background magnetic field over a concealed object, with the object being effective to locally perturb the background field. The imager 10 also includes a sensor 20 for measuring perturbations of the background field to detect the object. In one embodiment, the background field is applied quasi-statically, and the magnitude or rate of change of the perturbations may be measured to determine the location, size, and/or condition of the object.

  11. Neutron Imaging at Compact Accelerator-Driven Neutron Sources in Japan

    Directory of Open Access Journals (Sweden)

    Yoshiaki Kiyanagi

    2018-03-01

Neutron imaging has been recognized as very useful for investigating the interior of materials and products that cannot be seen by X-rays. New imaging methods that exploit the pulsed structure of accelerator-based neutron sources have been developed, including at compact accelerator-driven neutron sources, and have opened new application fields in neutron imaging. Owing to the development of such methods, the world's first dedicated imaging instrument at a pulsed neutron source was constructed at J-PARC in Japan. The usefulness of compact accelerator-driven neutron sources in neutron science was then recognized, and new facilities were constructed in Japan. Both existing and new sources are now used for neutron imaging. Traditional imaging and newly developed pulsed-neutron imaging techniques, such as Bragg-edge transmission, have been applied to various fields at both compact and large neutron facilities. Here, the compact accelerator-driven neutron sources used for imaging in Japan are introduced and some of their activities are presented.

  12. Methods in Astronomical Image Processing

    Science.gov (United States)

    Jörsäter, S.

Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
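The CCD reduction steps named above (bias subtraction, dark subtraction, flat fielding) reduce to simple pixel-wise arithmetic. This is a minimal sketch under common conventions, omitting clipping, preflash and sky subtraction, and extinction correction; `reduce_ccd` is an illustrative helper of my own, not from any package described here.

```python
import numpy as np

def reduce_ccd(raw, bias, dark, flat, exptime=1.0):
    # subtract the fixed bias pattern and the exposure-scaled dark current,
    # then divide by the flat field normalised to unit median
    science = raw - bias - dark * exptime
    norm_flat = flat / np.median(flat)
    return science / norm_flat

raw = np.full((4, 4), 1100.0)   # toy frame with uniform counts
bias = np.full((4, 4), 100.0)
dark = np.full((4, 4), 10.0)    # dark counts per unit exposure time
flat = np.full((4, 4), 2.0)     # uniform flat -> no pixel-to-pixel correction
print(reduce_ccd(raw, bias, dark, flat)[0, 0])  # 990.0
```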

  13. Photoacoustic imaging driven by an interstitial irradiation source

    Directory of Open Access Journals (Sweden)

    Trevor Mitcham

    2015-06-01

Photoacoustic (PA) imaging has shown tremendous promise in providing valuable diagnostic and therapy-monitoring information in select clinical procedures. Many of these pursued applications, however, have been relatively superficial due to difficulties with delivering light deep into tissue. To address this limitation, this work investigates generating a PA image using an interstitial irradiation source with a clinical ultrasound (US) system, which was shown to yield improved PA signal quality at distances beyond 13 mm and to provide improved spectral fidelity. Additionally, interstitially driven multi-wavelength PA imaging was able to provide accurate spectra of gold nanoshells and deoxyhemoglobin in excised prostate and liver tissue, respectively, and allowed for clear visualization of a wire at 7 cm in excised liver. This work demonstrates the potential of using a local irradiation source to extend the depth capabilities of future PA imaging techniques for minimally invasive interventional radiology procedures.

  14. Universal Image Steganalytic Method

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2014-12-01

In this paper we introduce a new universal steganalytic method for the JPEG file format that detects both well-known and newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature-Based Steganalysis (FBS) was employed in order to identify the statistical changes caused by embedding secret data into the original image. The steganalyzer uses Support Vector Machine (SVM) classification to train a model that is later used to discriminate between clean (cover) and steganographic images. The aim of the paper is to analyze the variation in detection accuracy (ACR) when detecting test steganographic algorithms such as F5, Outguess, Model Based Steganography without deblocking, and JP Hide and Seek, which represent commonly used steganographic tools. A comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274) and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
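The cover-versus-stego SVM classification step can be sketched as follows. The 8-dimensional features here are synthetic stand-ins for the FBS feature vectors (embedding is simulated as a small shift of the feature distribution), so this shows only the classifier mechanics, not the actual steganalysis features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# synthetic "FBS-like" features: stego samples differ from cover samples
# by a small statistical shift, mimicking the effect of embedding
cover = rng.normal(0.0, 1.0, (100, 8))
stego = rng.normal(0.4, 1.0, (100, 8))
X = np.vstack([cover, stego])
y = np.array([0] * 100 + [1] * 100)   # 0 = cover, 1 = stego

clf = SVC(kernel="rbf").fit(X, y)     # RBF-kernel SVM, as in typical FBS work
acc = clf.score(X, y)                 # training accuracy on the toy data
print(acc > 0.5)
```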

  15. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  16. Multiple Active Contours Driven by Particle Swarm Optimization for Cardiac Medical Image Segmentation

    Science.gov (United States)

    Cruz-Aceves, I.; Aviña-Cervantes, J. G.; López-Hernández, J. M.; González-Reyna, S. E.

    2013-01-01

    This paper presents a novel image segmentation method based on multiple active contours driven by particle swarm optimization (MACPSO). The proposed method uses particle swarm optimization over a polar coordinate system to increase the energy-minimizing capability with respect to the traditional active contour model. In the first stage, to evaluate the robustness of the proposed method, a set of synthetic images containing objects with several concavities and Gaussian noise is presented. Subsequently, MACPSO is used to segment the human heart and the human left ventricle from datasets of sequential computed tomography and magnetic resonance images, respectively. Finally, to assess the performance of the medical image segmentations with respect to regions outlined by experts and by the graph cut method objectively and quantifiably, a set of distance and similarity metrics has been adopted. The experimental results demonstrate that MACPSO outperforms the traditional active contour model in terms of segmentation accuracy and stability. PMID:23762177
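The optimization engine underlying MACPSO is standard particle swarm optimization. The sketch below is a generic PSO minimizing a simple quadratic "energy" rather than the paper's polar-coordinate contour energy; `pso_minimize` and all parameter values are illustrative choices of mine.

```python
import numpy as np

def pso_minimize(energy, dim=2, n_particles=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_e = np.array([energy(p) for p in x])
    gbest = pbest[pbest_e.argmin()].copy()        # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        e = np.array([energy(p) for p in x])
        improved = e < pbest_e
        pbest[improved] = x[improved]
        pbest_e[improved] = e[improved]
        gbest = pbest[pbest_e.argmin()].copy()
    return gbest, pbest_e.min()

# toy "energy": quadratic bowl with minimum at (1, 1)
best, e = pso_minimize(lambda p: np.sum((p - 1.0) ** 2))
print(e < 1e-3)
```

In MACPSO the same update rule would drive contour control points (in polar coordinates) instead of free 2-D particles, with the active-contour energy in place of the quadratic.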

  17. Laser speckle imaging based on photothermally driven convection

    Science.gov (United States)

    Regan, Caitlin; Choi, Bernard

    2016-02-01

    Laser speckle imaging (LSI) is an interferometric technique that provides information about the relative speed of moving scatterers in a sample. Photothermal LSI overcomes limitations in depth resolution faced by conventional LSI by incorporating an excitation pulse to target absorption by hemoglobin within the vascular network. Here we present results from experiments designed to determine the mechanism by which photothermal LSI decreases speckle contrast. We measured the impact of mechanical properties on speckle contrast, as well as the spatiotemporal temperature dynamics and bulk convective motion occurring during photothermal LSI. Our collective data strongly support the hypothesis that photothermal LSI achieves a transient reduction in speckle contrast due to bulk motion associated with thermally driven convection. The ability of photothermal LSI to image structures below a scattering medium may have important preclinical and clinical applications.
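The quantity underlying all LSI variants is the local speckle contrast K = σ/⟨I⟩ computed over a sliding window; lower K indicates more motion blurring of the speckle pattern. This pure-numpy sketch is illustrative only and is not the authors' processing pipeline; the motion-averaged frame is simulated simply as the mean of two independent speckle realizations.

```python
import numpy as np

def speckle_contrast(img, win=7):
    """Local K = std/mean over a sliding win x win window (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win]
            out[i, j] = patch.std() / (patch.mean() + 1e-12)
    return out

rng = np.random.default_rng(1)
# fully developed static speckle has exponentially distributed intensity (K ~ 1)
static = rng.exponential(1.0, (32, 32))
# motion averages independent speckle patterns, lowering the contrast
blurred = (static + rng.exponential(1.0, (32, 32))) / 2
print(speckle_contrast(static).mean() > speckle_contrast(blurred).mean())  # True
```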

  18. Hyperspectral image processing methods

    Science.gov (United States)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  19. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  20. Image registration method for medical image sequences

    Science.gov (United States)

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.
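As a concrete illustration of registering one frame to another, the sketch below uses phase correlation to recover an integer pixel shift. This is deliberately a simpler, swapped-in technique (translation only, whole frame) rather than the patent's segmented active-contour registration with sub-pixel interpolation; `register_translation` is my own helper name.

```python
import numpy as np

def register_translation(ref, moving):
    """Return (dy, dx) such that moving ~= np.roll(ref, (dy, dx), axis=(0, 1))."""
    # normalised cross-power spectrum; its inverse FFT peaks at the shift
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    idx = np.unravel_index(corr.argmax(), corr.shape)
    # unwrap indices past the midpoint into negative shifts
    return tuple(int(s) if s <= n // 2 else int(s - n)
                 for s, n in zip(idx, corr.shape))

rng = np.random.default_rng(3)
ref = rng.random((32, 32))
moving = np.roll(ref, (5, -3), axis=(0, 1))
print(register_translation(ref, moving))  # (5, -3)
```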

  1. Soft tissue tumors - imaging methods

    International Nuclear Information System (INIS)

    Arlart, I.P.

    1985-01-01

Imaging methods play an important diagnostic role in soft tissue tumors, concerning the preoperative evaluation of localization, size, topographic relationships, benign or malignant nature, and metastatic disease. The present paper gives an overview of the diagnostic methods available today, such as ultrasound, thermography, plain-film roentgenography and xeroradiography, radionuclide methods, computed tomography, lymphography, angiography, and magnetic resonance imaging. Besides sonography, computed tomography in particular has the most important diagnostic value in soft tissue tumors. The significance of a recently developed method, magnetic resonance imaging, cannot yet be assessed. (orig.) [de

  2. Computational methods for molecular imaging

    CERN Document Server

    Shi, Kuangyu; Li, Shuo

    2015-01-01

    This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...

  3. Imaging Apparatus And Method

    NARCIS (Netherlands)

    Manohar, Srirang; van Leeuwen, A.G.J.M.

    2010-01-01

    A thermoacoustic imaging apparatus comprises an electromagnetic radiation source configured to irradiate a sample area and an acoustic signal detection probe arrangement for detecting acoustic signals. A radiation responsive acoustic signal generator is added outside the sample area. The detection

  5. Data-driven forward model inference for EEG brain imaging

    DEFF Research Database (Denmark)

    Hansen, Sofie Therese; Hauberg, Søren; Hansen, Lars Kai

    2016-01-01

Electroencephalography (EEG) is a flexible and accessible tool with excellent temporal resolution but with a spatial resolution hampered by volume conduction. Reconstruction of the cortical sources of measured EEG activity partly alleviates this problem and effectively turns EEG into a brain imaging [...] In this proof-of-concept study, we show that, even when anatomical knowledge is unavailable, a suitable forward model can be estimated directly from the EEG. We propose a data-driven approach that provides a low-dimensional parametrization of head geometry and compartment conductivities, built using a corpus of forward models. Combined with only a recorded EEG signal, we are able to estimate both the brain sources and a person-specific forward model by optimizing this parametrization. We thus not only solve an inverse problem, but also optimize over its specification. Our work demonstrates that personalized EEG brain imaging...

  6. Laser-sheet imaging of HE-driven interfaces

    International Nuclear Information System (INIS)

    Benjamin, R.F.; Rightley, P.M.; Kinkead, S.; Martin, R.A.; Critchfield, R.; Sandoval, D.L.; Holmes, R.; Gorman, T.

    1998-01-01

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The authors made substantial progress in developing the MILSI (Multiple Imaging of Laser-Sheet Illumination) technique for high explosive (HE)-driven fluid interfaces. They observed the instability, but have not yet measured the instability growth rate. They developed suitable sample containers and optical systems for studying the Richtmyer-Meshkov instability of perturbed water/bromoform interfaces, and they successfully fielded the new MILSI diagnostic at two firing-site facilities. The problem continues to be of central importance to the inertial confinement fusion (ICF) and weapons physics communities.

  7. Methods of digital image processing

    International Nuclear Information System (INIS)

    Doeler, W.

    1985-01-01

The increasing use of computerized methods for diagnostic imaging in radiology will open up a wide field of applications for digital image processing. The requirements set by routine diagnostics in medical radiology point to image data storage, documentation, and communication as the main points of interest for the application of digital image processing. As to purely radiological problems, the value of digital image processing lies in the improved interpretability of the image information in those cases where the expert's experience and image interpretation by human visual capacities do not suffice. There are many other domains of imaging in medical physics where digital image processing and evaluation are very useful. The paper reviews the various methods available for a variety of problem solutions, and explains the hardware available for the tasks discussed. (orig.) [de

  8. Neutron Transport Methods for Accelerator-Driven Systems

    International Nuclear Information System (INIS)

    Nicholas Tsoulfanidis; Elmer Lewis

    2005-01-01

    The objective of this project has been to develop computational methods that will enable more effective analysis of Accelerator Driven Systems (ADS). The work is centered at the University of Missouri at Rolla, with a subcontract at Northwestern University, and close cooperation with the Nuclear Engineering Division at Argonne National Laboratory. The work has fallen into three categories. First, the treatment of the source for neutrons originating from the spallation target which drives the neutronics calculations of the ADS. Second, the generalization of the nodal variational method to treat the R-Z geometry configurations frequently needed for scoping calculations in Accelerator Driven Systems. Third, the treatment of void regions within variational nodal methods as needed to treat the accelerator beam tube

  9. A Full Parallel Event Driven Readout Technique for Area Array SPAD FLIM Image Sensors

    Directory of Open Access Journals (Sweden)

    Kaiming Nie

    2016-01-01

This paper presents a fully parallel event-driven readout method implemented in an area-array single-photon avalanche diode (SPAD) image sensor for high-speed fluorescence lifetime imaging microscopy (FLIM). The sensor records and reads out only effective time and position information, aiming at reducing the amount of data. The image sensor includes four 8 × 8 pixel arrays. In each array, four time-to-digital converters (TDCs) are used to quantize the arrival times of photons, and two address record modules are used to record the column and row information. In this work, Monte Carlo simulations were performed in Matlab to study the pile-up effect induced by the readout method. The sensor's resolution is 16 × 16. The time resolution of the TDCs is 97.6 ps and the quantization range is 100 ns. The readout frame rate is 10 Mfps, and the maximum imaging frame rate is 100 fps. The chip's output bandwidth is 720 MHz with an average power of 15 mW. The lifetime resolvability range is 5-20 ns, and the average error of the estimated fluorescence lifetimes is below 1% when employing the center-of-mass method (CMM) to estimate lifetimes.
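Assuming CMM here denotes the center-of-mass method commonly used in FLIM, the lifetime estimate is, for a mono-exponential decay and a measurement window much longer than the lifetime, simply the mean photon arrival time. A toy sketch with a lifetime inside the 5-20 ns range quoted above (`cmm_lifetime` is an illustrative helper, and pile-up and window truncation are ignored):

```python
import numpy as np

def cmm_lifetime(arrival_times_ns):
    # centre of mass of the arrival-time histogram; equals tau for an
    # ideal mono-exponential decay with an effectively infinite window
    return float(np.mean(arrival_times_ns))

rng = np.random.default_rng(2)
tau_true = 10.0                                  # ns
photons = rng.exponential(tau_true, 100_000)     # simulated photon arrivals
tau_est = cmm_lifetime(photons)
print(abs(tau_est - tau_true) / tau_true < 0.02)
```

With the 100 ns quantization range being ten lifetimes, the truncation bias neglected here would be small in practice.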

  10. A Model-Driven Development Method for Management Information Systems

    Science.gov (United States)

    Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki

Traditionally, a Management Information System (MIS) has been developed without formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes problems such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies model-driven development to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.

  11. An Image Registration Method for Colposcopic Images

    Directory of Open Access Journals (Sweden)

    Efrén Mezura-Montes

    2013-01-01

sequence and a division of such image into small windows. A search process is then carried out to find the window with the highest affinity in each image of the sequence and replace it with the window in the reference image. The affinity value is based on a polynomial approximation of the computed time series, and the search is bounded by a search radius which defines the neighborhood of each window. The proposed approach is tested on ten 310-frame real cases in two experiments: the first to determine the best values for the window size and the search radius, and the second to compare the best results obtained against four registration methods found in the specialized literature. The results show robust and competitive performance of the proposed approach, with significantly lower runtime than the compared methods.

  12. Motion simulation of hydraulic driven safety rod using FSI method

    International Nuclear Information System (INIS)

    Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In

    2013-01-01

    A hydraulically driven safety rod, one type of reactivity control mechanism, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper, the motion of this rod is simulated by the fluid-structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in a CFD domain with a user-defined function (UDF). The pressure drop changes only slightly with flow rate, which means that it is mainly determined by the weight of the moving part. The simulated piston velocity is linearly proportional to flow rate, so the pump can be sized easily according to the rise and drop time requirements of the safety rod using the simulation results.

  13. Imaging methods in otorhinolaryngology

    International Nuclear Information System (INIS)

    Frey, K.W.; Mees, K.; Vogl, T.

    1989-01-01

    This book is the work of an otorhinolaryngologist and two radiologists, who combined their experience and efforts in order to solve a great variety and number of problems encountered in practical work, taking into account the latest technical potentials and the practical feasibility, which is determined by the equipment available. Every chapter presents the full range of diagnostic methods applicable, starting with the suitable plain radiography methods and proceeding to the various tomographic scanning methods, including conventional tomography. Every technique is assessed in terms of diagnostic value and drawbacks. (orig./MG) With 778 figs [de

  14. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  15. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    histogram of an existing image (input image) into a new grey value histogram (output image) are most quickly handled by a look-up table (LUT). The histogram of an image can be influenced by gain, offset and gamma of the camera. Gain defines the voltage range, offset defines the reference voltage and gamma the slope of the regression line between the light intensity and the voltage of the camera. A very important descriptor of neighbourhood relations in an image is the co-occurrence matrix. The distance between the pixels (original pixel and its neighbouring pixel) can influence the various parameters calculated from the co-occurrence matrix. The main goals of image enhancement are elimination of surface roughness in an image (smoothing), correction of defects (e.g. noise), extraction of edges, identification of points, strengthening texture elements and improving contrast. In enhancement, two types of operations can be distinguished: pixel-based (point operations) and neighbourhood-based (matrix operations). The most important pixel-based operations are linear stretching of grey values, application of pre-stored LUTs and histogram equalisation. The neighbourhood-based operations work with so-called filters. These are organising elements with an original or initial point in their centre. Filters can be used to accentuate or to suppress specific structures within the image. Filters can work either in the spatial or in the frequency domain. The method used for analysing alterations of grey value intensities in the frequency domain is the Hartley transform. Filter operations in the spatial domain can be based on averaging or ranking the grey values occurring in the organising element. The most important filters, which are usually applied, are the Gaussian filter and the Laplace filter (both averaging filters), and the median filter, the top hat filter and the range operator (all ranking filters). 
Segmentation of objects is traditionally based on threshold grey values.
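
    The co-occurrence matrix discussed above can be computed directly. The sketch below is a minimal illustration for a single pixel offset, with hypothetical names and a simple linear grey-level quantization:

```python
import numpy as np

def cooccurrence(img, offset=(0, 1), levels=8):
    """Grey-level co-occurrence matrix for one offset: entry (i, j) counts
    pixel pairs (p, p + offset) whose quantized grey levels are i and j."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).round().astype(int)
    dy, dx = offset
    glcm = np.zeros((levels, levels), dtype=int)
    h, w = q.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    return glcm
```

    Changing `offset` corresponds to the pixel distance mentioned above, which alters the parameters calculated from the matrix.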

  16. A defect-driven diagnostic method for machine tool spindles.

    Science.gov (United States)

    Vogl, Gregory W; Donmez, M Alkan

    2015-01-01

    Simple vibration-based metrics are, in many cases, insufficient to diagnose machine tool spindle condition. These metrics couple defect-based motion with spindle dynamics; diagnostics should be defect-driven. A new method and spindle condition estimation device (SCED) were developed to acquire data and to separate system dynamics from defect geometry. Based on this method, a spindle condition metric relying only on defect geometry is proposed. Application of the SCED on various milling and turning spindles shows that the new approach is robust for diagnosing the machine tool spindle condition.

  17. Simulation of electrically driven jet using Chebyshev collocation method

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The model of the electrically driven jet is governed by a series of quasi-1D dimensionless partial differential equations (PDEs). Following the method of lines, the Chebyshev collocation method is employed to discretize the PDEs and obtain a system of differential-algebraic equations (DAEs). By differentiating the constraints in the DAEs twice, the system is transformed into a set of ordinary differential equations (ODEs) with invariants. Then the implicit differential equation solver "ddaskr" is used to solve the ODEs and ...
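
    As a minimal illustration of the spatial discretization step, the standard Chebyshev differentiation matrix (Trefethen's construction) maps function values at the Chebyshev points to derivative values; a PDE system like the one above would be discretized with matrices of this kind:

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1].
    For values f at the n+1 Chebyshev points, D @ f approximates f'."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # diagonal from negative row sums
    return D, x
```

    Differentiation with this matrix is spectrally accurate, and exact (to roundoff) for polynomials of degree up to n.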

  18. Numerical methods for image registration

    CERN Document Server

    Modersitzki, Jan

    2003-01-01

    Based on the author's lecture notes and research, this well-illustrated and comprehensive text is one of the first to provide an introduction to image registration with particular emphasis on numerical methods in medical imaging. Ideal for researchers in industry and academia, it is also a suitable study guide for graduate mathematicians, computer scientists, engineers, medical physicists, and radiologists.Image registration is utilised whenever information obtained from different viewpoints needs to be combined or compared and unwanted distortion needs to be eliminated. For example, CCTV imag

  19. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments and the development of theoretical methods to predict the experimental observables.

  20. Twin-Foucault imaging method

    Science.gov (United States)

    Harada, Ken

    2012-02-01

    A method of Lorentz electron microscopy, which enables simultaneous observation of two Foucault images by using an electron biprism instead of an objective aperture, was developed. The electron biprism is installed between two electron beams deflected by 180° magnetic domains. A potential applied to the biprism deflects the two electron beams further, and two Foucault images with reversed contrast are then obtained in one visual field. The twin Foucault images make it possible to extract the magnetic domain structures and to reconstruct an ordinary electron micrograph. The developed Foucault method was demonstrated with a 180° domain structure of the manganite La0.825Sr0.175MnO3.

  1. Image Structure-Preserving Denoising Based on Difference Curvature Driven Fractional Nonlinear Diffusion

    Directory of Open Access Journals (Sweden)

    Xuehui Yin

    2015-01-01

    The traditional integer-order partial differential equations and gradient regularization based image denoising techniques often suffer from staircase effect, speckle artifacts, and the loss of image contrast and texture details. To address these issues, in this paper, a difference curvature driven fractional anisotropic diffusion for image noise removal is presented, which uses two new techniques, fractional calculus and difference curvature, to describe the intensity variations in images. The fractional-order derivatives information of an image can deal well with the textures of the image and achieve a good tradeoff between eliminating speckle artifacts and restraining staircase effect. The difference curvature constructed by the second order derivatives along the direction of gradient of an image and perpendicular to the gradient can effectively distinguish between ramps and edges. Fourier transform technique is also proposed to compute the fractional-order derivative. Experimental results demonstrate that the proposed denoising model can avoid speckle artifacts and staircase effect and preserve important features such as curvy edges, straight edges, ramps, corners, and textures. They are obviously superior to those of traditional integral based methods. The experimental results also reveal that our proposed model yields a good visual effect and better values of MSSIM and PSNR.
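
    The Fourier-transform computation of a fractional-order derivative mentioned above can be sketched in one dimension: multiply the spectrum by (iω)^α and invert. This is a minimal illustration for periodic signals, not the authors' implementation; the real part is taken to discard branch-cut artifacts for non-integer α:

```python
import numpy as np

def frac_deriv(f, alpha, d=1.0):
    """Fractional-order derivative of a periodic signal via the FFT:
    multiply the spectrum by (i*omega)**alpha, then invert."""
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=d)
    return np.real(np.fft.ifft(np.fft.fft(f) * (1j * omega) ** alpha))
```

    For α = 1 and α = 2 this reduces to the exact spectral first and second derivatives; intermediate α interpolates between them, which is what enables the tradeoff between artifact suppression and texture preservation described above.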

  2. Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture

    Science.gov (United States)

    Lassahn, Gordon D.; Lancaster, Gregory D.; Apel, William A.; Thompson, Vicki S.

    2013-01-08

    Image portion identification methods, image parsing methods, image parsing systems, and articles of manufacture are described. According to one embodiment, an image portion identification method includes accessing data regarding an image depicting a plurality of biological substrates corresponding to at least one biological sample and indicating presence of at least one biological indicator within the biological sample and, using processing circuitry, automatically identifying a portion of the image depicting one of the biological substrates but not others of the biological substrates.

  3. Mathematical methods in elasticity imaging

    CERN Document Server

    Ammari, Habib; Garnier, Josselin; Kang, Hyeonbae; Lee, Hyundae; Wahab, Abdul

    2015-01-01

    This book is the first to comprehensively explore elasticity imaging and examines recent, important developments in asymptotic imaging, modeling, and analysis of deterministic and stochastic elastic wave propagation phenomena. It derives the best possible functional images for small inclusions and cracks within the context of stability and resolution, and introduces a topological derivative-based imaging framework for detecting elastic inclusions in the time-harmonic regime. For imaging extended elastic inclusions, accurate optimal control methodologies are designed and the effects of uncertainties of the geometric or physical parameters on stability and resolution properties are evaluated. In particular, the book shows how localized damage to a mechanical structure affects its dynamic characteristics, and how measured eigenparameters are linked to elastic inclusion or crack location, orientation, and size. Demonstrating a novel method for identifying, locating, and estimating inclusions and cracks in elastic...

  4. 3D Seismic Imaging using Marchenko Methods

    Science.gov (United States)

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). 
While these 2D redatuming and imaging methods are still in their infancy having first been developed in

  5. CMOS Active-Pixel Image Sensor With Intensity-Driven Readout

    Science.gov (United States)

    Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina

    1996-01-01

    Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.

  6. Data-driven execution of fast multipole methods

    KAUST Repository

    Ltaief, Hatem

    2013-09-17

    Fast multipole methods (FMMs) have O(N) complexity, are compute bound, and require very little synchronization, which makes them a favorable algorithm on next-generation supercomputers. Their most common application is to accelerate N-body problems, but they can also be used to solve boundary integral equations. When the particle distribution is irregular and the tree structure is adaptive, load balancing becomes a non-trivial question. A common strategy for load balancing FMMs is to use the work load from the previous step as weights to statically repartition the next step. The authors discuss in the paper another approach based on data-driven execution to efficiently tackle this challenging load balancing problem. The core idea consists of breaking the most time-consuming stages of the FMMs into smaller tasks. The algorithm can then be represented as a directed acyclic graph where nodes represent tasks and edges represent dependencies among them. The execution of the algorithm is performed by asynchronously scheduling the tasks using the QUARK (QUeueing And Runtime for Kernels) runtime environment, in such a way that data dependencies are not violated for numerical correctness purposes. This asynchronous scheduling results in an out-of-order execution. The performance results of the data-driven FMM execution outperform the previous strategy and show linear speedup on a quad-socket quad-core Intel Xeon system. Copyright © 2013 John Wiley & Sons, Ltd.
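
    The task-graph idea can be sketched as follows. This is a deliberately simplified, wave-synchronous approximation (a task is submitted only once all of its dependencies have finished), whereas the runtime described above schedules fully out of order; all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dag(tasks, deps, workers=4):
    """Execute a task DAG: tasks whose dependencies are all complete run
    concurrently, and a dependency is never violated.
    tasks: {name: zero-argument callable}; deps: {name: prerequisite names}."""
    remaining = {n: set(deps.get(n, ())) for n in tasks}
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining:
            # a task is ready when every prerequisite already has a result
            ready = [n for n, d in remaining.items() if d <= results.keys()]
            if not ready:
                raise ValueError("cycle in task graph")
            futures = {n: pool.submit(tasks[n]) for n in ready}
            for n, f in futures.items():
                results[n] = f.result()
                del remaining[n]
    return results
```

    A production runtime additionally tracks data (not just control) dependencies and releases each successor the moment its last input is produced, rather than at wave boundaries.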

  7. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments oriented toward particular applications including dynamic experiments and the development of theoretical methods to predict the experimental observables.

  8. Quantitative imaging methods in osteoporosis.

    Science.gov (United States)

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  9. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses for facilitating searches for such tools. Some indications are given on the effect of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the methods' use. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  10. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images and benign/malignant clusters of microcalcifications (MCs) classification in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, area under curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
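
    The mutual-information selection criterion described above can be sketched as follows: rank visual words by the mutual information between (binarized) word occurrence and the class label, then keep the top k. This is a minimal illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def mi_select(X, y, k):
    """Return indices of the k visual words with the highest mutual
    information between word occurrence (count > 0) and the class label.
    X: (n_samples, n_words) count matrix; y: (n_samples,) integer labels."""
    occ = (np.asarray(X) > 0).astype(int)
    y = np.asarray(y)
    scores = np.zeros(occ.shape[1])
    for j in range(occ.shape[1]):
        mi = 0.0
        for o in (0, 1):                      # word absent / present
            p_o = np.mean(occ[:, j] == o)
            if p_o == 0:
                continue
            for c in np.unique(y):
                p_c = np.mean(y == c)
                p_oc = np.mean((occ[:, j] == o) & (y == c))
                if p_oc > 0:
                    mi += p_oc * np.log(p_oc / (p_o * p_c))
        scores[j] = mi
    return np.argsort(scores)[::-1][:k]
```

    A word whose presence is informative about the class scores high; a word that occurs uniformly across classes scores near zero and is pruned from the dictionary.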

  11. Methods of producing luminescent images

    International Nuclear Information System (INIS)

    Broadhead, P.; Newman, G.A.

    1977-01-01

    A method is described for producing a luminescent image in a layer of a binding material in which is dispersed a thermoluminescent material. The layer is heated uniformly to a temperature of 80 to 300 °C and is exposed to luminescence-inducing radiation whilst so heated. The preferred exposing radiation is X-rays, and preferably the thermoluminescent material is insensitive to electromagnetic radiation of wavelength longer than 300 nm. Information concerning preparation of the luminescent material is given in BP 1,347,672; this material has the advantage that at elevated temperatures it shows increased sensitivity compared with room temperature. At temperatures in the range 80 to 150 °C the thermoluminescent material exhibits 'afterglow', allowing the image to persist for several seconds after the X-radiation has ceased, thus allowing the image to be retained for visual inspection in this temperature range. At higher temperatures, however, there is negligible 'afterglow'. The thermoluminescent layers so produced are particularly useful as fluoroscopic screens. The preferred method of heating the thermoluminescent material is described in BP 1,354,149. An example is given of the application of the method. (U.K.)

  12. 2D-Driven 3D Object Detection in RGB-D Images

    KAUST Repository

    Lahoud, Jean

    2017-12-25

    In this paper, we present a technique that places 3D bounding boxes around objects in an RGB-D scene. Our approach makes best use of the 2D information to quickly reduce the search space in 3D, benefiting from state-of-the-art 2D object detection techniques. We then use the 3D information to orient, place, and score bounding boxes around objects. We independently estimate the orientation for every object, using previous techniques that utilize normal information. Object locations and sizes in 3D are learned using a multilayer perceptron (MLP). In the final step, we refine our detections based on object class relations within a scene. When compared to state-of-the-art detection methods that operate almost entirely in the sparse 3D domain, extensive experiments on the well-known SUN RGB-D dataset [29] show that our proposed method is much faster (4.1s per image) in detecting 3D objects in RGB-D images and performs better (3 mAP higher) than the state-of-the-art method that is 4.7 times slower and comparably to the method that is two orders of magnitude slower. This work hints at the idea that 2D-driven object detection in 3D should be further explored, especially in cases where the 3D input is sparse.

  13. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  14. A method of image improvement in three-dimensional imaging

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Huang, Tewen; Furuhata, Kentaro; Uchino, Masafumi.

    1988-01-01

    In general, image interpolation is required when the surface configurations of structures such as bones and organs are three-dimensionally reconstructed from the multi-slice images obtained by CT. Image interpolation is a processing method whereby an artificial image is inserted between two adjacent slices to make the apparent spatial resolution equal to the slice resolution. Such image interpolation makes it possible to increase the image quality of the constructed three-dimensional image. In our newly developed algorithm, the current and subsequent slice images are converted to distance images, and the interpolated image is generated from these two distance images. As a result, three-dimensional images with better image quality are constructed than with the previous method. (author)
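
    A common concrete realization of this distance-image idea is shape-based interpolation: each binary slice becomes a signed distance map, the two maps are blended, and the blend is thresholded to recover the in-between contour. The sketch below uses a brute-force distance map for clarity (real implementations use a fast distance transform); names are illustrative and this is not necessarily the authors' exact algorithm:

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance map of a small binary mask:
    positive inside the object, negative outside."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1)
    inside = mask.ravel().astype(bool)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    out = np.where(inside, d[:, ~inside].min(1), -d[:, inside].min(1))
    return out.reshape(mask.shape)

def interp_slice(mask_a, mask_b, t=0.5):
    """Interpolate a slice between two binary slices by blending their
    signed distance maps and thresholding at zero."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d >= 0
```

    Blending distance maps, rather than grey values, lets the interpolated contour move smoothly between the two slice contours instead of cross-fading them.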

  15. Sealed operation of a rf driven ion source for a compact neutron generator to be used for associated particle imaging.

    Science.gov (United States)

    Wu, Y; Hurley, J P; Ji, Q; Kwan, J W; Leung, K N

    2010-02-01

    We present the recent development of a prototype compact neutron generator to be used in conjunction with the method of associated particle imaging for the purpose of active neutron interrogation. In this paper, the performance and device specifications of these compact generators that employ rf driven ion sources will be discussed. Initial measurements of the generator performance include a beam spot size of 1 mm in diameter and a neutron yield of 2×10^5 n/s with air cooling.

  16. Method of assessing heterogeneity in images

    Science.gov (United States)

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed of the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
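
    The variogram analysis mentioned above can be sketched as an empirical semivariogram over point pairs; this is a minimal illustration with hypothetical names:

```python
import numpy as np

def empirical_variogram(values, coords, bin_edges):
    """Empirical semivariogram: for each lag-distance bin, the mean of
    0.5*(v_i - v_j)**2 over all point pairs separated by that lag."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, g = dist[iu], gamma[iu]
    idx = np.digitize(d, bin_edges)
    return np.array([g[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bin_edges))])
```

    A homogeneous region yields a flat semivariogram; values that rise with lag distance indicate spatial heterogeneity between regions.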

  17. Image classification using multiscale information fusion based on saliency driven nonlinear diffusion filtering.

    Science.gov (United States)

    Hu, Weiming; Hu, Ruiguang; Xie, Nianhua; Ling, Haibin; Maybank, Stephen

    2014-04-01

    In this paper, we propose saliency driven image multiscale nonlinear diffusion filtering. The resulting scale space in general preserves or even enhances semantically important structures such as edges, lines, or flow-like structures in the foreground, and inhibits and smoothes clutter in the background. The image is classified using multiscale information fusion based on the original image, the image at the final scale at which the diffusion process converges, and the image at a midscale. Our algorithm emphasizes the foreground features, which are important for image classification. The background image regions, whether considered as contexts of the foreground or noise to the foreground, can be globally handled by fusing information from different scales. Experimental tests of the effectiveness of the multiscale space for the image classification are conducted on the following publicly available datasets: 1) the PASCAL 2005 dataset; 2) the Oxford 102 flowers dataset; and 3) the Oxford 17 flowers dataset, with high classification rates.
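
    As a rough illustration of the underlying mechanism, a Perona-Malik-style nonlinear diffusion step is sketched below, with an optional per-pixel weight slot where a saliency map could damp smoothing in the foreground. This is a generic sketch under that assumption, not the authors' scheme; borders are handled periodically via np.roll for brevity:

```python
import numpy as np

def diffuse(img, n_iter=10, kappa=0.1, dt=0.2, weight=None):
    """Nonlinear (Perona-Malik-style) diffusion: large gradients conduct
    little, so edges are preserved while flat regions are smoothed.
    `weight` in [0, 1] can down-weight diffusion in salient regions."""
    u = img.astype(float).copy()
    w = np.ones_like(u) if weight is None else weight
    g = lambda d: np.exp(-(d / kappa) ** 2)     # conduction coefficient
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u          # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw_ = np.roll(u, 1, axis=1) - u
        u += dt * w * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw_) * dw_)
    return u
```

    Running the iteration to convergence and stopping it at a mid-scale yields two of the three scales fused by the classifier described above.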

  18. T2-weighted four dimensional magnetic resonance imaging with result-driven phase sorting

    International Nuclear Information System (INIS)

    Liu, Yilin; Yin, Fang-Fang; Cai, Jing; Czito, Brian G.; Bashir, Mustafa R.

    2015-01-01

    Purpose: T2-weighted MRI provides excellent tumor-to-tissue contrast for target volume delineation in radiation therapy treatment planning. This study aims at developing a novel T2-weighted retrospective four dimensional magnetic resonance imaging (4D-MRI) phase sorting technique for imaging organ/tumor respiratory motion. Methods: A 2D fast T2-weighted half-Fourier acquisition single-shot turbo spin-echo MR sequence was used for image acquisition of 4D-MRI, with a frame rate of 2–3 frames/s. Respiratory motion was measured using an external breathing monitoring device. A phase sorting method was developed to sort the images by their corresponding respiratory phases. In addition, a result-driven strategy was applied to effectively utilize redundant images when multiple images were allocated to a bin. This strategy, which selects the image with minimal amplitude error, generates the most representative 4D-MRI. Since the sequential image acquisition scheme used here differs from the cine or helical acquisition schemes conventionally used for 4D imaging, the conditions for a sufficient 4D dataset were not directly predictable. An important challenge of the proposed technique was therefore to determine the number of repeated scans (N_R) required to obtain sufficient phase information at each slice position. To tackle this challenge, the authors first conducted computer simulations using real-time position management respiratory signals of 29 cancer patients under an IRB-approved retrospective study to derive the relationships between N_R and the following factors: number of slices (N_S), number of 4D-MRI respiratory bins (N_B), and starting phase at image acquisition (P_0). To validate the authors' technique, 4D-MRI acquisition and reconstruction were simulated on a 4D digital extended cardiac-torso (XCAT) human phantom using simulation-derived parameters. Twelve healthy volunteers were involved in an IRB-approved study
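
    The result-driven bin filling described in the Methods can be sketched as follows; this is a minimal illustration with hypothetical names, taking the distance to the bin's mean amplitude as the "amplitude error":

```python
import numpy as np

def result_driven_sort(phases, amplitudes, frames, n_bins):
    """Retrospective phase sorting: each frame goes to its phase bin; when
    several frames land in one bin, keep the one with minimal amplitude
    error so the most representative 4D dataset is generated."""
    bins = (np.asarray(phases) * n_bins).astype(int) % n_bins
    amplitudes = np.asarray(amplitudes)
    chosen = [None] * n_bins            # None marks an unfilled bin
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if idx.size == 0:
            continue                    # unfilled bin: more scans needed
        err = np.abs(amplitudes[idx] - amplitudes[idx].mean())
        chosen[b] = frames[idx[np.argmin(err)]]
    return chosen
```

    Unfilled bins are exactly what drives the N_R question above: enough repeated scans must be acquired per slice position that every phase bin receives at least one frame.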

  19. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    International Nuclear Information System (INIS)

    Gang, G; Stayman, J; Ouadah, S; Siewerdsen, J; Ehtiati, T

    2015-01-01

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, with automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. 
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within

  20. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Gang, G; Stayman, J; Ouadah, S; Siewerdsen, J [Johns Hopkins University, Baltimore, MD (United States); Ehtiati, T [Siemens Healthcare AX Division, Erlangen, DE (Germany)

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, with automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. 
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within

  1. A novel magnetic resonance imaging-compatible motor control method for image-guided robotic surgery

    International Nuclear Information System (INIS)

    Suzuki, Takashi; Liao, Hongen; Kobayashi, Etsuko; Sakuma, Ichiro

    2006-01-01

    For robotic surgery assistance systems that use magnetic resonance imaging (MRI) for guidance, electromagnetic interference is a common problem. Image quality is particularly degraded if motors are running during scanning. We propose a novel MRI-compatible method that takes the imaging pulse sequence into account. Motors are driven for a short time when the MRI system stops signal acquisition (i.e., while awaiting relaxation of the protons), so the image does not contain noise from the actuators. The MRI system and motor are synchronized using a radio frequency pulse signal (8.5 MHz) as the trigger, acquired via a special antenna mounted near the scanner. This method can be widely applied because it only receives part of the scanning signal; neither the hardware nor the software of the MRI system needs to be changed. As a feasibility evaluation, we compared images and signal-to-noise ratios with and without the method while a piezoelectric motor, commonly used as an MRI-compatible actuator, was driven during scanning as a noise source. The results showed no deterioration in image quality, demonstrating the benefit of the new method, although the choice of available scanning sequences is limited. (author)

  2. COMPARISON OF IMAGE ENHANCEMENT METHODS FOR CHROMOSOME KARYOTYPE IMAGE ENHANCEMENT

    Directory of Open Access Journals (Sweden)

    Dewa Made Sri Arsa

    2017-02-01

    Chromosomes are the DNA structures that carry genetic information. This information can be obtained through karyotyping, a process that requires a clear image so that the chromosomes can be evaluated properly. Chromosome images therefore have to be preprocessed by image enhancement, beginning with background removal, in which the background color is cleaned, followed by the enhancement itself. This paper compares several image enhancement methods: Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), Histogram Equalization with 3D Block Matching (HE+BM3D), and basic unsharp masking. We examine and discuss which method best enhances chromosome images. To evaluate the methods, the original image was degraded by the addition of noise and blur. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are used to measure performance. The output of each enhancement method is compared with the result of Ikaros (MetaSystems), professional software for karyotyping analysis. Based on the experimental results, the HE+BM3D method gives stable results in both the noised and blurred scenarios.
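    As a minimal illustration of the kind of enhancement being compared, the sketch below implements plain histogram equalization (HE) in NumPy and scores it with PSNR. It is not the paper's pipeline (no CLAHE or BM3D), and the low-contrast test image is synthetic:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Low-contrast synthetic image: values squeezed into [100, 150).
rng = np.random.default_rng(1)
img = (100 + 50 * rng.random((64, 64))).astype(np.uint8)
eq = equalize_hist(img)
```

    Equalization spreads the squeezed gray levels across the full 0–255 range, which is the basic effect HE, CLAHE, and HE+BM3D all build on.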

  3. Imaging of Coulomb-Driven Quantum Hall Edge States

    KAUST Repository

    Lai, Keji

    2011-10-01

    The edges of a two-dimensional electron gas (2DEG) in the quantum Hall effect (QHE) regime are divided into alternating metallic and insulating strips, with their widths determined by the energy gaps of the QHE states and the electrostatic Coulomb interaction. Local probing of these submicrometer features, however, is challenging due to the buried 2DEG structures. Using a newly developed microwave impedance microscope, we demonstrate the real-space conductivity mapping of the edge and bulk states. The sizes, positions, and field dependence of the edge strips around the sample perimeter agree quantitatively with the self-consistent electrostatic picture. The evolution of microwave images as a function of magnetic fields provides rich microscopic information around the ν=2 QHE state. © 2011 American Physical Society.

  4. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the compression factors achieved by JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  5. The configuration-driven table CI method and comparison with integral-driven CI procedures

    International Nuclear Information System (INIS)

    Buenker, R.J.

    1980-01-01

    A new configuration-driven CI algorithm is outlined which eliminates the need for explicit comparison of pairs of Slater determinants through the use of a series of compact tables. In this scheme each pair of configurations is either shown to be non-interacting or to fall into one of nine cases, each of which is characterized fully once certain orbital permutations are determined. The program is divided into three parts: a case-structure analysis step including integral label generation, a sort of the required electron repulsion integrals, and finally a procedure in which the foregoing information is combined with tabulated directions for the evaluation of the necessary Hamiltonian matrix elements over spin-adapted functions. Timing improvements of more than a factor of four have been achieved with the new algorithm

  6. Optoelectronic imaging of speckle using image processing method

    Science.gov (United States)

    Wang, Jinjiang; Wang, Pengfei

    2018-01-01

    A detailed image-processing treatment of laser speckle interferometry is presented as an example for a postgraduate course. Several image processing methods are combined in the optoelectronic imaging system: partial differential equations (PDEs) are used to reduce the effect of noise; thresholding segmentation is likewise based on the heat equation with PDEs; the central line is extracted from the image skeleton, with branches removed automatically; the phase level is calculated by spline interpolation; and the fringe phase is then unwrapped. Finally, the image processing method was used to automatically measure a bubble in rubber under negative pressure, which could be applied in tire defect detection.
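    The PDE-based denoising step can be illustrated with a few lines of NumPy: explicit time-stepping of the heat equation u_t = Δu on a noisy image (a sketch only; the paper's full processing chain and parameters are not reproduced here):

```python
import numpy as np

def heat_diffuse(img, steps=8, dt=0.2):
    """Isotropic diffusion (heat equation) as a simple PDE denoiser:
    u_t = laplacian(u), explicit 5-point stencil, periodic boundaries.
    dt < 0.25 keeps the explicit scheme stable in 2D."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                         # simple bright square target
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
smooth = heat_diffuse(noisy)
```

    A few diffusion steps suppress the speckle-like noise at the cost of some edge blurring, which is why anisotropic variants are often preferred in practice.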

  7. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and image labels are then used to train the BP neural network, finally achieving color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of every 400 high-dimensional feature samples, 300 are used to train the VGG16 and BP neural network pipeline and the remaining 100 are used for testing. The experimental results show that the method takes full advantage of the learning and representation capability of deep learning. In contrast to the major existing image clarity evaluation methods, which rely on manually designed and extracted features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test set: 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.

  8. Laser-driven ion acceleration: methods, challenges and prospects

    Science.gov (United States)

    Badziak, J.

    2018-01-01

    The recent development of laser technology has resulted in the construction of short-pulse lasers capable of generating fs light pulses with PW powers and intensities exceeding 10²¹ W/cm², and has laid the basis for the multi-PW lasers, now being built in Europe, that will produce fs pulses of ultra-relativistic intensities of ~10²³–10²⁴ W/cm². The interaction of such an intense laser pulse with a dense target can result in the generation of collimated beams of ions with multi-MeV to GeV energies, sub-ps durations, and extremely high beam intensities and ion fluences, barely attainable with conventional RF-driven accelerators. Ion beams with such unique features have the potential for application in various fields of scientific research as well as in medical and technological developments. This paper provides a brief review of the state of the art in laser-driven ion acceleration, with a focus on basic ion acceleration mechanisms and the production of ultra-intense ion beams. The challenges facing laser-driven ion acceleration studies, in particular those connected with potential applications of laser-accelerated ion beams, are also discussed.

  9. Task-driven image acquisition and reconstruction in cone-beam CT

    International Nuclear Information System (INIS)

    Gang, Grace J; Stayman, J Webster; Siewerdsen, Jeffrey H; Ehtiati, Tina

    2015-01-01

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
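    The non-prewhitening detectability index used as the objective can be sketched in NumPy. The discretized form d′² = [Σ MTF²·W² Δf]² / [Σ NPS·MTF²·W² Δf] follows the standard observer-model formulation; the Gaussian MTF, low-frequency task function, and ramp-like NPS below are invented stand-ins, not the paper's cascaded-systems models:

```python
import numpy as np

def d_prime_npw(mtf, nps, w_task, df):
    """Non-prewhitening observer detectability index over a 1D
    frequency axis: d'^2 = [sum(MTF^2 W^2) df]^2 / [sum(NPS MTF^2 W^2) df]."""
    num = (np.sum(mtf ** 2 * w_task ** 2) * df) ** 2
    den = np.sum(nps * mtf ** 2 * w_task ** 2) * df
    return np.sqrt(num / den)

f = np.linspace(0.01, 2.0, 200)       # spatial frequency axis (cycles/mm)
df = f[1] - f[0]
mtf = np.exp(-(f / 1.0) ** 2)         # assumed Gaussian system MTF
w_task = np.exp(-(f / 0.5) ** 2)      # task function: low-frequency (sphere) detection
nps_hi = 1e-6 * (0.1 + f)             # assumed ramp-like NPS
nps_lo = 0.5 * nps_hi                 # same shape, half the noise magnitude

d_hi = d_prime_npw(mtf, nps_hi, w_task, df)
d_lo = d_prime_npw(mtf, nps_lo, w_task, df)
```

    Halving the NPS magnitude raises d′ by √2, the kind of dependence the alternating mA/kernel optimization exploits view by view.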

  10. Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram.

    Science.gov (United States)

    Jung, Younhyun; Kim, Jinman; Kumar, Ashnil; Feng, David Dagan; Fulham, Michael

    2016-07-01

    (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH and with only minor numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K improved accuracy at the cost of a smaller computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation. Copyright © 2016 Elsevier Ltd. All rights reserved.
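    The adaptive binning idea, clustering the intensity axis with 1D K-means so that bin widths follow the data rather than being uniform, can be sketched as follows (illustrative only; the paper's AB-VH operates on visibility-weighted histograms of real volumes):

```python
import numpy as np

def adaptive_bins(values, k, iters=30, seed=0):
    """1D K-means (Lloyd's algorithm) over intensity values: the cluster
    centers become adaptive bin representatives with non-uniform widths."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(values, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
        centers = np.sort(centers)
    return centers, labels

rng = np.random.default_rng(4)
# Bimodal intensity distribution, as in a two-material volume.
vals = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 10, 500)])
centers, labels = adaptive_bins(vals, k=4)
```

    The centers concentrate where the histogram has mass, so fewer bins suffice compared with equal binning at the same fidelity.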

  11. Nuclear magnetic resonance imaging method

    International Nuclear Information System (INIS)

    Johnson, G.; MacDonald, J.; Hutchison, S.; Eastwood, L.M.; Redpath, T.W.T.; Mallard, J.R.

    1984-01-01

    A method of deriving three dimensional image information from an object using nuclear magnetic resonance signals comprises subjecting the object to a continuous, static magnetic field and carrying out the following set of sequential steps: 1) exciting nuclear spins in a selected volume (90° pulse); 2) applying non-aligned first, second and third gradients of the magnetic field; 3) causing the spins to rephase periodically by reversal of the first gradient to produce spin echoes, and applying pulses of the second gradient prior to every read-out of an echo signal from the object, to differently encode the spins in the second gradient direction for each read-out signal. Steps 1–3 are then successively repeated with different values of the third gradient, with a recovery interval between successive sets of steps. Only alternate echoes are read out; the other echoes are time-reversed and ignored for convenience. The resulting signals are appropriately sampled, set out in an array and subjected to three dimensional Fourier transformation. (author)
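    Once the phase-encoded echoes fill a 3D k-space array, the final step amounts to a three-dimensional inverse Fourier transform. A toy NumPy sketch with an idealized, fully sampled k-space (no relaxation, noise, or partial-echo handling):

```python
import numpy as np

# Simulate a 3D object, "acquire" its fully sampled k-space, and
# reconstruct it with the 3D inverse Fourier transform.
obj = np.zeros((16, 16, 16))
obj[4:12, 4:12, 4:12] = 1.0                  # cubic phantom

kspace = np.fft.fftn(obj)                    # idealized acquired k-space array
recon = np.abs(np.fft.ifftn(kspace))         # 3D Fourier reconstruction
```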

  12. Applying Data-driven Imaging Biomarker in Mammography for Breast Cancer Screening: Preliminary Study

    OpenAIRE

    Kim, Eun-Kyung; Kim, Hyo-Eun; Han, Kyunghwa; Kang, Bong Joo; Sohn, Yu-Mee; Woo, Ok Hee; Lee, Chan Wha

    2018-01-01

    We assessed the feasibility of a data-driven imaging biomarker based on weakly supervised learning (DIB; an imaging biomarker derived from large-scale medical image data with deep learning technology) in mammography (DIB-MG). A total of 29,107 digital mammograms from five institutions (4,339 cancer cases and 24,768 normal cases) were included. After matching patients’ age, breast density, and equipment, 1,238 and 1,238 cases were chosen as validation and test sets, respectively, and the remai...

  13. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art computational image processing techniques, and tabulates the statistical values wherever necessary. In a simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  14. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators.

    Science.gov (United States)

    Xu, Wenjun; Chen, Jie; Lau, Henry Y K; Ren, Hongliang

    2017-09-01

    Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in minimally invasive surgery because of its enhanced maneuverability in tortuous environments. The TSM, however, exhibits high nonlinearities, and a conventional analytical kinematics model is insufficient to achieve high accuracy. To account for the system nonlinearities, we applied a data-driven approach to encode the system's inverse kinematics. Three regression methods were implemented to learn a nonlinear mapping from the robot's 3D position states to the control inputs: extreme learning machine (ELM), Gaussian mixture regression (GMR), and K-nearest neighbors regression (KNNR). The performance of the three algorithms was evaluated both in simulation and in physical trajectory tracking experiments. KNNR performed best in the tracking experiments, with the lowest RMSE of 2.1275 mm. The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model the tendon-driven flexible manipulator. Copyright © 2016 John Wiley & Sons, Ltd.
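    The KNNR variant is the simplest of the three to sketch: predict the control inputs for a desired tip position by averaging the inputs of the k nearest training positions. The toy forward model below merely stands in for the TSM's real tendon-to-tip mapping, and the sample count and k are illustrative:

```python
import numpy as np

def knn_regress(X_train, U_train, x_query, k=5):
    """Predict control inputs for a queried 3D tip position by averaging
    the inputs of the k nearest training positions (KNN regression)."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return U_train[idx].mean(axis=0)

# Toy nonlinear forward model (2 tendon inputs -> 3D tip position);
# the real TSM mapping is far more complex.
def forward(u):
    return np.array([np.sin(u[0]), np.cos(u[0]) * u[1], u[1] ** 2])

rng = np.random.default_rng(5)
U = rng.uniform(0, 1, size=(2000, 2))         # sampled tendon displacements
X = np.array([forward(u) for u in U])         # corresponding tip positions

u_true = np.array([0.4, 0.6])
u_hat = knn_regress(X, U, forward(u_true), k=5)
rmse = float(np.sqrt(np.mean((u_hat - u_true) ** 2)))
```

    With a dense enough training set, the averaged neighbors recover the commanded inputs closely without any analytical kinematics model.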

  15. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    Science.gov (United States)

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Method and device for current driven electric energy conversion

    DEFF Research Database (Denmark)

    2012-01-01

    Device comprising an electric power converter circuit for converting electric energy. The converter circuit comprises a switch arrangement with two or more controllable electric switches connected in a switching configuration and controlled so as to provide a current drive of electric energy from an associated electric source connected to a set of input terminals. This is obtained by the two or more electric switches being connected and controlled to short-circuit the input terminals during a part of a switching period. Further, a low pass filter with a capacitor and an inductor is provided to low-pass filter the output. The switch arrangement may be connected in configurations such as half bridge buck, full bridge buck, half bridge boost, or full bridge boost. A current driven conversion is advantageous for highly efficient energy conversion from current sources such as solar cells, or where a voltage source is connected through long cables, e.g. powerline cables.

  17. Review methods for image segmentation from computed tomography images

    International Nuclear Information System (INIS)

    Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi

    2014-01-01

    Image segmentation is a challenging process in which accuracy, automation, and robustness must be achieved, especially for medical images. Many segmentation methods can be applied to medical images, but not all are suitable. For medical purposes, the aims of image segmentation are to study anatomical structure, identify regions of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring and visual noise. The details of the methods, their strengths, and the problems they incur are defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation; this paper can serve as a guide for researchers choosing a suitable method, especially for segmenting images from CT scans
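    As a concrete example of one widely used segmentation building block, the sketch below implements Otsu's automatic threshold selection in NumPy on a synthetic two-region slice (illustrative only; the review covers many method families beyond thresholding):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)            # ignore empty-class levels
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(6)
# Synthetic "CT slice": dark background with a brighter region of interest.
img = rng.normal(60, 10, (64, 64))
img[20:44, 20:44] = rng.normal(180, 10, (24, 24))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t
```

    The threshold lands between the two intensity modes, separating the region of interest from the background without manual tuning.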

  18. Visualization and simulation of density driven convection in porous media using magnetic resonance imaging

    Science.gov (United States)

    Montague, James A.; Pinder, George F.; Gonyea, Jay V.; Hipko, Scott; Watts, Richard

    2018-05-01

    Magnetic resonance imaging is used to observe solute transport in a 40 cm long, 26 cm diameter sand column that contained a central core of low permeability silica surrounded by higher permeability well-sorted sand. Low concentrations (2.9 g/L) of Magnevist, a gadolinium based contrast agent, produce density driven convection within the column when it starts in an unstable state. The unstable state, for this experiment, exists when higher density contrast agent is present above the lower density water. We implement a numerical model in OpenFOAM to reproduce the observed fluid flow and transport from a density difference of 0.3%. The experimental results demonstrate the usefulness of magnetic resonance imaging in observing three-dimensional gravity-driven convective-dispersive transport behaviors in medium scale experiments.

  19. Gamma-ray Imaging Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, K; Mihailescu, L; Nelson, K; Valentine, J; Wright, D

    2006-10-05

    In this document we discuss specific implementations for gamma-ray imaging instruments including the principle of operation and describe systems which have been built and demonstrated as well as systems currently under development. There are several fundamentally different technologies each with specific operational requirements and performance trade offs. We provide an overview of the different gamma-ray imaging techniques and briefly discuss challenges and limitations associated with each modality (in the appendix we give detailed descriptions of specific implementations for many of these technologies). In Section 3 we summarize the performance and operational aspects in tabular form as an aid for comparing technologies and mapping technologies to potential applications.

  20. Magnetic resonance spectroscopy as an imaging method

    International Nuclear Information System (INIS)

    Bomsdorf, H.; Imme, M.; Jensen, D.; Kunz, D.; Menhardt, W.; Ottenberg, K.; Roeschmann, P.; Schmidt, K.H.; Tschendel, O.; Wieland, J.

    1990-01-01

    An experimental Magnetic Resonance (MR) system with 4 tesla flux density was set up. For that purpose a data acquisition system and RF coils for resonance frequencies up to 170 MHz were developed. Methods for image-guided spectroscopy as well as spectroscopic imaging focussing on the nuclei ¹H and ¹³C were developed and tested on volunteers and selected patients. The advantages of the high field strength with respect to spectroscopic studies were demonstrated. Development of a new fast imaging technique for the acquisition of scout images, as well as a method for mapping and displaying the magnetic field inhomogeneity in-vivo, represent contributions to the optimisation of the experimental procedure in spectroscopic studies. Investigations on the interaction of RF radiation with the exposed tissue allowed conclusions regarding the applicability of MR methods at high field strengths. Methods for display and processing of multi-dimensional spectroscopic imaging data sets were developed and existing methods for real-time image synthesis were extended. Results achieved in the field of computer-aided analysis of MR images comprised new techniques for image background detection, contour detection and automatic image interpretation, as well as knowledge bases for textual representation of medical knowledge for diagnosis. (orig.) With 82 refs., 3 tabs., 75 figs [de]

  1. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  2. Method for positron emission mammography image reconstruction

    Science.gov (United States)

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
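    The MLEM update at the heart of the iterative option has the well-known multiplicative form x ← (x / Aᵀ1) · Aᵀ(y / Ax). A toy NumPy sketch follows, with a random system matrix standing in for the ray-traced one used in the patent:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood expectation maximization for emission data:
    x <- x / (A^T 1) * A^T (y / (A x)), applied elementwise."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])         # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        proj[proj == 0] = 1e-12              # guard against division by zero
        x = x / sens * (A.T @ (y / proj))    # multiplicative update
    return x

rng = np.random.default_rng(7)
A = rng.uniform(0, 1, size=(40, 10))         # toy system matrix (LOR x pixel)
x_true = rng.uniform(0.5, 2.0, size=10)      # toy activity distribution
y = A @ x_true                               # noiseless coincidence data
x_hat = mlem(A, y)
```

    The multiplicative form keeps the estimate non-negative at every iteration, a key practical advantage of MLEM for emission tomography.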

  3. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
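
    A minimal sketch of one such method, bilinear interpolation, written to make the separability explicit:

    ```python
    import numpy as np

    def bilinear(img, x, y):
        """Bilinear interpolation of img at the continuous point (x, y).

        Separable: the 2-D weight is the product of two 1-D linear weights,
        so the same pattern extends to N-dimensional separable kernels.
        """
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1 = min(x0 + 1, img.shape[1] - 1)
        y1 = min(y0 + 1, img.shape[0] - 1)
        fx, fy = x - x0, y - y0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
        bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
        return (1 - fy) * top + fy * bottom

    img = np.array([[0.0, 10.0],
                    [20.0, 30.0]])
    value = bilinear(img, 0.5, 0.5)   # centre of the four samples
    ```

    Bicubic, spline, and sinc interpolation follow the same separable structure with wider 1-D kernels.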

  4. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    level triggers, to handle the large stream of data produced in collisions. The information transmitted from the three muon subsystems (DT, CSC and RPC) is collected by the Global Muon Trigger (GMT) Board and merged. A method for evaluating ...

  5. Data-driven performance evaluation method for CMS RPC trigger ...

    Indian Academy of Sciences (India)

    2012-10-06

    Oct 6, 2012 ... hardware-implemented algorithm, which performs the task of combining and merging information from muon ... Figure 1 shows the comparison of efficiencies obtained with the two methods containing .... [3] The CMS Collaboration, The trigger and data acquisition project, Volume 1, The Level 1. Trigger ...

  6. Detector for imaging and dosimetry of laser-driven epithermal neutrons by alpha conversion

    Science.gov (United States)

    Mirfayzi, S. R.; Alejo, A.; Ahmed, H.; Wilson, L. A.; Ansell, S.; Armstrong, C.; Butler, N. M. H.; Clarke, R. J.; Higginson, A.; Notley, M.; Raspino, D.; Rusby, D. R.; Borghesi, M.; Rhodes, N. J.; McKenna, P.; Neely, D.; Brenner, C. M.; Kar, S.

    2016-10-01

    An epithermal neutron imager based on detecting alpha particles created via the boron neutron capture mechanism is discussed. The diagnostic mainly consists of a mm-thick Boron Nitride (BN) sheet (as an alpha converter) in contact with a non-borated cellulose nitrate film (LR115 type-II) detector. While the BN absorbs neutrons in the thermal and epithermal ranges, fast neutrons register insignificantly on the detector due to their low neutron capture and recoil cross-sections. The use of solid-state nuclear track detectors (SSNTD), unlike image plates, micro-channel plates and scintillators, provides protection from x-rays, gamma-rays and electrons. The diagnostic was tested on a proof-of-principle basis in front of a laser-driven source of moderated neutrons, and the results suggest the potential of using this diagnostic (BN+SSNTD) for dosimetry and imaging applications.

  7. Frank Gilbreth and health care delivery method study driven learning.

    Science.gov (United States)

    Towill, Denis R

    2009-01-01

    The purpose of this article is to look at method study, as devised by the Gilbreths at the beginning of the twentieth century, which found early application in hospital quality assurance and surgical "best practice". It has since become a core activity in all modern methods applied to healthcare delivery improvement programmes. The article traces the origin of what is now variously called "business process re-engineering", "business process improvement" and "lean healthcare" by different management gurus back to the century-old pioneering work of Frank Gilbreth. The outcome is a consistent framework involving "width", "length" and "depth" dimensions within which healthcare delivery systems can be analysed, designed and successfully implemented to achieve better and more consistent performance. Healthcare method study (saving time plus saving motion) is best practised as a co-joint action-learning activity "owned" by all "players" involved in the re-engineering process. However, although process mapping is a key step forward, in itself it is no guarantee of effective re-engineering. It is not even the beginning of the end of the change challenge, although it should be the end of the beginning. What is needed is innovative exploitation of method study within a healthcare organisational learning culture, accelerated via the Gilbreth Knowledge Flywheel. It is shown that effective healthcare delivery pipeline improvement is anchored in a team approach involving all "players" in the system, especially physicians. A comprehensive process study, constructive dialogue, proper and highly professional re-engineering plus managed implementation are essential components. Experience suggests "learning" is thereby achieved via "natural groups" actively involved in healthcare processes. The article provides a proven method for exploiting the Gilbreths' outputs and those of their many successors, enabling more productive evidence-based healthcare delivery as summarised

  8. Digital image envelope: method and evaluation

    Science.gov (United States)

    Huang, H. K.; Cao, Fei; Zhou, Michael Z.; Mogel, Greg T.; Liu, Brent J.; Zhou, Xiaoqiang

    2003-05-01

    Health data security, characterized in terms of data privacy, authenticity, and integrity, is a vital issue when digital images and other patient information are transmitted through public networks in telehealth applications such as teleradiology. Mandates for ensuring health data security have been extensively discussed (for example, the Health Insurance Portability and Accountability Act, HIPAA), and health informatics guidelines (such as the DICOM standard) are beginning to focus on issues of data security; recommendations continue to be published by organizing bodies in healthcare. However, no systematic method has been developed to ensure data security in medical imaging. Because data privacy and authenticity are often managed primarily with firewall and password protection, we have focused our research and development on data integrity. We have developed a systematic method of ensuring medical image data integrity across public networks using the concept of the digital envelope. When a medical image is generated, regardless of the modality, three processes are performed: the image signature is obtained, the DICOM image header is encrypted, and a digital envelope is formed by combining the signature and the encrypted header. The envelope is encrypted and embedded in the original image. This assures the security of both the image and the patient ID. The embedded image is encrypted again and transmitted across the network. The reverse process is performed at the receiving site. The result is two digital signatures, one from the original image before transmission and a second from the image after transmission. If the signatures are identical, there has been no alteration of the image. This paper concentrates on the method and evaluation of the digital image envelope.
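
    The envelope workflow (sign, encrypt, combine, verify) can be sketched as follows; the XOR "cipher" and the header fields are toy stand-ins, not the authors' actual encryption scheme or DICOM attributes:

    ```python
    import hashlib
    import json

    def image_signature(pixel_bytes):
        """Signature stand-in: SHA-256 digest of the pixel data."""
        return hashlib.sha256(pixel_bytes).hexdigest()

    def make_envelope(pixel_bytes, header, key):
        """Digital envelope: image signature plus an 'encrypted' header.

        The XOR step is only a placeholder for real encryption.
        """
        raw = json.dumps(header).encode()
        enc_header = bytes(b ^ key[i % len(key)] for i, b in enumerate(raw))
        return {"signature": image_signature(pixel_bytes),
                "enc_header": enc_header}

    key = b"demo-key"
    pixels = bytes(range(256))
    envelope = make_envelope(pixels, {"PatientID": "anon-001"}, key)

    # Receiver side: recompute the signature and compare with the envelope.
    intact = image_signature(pixels) == envelope["signature"]
    altered = image_signature(pixels[:-1] + b"\x00") == envelope["signature"]
    ```

    Matching signatures indicate the pixel data survived transmission unaltered; any single-byte change breaks the match.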

  9. On an image reconstruction method for ECT

    Science.gov (United States)

    Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro

    2007-04-01

    An image produced by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the Point Spread Function (PSF) and Line Spread Function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and notch flaw were acquired by differential coil-type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the multiple holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
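
    The deconvolution step can be sketched as a regularized inverse filter in the frequency domain; the Gaussian response and flaw positions below are invented for illustration, not the paper's measured PSF/LSF:

    ```python
    import numpy as np

    def deconvolve(measured, psf, eps=1e-6):
        """Regularized inverse filtering: measured = psf (*) flaw (circular)."""
        H = np.fft.fft(psf)
        G = np.fft.fft(measured)
        F = G * np.conj(H) / (np.abs(H) ** 2 + eps)   # damped inverse filter
        return np.real(np.fft.ifft(F))

    n = 64
    x = np.arange(n)
    d = np.minimum(x, n - x)                # circular distance from index 0
    psf = np.exp(-0.5 * (d / 2.0) ** 2)     # Gaussian response, sigma = 2
    psf /= psf.sum()

    flaw = np.zeros(n)
    flaw[20] = flaw[28] = 1.0               # two point-like flaws
    measured = np.real(np.fft.ifft(np.fft.fft(flaw) * np.fft.fft(psf)))
    restored = deconvolve(measured, psf)
    ```

    The two flaws, merged into one broad lobe in `measured`, reappear as separate peaks in `restored`; the `eps` term suppresses the noise amplification that a naive division by H would cause.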

  10. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top-Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images using a model-driven analysis algorithm. Following the construction of the system TOPS that we developed previously, two concepts have been considered in the design of TOPS-3D. One is a system with a hierarchical structure of reasoning that uses model information at a higher level, and the other is a parallel image-processing structure used to extract plural candidate regions for a target entity. The new points of system TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels: the first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data through its use of model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
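
    The morphological opening that links the two levels can be sketched as erosion followed by dilation; here a flat 3x3 structuring element stands in for the model-derived structural function of the paper:

    ```python
    import numpy as np

    def erode(img, size):
        """Grayscale erosion with a flat (size x size) structuring element."""
        pad = size // 2
        p = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = p[i:i + size, j:j + size].min()
        return out

    def dilate(img, size):
        """Grayscale dilation with a flat (size x size) structuring element."""
        pad = size // 2
        p = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = p[i:i + size, j:j + size].max()
        return out

    def opening(img, size):
        """Opening = erosion then dilation; removes bright features
        smaller than the structuring element, keeps larger ones."""
        return dilate(erode(img, size), size)

    img = np.zeros((9, 9))
    img[4, 4] = 1.0          # single-pixel bright spot (removed)
    img[0:3, 0:3] = 1.0      # 3x3 bright block (survives)
    opened = opening(img, 3)
    ```

    The isolated pixel vanishes while the 3x3 block is preserved, which is exactly the filtering role the structural model function plays in TOPS-3D.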

  11. PROMETHEE II: A knowledge-driven method for copper exploration

    Science.gov (United States)

    Abedi, Maysam; Ali Torabi, S.; Norouzi, Gholam-Hossain; Hamzeh, Mohammad; Elyasi, Gholam-Reza

    2012-09-01

    This paper describes the application of a well-known Multi Criteria Decision Making (MCDM) technique called Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE II) to explore porphyry copper deposits. Various raster-based evidential layers involving geological, geophysical, and geochemical geo-datasets are integrated to prepare a mineral prospectivity mapping (MPM). In a case study, thirteen layers of the Now Chun copper deposit located in the Kerman province of Iran are used to explore the region of interest. The PROMETHEE II technique is applied to produce the desired MPM, and the outputs are validated using twenty-one boreholes that have been classified into five classes. This proposed method shows a high performance when providing the MPM while reducing the cost of exploratory drilling in the study area.
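
    The core PROMETHEE II computation (pairwise preferences, leaving and entering flows, net flow) can be sketched as follows, using the simple "usual" preference function and invented prospect scores rather than the paper's thirteen evidential layers:

    ```python
    import numpy as np

    def promethee_ii(X, weights, maximize=None):
        """Net outranking flows for alternatives (rows) over criteria (cols).

        Uses the 'usual' preference function P(d) = 1 if d > 0 else 0.
        """
        n, m = X.shape
        if maximize is None:
            maximize = [True] * m
        pi = np.zeros((n, n))                  # aggregated preference matrix
        for k in range(m):
            col = X[:, k] if maximize[k] else -X[:, k]
            d = col[:, None] - col[None, :]    # pairwise differences
            pi += weights[k] * (d > 0)
        phi_plus = pi.sum(axis=1) / (n - 1)    # leaving flow
        phi_minus = pi.sum(axis=0) / (n - 1)   # entering flow
        return phi_plus - phi_minus            # net flow (higher = better)

    # Three hypothetical prospect cells scored on three evidential layers.
    X = np.array([[0.9, 0.7, 0.8],
                  [0.4, 0.5, 0.6],
                  [0.2, 0.9, 0.1]])
    w = np.array([0.5, 0.2, 0.3])
    phi = promethee_ii(X, w)
    ranking = np.argsort(-phi)        # best alternative first
    ```

    Applied cell by cell over a raster grid, the net flow phi becomes the prospectivity value of the output map.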

  12. Method for nuclear magnetic resonance imaging

    Science.gov (United States)

    Kehayias, J.J.; Joel, D.D.; Adams, W.H.; Stein, H.L.

    1988-05-26

    A method for in vivo NMR imaging of the blood vessels and organs of a patient, characterized by using a dark, dye-like imaging substance consisting essentially of a stable, high-purity concentration of D₂O in solution with water.

  13. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  14. Nonperturbative stochastic method for driven spin-boson model

    Science.gov (United States)

    Orth, Peter P.; Imambekov, Adilet; Le Hur, Karyn

    2013-01-01

    We introduce and apply a numerically exact method for investigating the real-time dissipative dynamics of quantum impurities embedded in a macroscopic environment beyond the weak-coupling limit. We focus on the spin-boson Hamiltonian that describes a two-level system interacting with a bosonic bath of harmonic oscillators. This model is archetypal for investigating dissipation in quantum systems, and tunable experimental realizations exist in mesoscopic and cold-atom systems. It finds abundant applications in physics ranging from the study of decoherence in quantum computing and quantum optics to extended dynamical mean-field theory. Starting from the real-time Feynman-Vernon path integral, we derive an exact stochastic Schrödinger equation that allows us to compute the full spin density matrix and spin-spin correlation functions beyond weak coupling. We greatly extend our earlier work [P. P. Orth, A. Imambekov, and K. Le Hur, Phys. Rev. A 82, 032118 (2010)] by fleshing out the core concepts of the method and by presenting a number of interesting applications. Methodologically, we present an analogy between the dissipative dynamics of a quantum spin and that of a classical spin in a random magnetic field. This analogy is used to recover the well-known noninteracting-blip approximation in the weak-coupling limit. We explain in detail how to compute spin-spin autocorrelation functions. As interesting applications of our method, we explore the non-Markovian effects of the initial spin-bath preparation on the dynamics of the coherence σx(t) and of σz(t) under a Landau-Zener sweep of the bias field. We also compute to a high precision the asymptotic long-time dynamics of σz(t) without bias and demonstrate the wide applicability of our approach by calculating the spin dynamics at nonzero bias and different temperatures.
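
    As a much-reduced illustration of the Landau-Zener sweep discussed above, the sketch below integrates the closed two-level system (the bath coupling of the paper is omitted) and checks the survival probability against the exact Landau-Zener formula:

    ```python
    import numpy as np

    def landau_zener(delta, v, t_max=100.0, dt=0.01):
        """Closed-system Landau-Zener sweep, H(t) = (v t / 2) s_z + (delta / 2) s_x.

        Returns the probability of staying in the initial diabatic state
        after sweeping from -t_max to +t_max; the exact asymptotic value
        is exp(-pi * delta**2 / (2 * v)) (hbar = 1).
        """
        psi0, psi1 = 1.0 + 0j, 0.0 + 0j          # start in diabatic |up>
        steps = int(round(2 * t_max / dt))
        for k in range(steps):
            t_mid = -t_max + (k + 0.5) * dt      # midpoint rule, 2nd order
            a, b = 0.5 * v * t_mid, 0.5 * delta
            w = np.hypot(a, b)
            # exact 2x2 propagator: exp(-i H dt) = cos(w dt) I - i sin(w dt) H / w
            c, s = np.cos(w * dt), np.sin(w * dt) / w
            psi0, psi1 = (c * psi0 - 1j * s * (a * psi0 + b * psi1),
                          c * psi1 - 1j * s * (b * psi0 - a * psi1))
        return abs(psi0) ** 2

    p_num = landau_zener(delta=0.5, v=1.0)
    p_exact = float(np.exp(-np.pi * 0.5 ** 2 / 2.0))
    ```

    The stochastic method of the paper would add bath-induced noise terms to this deterministic propagation.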

  15. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages.

    Science.gov (United States)

    Jones, Christopher P; Brenner, Ceri M; Stitt, Camilla A; Armstrong, Chris; Rusby, Dean R; Mirfayzi, Seyed R; Wilson, Lucy A; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M H; Clarke, Robert J; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John; McKenna, Paul; Neely, David; Kar, Satya; Scott, Thomas B

    2016-11-15

    A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned. Copyright © 2016. Published by Elsevier B.V.

  16. Imaging methods in medical diagnosis

    International Nuclear Information System (INIS)

    Krestel, E.

    1981-01-01

    Pictures of parts of the human body or of the entire body (views, superposition pictures, pictures of body layers, or photographs) are a considerable aid to medical diagnostics. Physics, electrical engineering, and mechanical engineering make picture production possible. Modern electronics and optics offer facilities for picture processing, which influences picture quality. Picture interpretation is the physician's task. The imaging methods applied in medicine include conventional X-ray diagnostics, X-ray computed tomography, nuclear diagnostics, sonography with ultrasound, and endoscopy. Their rapid development and improvement was driven by the development of electronics during the past 20 years. A method presently under discussion and development is nuclear spin tomography (Kernspintomographie). (orig./MG) [de

  17. A NEW IMAGE REGISTRATION METHOD FOR GREY IMAGES

    Institute of Scientific and Technical Information of China (English)

    Nie Xuan; Zhao Rongchun; Jiang Zetao

    2004-01-01

    The proposed algorithm relies on a group of new formulas for calculating tangent slope so as to address the angle feature of edge curves in an image. It can utilize tangent angle features to estimate automatically and fully the rotation parameters of the geometric transform, enabling rough matching of images with a large rotation difference. After angle compensation, it can search for matching point sets by a correlation criterion, then calculate the parameters of the affine transform, enabling higher-precision correction of rotation and translation. Finally, it fulfills precise matching for images with a relax-tense iteration method. Compared with the registration approach based on wavelet direction-angle features, the matching algorithm with the tangent feature of the image edge is more robust and realizes precise registration of various images. Furthermore, it is also helpful in graphics matching.

  18. Historic Methods for Capturing Magnetic Field Images

    Science.gov (United States)

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  19. Methods for evaluating imaging methods of limited reproducibility

    International Nuclear Information System (INIS)

    Krummenauer, F.

    2005-01-01

    Just like new drugs, new or modified imaging methods must be subjected to objective clinical tests, including tests on humans. In this, it must be ensured that the principles of Good Clinical Practice (GCP) are followed with regard to medical, administrative and methodological quality. Innovative methods of clinical epidemiology and medical biometry should be applied from the planning stage to the final statistical evaluation. The author presents established and new methods for the planning, evaluation and reporting of clinical tests of diagnostic methods, especially imaging methods, in clinical medicine and illustrates these by means of current research projects in the various medical disciplines. The strategies presented are summarized in a recommendation based on the concept of phases I-IV of clinical drug testing in order to enable standardisation of the clinical evaluation of imaging methods. (orig.)

  20. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.

    2015-09-08

    The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in some selected regions based on our error indicators. We derive an error estimator which shows that one needs an offline space with certain properties to guarantee that additional online multiscale basis functions will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independent of the contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take into account distant effects. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. Theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of its dimension), such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction from adding online multiscale basis functions is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.

  1. Prognostic aspects of imaging method development

    International Nuclear Information System (INIS)

    Steinhart, L.

    1987-01-01

    A survey is presented of X-ray diagnostic methods and techniques and the possibilities for their further development. Promising methods include direct imaging using digital radiography; in connection with computer technology, these methods achieve higher resolution. The storage of the acquired images in computer memory will allow automated processing and evaluation and the use of expert systems. Development is expected to take place especially in computed tomography using magnetic resonance, positron computed tomography, and other non-radioactive diagnostic methods. (J.B.). 5 figs., 1 tab., 1 ref

  2. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising from the restoration of blurred and noisy images. Applying the well-known Tikhonov regularization procedure leads to a Sylvester matrix equation depending on the Tikhonov regularization parameter. We apply matrix versions of the well-known Krylov subspace methods, namely the least squares (LSQR) and conjugate gradient (CG) methods, to get approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
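
    The Tikhonov step can be sketched on a small 1-D deblurring problem; a direct solve of the normal equations stands in here for the matrix Krylov (LSQR/CG) iterations of the paper, which matter when the operator is too large to factor:

    ```python
    import numpy as np

    def tikhonov_restore(A, b, lam):
        """Minimize ||A x - b||^2 + lam ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Toy 1-D deblurring: A is a 3-tap moving-average blur (singular!).
    n = 50
    A = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            if 0 <= j < n:
                A[i, j] = 1.0 / 3.0

    x_true = np.zeros(n)
    x_true[10:20] = 1.0                              # sharp block signal
    rng = np.random.default_rng(1)
    b = A @ x_true + 1e-3 * rng.standard_normal(n)   # blurred + noisy
    x_rest = tikhonov_restore(A, b, lam=1e-3)
    ```

    Without the `lam` term this system is exactly singular; the regularization trades a small bias for a stable, sharper estimate than the blurred data.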

  3. High power ring methods and accelerator driven subcritical reactor application

    Energy Technology Data Exchange (ETDEWEB)

    Tahar, Malek Haj [Univ. of Grenoble (France)

    2016-08-07

    High power proton accelerators can provide, by spallation reactions, the neutron fluxes necessary for the synthesis of fissile material, starting from Uranium 238 or Thorium 232. This is the basis of the concept of sub-critical operation of a reactor, for energy production or nuclear waste transmutation, with the objective of achieving a cleaner, safer and more efficient process than today’s technologies allow. Designing, building and operating a proton accelerator in the 500-1000 MeV energy range, CW regime, MW power class remains a challenge. There are a limited number of installations at present achieving beam characteristics in that class, e.g., PSI in Villigen, 590 MeV CW beam from a cyclotron, and SNS in Oak Ridge, 1 GeV pulsed beam from a linear accelerator, in addition to projects such as the ESS in Europe, a 5 MW beam from a linear accelerator. Furthermore, coupling an accelerator to a sub-critical nuclear reactor is a challenging proposition: some of the key issues/requirements are the design of a spallation target able to withstand high power densities and ensuring the safety of the installation. These two domains form the grounds of this PhD work: the focus is on high power ring methods in the frame of the KURRI FFAG collaboration in Japan, where an upgrade of the installation towards high intensity is crucial to demonstrate the high beam power capability of FFAGs. Thus, modeling of the beam dynamics and benchmarking of different codes were undertaken to validate the simulation results. Experimental results revealed some major losses that need to be understood and eventually overcome. By developing analytical models that account for the field defects, major sources of imperfection in the design of scaling FFAGs were identified that explain the large tune variations resulting in the crossing of several betatron resonances. 
A new formula is derived to compute the tunes and properties established that characterize the effect of the field imperfections on the

  4. Applying Data-driven Imaging Biomarker in Mammography for Breast Cancer Screening: Preliminary Study.

    Science.gov (United States)

    Kim, Eun-Kyung; Kim, Hyo-Eun; Han, Kyunghwa; Kang, Bong Joo; Sohn, Yu-Mee; Woo, Ok Hee; Lee, Chan Wha

    2018-02-09

    We assessed the feasibility of a data-driven imaging biomarker based on weakly supervised learning (DIB; an imaging biomarker derived from large-scale medical image data with deep learning technology) in mammography (DIB-MG). A total of 29,107 digital mammograms from five institutions (4,339 cancer cases and 24,768 normal cases) were included. After matching patients' age, breast density, and equipment, 1,238 and 1,238 cases were chosen as validation and test sets, respectively, and the remainder were used for training. The core algorithm of DIB-MG is a deep convolutional neural network; a deep learning algorithm specialized for images. Each sample (case) is an exam composed of 4-view images (RCC, RMLO, LCC, and LMLO). For each case in a training set, the cancer probability inferred from DIB-MG is compared with the per-case ground-truth label. Then the model parameters in DIB-MG are updated based on the error between the prediction and the ground-truth. At the operating point (threshold) of 0.5, sensitivity was 75.6% and 76.1% when specificity was 90.2% and 88.5%, and AUC was 0.903 and 0.906 for the validation and test sets, respectively. This research showed the potential of DIB-MG as a screening tool for breast cancer.
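
    The per-case training loop described above (predict a probability, compare with the ground-truth label, update the parameters on the error) can be sketched with a logistic model standing in for the deep convolutional network; the two-feature exams and labels below are synthetic:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_per_case(X, y, lr=0.5, epochs=500):
        """Weakly supervised loop: per-case probability vs per-case label.

        Logistic regression stands in for the CNN; the update is the
        gradient of the cross-entropy loss on the prediction error.
        """
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)            # predicted cancer probability
            err = p - y                       # prediction vs ground truth
            w -= lr * X.T @ err / len(y)      # parameter update on the error
            b -= lr * err.mean()
        return w, b

    # Hypothetical 2-feature exams, linearly separable by construction.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    w, b = train_per_case(X, y)
    pred = (sigmoid(X @ w + b) > 0.5).astype(float)
    acc = (pred == y).mean()
    ```

    Thresholding the trained probability at 0.5 mirrors the operating point reported in the study; sweeping that threshold traces out the ROC curve behind the quoted AUC.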

  5. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Christopher P., E-mail: cj0810@bristol.ac.uk [Interface Analysis Centre, HH Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Brenner, Ceri M. [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Stitt, Camilla A. [Interface Analysis Centre, HH Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Armstrong, Chris; Rusby, Dean R. [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Department of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG (United Kingdom); Mirfayzi, Seyed R. [Centre for Plasma Physics, Queen's University Belfast, Belfast BT7 1NN (United Kingdom); Wilson, Lucy A. [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Alejo, Aarón; Ahmed, Hamad [Centre for Plasma Physics, Queen's University Belfast, Belfast BT7 1NN (United Kingdom); Allott, Ric [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Butler, Nicholas M.H. [Department of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG (United Kingdom); Clarke, Robert J.; Haddock, David; Hernandez-Gomez, Cristina [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Higginson, Adam [Department of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG (United Kingdom); Murphy, Christopher [Department of Physics, University of York, York YO10 5DD (United Kingdom); Notley, Margaret [Central Laser Facility, STFC, Rutherford Appleton Laboratory, Didcot, Oxon OX11 0QX (United Kingdom); Paraskevoulakos, Charilaos [Interface Analysis Centre, HH Wills Physics Laboratory, Tyndall Avenue, Bristol BS8 1TL (United Kingdom); Jowsey, John [Ground Floor North B582, Sellafield Ltd, Seascale, Cumbria CA20 1PG (United Kingdom); and others

    2016-11-15

    Highlights: • X-ray generation was achieved via laser interaction with a tantalum thin foil target. • Picosecond X-ray pulse from a sub-mm spot generated high resolution images. • MeV X-ray emission is possible, permitting analysis of full scale waste containers. • In parallel neutron emission of 10⁷–10⁹ neutrons per steradian per pulse was attained. • Development of a 10 Hz diode pumped laser system for waste monitoring is envisioned. - Abstract: A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.

  6. Evaluating laser-driven Bremsstrahlung radiation sources for imaging and analysis of nuclear waste packages

    International Nuclear Information System (INIS)

    Jones, Christopher P.; Brenner, Ceri M.; Stitt, Camilla A.; Armstrong, Chris; Rusby, Dean R.; Mirfayzi, Seyed R.; Wilson, Lucy A.; Alejo, Aarón; Ahmed, Hamad; Allott, Ric; Butler, Nicholas M.H.; Clarke, Robert J.; Haddock, David; Hernandez-Gomez, Cristina; Higginson, Adam; Murphy, Christopher; Notley, Margaret; Paraskevoulakos, Charilaos; Jowsey, John

    2016-01-01

    Highlights: • X-ray generation was achieved via laser interaction with a tantalum thin foil target. • Picosecond X-ray pulse from a sub-mm spot generated high resolution images. • MeV X-ray emission is possible, permitting analysis of full scale waste containers. • In parallel neutron emission of 10⁷–10⁹ neutrons per steradian per pulse was attained. • Development of a 10 Hz diode pumped laser system for waste monitoring is envisioned. - Abstract: A small scale sample nuclear waste package, consisting of a 28 mm diameter uranium penny encased in grout, was imaged by absorption contrast radiography using a single pulse exposure from an X-ray source driven by a high-power laser. The Vulcan laser was used to deliver a focused pulse of photons to a tantalum foil, in order to generate a bright burst of highly penetrating X-rays (with energy >500 keV), with a source size of <0.5 mm. BAS-TR and BAS-SR image plates were used for image capture, alongside a newly developed Thallium-doped Caesium Iodide scintillator-based detector coupled to CCD chips. The uranium penny was clearly resolved to sub-mm accuracy over a 30 cm² scan area from a single shot acquisition. In addition, neutron generation was demonstrated in situ with the X-ray beam, with a single shot, thus demonstrating the potential for multi-modal criticality testing of waste materials. This feasibility study successfully demonstrated non-destructive radiography of encapsulated, high density, nuclear material. With recent developments of high-power laser systems, to 10 Hz operation, a laser-driven multi-modal beamline for waste monitoring applications is envisioned.

  7. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications of it. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
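The patch-similarity weighting at the heart of Non Local Means can be sketched in a few lines. This is an illustrative, unoptimized implementation of the basic idea, not the authors' code; parameter names and default values are assumptions.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Non-Local Means sketch: each pixel becomes a weighted average of
    pixels in a search window, weighted by the similarity of their patches."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    sr = search // 2
    for i in range(rows):
        for j in range(cols):
            p = padded[i:i + patch, j:j + patch]  # reference patch around (i, j)
            wsum, acc = 0.0, 0.0
            for di in range(max(0, i - sr), min(rows, i + sr + 1)):
                for dj in range(max(0, j - sr), min(cols, j + sr + 1)):
                    q = padded[di:di + patch, dj:dj + patch]
                    w = np.exp(-np.sum((p - q) ** 2) / h ** 2)  # patch-similarity weight
                    wsum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / wsum
    return out
```

A constant image is a fixed point of this filter, which is a quick sanity check of the weighting.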

  8. Handbook of mathematical methods in imaging

    CERN Document Server

    2015-01-01

    The Handbook of Mathematical Methods in Imaging provides a comprehensive treatment of the mathematical techniques used in imaging science. The material is grouped into two central themes, namely, Inverse Problems (Algorithmic Reconstruction) and Signal and Image Processing. Each section within the themes covers applications (modeling), mathematics, numerical methods (using a case example) and open questions. Written by experts in the area, the presentation is mathematically rigorous. This expanded and revised second edition contains updates to existing chapters and 16 additional entries on important mathematical methods such as graph cuts, morphology, discrete geometry, PDEs, conformal methods, to name a few. The entries are cross-referenced for easy navigation through connected topics. Available in both print and electronic forms, the handbook is enhanced by more than 200 illustrations and an extended bibliography. It will benefit students, scientists and researchers in applied mathematics. Engineers and com...

  9. Accelerated gradient methods for constrained image deblurring

    International Nuclear Information System (INIS)

    Bonettini, S; Zanella, R; Zanni, L; Bertero, M

    2008-01-01

In this paper we propose a special gradient projection method for the image deblurring problem, in the framework of the maximum likelihood approach. We present the method in a very general form and we give convergence results under standard assumptions. Then we consider the deblurring problem, and the generality of the proposed algorithm allows us to add an energy conservation constraint to the maximum likelihood problem. In order to improve the convergence rate, we devise appropriate scaling strategies and steplength updating rules, especially designed for this application. The effectiveness of the method is evaluated by means of a computational study on astronomical images corrupted by Poisson noise. Comparisons with standard methods for image restoration, such as the expectation maximization algorithm, are also reported.
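The gradient projection idea can be sketched for the Poisson maximum-likelihood model, assuming a circulant (FFT-diagonalizable) blur and a fixed steplength; the paper's scaling strategies, steplength rules, and energy constraint are omitted here.

```python
import numpy as np

def gradient_projection_deblur(y, psf_fft, iters=10, step=0.5):
    """Sketch of gradient projection for Poisson ML deblurring:
    take a gradient step on the Kullback-Leibler divergence between the
    data y and the blurred estimate, then project onto the nonnegative
    orthant (the simplest feasible set)."""
    x = np.full_like(y, y.mean())  # flat initial estimate
    for _ in range(iters):
        ax = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft)) + 1e-12  # A x
        r = 1.0 - y / ax                                              # d/d(Ax) of KL
        grad = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(psf_fft)))  # A^T r
        x = np.maximum(x - step * grad, 0.0)  # projection: x >= 0
    return x
```

With an identity blur (OTF of all ones) and constant data, the constant image is a fixed point, which checks the gradient sign.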

  10. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  11. Image change detection systems, methods, and articles of manufacture

    Science.gov (United States)

    Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.

    2010-01-05

    Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, switching displaying of the source image and the target image on the display device, to enable identification of differences between the source image and the target image.

  12. Quality assessment in radiological imaging methods

    International Nuclear Information System (INIS)

    Herstel, W.

    1985-01-01

    The equipment used in diagnostic radiology is becoming more and more complicated. In the imaging process four components are distinguished, each of which can introduce loss in essential information: the X-ray source, the human body, the imaging system and the observer. In nearly all imaging methods the X-ray quantum fluctuations are a limitation to observation. But there are also technical factors. As an illustration it is shown how in a television scanning process the resolution is restricted by the system parameters. A short review is given of test devices and the results are given of an image comparison based on regular bar patterns. Although this method has the disadvantage of measuring mainly the limiting resolution, the results of the test correlate reasonably well with the subjective appreciations of radiographs of bony structures made by a group of trained radiologists. Fluoroscopic systems should preferably be tested using moving structures under dynamic conditions. (author)

  13. Case Study of CPT-based Design Methods for Axial Capacity of Driven Piles in Sand

    DEFF Research Database (Denmark)

    Thomassen, Kristina; Ibsen, Lars Bo; Andersen, Lars Vabbersgaard

    2012-01-01

Today the design of onshore axially loaded driven piles in cohesionless soil is commonly made on the basis of CPT-based methods, because field investigations have shown strong correlation between the local shaft friction and the CPT cone resistance. However, the recommended design method for axially loaded offshore driven piles in cohesionless soil has until now been the β-method given in API. The API-method is based on the effective overburden pressure at the depth in question. Previous studies show deviations between full-scale load test measurements of the axial pile capacity and the predictions found by means of the API-method. Compared to the test measurements, the API-method under-estimates the capacity of short piles and of piles in loose sand, and gives a shaft capacity less conservative for piles in tension than for piles in compression.

  14. Circular SAR Optimization Imaging Method of Buildings

    Directory of Open Access Journals (Sweden)

    Wang Jian-feng

    2015-12-01

Full Text Available The Circular Synthetic Aperture Radar (CSAR) can obtain the entire scattering properties of targets because of its ability of 360° observation. In this study, an optimal-orientation CSAR imaging algorithm for buildings is proposed by applying a combination of coherent and incoherent processing techniques. FEKO software is used to construct the electromagnetic scattering models and simulate the radar echo. The FEKO imaging results are compared with the isotropic scattering results. On comparison, the optimal azimuth coherent accumulation angle for CSAR imaging of buildings is obtained. In practice, the scattering directions of buildings are unknown; therefore, we divide the 360° CSAR echo into many overlapping small-angle sub-aperture echoes and then perform an imaging procedure on each sub-aperture. The sub-aperture imaging results are fused incoherently to obtain the all-around image. The polarimetry decomposition method is used to decompose the all-around image and further retrieve the edge information of buildings successfully. The proposed method is validated with P-band airborne CSAR data from Sichuan, China.

  15. Method of orthogonally splitting imaging pose measurement

    Science.gov (United States)

    Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong

    2018-01-01

In order to meet the need in aviation and machinery manufacturing for pose measurement with high precision, fast speed and a wide measurement range, and to resolve the contradiction between the measurement range and the resolution of the vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCDs. The dual linear CCDs respectively acquire one-dimensional image coordinate data of the target point, and the two data sets can restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by the least-squares fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters to be calibrated. An array of feature points for calibration is built by placing a planar target at several different positions. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy the pose measurement requirements of high precision, fast speed and a wide measurement range.

  16. COMPARISON OF DIGITAL IMAGE STEGANOGRAPHY METHODS

    Directory of Open Access Journals (Sweden)

    S. A. Seyyedi

    2013-01-01

Full Text Available Steganography is a method of hiding information in other information of a different format (the container). There are many steganography techniques with various types of container. On the Internet, digital images are the most popular and frequently used containers. We consider the main image steganography techniques and their advantages and disadvantages. We also identify the requirements of a good steganography algorithm and compare various such algorithms.
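The simplest image-container technique compared in such surveys is least-significant-bit (LSB) embedding, which can be sketched as follows (illustrative only; function names are assumptions):

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """LSB steganography sketch: hide one message bit per pixel in the
    lowest bit of an 8-bit grayscale cover image."""
    flat = cover.flatten()
    assert len(message_bits) <= flat.size
    stego = flat.copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear the LSB, then set it to the message bit
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits message bits from the stego image."""
    return [int(b) & 1 for b in stego.flatten()[:n_bits]]
```

Each pixel changes by at most one gray level, which is why LSB embedding is visually imperceptible but also fragile to recompression.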

  17. Study on Processing Method of Image Shadow

    Directory of Open Access Journals (Sweden)

    Wang Bo

    2014-07-01

Full Text Available In order to effectively remove the disturbance of shadows and enhance the robustness of computer-vision image processing, this paper studies the detection and removal of image shadows. It studies shadow removal algorithms based on integration, on the illumination surface, and on texture; introduces their working principles and implementation; and shows by experiment that they can effectively process shadows.

  18. Coherent diffractive imaging methods for semiconductor manufacturing

    Science.gov (United States)

    Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin

    2017-12-01

    The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.

  19. Improved image alignment method in application to X-ray images and biological images.

    Science.gov (United States)

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, image registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed, and the method is evaluated with two types of medical data, including biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% averaged registration accuracies for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for the X-ray image registration, the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.

  20. An Improved Image Contrast Assessment Method

    Directory of Open Access Journals (Sweden)

    Yuanyuan Fan

    2013-07-01

Full Text Available Contrast is an important factor affecting image quality. In order to overcome the problems of local band-limited contrast, a novel image contrast assessment method based on properties of the HVS is proposed. Firstly, fast wavelet decomposition is performed on the low-pass filtered image. Secondly, each level of band-pass filtered image and its corresponding low-pass filtered image are obtained by processing the wavelet coefficients. Thirdly, the local band-limited contrast is calculated, and the local band-limited contrast entropy is calculated according to the definition of entropy. Finally, the contrast entropy of the image is obtained by averaging the local band-limited contrast entropies weighted by CSF coefficients. The experimental results show that the best-contrast image can be accurately identified in image sequences obtained by adjusting the exposure time and by stretching the gray levels, that the assessment results accord with human visual characteristics, and that the method makes up for the shortcomings of local band-limited contrast.

  1. NMR blood vessel imaging method and apparatus

    International Nuclear Information System (INIS)

    Riederer, S.J.

    1988-01-01

    A high speed method of forming computed images of blood vessels based on measurements of characteristics of a body is described comprising the steps of: subjecting a predetermined body area containing blood vessels of interest to, successively, applications of a short repetition time (TR) NMR pulse sequence during the period of high blood velocity and then to corresponding applications during the period of low blood velocity for successive heart beat cycles; weighting the collected imaging data from each application of the NMR pulse sequence according to whether the data was acquired during the period of high blood velocity or a period of low blood velocity of the corresponding heart beat cycle; accumulating weighted imaging data from a plurality of NMR pulse sequences corresponding to high blood velocity periods and from a plurality of NMR pulse sequences corresponding to low blood velocity periods; subtracting the weighted imaging data corresponding to each specific phase encoding acquired during the high blood velocity periods from the weighted imaging data for the same phase encoding corresponding to low blood velocity periods in order to compute blood vessel imaging data; and forming an image of the blood vessels of interest from the blood vessel imaging data

  2. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    International Nuclear Information System (INIS)

    Michail, C M; Fountos, G P; Kalyvas, N I; Valais, I G; Kandarakis, I S; Karpetas, G E; Martini, Niki; Koukou, Vaia

    2015-01-01

The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed by using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source, consisting of a TLC plate, was simulated by a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as by using various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations. (paper)
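The MTF estimation step can be illustrated with a minimal sketch: assuming a line spread function (LSF) has been extracted from the reconstructed plane-source image, the MTF is the normalized magnitude of its Fourier transform. This is the standard definition, not the paper's exact pipeline.

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF sketch: the modulation transfer function is the normalized
    magnitude of the Fourier transform of the line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()           # normalize the LSF to unit area
    mtf = np.abs(np.fft.rfft(lsf))  # magnitude spectrum (non-negative frequencies)
    return mtf / mtf[0]             # MTF(0) = 1 by convention
```

An ideal (delta-function) LSF yields a flat MTF of 1 at all frequencies; any blur makes the curve fall off with frequency.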

  3. Active Contour Driven by Local Region Statistics and Maximum A Posteriori Probability for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoliang Jiang

    2014-01-01

    Full Text Available This paper presents a novel active contour model in a variational level set formulation for simultaneous segmentation and bias field estimation of medical images. An energy function is formulated based on improved Kullback-Leibler distance (KLD with likelihood ratio. According to the additive model of images with intensity inhomogeneity, we characterize the statistics of image intensities belonging to each different object in local regions as Gaussian distributions with different means and variances. Then, we use the Gaussian distribution with bias field as a local region descriptor in level set formulation for segmentation and bias field correction of the images with inhomogeneous intensities. Therefore, image segmentation and bias field estimation are simultaneously achieved by minimizing the level set formulation. Experimental results demonstrate desirable performance of the proposed method for different medical images with weak boundaries and noise.

  4. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

Optical molecular luminescence imaging is widely used for detection and imaging of bio-photons emitted by luminescent luciferase activation. The photons measured in this method indicate the degree of molecular alteration or the cell numbers, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal by using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources. One is three bacterial light-emitting sources containing different numbers of bacteria. The other is three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We could obtain a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value even when different light sources are applied. The quantification function for linear response could be applicable to a standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment, presenting linear response of constant light-emitting sources to measurement time.
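The linear-response property described above amounts to fitting counts against measurement time and checking that the slope (counts per unit time) is constant. A least-squares sketch, not the paper's candela/flux-based formula:

```python
import numpy as np

def count_rate(times, counts):
    """Quantification sketch: fit counts = rate * time by least squares
    through the origin; for a constant light source the slope should be
    the same regardless of total measurement time."""
    times = np.asarray(times, dtype=float)
    counts = np.asarray(counts, dtype=float)
    # least-squares slope of a line through the origin
    return np.sum(times * counts) / np.sum(times ** 2)
```

If measurements of a constant source at different exposure times all yield (approximately) the same `count_rate`, the response is linear in the sense the record describes.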

  5. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather the basic issue of deconvolvability has been explored from a theoretical view point. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  6. Radiographic imaging method by gas ionisation

    International Nuclear Information System (INIS)

    Eickel, R.; Rheude, A.

    1982-02-01

The search for a substitute for the silver halide film has been intensified worldwide due to the shortage and price increase of silver metal. Gas ionography could be an alternative to the well-known silver film imaging techniques in roentgenology. Therefore the practical basis of the imaging process and the electrophoretic development was investigated. The technical realisation of this method was demonstrated for two different types of X-ray examination by developing a fully automatic chest changer and a mammography system that can be adapted to commercially available imaging stands. The image quality achieved with these apparatus was evaluated in comparison with conventional film techniques in the laboratory as well as in a clinical trial. (orig.) [de]

  7. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes the memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
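The landmark-grouping step can be sketched with a plain k-means partition, where k equals the number of available cores (an illustrative sketch; function and parameter names are assumptions, and the paper's nonregular partition may differ in detail):

```python
import numpy as np

def kmeans_partition(points, k, iters=20, seed=0):
    """Partition 2-D landmark points into k clusters (one per processing
    core) with plain k-means, so each core searches a spatially compact
    group of landmarks."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each landmark to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned landmarks
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers
```

Balancing cluster sizes (so cores get similar workloads) is an additional concern the sketch does not address.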

  8. Image-reconstruction methods in positron tomography

    CERN Document Server

    Townsend, David W; CERN. Geneva

    1993-01-01

Physics and mathematics for medical imaging In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...

  9. Radiopharmaceutical chelates and method of external imaging

    International Nuclear Information System (INIS)

    Loberg, M.D.; Callery, P.S.; Cooper, M.

    1977-01-01

    A chelate of technetium-99m, cobalt-57, gallium-67, gallium-68, indium-111 or indium-113m and a substituted iminodiacetic acid or an 8-hydroxyquinoline useful as a radiopharmaceutical external imaging agent. The invention also includes preparative methods therefor

  10. Geometry-Driven-Diffusion filtering of MR Brain Images using dissimilarities and optimal relaxation parameter

    Energy Technology Data Exchange (ETDEWEB)

Bajla, Ivan [Austrian Research Centres Seibersdorf, Department of High Performance Image Processing and Video-Technology, A-2444 Seibersdorf (Austria); Hollander, Igor [Institute of Information Processing, Austrian Academy of Sciences, Sonnenfelsgasse 19/2, 1010 Wien (Austria)

    1999-12-31

A novel method of locally adapting the conductance using a pixel dissimilarity measure is developed. An alternative processing methodology is proposed, based on the intensity gradient histogram calculated for region interiors and boundaries of a phantom which models real MR brain scans. It involves a specific cost function suitable for the calculation of the optimum relaxation parameter Kopt and for the selection of the optimal exponential conductance. Computer experiments on locally adaptive geometry-driven-diffusion filtering of an MR brain phantom have been performed and evaluated. (authors) 6 refs., 3 figs., 2 tabs.
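One step of geometry-driven diffusion with an exponential conductance can be sketched in the Perona-Malik form (periodic borders for brevity; the paper's dissimilarity-based local adaptation of the conductance is not reproduced, and `kappa` stands in for the relaxation parameter K):

```python
import numpy as np

def diffusion_step(img, kappa, dt=0.2):
    """One explicit step of geometry-driven (Perona-Malik) diffusion with
    the exponential conductance g(d) = exp(-(d/kappa)^2): strong smoothing
    where gradients are small, little smoothing across strong edges."""
    # differences to the four neighbours (periodic borders via np.roll)
    dn = np.roll(img, 1, axis=0) - img
    ds = np.roll(img, -1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)  # exponential conductance
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

Choosing `kappa` (the role of Kopt in the record) controls which gradient magnitudes count as edges and are therefore preserved.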

  11. Geometry-Driven-Diffusion filtering of MR Brain Images using dissimilarities and optimal relaxation parameter

    International Nuclear Information System (INIS)

    Bajla, Ivan; Hollander, Igor

    1998-01-01

A novel method of locally adapting the conductance using a pixel dissimilarity measure is developed. An alternative processing methodology is proposed, based on the intensity gradient histogram calculated for region interiors and boundaries of a phantom which models real MR brain scans. It involves a specific cost function suitable for the calculation of the optimum relaxation parameter Kopt and for the selection of the optimal exponential conductance. Computer experiments on locally adaptive geometry-driven-diffusion filtering of an MR brain phantom have been performed and evaluated. (authors)

  12. Diffusion tensor magnetic resonance imaging driven growth modeling for radiotherapy target definition in glioblastoma

    DEFF Research Database (Denmark)

    Jensen, Morten B; Guldberg, Trine L; Harbøll, Anja

    2017-01-01

Radiotherapy target definition must account for the microscopic tumor cell spread. Gliomas favor spread along the white matter fiber tracts. Tumor growth models incorporating the MRI diffusion tensors (DTI) allow the glioma growth to be accounted for more consistently. The aim of the study was to investigate the potential of a DTI driven growth model to improve target definition in glioblastoma (GBM). MATERIAL AND METHODS: Eleven GBM patients were scanned using T1w, T2w FLAIR, T1w + Gd and DTI. The brain was segmented into white matter, gray matter and cerebrospinal fluid. The Fisher-Kolmogorov growth model was used assuming uniform proliferation
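The Fisher-Kolmogorov model combines diffusion with logistic proliferation. A one-dimensional explicit step can be sketched as follows (scalar diffusion coefficient for simplicity; the DTI-driven version replaces it with a tensor field aligned with the fiber tracts):

```python
import numpy as np

def fisher_kolmogorov_step(c, D, rho, dt, dx=1.0):
    """One explicit 1-D step of dc/dt = D * d2c/dx2 + rho * c * (1 - c):
    diffusion spreads the normalized cell density c, the logistic term
    grows it toward the carrying capacity c = 1."""
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx ** 2  # periodic Laplacian
    return c + dt * (D * lap + rho * c * (1 - c))
```

The states c = 0 (no tumor) and c = 1 (saturated) are fixed points; a localized seed spreads as a traveling front whose speed depends on D and rho.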

  13. Image correlation method for DNA sequence alignment.

    Science.gov (United States)

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray-intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as an object recognition in a scene problem. Query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one million base pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%), and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.

  14. New magnetic resonance imaging methods in nephrology

    Science.gov (United States)

    Zhang, Jeff L.; Morrell, Glen; Rusinek, Henry; Sigmund, Eric; Chandarana, Hersh; Lerman, Lilach O.; Prasad, Pottumarthi Vara; Niles, David; Artz, Nathan; Fain, Sean; Vivier, Pierre H.; Cheung, Alfred K.; Lee, Vivian S.

    2013-01-01

    Established as a method to study anatomic changes, such as renal tumors or atherosclerotic vascular disease, magnetic resonance imaging (MRI) to interrogate renal function has only recently begun to come of age. In this review, we briefly introduce some of the most important MRI techniques for renal functional imaging, and then review current findings on their use for diagnosis and monitoring of major kidney diseases. Specific applications include renovascular disease, diabetic nephropathy, renal transplants, renal masses, acute kidney injury and pediatric anomalies. With this review, we hope to encourage more collaboration between nephrologists and radiologists to accelerate the development and application of modern MRI tools in nephrology clinics. PMID:24067433

  15. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present methods for processing "digital holograms" for Internet transmission, together with results.

  16. Image reconstruction methods in positron tomography

    International Nuclear Information System (INIS)

    Townsend, D.W.; Defrise, M.

    1993-01-01

    In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-ray but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)

  17. A hybrid source-driven method to compute fast neutron fluence in reactor pressure vessel - 017

    International Nuclear Information System (INIS)

    Ren-Tai, Chiang

    2010-01-01

    A hybrid source-driven method is developed to compute fast neutron fluence, with neutron energy greater than 1 MeV, in the nuclear reactor pressure vessel (RPV). The method determines neutron flux by solving a steady-state neutron transport equation with hybrid neutron sources composed of peripheral fixed fission neutron sources and interior chain-reacted fission neutron sources. The relative rod-by-rod power distribution of the peripheral assemblies, obtained from reactor core depletion calculations and subsequent rod-by-rod power reconstruction, is employed as the relative rod-by-rod fixed fission neutron source distribution. All fissionable nuclides other than U-238 (U-234, U-235, U-236, Pu-239, etc.) are replaced with U-238 to avoid counting the fission contribution twice and to preserve fast neutron attenuation by heavy nuclides in the peripheral assemblies. An example is provided to show the feasibility of the method. Since the interior fuels have only a marginal impact on RPV fluence results, due to rapid attenuation of interior fast fission neutrons, a generic set (or one of several generic sets) of interior fuels can be used as the driver, and only the neutron sources in the peripheral assemblies need be changed in subsequent hybrid source-driven fluence calculations. Consequently, this hybrid source-driven method can simplify fast neutron fluence computations and reduce their cost. It should be a useful and simplified tool for computing fast neutron fluence at selected locations of interest in the RPV of contemporary nuclear power reactors. (authors)

  18. Method and apparatus for enhancing radiometric imaging

    International Nuclear Information System (INIS)

    Logan, R. H.; Paradish, F. J.

    1985-01-01

    Disclosed is a method and apparatus for enhancing target detection, particularly in the millimeter wave frequency range, through the utilization of an imaging radiometer. The radiometer, which is a passive thermal receiver, detects the reflected and emitted thermal radiation of targets within a predetermined antenna/receiver beamwidth. By scanning the radiometer over a target area, a thermal image is created. At millimeter wave frequencies, the received emissions from the target area are highly dependent on the emissivity of the target of interest: foliage will appear "hot" due to its high emissivity, and metals will appear cold due to their low emissivities. A noise power illuminator is periodically actuated to illuminate the target of interest. When the illuminator is actuated, the role of emissivity is reversed: poorly emissive targets are generally good reflectors, which in the presence of an illuminator appear "hot", while highly emissive targets (such as foliage and dirt), which absorb most of the transmitted energy, appear almost the same as in a non-illuminated, passive image. Using a data processor, the intensity of the passive image is subtracted from the intensity of the illuminated, active image, which cancels the background foliage, dirt, etc. and enhances the reflective metallic targets.
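    The enhancement step itself is just a per-pixel difference of the two scans. A toy numeric sketch (all brightness values are invented for illustration):

```python
import numpy as np

# Toy 8x8 "scene": foliage background (high emissivity) plus a small
# metallic target patch (low emissivity). Arbitrary brightness units.
passive = np.full((8, 8), 0.9)   # foliage looks "hot" in the passive scan
passive[3:5, 3:5] = 0.2          # metal looks cold

active = passive.copy()          # foliage barely changes when illuminated
active[3:5, 3:5] = 0.95          # metal reflects the noise illuminator

enhanced = active - passive      # background cancels, target stands out
print(enhanced.max(), enhanced[0, 0])
```

    After the subtraction the foliage pixels are near zero and only the reflective target remains, which is the enhancement the patent describes.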

  19. Active learning methods for interactive image retrieval.

    Science.gov (United States)

    Gosselin, Philippe Henri; Cord, Matthieu

    2008-07-01

    Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.

  20. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    International Nuclear Information System (INIS)

    Mukherjee, Joyeeta Mitra; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A; Hutton, Brian F

    2013-01-01

    visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in one of the five patient studies, and with external-surrogate based correction in three out of five patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. (paper)

  1. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating the text information, which needs higher spatial resolution than the pictures and background. The algorithm segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks with H.264/AVC using a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves PSNR significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.
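    The block classifier can be sketched as follows: take the 2-D DCT of each 4x4 block, sum the squared AC coefficients, and flag high-energy blocks as text/graphics. The threshold below is an invented placeholder; the paper's actual decision rule is not given in the abstract.

```python
import numpy as np

def dct_matrix(n=4):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    D = np.zeros((n, n))
    for k in range(n):
        c = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        D[k] = c * np.cos((np.arange(n) + 0.5) * k * np.pi / n)
    return D

def classify_blocks(img, thresh):
    """Label each 4x4 block True (text/graphics) if its AC energy in the
    2-D DCT domain exceeds `thresh`, else False (picture/background)."""
    D = dct_matrix(4)
    h, w = img.shape
    labels = np.zeros((h // 4, w // 4), dtype=bool)
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            coef = D @ img[i:i + 4, j:j + 4] @ D.T
            ac_energy = np.sum(coef ** 2) - coef[0, 0] ** 2  # drop the DC term
            labels[i // 4, j // 4] = ac_energy > thresh
    return labels

# One flat "background" block next to one high-contrast "text" block
img = np.zeros((4, 8))
img[:, :4] = 100.0
img[:, 4:] = np.tile([0.0, 255.0, 0.0, 255.0], (4, 1))
print(classify_blocks(img, thresh=1000.0))
```

    Because the DCT is orthonormal, AC energy is just the block's total energy minus the squared DC coefficient, so flat regions score zero and sharp strokes score high.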

  2. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.

    Science.gov (United States)

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems, because images capture the overall dynamics as a "whole." Therefore, "extraction" of key quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentations focused on molecular image processing and analysis. Quantitative data analysis through various open source software packages and algorithmic protocols will provide a novel approach for modeling the experimental research program. We also highlight predictable future trends in methods for automatically analyzing biological data. Such tools will be very useful for understanding the detailed biological and mathematical behavior of in-silico systems biology models.

  3. Enhancing the (MSLDIP) image steganographic method (ESLDIP method)

    Science.gov (United States)

    Seddik Saad, Al-hussien

    2011-10-01

    Message transmissions over the Internet still have data security problems. Therefore, secure and secret communication methods are needed for transmitting messages over the Internet. Cryptography scrambles the message so that it cannot be understood; however, it makes the message suspicious enough to attract an eavesdropper's attention. Steganography hides the secret message within other innocuous-looking cover files (i.e. images, music and video files) so that it cannot be observed [1]. The term steganography originates from the Greek root words "steganos" and "graphein", which literally mean "covered writing". It is defined as the science of communicating secret data in an appropriate multimedia carrier, e.g., image, audio, text and video files [3]. Steganographic techniques allow one party to communicate information to another without a third party even knowing that the communication is occurring; the ways to deliver these "secret messages" vary greatly [3]. Our proposed method is called Enhanced SLDIP (ESLDIP). Its maximum hiding capacity (MHC) is higher than that of the previously proposed MSLDIP method, and its PSNR values are also higher, which means that the image quality of the ESLDIP method is better than that of the MSLDIP method while the maximum hiding capacity is also improved. The rest of this paper is organized as follows. In section 2, steganography is discussed: lingo, carriers and types. In section 3, related works are introduced. In section 4, the proposed method is discussed in detail. In section 5, the simulation results are given, and section 6 concludes the paper.
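    The SLDIP family of methods substitutes the last decimal digit of cover pixel values with secret digits. The following toy sketch illustrates only that basic idea; it is not the published MSLDIP/ESLDIP algorithms, whose details (including overflow handling for pixels near 255) are not given in this abstract.

```python
import numpy as np

def embed(cover, digits):
    """Replace the last decimal digit of the first len(digits) cover
    pixels with the secret digits (toy SLDIP-style embedding)."""
    stego = cover.astype(int).copy().ravel()
    for i, d in enumerate(digits):
        stego[i] = stego[i] - stego[i] % 10 + d
    return stego.reshape(cover.shape)

def extract(stego, n):
    """Read the secret back as the last decimal digit of each pixel."""
    return [int(p) % 10 for p in stego.ravel()[:n]]

cover = np.array([[123, 87], [200, 54]])   # toy 2x2 cover image
secret = [7, 3, 9]
stego = embed(cover, secret)
print(stego.ravel(), extract(stego, 3))
```

    Each pixel changes by at most 9 gray levels, which is what keeps the PSNR of such schemes high; the capacity is one decimal digit per pixel.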

  4. A Frequency Splitting Method For CFM Imaging

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    The performance of conventional CFM imaging will often be degraded due to the relatively low number of pulses (4-10) used for each velocity estimate. To circumvent this problem we propose a new method using frequency splitting (FS). The FS method uses broad band chirps as excitation pulses instead of narrow band pulses as in conventional CFM imaging. By appropriate filtration, the returned signals are divided into a number of narrow band signals which are approximately disjoint. After clutter filtering the velocities are found from each frequency band using a conventional autocorrelation estimator. … A 5 MHz linear array transducer was used to scan a vessel situated at 30 mm depth with a maximum flow velocity of 0.1 m/s. The pulse repetition frequency was 1.8 kHz and the angle between the flow and the beam was 60 deg. A 15 μs chirp was used as excitation pulse and 40 independent velocity …

  5. A Method for Denoising Image Contours

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2017-12-01

    Full Text Available Edge detection techniques have to compromise between sensitivity and noise. In order for the main contours to be uninterrupted, the level of sensitivity has to be raised, which however has the negative side effect of producing a multitude of insignificant contours (noise). This article proposes a method for removing this noise, which acts directly on the binary representation of the image contours.

  6. Data-driven fault detection for industrial processes canonical correlation analysis and projection based methods

    CERN Document Server

    Chen, Zhiwen

    2017-01-01

    Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in past decades, its application to large-scale industrial processes is limited because accurate models are difficult to build. Motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort and wide application scope. For performance evaluation of FD methods, a new index is also developed. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...
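    A CCA-based residual generator of the kind this book develops can be sketched as follows: whiten the input and output data, take the SVD of the whitened cross-covariance to obtain paired canonical variates, and monitor the difference between each pair. All data, dimensions and the fault magnitude below are synthetic; this is an illustrative sketch, not the book's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fault-free training data: 2 inputs, 2 outputs linearly related plus noise
N = 5000
u = rng.standard_normal((2, N))
y = np.array([[1.0, 0.5], [0.3, 2.0]]) @ u + 0.1 * rng.standard_normal((2, N))
mu_u, mu_y = u.mean(1, keepdims=True), y.mean(1, keepdims=True)

# Whiten inputs and outputs, then SVD the whitened cross-covariance
Wu = np.linalg.inv(np.linalg.cholesky(np.cov(u)))
Wy = np.linalg.inv(np.linalg.cholesky(np.cov(y)))
Suy = (u - mu_u) @ (y - mu_y).T / (N - 1)
J, rho, Lt = np.linalg.svd(Wu @ Suy @ Wy.T)   # rho = canonical correlations

def residual(u_new, y_new):
    """CCA residual: paired canonical variates track each other on
    fault-free data; a fault in y breaks the pairing."""
    zu = J.T @ Wu @ (u_new - mu_u)
    zy = Lt @ Wy @ (y_new - mu_y)
    return zy - np.diag(rho) @ zu

r_ok = residual(u, y)
r_fault = residual(u, y + np.array([[3.0], [0.0]]))   # sensor offset fault
print(np.abs(r_ok.mean(1)).max(), np.abs(r_fault.mean(1)).max())
```

    On fault-free data the residual has zero mean and covariance I - diag(rho)^2, which gives a natural threshold for detection; the offset fault shifts the residual mean well away from zero.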

  7. Diffusion weighted imaging by MR method

    International Nuclear Information System (INIS)

    Horikawa, Yoshiharu; Naruse, Shoji; Ebisu, Toshihiko; Tokumitsu, Takuaki; Ueda, Satoshi; Tanaka, Chuzo; Higuchi, Toshihiro; Umeda, Masahiro.

    1993-01-01

    Diffusion weighted magnetic resonance imaging is a recently developed technique used to examine the micromovement of water molecules in vivo. We have applied this technique to examine various kinds of brain diseases, both experimentally and clinically. The apparent diffusion coefficient (ADC) calculated in vivo showed reliable values. In experimentally induced brain edema in rats, the pathophysiologically different types of edema (cytotoxic and vasogenic) could be differentiated on the diffusion weighted MR images: cytotoxic brain edema showed high intensity (slower diffusion) on the diffusion weighted images, whereas vasogenic brain edema showed a low intensity image (faster diffusion). Diffusion anisotropy was demonstrated according to the direction of myelinated fibers and the applied motion probing gradient (MPG). This anisotropy was also demonstrated in human brain tissue along the course of the corpus callosum, pyramidal tract and optic radiation. In brain ischemia cases, lesions were detected as high signal intensity areas even one hour after the onset of ischemia. Diffusion was faster in brain tumors compared with normal brain, although histological differences were not clearly reflected by the ADC value. In epidermoid tumor cases, a characteristically high intensity was demonstrated, and the border with cerebrospinal fluid was clearly delineated. The new clinical information obtainable with this molecular diffusion method will prove useful in various clinical studies. (author)
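    The ADC mentioned above follows from the mono-exponential signal model S_b = S_0 · exp(-b · ADC), so two acquisitions (b = 0 and one diffusion weighting) suffice for a pixel-wise estimate. A synthetic sketch (all image values are invented; b = 1000 s/mm^2 is merely a common clinical choice):

```python
import numpy as np

# ADC from two diffusion weightings: S_b = S_0 * exp(-b * ADC)
#   =>  ADC = ln(S_0 / S_b) / b
b = 1000.0                                                  # s/mm^2
S0 = np.array([[1000.0, 800.0], [900.0, 1100.0]])           # b = 0 image (toy)
adc_true = np.array([[0.7e-3, 0.8e-3], [3.0e-3, 1.0e-3]])   # mm^2/s (toy)
Sb = S0 * np.exp(-b * adc_true)          # simulated diffusion-weighted image

adc = np.log(S0 / Sb) / b                # pixel-wise ADC map
print(adc)
```

    Low ADC (restricted diffusion, as in cytotoxic edema) leaves the diffusion-weighted signal bright, while high ADC (as in CSF or vasogenic edema) attenuates it, which is exactly the contrast behavior described in the abstract.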

  8. Automatic analysis of digitized TV-images by a computer-driven optical microscope

    International Nuclear Information System (INIS)

    Rosa, G.; Di Bartolomeo, A.; Grella, G.; Romano, G.

    1997-01-01

    New methods of image analysis and three-dimensional pattern recognition were developed in order to perform the automatic scan of nuclear emulsion pellicles. An optical microscope, with a motorized stage, was equipped with a CCD camera and an image digitizer, and interfaced to a personal computer. Selected software routines inspired the design of a dedicated hardware processor. Fast operation, high efficiency and accuracy were achieved. First applications to high-energy physics experiments are reported. Further improvements are in progress, based on a high-resolution fast CCD camera and on programmable digital signal processors. Applications to other research fields are envisaged. (orig.)

  9. LCP method for a planar passive dynamic walker based on an event-driven scheme

    Science.gov (United States)

    Zheng, Xu-Dong; Wang, Qi

    2018-06-01

    The main purpose of this paper is to present a linear complementarity problem (LCP) method for a planar passive dynamic walker with round feet based on an event-driven scheme. The passive dynamic walker is treated as a planar multi-rigid-body system. The dynamic equations of the passive dynamic walker are obtained by using Lagrange's equations of the second kind. The normal forces and frictional forces acting on the feet of the passive walker are described based on a modified Hertz contact model and Coulomb's law of dry friction. The state transition problem of stick-slip between feet and floor is formulated as an LCP, which is solved with an event-driven scheme. Finally, to validate the methodology, four gaits of the walker are simulated: the stance leg neither slips nor bounces; the stance leg slips without bouncing; the stance leg bounces without slipping; the walker stands after walking several steps.
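    An LCP of the kind used for the stick-slip transitions above asks for z >= 0 such that w = Mz + q >= 0 and z·w = 0. A minimal projected Gauss-Seidel solver on an invented 2x2 example (not the walker's actual contact matrices, which the abstract does not give):

```python
import numpy as np

def lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with
    w = M z + q >= 0 and z . w = 0 (converges e.g. for symmetric
    positive-definite M)."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            # residual of row i with z[i] removed, then project onto z >= 0
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = lcp_pgs(M, q)
w = M @ z + q
print(z, w)   # complementarity: z >= 0, w >= 0, z . w ~ 0
```

    In an event-driven scheme such a solve is performed at each detected contact event to decide which feet stick, slip, or separate.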

  10. Pipe break prediction based on evolutionary data-driven methods with brief recorded data

    International Nuclear Information System (INIS)

    Xu Qiang; Chen Qiuwen; Li Weifeng; Ma Jinfeng

    2011-01-01

    Pipe breaks often occur in water distribution networks, imposing great pressure on utility managers to secure a stable water supply. However, pipe breaks are hard to detect by conventional methods. It is therefore necessary to develop reliable and robust pipe break models to assess a pipe's probability of failure and then to optimize the pipe break detection scheme. In the absence of deterministic physical models of pipe break, data-driven techniques provide a promising approach to investigating the principles underlying pipe breaks. In this paper, two data-driven techniques, namely Genetic Programming (GP) and Evolutionary Polynomial Regression (EPR), are applied to develop pipe break models for the water distribution system of Beijing City. Comparison with the recorded pipe break data from 1987 to 2005 showed that the models have great capability to produce reliable predictions. The models can be used to prioritize pipes for break inspection and thus improve detection efficiency.

  11. A Simple Method for Measuring the Verticality of Small-Diameter Driven Wells

    DEFF Research Database (Denmark)

    Kjeldsen, Peter; Skov, Bent

    1994-01-01

    The presence of stones, solid waste, and other obstructions can deflect small-diameter driven wells during installation, leading to deviations of the well from its intended position. This could lead to erroneous results, especially for measurements of ground water levels by water level meters. A simple method was developed to measure deviations from the intended positions of well screens and determine correction factors required for proper measurement of ground water levels in nonvertical wells. The method is based upon measurement of the hydrostatic pressure in the bottom of a water column … ground water flow directions.

  12. Concentration gradient driven molecular dynamics: a new method for simulations of membrane permeation and separation.

    Science.gov (United States)

    Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur

    2017-05-01

    In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.

  13. A data driven method to measure electron charge mis-identification rate

    CERN Document Server

    Bakhshiansohi, Hamed

    2009-01-01

    Electron charge mis-measurement is an important challenge in analyses which depend on the charge of the electron. To estimate the probability of electron charge mis-measurement, a data driven method is introduced, and good agreement with MC based methods is achieved. The third moment of the φ distribution of hits in the electron SuperCluster is studied, and the correlation between this variable and the electron charge is investigated. Using this 'new' variable and some other variables, the electron charge measurement is improved by two different approaches.

  14. A nuclear method to authenticate Buddha images

    International Nuclear Information System (INIS)

    Khaweerat, S; Ratanatongchai, W; Channuie, J; Wonglee, S; Picha, R; Promping, J; Silva, K; Liamsuwan, T

    2015-01-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people use their individual skills to make the judgment, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal respectively the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 10^5 n cm^-2 s^-1 was applied. NAAR needed a higher neutron flux of 10^12 n cm^-2 s^-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples, and similarity of the profiles played a key role in the classification of the samples. The results provide visual evidence to enhance the reliability of authenticity approval. The method can be further developed for routine practice, which would impact thousands of customers in Thailand. (paper)

  15. A nuclear method to authenticate Buddha images

    Science.gov (United States)

    Khaweerat, S.; Ratanatongchai, W.; Channuie, J.; Wonglee, S.; Picha, R.; Promping, J.; Silva, K.; Liamsuwan, T.

    2015-05-01

    The value of Buddha images in Thailand varies dramatically depending on authentication and provenance. In general, people use their individual skills to make the judgment, which frequently leads to obscurity, deception and illegal activities. Here, we propose two non-destructive techniques, neutron radiography (NR) and neutron activation autoradiography (NAAR), to reveal respectively the structural and elemental profiles of small Buddha images. For NR, a thermal neutron flux of 10^5 n cm^-2 s^-1 was applied. NAAR needed a higher neutron flux of 10^12 n cm^-2 s^-1 to activate the samples. Results from NR and NAAR revealed unique characteristics of the samples, and similarity of the profiles played a key role in the classification of the samples. The results provide visual evidence to enhance the reliability of authenticity approval. The method can be further developed for routine practice, which would impact thousands of customers in Thailand.

  16. Ivane S. Beritashvili (1884-1974): from spinal cord reflexes to image-driven behavior.

    Science.gov (United States)

    Tsagareli, M G; Doty, R W

    2009-10-20

    Ivane Beritashvili ("Beritoff" in Russian, and often in Western languages) was a major figure in 20th-century neuroscience. Mastering the string galvanometer, he founded the electrophysiology of spinal cord reflexes, showing that inhibition is a distinctly different process from excitation, contrary to the concepts of his famous mentor, Wedensky. Work on postural reflexes with Magnus was cut short by World War I, but he later demonstrated that navigation in two-dimensional space without vision is a function solely of the vestibular system rather than of muscle proprioception. Persevering in his experiments despite postwar turmoil, he founded an enduring Physiology Institute in Tbilisi, where he pursued an ingenious and extensive investigation of comparative memory in vertebrates. This revealed the unique nature of mammalian memory processes, which he forthrightly called "image driven," and distinguished them unequivocally from those underlying conditional reflexes. For some 30 years the Stalinist terror confined his publications to the Russian language. Work with his colleague, Chichinadze, discovering that memory confined to one cerebral hemisphere could be accessed by the other via a specific forebrain commissure, did reach the West, and ultimately led to recognition of the fascinating "split brain" condition. In the 1950s he was removed from his professorial position for 5 years as being "anti-Pavlovian." Restored to favor, he was honorary president of the "Moscow Colloquium" that saw the foundation of the International Brain Research Organization.

  17. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification.

    Science.gov (United States)

    Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii

    2017-01-01

    Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
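    CNN-to-SNN conversion of this kind rests on the observation that the firing rate of a non-leaky integrate-and-fire neuron with reset-by-subtraction approximates a ReLU of its input, saturating at one spike per timestep. A minimal simulation of that relationship (threshold and simulation length are illustrative, not the paper's settings):

```python
import numpy as np

def if_rate(a, T=1000, v_thresh=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron driven by a
    constant input a, with reset-by-subtraction after each spike."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a                   # integrate the input
        if v >= v_thresh:
            v -= v_thresh        # reset by subtraction keeps the remainder
            spikes += 1
    return spikes / T

for a in [-0.2, 0.0, 0.3, 0.75]:
    print(a, if_rate(a))         # rate tracks max(0, a), capped at 1
```

    Reset-by-subtraction (rather than reset-to-zero) is what keeps the rate proportional to the input instead of quantizing it, which is one of the corrections that makes deep-network conversion accurate.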

  18. Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.

    Science.gov (United States)

    Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L

    2016-02-27

    Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.

  19. Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification

    Directory of Open Access Journals (Sweden)

    Bodo Rueckauer

    2017-12-01

    Full Text Available Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, thereby allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

  20. Research of ART method in CT image reconstruction

    International Nuclear Information System (INIS)

    Li Zhipeng; Cong Peng; Wu Haifeng

    2005-01-01

    This paper studied the Algebraic Reconstruction Technique (ART) in CT image reconstruction, discussed the influence of the number of rays on image quality, and showed that adopting a smoothing method yields a high-quality CT image. (authors)
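As an illustration of the basic ART update (not the paper's specific implementation), the Kaczmarz iteration below reconstructs a tiny 2×2 image from its row and column ray sums; the system size and relaxation factor are illustrative:

```python
import numpy as np

# Kaczmarz-style ART: cycle through rays (rows of A), projecting the
# current image estimate onto the hyperplane of each ray-sum equation.
def art_reconstruct(A, p, n_iters=50, relax=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom == 0.0:
                continue
            # correction proportional to the residual of ray i
            x += relax * (p[i] - ai @ x) / denom * ai
    return x

# Tiny 2x2 "image" probed by 4 rays (2 row sums + 2 column sums).
true_img = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
p = A @ true_img
recon = art_reconstruct(A, p, n_iters=200)
print(np.round(recon, 2))
```

Starting from zero, the iterates stay in the row space of A, so for this consistent system ART converges to the minimum-norm solution.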

  1. A result-driven minimum blocking method for PageRank parallel computing

    Science.gov (United States)

    Tao, Wan; Liu, Tao; Yu, Wei; Huang, Gan

    2017-01-01

    Matrix blocking is a common method for improving the computational efficiency of PageRank, but the blocking rules are hard to determine and the subsequent calculation is complicated. To tackle these problems, we propose a minimum blocking method driven by result needs to accomplish a parallel implementation of the PageRank algorithm. The minimum blocking stores only the elements that are necessary for the result matrix. In return, the subsequent calculation becomes simple and the cost of I/O transmission is cut down. We ran experiments on several matrices of different data sizes and sparsity degrees. The results show that the proposed method has better computational efficiency than traditional blocking methods.
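For reference, the computation being blocked is the standard PageRank power iteration. The sketch below is the unblocked single-machine baseline (the damping factor and toy link graph are illustrative); the paper's contribution is in how the matrix is partitioned for parallel I/O, which this sketch does not attempt.

```python
import numpy as np

# Baseline PageRank by power iteration on a small link graph.
def pagerank(M, d=0.85, tol=1e-10):
    n = M.shape[0]
    r = np.full(n, 1.0 / n)
    while True:
        r_new = d * M @ r + (1 - d) / n   # damped random-surfer update
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Column-stochastic link matrix: page j's outlinks are column j.
# Page 0 links to 1 and 2; page 1 links to 2; page 2 links to 0.
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
r = pagerank(M)
print(np.round(r, 4))
```

Since M is column-stochastic, each update preserves the total rank mass of 1; page 2, which receives links from both other pages, ends up ranked highest.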

  2. Diffusion tensor magnetic resonance imaging driven growth modeling for radiotherapy target definition in glioblastoma.

    Science.gov (United States)

    Jensen, Morten B; Guldberg, Trine L; Harbøll, Anja; Lukacova, Slávka; Kallehauge, Jesper F

    2017-11-01

    The clinical target volume (CTV) in radiotherapy is routinely based on gadolinium contrast enhanced T1 weighted (T1w + Gd) and T2 weighted fluid attenuated inversion recovery (T2w FLAIR) magnetic resonance imaging (MRI) sequences, which have been shown to over- or underestimate the microscopic tumor cell spread. Gliomas favor spread along the white matter fiber tracts. Tumor growth models incorporating MRI diffusion tensors (DTI) make it possible to account more consistently for glioma growth. The aim of the study was to investigate the potential of a DTI driven growth model to improve target definition in glioblastoma (GBM). Eleven GBM patients were scanned using T1w, T2w FLAIR, T1w + Gd and DTI. The brain was segmented into white matter, gray matter and cerebrospinal fluid. The Fisher-Kolmogorov growth model was used assuming uniform proliferation and a white-to-gray matter diffusion ratio of 10. The tensor directionality was tested using an anisotropy weighting parameter set to zero (γ0) and twenty (γ20). The volumetric comparison was performed using the Hausdorff distance, the Dice similarity coefficient (DSC) and the surface area. The median of the standard CTV (CTVstandard) was 180 cm³. The median surface area of CTVstandard was 211 cm². The median surface areas of CTV γ0 and CTV γ20 significantly increased to 338 and 376 cm², respectively. The Hausdorff distance was greater than zero and significantly increased for both CTV γ0 and CTV γ20, with respective medians of 18.7 and 25.2 mm. The DSC for both CTV γ0 and CTV γ20 were significantly below one, with respective medians of 0.74 and 0.72, which means that 74 and 72% of CTVstandard were included in CTV γ0 and CTV γ20, respectively. DTI driven growth models result in CTVs with a significantly increased surface area, a significantly increased Hausdorff distance and a decreased overlap between the standard and model derived volume.
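The Fisher-Kolmogorov model referred to above is the reaction-diffusion equation dc/dt = ∇·(D∇c) + ρc(1−c). A 1-D explicit-step sketch with the study's 10:1 white/gray diffusion ratio (all other numbers, including the grid, ρ, and time step, are illustrative assumptions; spatially varying D is handled crudely here, not in divergence form):

```python
import numpy as np

# 1-D Fisher-Kolmogorov step: dc/dt = D d2c/dx2 + rho * c * (1 - c).
nx, dx, dt = 200, 0.1, 0.004
D = np.where(np.arange(nx) < nx // 2, 1.0, 0.1)  # white | gray matter
rho = 1.0
c = np.zeros(nx)
c[nx // 2 - 5:nx // 2 + 5] = 0.5                 # initial tumor seed

for _ in range(500):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    c = np.clip(c + dt * (D * lap + rho * c * (1 - c)), 0.0, 1.0)

# The front invades the high-diffusion (white matter) half faster,
# which is the mechanism behind the anisotropic CTV expansion.
white_extent = np.count_nonzero(c[:nx // 2] > 0.05)
gray_extent = np.count_nonzero(c[nx // 2:] > 0.05)
print(white_extent, gray_extent)
```

The traveling-front speed scales as 2√(Dρ), so the √10 ≈ 3x faster invasion of white matter is what stretches the model-derived CTV along fiber tracts.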

  3. A finite volume method for density driven flows in porous media

    Directory of Open Access Journals (Sweden)

    Hilhorst Danielle

    2013-01-01

    Full Text Available In this paper, we apply a semi-implicit finite volume method for the numerical simulation of density driven flows in porous media; this amounts to solving a nonlinear convection-diffusion parabolic equation for the concentration coupled with an elliptic equation for the pressure. We compute the solutions for two specific problems: a problem involving a rotating interface between salt and fresh water and the classical but difficult Henry’s problem. All solutions are compared to results obtained by running FEflow, a commercial software package for the simulation of groundwater flow, mass and heat transfer in porous media.
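A stripped-down version of the concentration update can be sketched in one dimension: explicit upwind convection plus implicit diffusion, i.e., a semi-implicit finite-volume step. The velocity, which the paper obtains from the coupled elliptic pressure equation, is taken as a constant here, and all grid parameters are illustrative.

```python
import numpy as np

# 1-D semi-implicit step: explicit upwind convection, implicit diffusion.
n, dx, dt = 50, 0.02, 2e-4
u, D = 1.0, 1e-3
c = np.zeros(n)
c[:5] = 1.0                               # salt water entering from the left

# tridiagonal implicit diffusion matrix with no-flux boundaries
r = D * dt / dx**2
A = np.eye(n) * (1 + 2 * r)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -r
A[0, 0] = A[-1, -1] = 1 + r

for _ in range(1000):
    flux = u * c                          # upwind flux (u > 0: take left cell)
    conv = (flux - np.roll(flux, 1)) / dx
    conv[0] = 0.0                         # hold the inflow cell fixed
    c = np.linalg.solve(A, c - dt * conv)

# after t = 0.2 the front has advected from x = 0.1 to about x = 0.3
print(round(float(c[10]), 3), round(float(c[30]), 3))
```

Treating diffusion implicitly removes its time-step restriction, leaving only the mild CFL condition on the explicit convection term.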

  4. MCID: A Software Tool to Provide Monte Carlo Driven Dosimetric Calculations Using Multimodality NM Images

    International Nuclear Information System (INIS)

    Vergara Gil, Alex; Torres Aroche, Leonel A; Coca Péreza, Marco A; Pacilio, Massimiliano; Botta, Francesca; Cremonesi, Marta

    2016-01-01

    Aim: In this work, a new software tool (named MCID) to calculate patient specific absorbed dose in molecular radiotherapy, based on Monte Carlo simulation, is presented. Materials & Methods: The inputs for MCID are two co-registered medical images containing anatomical (CT) and functional (PET or SPECT) information of the patient. The anatomical image is converted to a density map, and tissue segmentation is performed considering compositions and densities from ICRU 44 and ICRP; the functional image provides the cumulative activity map at voxel level (figure 1). MCID creates an input file for Monte Carlo (MC) codes such as MCNP5 and GATE, and converts the MC outputs into an absorbed dose image. Results: The developed tool allows estimating dose distributions for non-uniform activity distributions and non-homogeneous tissues. It includes tools for the delineation of volumes of interest and for dosimetric data analysis. Procedures to decrease the calculation time are implemented in order to allow its use in clinical settings. Dose-volume histograms are computed and presented from the obtained dosimetric maps, as well as dose statistics such as mean, minimum and maximum dose values; the results can be saved in common medical image formats (Interfile, DICOM, Analyze, MetaImage). MCID was validated by comparing estimated dose values against reference data, such as gold-standard phantoms (OLINDA's spheres) and other MC simulations of non-homogeneous phantoms. A good agreement was obtained in spheres ranging from 1 g to 1 kg in mass and in non-homogeneous phantoms. Clinical studies were also examined. Dosimetric evaluations in patients undergoing 153Sm-EDTMP therapy for osseous metastases showed non-significant differences with calculations performed by traditional methods. The possibility of creating input files to perform the simulations using the GATE code has increased the MCID applications and improved its functionality. Different clinical situations including PET and SPECT

  5. Beam transient analyses of Accelerator Driven Subcritical Reactors based on neutron transport method

    Energy Technology Data Exchange (ETDEWEB)

    He, Mingtao; Wu, Hongchun [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China); Wang, Kunpeng [Nuclear and Radiation Safety Center, PO Box 8088, Beijing 100082 (China); Li, Xunzhao; Zhou, Shengcheng [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an 710049, Shaanxi (China)

    2015-12-15

    Highlights: • A transport-based kinetics code for Accelerator Driven Subcritical Reactors is developed. • The performance of different kinetics methods adapted to the ADSR is investigated. • The impacts of neutronic parameters deteriorating with fuel depletion are investigated. - Abstract: The Accelerator Driven Subcritical Reactor (ADSR) is almost entirely external-source dominated, since there is no additional reactivity control mechanism in most designs. This paper focuses on beam-induced transients, analyzed with an in-house developed dynamic analysis code. The performance of different kinetics methods adapted to the ADSR is investigated, including the point kinetics approximation and space-time kinetics methods. Then, the transient responses to beam trip and beam overpower are calculated and analyzed for an ADSR design dedicated to minor actinide transmutation. The impacts of some safety-related neutronics parameters deteriorating with fuel depletion are also investigated. The results show that the power distribution varying with burnup leads to large differences in the temperature responses during transients, while the impacts of kinetic parameters and feedback coefficients are not very obvious. Classification: Core physics.
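The point kinetics approximation mentioned above reduces a beam trip to two coupled ODEs for power and delayed-neutron precursors. A one-delayed-group sketch (all parameter values are illustrative, not the paper's ADSR design values):

```python
import numpy as np

# One-group point-kinetics sketch of a beam trip in a subcritical core:
#   dP/dt = (rho - beta)/Lambda * P + lambda_d * C + S(t)
#   dC/dt = beta/Lambda * P - lambda_d * C
rho, beta, lam_d, Lam = -0.03, 0.0065, 0.08, 1e-5
S0 = -rho / Lam                      # source level that sustains P = 1
P, C = 1.0, beta / (Lam * lam_d)     # equilibrium initial condition
dt = 1e-5
hist = []
for k in range(200000):              # simulate 2 s by explicit Euler
    t = k * dt
    src = S0 if t < 0.5 else 0.0     # beam trips at t = 0.5 s
    dP = (rho - beta) / Lam * P + lam_d * C + src
    dC = beta / Lam * P - lam_d * C
    P += dt * dP
    C += dt * dC
    hist.append(P)

# power drops promptly to the delayed-neutron level, then decays slowly
print(round(hist[int(0.4 / dt)], 3), round(hist[-1], 3))
```

The prompt drop to roughly β/(β−ρ) of nominal power, followed by a slow decay governed by the precursors, is the characteristic beam-trip response of a source-dominated subcritical core.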

  6. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Full Text Available Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method is able to integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
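The MAP-plus-gradient-descent core can be illustrated on a toy case: two observations of the same scene with different noise levels, fused by minimizing the Gaussian negative log-posterior E(x) = Σ_k ||x − y_k||² / (2σ_k²). The paper couples full spatial/spectral/temporal observation models; this sketch keeps only the optimization skeleton, and all data and step sizes are illustrative.

```python
import numpy as np

# Toy MAP fusion of two noisy observations by gradient descent.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 1, size=(8, 8))
sig = [0.05, 0.2]                      # per-observation noise std devs
obs = [truth + rng.normal(0, s, truth.shape) for s in sig]

x = np.zeros_like(truth)
lr = 0.002
for _ in range(300):
    grad = sum((x - y) / s**2 for y, s in zip(obs, sig))
    x -= lr * grad

# For this Gaussian toy case the MAP solution has a closed form:
# the inverse-variance weighted mean of the observations.
w = np.array([1 / s**2 for s in sig])
expected = (w[0] * obs[0] + w[1] * obs[1]) / w.sum()
print(float(np.abs(x - expected).max()))
```

Gradient descent converges to the closed-form answer here; the value of the MAP framework is that the same machinery still works when the observation models (blurring, downsampling, spectral response) make no closed form available.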

  7. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    International Nuclear Information System (INIS)

    Bakosi, Jozsef; Ristorcelli, Raymond J.

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.
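The beta-distribution mixing model mentioned above is typically fit by moment matching: the two shape parameters follow from the mean and variance of the mixture state. A minimal sketch (the mean and variance values are illustrative, and scipy is assumed available):

```python
import numpy as np
from scipy import stats

# Moment-matched beta PDF for an asymmetric, partially mixed state.
def beta_params(mean, var):
    # valid when 0 < var < mean * (1 - mean)
    k = mean * (1 - mean) / var - 1
    return mean * k, (1 - mean) * k

a, b = beta_params(0.3, 0.05)      # skewed toward the lighter fluid
d = stats.beta(a, b)
print(round(float(d.mean()), 3), round(float(d.var()), 3))
```

Because the beta family spans symmetric, skewed, and bimodal shapes on [0, 1], a single two-parameter fit can follow the density PDF from segregated initial conditions through transition to the fully mixed state.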

  8. A data-driven prediction method for fast-slow systems

    Science.gov (United States)

    Groth, Andreas; Chekroun, Mickael; Kondrashov, Dmitri; Ghil, Michael

    2016-04-01

    In this work, we present a prediction method for processes that exhibit a mixture of variability on slow and fast scales. The method relies on combining empirical model reduction (EMR) with singular spectrum analysis (SSA). EMR is a data-driven methodology for constructing stochastic low-dimensional models that account for nonlinearity and serial correlation in the estimated noise, while SSA provides a decomposition of the complex dynamics into low-order components that capture spatio-temporal behavior on different time scales. Our study focuses on the data-driven modeling of partial observations from dynamical systems that exhibit power spectra with broad peaks. The main result in this talk is that the combination of SSA pre-filtering with EMR modeling improves, under certain circumstances, the modeling and prediction skill of such a system, as compared to a standard EMR prediction based on raw data. Specifically, it is the separation into "fast" and "slow" temporal scales by the SSA pre-filtering that achieves the improvement. We show, in particular, that the resulting EMR-SSA emulators help predict intermittent behavior such as rapid transitions between specific regions of the system's phase space. This capability of the EMR-SSA prediction will be demonstrated on two low-dimensional models: the Rössler system and a Lotka-Volterra model for interspecies competition. In either case, the chaotic dynamics is produced through a Shilnikov-type mechanism and we argue that the latter seems to be an important ingredient for the good prediction skills of EMR-SSA emulators. Shilnikov-type behavior has been shown to arise in various complex geophysical fluid models, such as baroclinic quasi-geostrophic flows in the mid-latitude atmosphere and wind-driven double-gyre ocean circulation models. This pervasiveness of the Shilnikov mechanism of fast-slow transition opens interesting perspectives for the extension of the proposed EMR-SSA approach to more realistic situations.
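The SSA pre-filtering step can be sketched directly: embed the series into a trajectory matrix, take its SVD, and reconstruct the leading (slow) component by anti-diagonal averaging. The window length, component count, and test signal below are illustrative assumptions.

```python
import numpy as np

# Singular spectrum analysis: extract the slow component of a series.
def ssa_leading(x, M=30, n_comp=2):
    N = len(x)
    K = N - M + 1
    traj = np.column_stack([x[i:i + M] for i in range(K)])   # M x K
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]      # rank-n_comp
    # Hankelize: average anti-diagonals back into a series of length N
    rec = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):
        rec[j:j + M] += approx[:, j]
        cnt[j:j + M] += 1
    return rec / cnt

rng = np.random.default_rng(6)
t = np.arange(500)
slow = np.sin(2 * np.pi * t / 100)          # slow oscillation
fast = 0.3 * rng.normal(size=500)           # fast, noisy variability
rec = ssa_leading(slow + fast)
err = float(np.sqrt(np.mean((rec - slow) ** 2)))
print(round(err, 3))
```

A sinusoid occupies exactly one pair of singular components, so keeping the leading pair recovers the slow mode while discarding most of the fast variability; EMR is then trained on such filtered components.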

  9. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties when labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method achieves better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarimetric SAR images.
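The soft-decision idea can be sketched as follows: replace each channel's hard threshold with a sigmoid membership in [0, 1], then fuse the channels with a fuzzy AND (minimum) before the final decision. The thresholds, sigmoid width, and random image are illustrative assumptions, not the paper's reasoning rules.

```python
import numpy as np

# Sigmoid membership around a per-channel threshold T.
def membership(channel, T, width=10.0):
    return 1.0 / (1.0 + np.exp(-(channel.astype(float) - T) / width))

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(32, 32, 3))       # toy RGB image
T = np.array([120, 130, 110])                      # assumed thresholds
mu = np.stack([membership(img[..., ch], T[ch]) for ch in range(3)])
target = mu.min(axis=0) > 0.5                      # fuzzy AND, then decide
print(target.shape, round(float(target.mean()), 3))
```

Pixels far from all thresholds get memberships near 0 or 1 and behave as in hard thresholding; only the ambiguous pixels near a threshold are decided by the combined evidence of all channels.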

  10. A copula-based sampling method for data-driven prognostics

    International Nuclear Information System (INIS)

    Xi, Zhimin; Jing, Rong; Wang, Pingfeng; Hu, Chao

    2014-01-01

    This paper develops a copula-based sampling method for data-driven prognostics. The method essentially consists of an offline training process and an online prediction process: (i) the offline training process builds a statistical relationship between the failure time and the time realizations at specified degradation levels on the basis of offline training data sets; and (ii) the online prediction process identifies probable failure times for online testing units based on the statistical model constructed in the offline process and the online testing data. Our contributions in this paper are three-fold, namely the definition of a generic health index system to quantify the health degradation of an engineering system, the construction of a copula-based statistical model to learn the statistical relationship between the failure time and the time realizations at specified degradation levels, and the development of a simulation-based approach for the prediction of remaining useful life (RUL). Two engineering case studies, namely the electric cooling fan health prognostics and the 2008 IEEE PHM challenge problem, are employed to demonstrate the effectiveness of the proposed methodology. - Highlights: • We develop a novel mechanism for data-driven prognostics. • A generic health index system quantifies health degradation of engineering systems. • The offline training model is constructed based on a Bayesian copula model. • Remaining useful life is predicted from a simulation-based approach.
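The essential copula trick is to couple a dependence structure with arbitrary marginals. A Gaussian-copula sketch (the marginal families, their parameters, and the correlation are illustrative assumptions, not the paper's fitted model; scipy is assumed available):

```python
import numpy as np
from scipy import stats

# Sample (degradation-level time, failure time) pairs whose dependence
# comes from a Gaussian copula and whose marginals are arbitrary.
rng = np.random.default_rng(1)
rho = 0.8
z = rng.multivariate_normal([0, 0], [[1.0, rho], [rho, 1.0]], size=20000)
u = stats.norm.cdf(z)                          # correlated uniforms
t_deg = stats.weibull_min.ppf(u[:, 0], c=2.0, scale=100.0)
t_fail = stats.lognorm.ppf(u[:, 1], s=0.3, scale=150.0)

# Rank correlation survives the marginal transforms (copula invariance);
# for a Gaussian copula, Kendall's tau = (2/pi) * arcsin(rho) ~ 0.59.
tau, _ = stats.kendalltau(t_deg, t_fail)
print(round(float(tau), 2))
```

Given online observations of the degradation-time marginal, conditional sampling from such a model yields the failure-time distribution, and hence RUL estimates.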

  11. Method and apparatus for imaging volume data

    International Nuclear Information System (INIS)

    Drebin, R.; Carpenter, L.C.

    1987-01-01

    An imaging system projects a two dimensional representation of three dimensional volumes in which surface boundaries and objects internal to the volumes are readily shown, and hidden surfaces and the surface boundaries themselves are accurately rendered, by determining volume elements or voxels. An image volume representing a volume object or data structure is written into memory. A color and opacity is assigned to each voxel within the volume and stored as a three dimensional data volume with red (R), green (G), blue (B), and opacity (A) components. The RGBA assignment for each voxel is determined based on the percentage component composition of the materials represented in the volume, and thus the percentage of color and transparency associated with those materials. The voxels in the RGBA volume are used as mathematical filters such that each successive voxel filter is overlaid on a prior background voxel filter. Through a linear interpolation, a new background filter is determined and generated. The interpolation is successively performed for all voxels up to the frontmost voxel for the plane of view. The method is repeated until all display voxels are determined for the plane of view. (author)
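The successive filtering described above is back-to-front alpha compositing along a viewing ray. A minimal sketch of that accumulation (the two-voxel ray is an illustrative example):

```python
import numpy as np

# Back-to-front compositing along a ray of RGBA voxels: each voxel's
# color is linearly interpolated with the background behind it,
# weighted by the voxel's opacity A.
def composite(ray_rgba):
    # ray_rgba: (n, 4) array ordered back -> front, channels R, G, B, A
    bg = np.zeros(3)
    for r, g, b, a in ray_rgba:
        bg = a * np.array([r, g, b]) + (1.0 - a) * bg
    return bg

ray = np.array([
    [1.0, 0.0, 0.0, 1.0],   # opaque red voxel at the back
    [0.0, 1.0, 0.0, 0.5],   # half-transparent green voxel in front
])
print(composite(ray))  # -> [0.5, 0.5, 0.0]
```

Repeating this accumulation for every ray in the viewing plane yields the full 2D projection of the volume.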

  12. Dual wavelength imaging of a scrape-off layer in an advanced beam-driven field-reversed configuration

    Energy Technology Data Exchange (ETDEWEB)

    Osin, D.; Schindler, T., E-mail: dosin@trialphaenergy.com [Tri Alpha Energy, Inc., P.O. Box 7010, Rancho Santa Margarita, California 92688-7010 (United States)

    2016-11-15

    A dual wavelength imaging system has been developed and installed on C-2U to capture 2D images of a He jet in the Scrape-Off Layer (SOL) of an advanced beam-driven Field-Reversed Configuration (FRC) plasma. The system was designed to optically split two identical images and pass them through 1 nm FWHM filters. The dual wavelength images are focused side by side on a large-format CCD chip and recorded simultaneously with a time resolution down to 10 μs using a gated micro-channel plate. The relatively compact optical system images a 10 cm plasma region with a spatial resolution of 0.2 cm and can be used in a harsh environment with high electromagnetic noise and high magnetic field. The dual wavelength imaging system provides 2D images of either electron density or temperature by observing spectral line pairs emitted by He jet atoms in the SOL. A large field of view, combined with good spatial and temporal resolution of the imaging system, allows visualization of macro-flows in the SOL. The first 2D images of the electron density and temperature observed in the SOL of the C-2U FRC are presented.

  13. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability when physical models are unknown. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
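The just-in-time learning idea can be sketched simply: for each query, fit a local model on its nearest training samples and flag a fault when the residual exceeds a threshold. The process, neighborhood size, and thresholds below are illustrative assumptions, not the paper's benchmark or detection statistic.

```python
import numpy as np

# JITL sketch: local linear model fitted on demand around each query.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.01 * rng.normal(size=300)   # nonlinear process

def jitl_residual(xq, yq, k=20):
    idx = np.argsort(np.abs(X[:, 0] - xq))[:k]          # nearest neighbours
    A = np.c_[np.ones(k), X[idx, 0]]
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)   # local linear fit
    return abs(yq - (coef[0] + coef[1] * xq))

normal = jitl_residual(0.3, np.sin(0.9))        # in-control sample
faulty = jitl_residual(0.3, np.sin(0.9) + 0.5)  # sensor bias fault
print(round(normal, 4), round(faulty, 4))
```

Because the model is rebuilt from the current neighborhood at every query, the detector adapts online to slow process drift without any global retraining, which is the appeal of JITL for nonlinear processes.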

  14. Subsurface imaging by electrical and EM methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    This report consists of 3 subjects. 1) Three dimensional inversion of resistivity data with topography: In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate irregular topography. In solving the inverse problem, the iterative least-squares approach comprising smoothness constraints was taken, along with the reciprocity approach in the calculation of the Jacobian. Furthermore, Active Constraint Balancing, which we recently developed to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing step and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole: The effects of borehole fluid and casing on the borehole EM responses should be thoroughly analyzed since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for the primary vertical magnetic field along the borehole axis was first formulated, and the fast Fourier transform is then applied to obtain the EM fields at any location in whole space. 3) High frequency electromagnetic impedance survey: At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of the horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance
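The smoothness-constrained least-squares step at the heart of such inversions solves (JᵀJ + λLᵀL)Δm = JᵀΔd, where J is the Jacobian of sensitivities and L a roughening operator. A toy 1-D sketch with random data (the sizes, λ, and Jacobian are illustrative; the report's Active Constraint Balancing, which varies λ spatially, is not reproduced here):

```python
import numpy as np

# One damped, smoothness-constrained Gauss-Newton model update.
rng = np.random.default_rng(10)
n_data, n_model = 30, 20
J = rng.normal(size=(n_data, n_model))        # Jacobian (sensitivities)
dd = rng.normal(size=n_data)                  # data residual vector
L = np.diff(np.eye(n_model), axis=0)          # first-difference roughener
lam = 10.0                                    # smoothness trade-off
dm = np.linalg.solve(J.T @ J + lam * L.T @ L, J.T @ dd)
print(dm.shape, round(float(np.abs(np.diff(dm)).sum()), 3))
```

Larger λ penalizes model roughness more strongly, trading data fit for a smoother (better-conditioned) resistivity image.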

  15. Image Registration Method in Radar Interferometry

    Directory of Open Access Journals (Sweden)

    S. Chelbi

    2015-08-01

    Full Text Available This article presents a methodology for determining the registration of an Interferometric Synthetic Aperture Radar (InSAR) image pair with half-pixel precision. Using two superposed Single Look Complex (SLC) radar images [1-4], we developed an iterative process to superpose these two images according to their correlation coefficient in a high-coherence area. This work concerns the exploitation of an ERS Tandem pair of SLC radar images of the Algiers area acquired on 03 January and 04 January 1994. The former is taken as the master image and the latter as the slave image.
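The integer-pixel stage of such a correlation-driven registration can be sketched as an exhaustive search for the offset maximizing the correlation coefficient. The synthetic images and search window below are illustrative; real SLC data would use a high-coherence patch as the abstract describes.

```python
import numpy as np

# Exhaustive integer-offset search maximizing the correlation coefficient.
rng = np.random.default_rng(3)
master = rng.normal(size=(64, 64))
true_shift = (3, 5)
slave = np.roll(master, true_shift, axis=(0, 1))   # synthetic slave image

best, best_cc = None, -np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        shifted = np.roll(slave, (-dy, -dx), axis=(0, 1))
        cc = np.corrcoef(master.ravel(), shifted.ravel())[0, 1]
        if cc > best_cc:
            best, best_cc = (dy, dx), cc
print(best, round(float(best_cc), 3))
```

The half-pixel precision quoted in the abstract would then come from refining around the integer peak, e.g., by oversampling the patches or fitting a parabola to the correlation surface near its maximum.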

  16. Numerical methods in image processing for applications in jewellery industry

    OpenAIRE

    Petrla, Martin

    2016-01-01

    The presented thesis deals with a problem from the field of image processing for application in multiple scanning of jewellery stones. The aim is to develop a method for preprocessing and subsequent mathematical registration of images in order to increase the effectiveness and reliability of the output quality control. For these purposes the thesis summarizes the mathematical definition of a digital image as well as the theoretical basis of image registration. It proposes a method adjusting every single image ...

  17. Keyhole imaging method for dynamic objects behind the occlusion area

    Science.gov (United States)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video images from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved in the form of frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to detect the image edges and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted to accomplish the initial matching of the images, and the RANSAC algorithm is then applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the image frame sequences in which every single frame is stitched. The results show that the video is clear and natural and the brightness transition is smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
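The RANSAC-plus-homography core of the stitching step can be sketched without the SIFT front end by generating synthetic point matches (a known homography plus gross outliers standing in for wrong SIFT matches; the matrix, point counts, and thresholds are illustrative):

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct Linear Transform from >= 4 point pairs (x, y) -> (u, v).
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the system
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=200, thresh=2.0, seed=4):
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = dlt_homography(src[idx], dst[idx])
        proj = np.c_[src, np.ones(len(src))] @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return dlt_homography(src[best_inliers], dst[best_inliers])

# Synthetic match set: a known homography plus 20% gross outliers.
rng = np.random.default_rng(5)
H_true = np.array([[1.0, 0.02, 10.0], [-0.01, 1.0, 5.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 100, size=(50, 2))
h = np.c_[src, np.ones(50)] @ H_true.T
dst = h[:, :2] / h[:, 2:3]
dst[:10] += rng.uniform(20, 40, size=(10, 2))          # wrong matches
H_est = ransac_homography(src, dst)
print(np.round(H_est, 3))
```

The final homography is re-estimated from the full inlier set, so the wrong matches have no influence on the warp used for stitching.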

  18. Stereo Imaging Velocimetry of Mixing Driven by Buoyancy Induced Flow Fields

    Science.gov (United States)

    Duval, W. M. B.; Jacqmin, D.; Bomani, B. M.; Alexander, I. J.; Kassemi, M.; Batur, C.; Tryggvason, B. V.; Lyubimov, D. V.; Lyubimova, T. P.

    2000-01-01

    Mixing of two fluids generated by steady and particularly g-jitter acceleration is fundamental to the understanding of transport phenomena in a microgravity environment. We propose to carry out flight and ground-based experiments to quantify flow fields due to g-jitter type accelerations using Stereo Imaging Velocimetry (SIV), and to measure the concentration field using laser fluorescence. The understanding of the effects of g-jitter on transport phenomena is of great practical interest to the microgravity community and impacts the design of experiments for the Space Shuttle as well as the International Space Station. The aim of our proposed research is to provide quantitative data to the community on the effects of g-jitter on flow fields due to mixing induced by buoyancy forces. The fundamental phenomenon of mixing occurs in a broad range of materials processing, encompassing the growth of opto-electronic materials and semiconductors (by directional freezing and physical vapor transport) as well as solution and protein crystal growth. In materials processing of these systems, crystal homogeneity, which is affected by the solutal field distribution, is one of the major issues. The understanding of fluid mixing driven by buoyancy forces, besides its importance as a topic in fundamental science, can contribute towards the understanding of how solutal fields behave under various body forces. The body forces of interest are steady acceleration and g-jitter acceleration as in a Space Shuttle environment or the International Space Station. Since control of the body force is important, the flight experiment will be carried out on a tunable microgravity vibration isolation mount, which will permit us to precisely input the desired forcing function to simulate a range of body forces. To that end, we propose to design a flight experiment that can only be carried out under microgravity conditions to fully exploit the effects of various body forces on fluid mixing. Recent

  19. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, then registration is carried out by using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; thus satisfactory registration and fusion results are obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which can compensate for the shortage of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
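The inter-slice interpolation step can be sketched directly: fit a cubic spline through each pixel's intensity profile along the slice axis and evaluate it at an intermediate position. The toy slice stack below is an illustrative assumption (real PET-CT slices replace the sin-valued frames); scipy is assumed available.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthesize a missing slice between acquired slices by per-pixel
# cubic spline interpolation along the slice (z) axis.
z = np.array([0.0, 1.0, 2.0, 3.0])                      # slice positions
slices = np.stack([np.full((4, 4), np.sin(zi)) for zi in z])  # toy data

cs = CubicSpline(z, slices, axis=0)     # one spline per pixel along z
mid = cs(1.5)                           # interpolated slice at z = 1.5
print(mid.shape, round(float(mid[0, 0]), 3))
```

Compared with linear interpolation between adjacent slices, the spline uses neighboring slices on both sides, which is what restores the smooth through-plane intensity variation the abstract refers to.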

  20. Performance-based parameter tuning method of model-driven PID control systems.

    Science.gov (United States)

    Zhao, Y M; Xie, W F; Tu, X W

    2012-05-01

    In this paper, a performance-based parameter tuning method for the model-driven Two-Degree-of-Freedom PID (MD TDOF PID) control system has been proposed to enhance the control performance of a process. Known for its ability to stabilize unstable processes, fast tracking of set-point changes and disturbance rejection, the MD TDOF PID has gained research interest recently. The tuning methods for the reported MD TDOF PID are based on the internal model control (IMC) method instead of optimizing performance indices. In this paper, an Integral of Time Absolute Error (ITAE) zero-position-error optimal tuning and noise effect minimizing method is proposed for tuning the two parameters in the MD TDOF PID control system to achieve the desired regulation and disturbance rejection performance. The comparison with a Two-Degree-of-Freedom control scheme by modified Smith predictor (TDOF CS MSP) and the designed MD TDOF PID tuned by the IMC tuning method demonstrates the effectiveness of the proposed tuning method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
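The ITAE index being optimized is ∫ t·|e(t)| dt over a step response. As a hedged illustration, the sketch below evaluates ITAE for a simple discrete PI loop on a first-order plant; the plant, gains, and horizon are illustrative stand-ins, not the paper's MD TDOF PID structure.

```python
import numpy as np

# ITAE of a unit set-point step for a PI controller on dy/dt = (u - y)/tau.
def itae(kp, ki, dt=0.01, t_end=10.0, tau=1.0):
    y, integ, score = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                      # tracking error
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (u - y) / tau          # explicit Euler plant step
        score += t * abs(e) * dt         # accumulate integral of t * |e|
    return score

loose, tuned = itae(0.5, 0.2), itae(2.0, 1.0)
print(round(loose, 2), round(tuned, 2))
```

The time weighting penalizes errors that persist late in the response, so ITAE-optimal tuning favors fast settling with little sustained oscillation; a tuner would minimize this score over the controller parameters.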

  1. Evaluation of processing methods for static radioisotope scan images

    International Nuclear Information System (INIS)

    Oakberg, J.A.

    1976-12-01

    Radioisotope scanning in the field of nuclear medicine provides a method for mapping a radioactive drug in the human body to produce maps (images) which prove useful in detecting abnormalities in vital organs. Even at best, radioisotope scanning methods produce images with poor counting statistics. One solution for improving the body scan images is to use dedicated small computers with appropriate software to process the scan data. Eleven methods for processing image data are compared

  2. Implicit Active Contours Driven by Local and Global Image Fitting Energy for Image Segmentation and Target Localization

    Directory of Open Access Journals (Sweden)

    Xiaosheng Yu

    2013-01-01

    Full Text Available We propose a novel active contour model in a variational level set formulation for image segmentation and target localization. We combine a local image fitting term and a global image fitting term to drive the contour evolution. Our model can efficiently segment images with intensity inhomogeneity, with the contour starting anywhere in the image. In its numerical implementation, an efficient numerical scheme is used to ensure sufficient numerical accuracy. We validated its effectiveness on numerous synthetic images and real images, and the promising experimental results show its advantages in terms of accuracy, efficiency, and robustness.
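The global fitting idea alone can be sketched as a curvature-free two-region model: alternate between updating the two region means and reassigning each pixel to the better-fitting mean. The paper adds a local fitting term and a proper level-set evolution on top of this core; the image and iteration count below are illustrative.

```python
import numpy as np

# Curvature-free global-fitting segmentation of a bright square.
rng = np.random.default_rng(8)
img = 0.2 + 0.02 * rng.normal(size=(40, 40))   # noisy background
img[10:30, 10:30] += 0.6                       # bright square target

inside = img > img.mean()                      # crude initialization
for _ in range(10):
    c1, c2 = img[inside].mean(), img[~inside].mean()   # region means
    inside = (img - c1) ** 2 < (img - c2) ** 2         # reassign pixels

truth = np.zeros((40, 40), bool)
truth[10:30, 10:30] = True
accuracy = float((inside == truth).mean())
print(round(accuracy, 3))
```

Because the decision depends only on the two fitted means rather than on local gradients, the partition converges to the same answer regardless of where the initial contour starts, which mirrors the initialization robustness claimed in the abstract.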

  3. Enhancement of Electroluminescence (EL) image measurements for failure quantification methods

    DEFF Research Database (Denmark)

    Parikh, Harsh; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    Enhanced quality images are necessary for EL image analysis and failure quantification. A method is proposed which determines image quality in terms of more accurate failure detection of solar panels through electroluminescence (EL) imaging technique. The goal of the paper is to determine the most...

  4. Method for Surface Scanning in Medical Imaging and Related Apparatus

    DEFF Research Database (Denmark)

    2015-01-01

    A method and apparatus for surface scanning in medical imaging is provided. The surface scanning apparatus comprises an image source, a first optical fiber bundle comprising first optical fibers having proximal ends and distal ends, and a first optical coupler for coupling an image from the image...

  5. Alternate method to realize image fusion

    International Nuclear Information System (INIS)

    Vargas, L.; Hernandez, F.; Fernandez, R.

    2005-01-01

    At present, imaging departments need to fuse images obtained from different apparatuses. Conventionally, X-ray tomography or magnetic resonance images are fused with functional images such as gammagrams and PET images. Fusion technology is sold with modern imaging equipment, but not all nuclear medicine departments have access to it. For this reason we analyzed, studied and found a solution so that all nuclear medicine departments can benefit from image fusion. The first indispensable requirement is a personal computer with the capacity to host image digitizer cards. Alternatively, if one has a gamma camera that can export images in JPG, GIF, TIFF or BMP formats, the digitizer card can be dispensed with and the images recorded on a disk for use on the personal computer. One of the following commercially available graphic design programs is required: Corel Draw, Photo Shop, FreeHand, Illustrator or Macromedia Flash; these are the ones we evaluated and they allow image fusion. Any of them works well, and only short training is required to manage them. A digital photographic camera with a resolution of at least 3.0 megapixels is also necessary. The procedure consists of photographing the radiological studies that the patient already has, selecting the images that demonstrate the pathology under study and that are concordant with the images created in the gammagraphic studies, whether planar or tomographic. The images are transferred to the personal computer and opened with the graphic design program, together with the gammagraphic images. The digital tools are then used to make the images transparent, crop them, adjust their sizes and create the fused images. The process is manual, and skill and experience are required to choose the images, the crops, the sizes and the degree of transparency. (Author)
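The manual transparency adjustment this record describes is, in effect, alpha blending of two co-registered images. A minimal numeric sketch (the arrays and names are toy stand-ins for an anatomical image and a gammagraphic image, not the authors' workflow):

```python
import numpy as np

def fuse(anatomical, functional, alpha=0.5):
    """Alpha-blend two co-registered images; alpha weights the functional image."""
    a = anatomical.astype(float)
    f = functional.astype(float)
    return (1.0 - alpha) * a + alpha * f

# Toy stand-ins: a uniform 4x4 "CT" background and a hot spot on the "gammagram".
ct = np.full((4, 4), 100.0)
spect = np.zeros((4, 4))
spect[1, 1] = 200.0

fused = fuse(ct, spect, alpha=0.5)
print(fused[1, 1], fused[0, 0])   # 150.0 at the hot spot, 50.0 elsewhere
```

In a graphics program the same operation is performed by stacking the two layers and lowering the opacity of the top one; sizes and cropping must be matched manually first, as the abstract notes.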

  6. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    Science.gov (United States)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it, while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged, and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potential of accessing complementary information in scientific studies of laser-driven shock compression.

  7. Exogenously-driven perceptual alternation of a bistable image: From the perspective of the visual change detection process.

    Science.gov (United States)

    Urakawa, Tomokazu; Aragaki, Tomoya; Araki, Osamu

    2017-07-13

    Based on the predictive coding framework, the present behavioral study focused on the automatic visual change detection process, which yields a concomitant prediction error, as one of the visual processes relevant to the exogenously-driven perceptual alternation of a bistable image. According to this perspective, we speculated that the automatic visual change detection process with an enhanced prediction error is relevant to the greater induction of exogenously-driven perceptual alternation and attempted to test this hypothesis. A modified version of the oddball paradigm was used based on previous electroencephalographic studies on visual change detection, in which the deviant and standard defined by the bar's orientation were symmetrically presented around a continuously presented Necker cube (a bistable image). By manipulating inter-stimulus intervals and the number of standard repetitions, we set three experimental blocks: HM, IM, and LM blocks, in which the strength of the prediction error to the deviant relative to the standard was expected to gradually decrease in that order. The results obtained showed that the deviant significantly increased perceptual alternation of the Necker cube over that by the standard from before to after the presentation of the deviant. Furthermore, the differential proportion of the deviant relative to the standard significantly decreased from the HM block to the IM and LM blocks. These results are consistent with our hypothesis, supporting the involvement of the automatic visual change detection process in the induction of exogenously-driven perceptual alternation. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Global retrieval of soil moisture and vegetation properties using data-driven methods

    Science.gov (United States)

    Rodriguez-Fernandez, Nemesio; Richaume, Philippe; Kerr, Yann

    2017-04-01

    Data-driven methods such as neural networks (NNs) are a powerful tool to retrieve soil moisture from multi-wavelength remote sensing observations at global scale. In this presentation we will review a number of recent results regarding the retrieval of soil moisture with the Soil Moisture and Ocean Salinity (SMOS) satellite, either using SMOS brightness temperatures as input data for the retrieval or using SMOS soil moisture retrievals as the reference dataset for the training. The presentation will discuss several possibilities for both the input datasets and the datasets to be used as reference for the supervised learning phase. Regarding the input datasets, it will be shown that NNs take advantage of the synergy of SMOS data and data from other sensors such as the Advanced Scatterometer (ASCAT, active microwaves) and MODIS (visible and infrared). NNs have also been successfully used to construct long time series of soil moisture from the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) and SMOS. A NN with input data from AMSR-E observations and SMOS soil moisture as reference for the training was used to construct a dataset sharing a similar climatology and without a significant bias with respect to SMOS soil moisture. Regarding the reference data to train the data-driven retrievals, we will show different possibilities depending on the application. Using actual in situ measurements is challenging at global scale due to the scarce distribution of sensors. In contrast, in situ measurements have been successfully used to retrieve SM at continental scale in North America, where the density of in situ measurement stations is high. Using global land surface models to train the NN constitutes an interesting alternative for implementing new remote sensing surface datasets. In addition, these datasets can be used to perform data assimilation into the model used as reference for the training. This approach has recently been tested at the European Centre
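The supervised set-up the record describes (multi-channel observations as input, a reference soil-moisture dataset as training target) can be illustrated with a tiny one-hidden-layer network trained by gradient descent. Everything below is synthetic and assumed for illustration; it is not the SMOS configuration, data, or network size:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "brightness temperatures" (2 channels) and a reference
# "soil moisture" target that depends nonlinearly on them.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1]).reshape(-1, 1)

# One hidden tanh layer, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)          # fit error before training

for _ in range(500):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)          # dLoss/dpred
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = g @ W2.T * (1.0 - h ** 2)         # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss < loss0)   # True: training reduces the fit error
```

The operational retrievals discussed in the abstract follow the same pattern at vastly larger scale, with the choice of reference dataset (in situ, model, or another satellite product) being the key design decision.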

  9. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart; thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. If no special techniques are used when taking the CT images, there are discontinuities among neighboring CT images due to the beating of the heart. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the corresponding positions of the CT images. The CT images are thus geometrically transformed into optimal CT images that best fit the standard heart. Since correct transformation of images is required, an area-oriented interpolation method previously proposed by the authors is used for interpolating the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.
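The area-oriented interpolation the record relies on is the authors' own contribution; as a baseline for comparison, interpolating an intermediate slice between two adjacent CT sections with plain linear (weighted-average) interpolation looks like this (the arrays are toy stand-ins):

```python
import numpy as np

def interp_slice(slice_a, slice_b, t):
    """Linearly interpolate an intermediate slice at fraction t in [0, 1]
    between two adjacent cross-sections (t=0 -> slice_a, t=1 -> slice_b)."""
    return (1.0 - t) * slice_a.astype(float) + t * slice_b.astype(float)

a = np.zeros((3, 3))            # toy "slice" at one z position
b = np.full((3, 3), 100.0)      # toy "slice" at the next z position
mid = interp_slice(a, b, 0.5)   # halfway between the two sections
print(mid[0, 0])                # 50.0
```

Intensity-weighted schemes like this blur moving boundaries, which is precisely why the paper first aligns the slices geometrically before interpolating.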

  10. An efficient direct method for image registration of flat objects

    Science.gov (United States)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit the pixel intensities without resorting to image features; direct methods with image-based deformations are commonly used to align images of deformable objects in 3D space. Nevertheless, they are not well suited to registering images of rigid 3D objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used to reconstruct the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing a correspondence between the pixels of two images.
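The brightness-consistency idea behind direct methods can be made concrete for the simplest motion model, a pure translation: linearize I₂(x) ≈ I₁(x) − d·∇I₁(x) and solve for the displacement d by least squares. This is a generic direct-alignment sketch on synthetic data, not the authors' algorithm or illumination model:

```python
import numpy as np

# Smooth synthetic image: a Gaussian blob on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64]
img1 = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 6.0 ** 2))
img2 = np.roll(img1, 1, axis=1)          # second image: shift by 1 pixel along x

# Brightness constancy, linearized: img2 - img1 ≈ -(dx*Ix + dy*Iy),
# so dx*Ix + dy*Iy ≈ img1 - img2 is solved in the least-squares sense.
Iy, Ix = np.gradient(img1)               # np.gradient returns (d/drow, d/dcol)
s = np.s_[8:-8, 8:-8]                    # crop borders to avoid roll wrap-around
A = np.column_stack([Ix[s].ravel(), Iy[s].ravel()])
b = (img1[s] - img2[s]).ravel()
(dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
print(round(dx, 1), round(dy, 1))        # 1.0 0.0: the shift is recovered
```

Real direct methods iterate this linearization over richer warps (affine, homography) and, as in the paper, must also account for illumination changes between the two images.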

  11. Image Processing Methods Usable for Object Detection on the Chessboard

    Directory of Open Access Journals (Sweden)

    Beran Ladislav

    2016-01-01

    Full Text Available Image segmentation and object detection is a challenging problem in many areas of research. Although many algorithms for image segmentation have been invented, there is no single simple algorithm for image segmentation and object detection. Our research is based on a combination of several methods for object detection. The first method suitable for image segmentation and object detection is colour detection. This method is very simple, but it has problems with varying colours: the colour of the segmented object must be precisely determined before any calculations, and in many cases it must be determined manually. An alternative simple method is based on background removal, i.e. on the difference between a reference image and the detected image. In this paper several methods suitable for object detection are described. The research is focused on detecting coloured objects on a chessboard. The results of this research will be applied, together with neural networks, to a user-computer game of checkers.
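The background-removal method the record describes amounts to thresholding the difference between a reference image and the current image. A minimal sketch on toy arrays (the board values and threshold are illustrative assumptions):

```python
import numpy as np

def detect_object(reference, scene, threshold=30):
    """Background removal: pixels differing from the reference image by more
    than `threshold` are treated as the object; returns its bounding box
    as (row_min, row_max, col_min, col_max), or None if nothing changed."""
    diff = np.abs(scene.astype(int) - reference.astype(int))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

# Toy example: an empty 8x8 "board" and the same board with a piece placed on it.
board = np.full((8, 8), 120, dtype=np.uint8)
scene = board.copy()
scene[2:4, 5:7] = 200                     # the "piece"

print(detect_object(board, scene))        # (2, 3, 5, 6)
```

On a real chessboard image this would be combined with colour detection and per-square mapping, as the paper proposes, since lighting changes alone can exceed a fixed threshold.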

  12. Finite element formulation for a digital image correlation method

    International Nuclear Information System (INIS)

    Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei

    2005-01-01

    A finite element formulation for a digital image correlation method is presented that directly determines the complete two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results have shown that image correlation with the finite element formulation is computationally efficient, accurate, and robust.
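For contrast with the element-based formulation, the subset-based correlation it improves on can be sketched as an exhaustive search for the integer shift that maximizes the zero-normalized cross-correlation of a subset. This is generic DIC on synthetic speckle, not the paper's finite element method:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(1)
ref = rng.random((40, 40))                # speckle-like reference image
defm = np.roll(ref, (2, 3), axis=(0, 1))  # "deformed" image: rigid shift (2, 3)

subset = ref[10:20, 10:20]                # subset around a point of interest
best = max(((zncc(subset, defm[10 + u:20 + u, 10 + v:20 + v]), (u, v))
            for u in range(-5, 6) for v in range(-5, 6)))
print(best[1])                            # (2, 3): recovered displacement
```

Because each subset is matched independently, nothing forces neighbouring displacements to agree; the finite element formulation enforces exactly that continuity across elements.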

  13. Perceptual digital imaging methods and applications

    CERN Document Server

    Lukac, Rastislav

    2012-01-01

    Visual perception is a complex process requiring interaction between the receptors in the eye that sense the stimulus and the neural system and the brain that are responsible for communicating and interpreting the sensed visual information. This process involves several physical, neural, and cognitive phenomena whose understanding is essential to design effective and computationally efficient imaging solutions. Building on advances in computer vision, image and video processing, neuroscience, and information engineering, perceptual digital imaging greatly enhances the capabilities of tradition

  14. New LSB-based colour image steganography method to enhance ...

    Indian Academy of Sciences (India)

    Mustafa Cem kasapbaşi

    2018-04-27

    Apr 27, 2018 ... evaluate the proposed method, comparative performance tests are carried out against different spatial image ... image steganography applications based on LSB are ... worst case scenario could occur when having highest.

  15. ISAR imaging using the instantaneous range instantaneous Doppler method

    CSIR Research Space (South Africa)

    Wazna, TM

    2015-10-01

    Full Text Available In Inverse Synthetic Aperture Radar (ISAR) imaging, the Range Instantaneous Doppler (RID) method is used to compensate for the nonuniform rotational motion of the target that degrades the Doppler resolution of the ISAR image. The Instantaneous Range...

  16. Data-driven Green's function retrieval and application to imaging with multidimensional deconvolution

    Science.gov (United States)

    Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel

    2014-01-01

    An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.

  17. Dynamic reflexivity in action: an armchair walkthrough of a qualitatively driven mixed-method and multiple methods study of mindfulness training in schoolchildren.

    Science.gov (United States)

    Cheek, Julianne; Lipschitz, David L; Abrams, Elizabeth M; Vago, David R; Nakamura, Yoshio

    2015-06-01

    Dynamic reflexivity is central to enabling flexible and emergent qualitatively driven inductive mixed-method and multiple methods research designs. Yet too often, such reflexivity, and how it is used at various points of a study, is absent when we write our research reports. Instead, reports of mixed-method and multiple methods research focus on what was done rather than how it came to be done. This article seeks to redress this absence of emphasis on the reflexive thinking underpinning the way that mixed- and multiple methods, qualitatively driven research approaches are thought about and subsequently used throughout a project. Using Morse's notion of an armchair walkthrough, we excavate and explore the layers of decisions we made about how, and why, to use qualitatively driven mixed-method and multiple methods research in a study of mindfulness training (MT) in schoolchildren. © The Author(s) 2015.

  18. Improved radionuclide bone imaging agent injection needle withdrawal method can improve image quality

    International Nuclear Information System (INIS)

    Qin Yongmei; Wang Laihao; Zhao Lihua; Guo Xiaogang; Kong Qingfeng

    2009-01-01

    Objective: To investigate whether an improved needle-withdrawal method for radionuclide bone imaging agent injection improves whole-body bone scan image quality. Methods: In the routine group of 117 cases, the bone imaging agent was injected directly through an elbow vein, the needle was withdrawn rapidly, and the puncture point was pressed with a cotton swab only momentarily. In the improved group of 117 cases, two cotton swabs were used to press both the skin entry point and the vessel entry point for 5 min or more while the needle was withdrawn. Whole-body planar bone SPECT imaging was performed 2 hours later. Results: The rate of imaging-agent uptake at the injection site was 16.24% in the conventional group and 2.56% in the improved group. Conclusion: The modified needle-withdrawal method for bone imaging agent injection significantly decreased injection-site uptake of the imaging agent and can improve whole-body bone image quality. (authors)

  19. Comparative analysis of different methods for image enhancement

    Institute of Scientific and Technical Information of China (English)

    吴笑峰; 胡仕刚; 赵瑾; 李志明; 李劲; 唐志军; 席在芳

    2014-01-01

    Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to implement image enhancement of gray-scale images using different techniques. After the fundamental methods of image enhancement processing are demonstrated, image enhancement algorithms based on the space and frequency domains are systematically investigated and compared, and the advantages and defects of the above-mentioned algorithms are analyzed. The algorithms of wavelet-based image enhancement are also deduced and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited to image enhancement. The techniques are compared using the mean (μ), standard deviation (s), mean square error (MSE) and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
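The MSE and PSNR figures of merit used for the comparison are straightforward to compute; a minimal sketch (the uniform-error test arrays are illustrative, not the paper's data):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

original = np.zeros((16, 16), dtype=np.uint8)
enhanced = np.full((16, 16), 10, dtype=np.uint8)   # uniform error of 10 levels

print(round(mse(original, enhanced), 1))    # 100.0
print(round(psnr(original, enhanced), 2))   # 28.13  (= 10*log10(255**2/100))
```

Note that PSNR measures fidelity to a reference; for enhancement comparisons it is usually reported alongside statistics such as the mean and standard deviation, as in this record.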

  20. System and method for image mapping and visual attention

    Science.gov (United States)

    Peters, II, Richard A. (Inventor)

    2011-01-01

    A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.

  1. Apparatus and method X-ray image processing

    International Nuclear Information System (INIS)

    1984-01-01

    The invention relates to a method for X-ray image processing. The radiation passed through the object is transformed into an electric image signal from which the logarithmic value is determined and displayed by a display device. Its main objective is to provide a method and apparatus that renders X-ray images or X-ray subtraction images with strong reduction of stray radiation. (Auth.)

  2. A comparative study on medical image segmentation methods

    Directory of Open Access Journals (Sweden)

    Praylin Selva Blessy SELVARAJ ASSLEY

    2014-03-01

    Full Text Available Image segmentation plays an important role in medical imaging and has been a relevant research area in computer vision and image analysis. Many segmentation algorithms have been proposed for medical images. This paper reviews segmentation methods for medical images. In this survey, segmentation methods are divided into five categories: region based, boundary based, model based, hybrid based and atlas based. The five categories, with their principal ideas, advantages and disadvantages in segmenting different medical images, are discussed.

  3. An Image Encryption Method Based on Bit Plane Hiding Technology

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; LI Zhitang; TU Hao

    2006-01-01

    A novel image hiding method based on the correlation analysis of bit planes is described in this paper. Firstly, based on the correlation analysis, different bit planes of a secret image are hidden in different bit planes of several different open images. Then a new hiding image is acquired by a nested "Exclusive-OR" operation on the images obtained in the first step. Finally, by employing an image fusion technique, the final hiding result is achieved. The experimental results show that the method proposed in this paper is effective.
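The XOR step is what makes such a scheme reversible: a secret bit plane XOR-ed into a cover bit plane can be recovered by XOR-ing again. A minimal single-plane sketch (the paper distributes planes over several open images and adds fusion; one cover suffices here to show the principle):

```python
import numpy as np

def get_plane(img, k):
    """Extract bit plane k (0 = least significant) as a 0/1 array."""
    return (img >> k) & 1

rng = np.random.default_rng(2)
secret = rng.integers(0, 256, (8, 8), dtype=np.uint8)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)

# Hide bit plane 7 of the secret inside bit plane 0 of the cover via XOR.
s7 = get_plane(secret, 7)
stego = (cover & 0xFE) | (get_plane(cover, 0) ^ s7)

# Recovery: XOR the stego LSB with the original cover LSB.
recovered = get_plane(stego, 0) ^ get_plane(cover, 0)
print(bool(np.array_equal(recovered, s7)))   # True
```

Because (a ^ b) ^ a = b, whoever holds the original cover can extract the hidden plane exactly; the correlation analysis in the paper decides which plane goes into which cover image.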

  4. Study on variance-to-mean method as subcriticality monitor for accelerator driven system operated with pulse-mode

    International Nuclear Information System (INIS)

    Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu

    2003-01-01

    Two types of variance-to-mean methods for a subcritical system driven by a periodic, pulsed neutron source were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use in subcriticality monitoring for the future accelerator driven system operated in pulse mode. (author)
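The quantity at the heart of variance-to-mean (Feynman-alpha) methods is the excess of the variance-to-mean ratio of gated neutron counts over the Poisson value of 1. A minimal sketch of the statistic itself, on made-up counts (not measurement data; real use fits its dependence on gate width to extract the prompt neutron decay constant):

```python
import numpy as np

def feynman_y(counts):
    """Excess variance-to-mean ratio of neutron counts per time gate:
    Y = var/mean - 1, which is 0 for a Poisson (uncorrelated) source."""
    c = np.asarray(counts, dtype=float)
    return float(c.var() / c.mean() - 1.0)

# Made-up counts per time gate, for illustration only.
counts = [3, 5, 4, 6, 2]
print(feynman_y(counts))   # -0.5 for this toy sequence
```

For a pulsed source the gates must be synchronized with (or corrected for) the pulse period, which is exactly the complication the two methods in this record address.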

  5. Comparison of whole-body-imaging methods

    International Nuclear Information System (INIS)

    Rollo, F.D.; Hoffer, P.

    1977-01-01

    Currently there are four different devices that have found clinical utility in whole-body imaging. These are the rectilinear scanner, the multicrystal whole-body scanner, the Anger-type camera with a whole-body-imaging table, and the tomoscanner. In this text, the basic theory of operation and a discussion of the advantages and disadvantages in whole-body imaging are presented for each device. When applicable, a comparative assessment of the various devices is also presented. As with all else in life, there is no simple answer to the question "which total body imaging device is best." Institutions with a very heavy total-body-imaging load may prefer to use an already available dual-headed rectilinear scanner system for these studies, rather than invest in a new instrument. Institutions with moderate total-body-imaging loads may wish to invest in moving-table or moving-camera devices, which make total body imaging more convenient but retain the basic flexibility of the camera. The large-field Anger camera with or without motion offers another flexible option to these institutions. The laboratory with a very heavy total body imaging load may select efficiency over flexibility, thereby freeing up other instruments for additional studies. Finally, reliability as well as availability and quality of local service must be considered; after all, design features of an instrument become irrelevant when it is broken down and awaiting repair.

  6. Methods of fetal MR: beyond T2-weighted imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brugger, Peter C. [Center of Anatomy and Cell Biology, Integrative Morphology Group, Medical University of Vienna, Waehringerstrasse 13, 1090 Vienna (Austria)]. E-mail: peter.brugger@meduniwien.ac.at; Stuhr, Fritz [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Lindner, Christian [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria); Prayer, Daniela [Department of Radiology, Medical University of Vienna, Waehringerguertel 18-20, 1090 Vienna (Austria)

    2006-02-15

    The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfactory image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted images, diffusion-weighted images and echoplanar imaging, may provide further information, especially in extra-central-nervous-system regions of the fetal body.

  7. Methods of fetal MR: beyond T2-weighted imaging

    International Nuclear Information System (INIS)

    Brugger, Peter C.; Stuhr, Fritz; Lindner, Christian; Prayer, Daniela

    2006-01-01

    The present work reviews the basic methods of performing fetal magnetic resonance imaging (MRI). Since fetal MRI differs in many respects from a postnatal study, several factors have to be taken into account to achieve satisfactory image quality. Image quality depends on adequate positioning of the pregnant woman in the magnet, use of appropriate coils and the selection of sequences. Ultrafast T2-weighted sequences are regarded as the mainstay of fetal MR imaging. However, additional sequences, such as T1-weighted images, diffusion-weighted images and echoplanar imaging, may provide further information, especially in extra-central-nervous-system regions of the fetal body.

  8. Driven equilibrium (drive) MR imaging of the cranial nerves V-VIII: comparison with the T2-weighted 3D TSE sequence

    Energy Technology Data Exchange (ETDEWEB)

    Ciftci, E. E-mail: eciftcis7@hotmail.com; Anik, Yonca; Arslan, Arzu; Akansel, Gur; Sarisoy, Tahsin; Demirci, Ali

    2004-09-01

    Purpose: The aim of this study is to evaluate the efficacy of the driven equilibrium radio frequency reset pulse (DRIVE) on image quality and nerve detection when used in conjunction with a T2-weighted 3D turbo spin-echo (TSE) sequence. Materials and methods: Forty-five patients with cranial nerve symptoms referable to the cerebellopontine angle (CPA) were examined using a T2-weighted 3D TSE pulse sequence with and without DRIVE. MR imaging was performed on a 1.5-T MRI scanner. In addition to the axial source images, reformatted oblique sagittal, oblique coronal and maximum intensity projection (MIP) images of the inner ear were evaluated. Nerve identification and image quality were graded for the cranial nerves V-VIII as well as for the inner ear structures. These structures were chosen because fluid-solid interfaces exist, due either to the CSF around the cranial nerves V-VIII or to the endolymph within the inner ear structures. Statistical analysis was performed using the Wilcoxon test; P<0.05 was considered significant. Results: The addition of the DRIVE pulse shortens the scan time by 25%. The T2-weighted 3D TSE sequence with DRIVE performed slightly better than the sequence without DRIVE in identifying the individual nerves. The image quality was also slightly better with DRIVE. Conclusion: The addition of the DRIVE pulse to the T2-weighted 3D TSE sequence is preferable when imaging cranial nerves surrounded by CSF or fluid-filled structures, because of the shorter scan time and better image quality due to reduced flow artifacts.

  9. Driven equilibrium (drive) MR imaging of the cranial nerves V-VIII: comparison with the T2-weighted 3D TSE sequence

    International Nuclear Information System (INIS)

    Ciftci, E.; Anik, Yonca; Arslan, Arzu; Akansel, Gur; Sarisoy, Tahsin; Demirci, Ali

    2004-01-01

    Purpose: The aim of this study is to evaluate the efficacy of the driven equilibrium radio frequency reset pulse (DRIVE) on image quality and nerve detection when used in conjunction with a T2-weighted 3D turbo spin-echo (TSE) sequence. Materials and methods: Forty-five patients with cranial nerve symptoms referable to the cerebellopontine angle (CPA) were examined using a T2-weighted 3D TSE pulse sequence with and without DRIVE. MR imaging was performed on a 1.5-T MRI scanner. In addition to the axial source images, reformatted oblique sagittal, oblique coronal and maximum intensity projection (MIP) images of the inner ear were evaluated. Nerve identification and image quality were graded for the cranial nerves V-VIII as well as for the inner ear structures. These structures were chosen because fluid-solid interfaces exist, due either to the CSF around the cranial nerves V-VIII or to the endolymph within the inner ear structures. Statistical analysis was performed using the Wilcoxon test; P<0.05 was considered significant. Results: The addition of the DRIVE pulse shortens the scan time by 25%. The T2-weighted 3D TSE sequence with DRIVE performed slightly better than the sequence without DRIVE in identifying the individual nerves. The image quality was also slightly better with DRIVE. Conclusion: The addition of the DRIVE pulse to the T2-weighted 3D TSE sequence is preferable when imaging cranial nerves surrounded by CSF or fluid-filled structures, because of the shorter scan time and better image quality due to reduced flow artifacts.

  10. Standard test method to determine the performance of tiled roofs to wind-driven rain

    Directory of Open Access Journals (Sweden)

    Sánchez de Rojas, M. I.

    2008-09-01

Full Text Available The extent to which roof coverings can resist water penetration from the combination of wind and rain, commonly referred to as wind-driven rain, is important for the design of roofs. A new draft European Standard, prEN 15601 (1), specifies a test method to determine the performance of the roof covering against wind-driven rain. The combined action of wind and rain varies considerably with the geographical location of a building and the associated differences in the rain and wind climate. Three wind-rain conditions and one deluge condition, covering Northern Europe Coastal, Central Europe and Southern Europe, are specified in the draft standard, each subdivided into four wind speeds and rainfall rates to be applied in the test. The draft does not contain information on the level of acceptable performance.

  11. Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method

    Directory of Open Access Journals (Sweden)

    Kuang Tsan Lin

    2014-01-01

Full Text Available The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method that hides a covert digital image in a dot-matrix holographic image. First, the RSA encryption method is used to transform the covert image into an RSA-encrypted data string. Then, all the elements of the encrypted data string are converted into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matrix holographic image contain seven groups of codes used for reconstructing the covert image: identification codes, covert-image dimension codes, covert-image gray-level codes, pre-RSA bit number codes, RSA key codes, post-RSA bit number codes, and information codes. The reconstructed covert image derived from the dot-matrix holographic image and the original covert image are exactly the same.
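The hide/reveal pipeline can be sketched end to end. This is a toy illustration, not the paper's dot-matrix hologram scheme: a textbook RSA key (n=3233, e=17, d=2753) stands in for the real key, and least-significant-bit embedding in a flat carrier array stands in for the seven-group dot-matrix encoding.

```python
# Toy sketch: RSA-encrypt each covert pixel, binary-encode the ciphertext,
# and embed the bits in the least significant bits of a carrier image.
# The tiny key and the LSB carrier are illustrative assumptions only.

N, E, D = 3233, 17, 2753   # classic textbook RSA parameters (p=61, q=53)
BITS = 12                  # each ciphertext value fits in 12 bits (3233 < 4096)

def hide(covert_pixels, carrier):
    """Encrypt covert pixels with RSA, binary-encode, embed into carrier LSBs."""
    cipher = [pow(p, E, N) for p in covert_pixels]           # RSA encryption
    bits = "".join(format(c, "0{}b".format(BITS)) for c in cipher)
    assert len(bits) <= len(carrier)
    stego = [(px & ~1) | int(b) for px, b in zip(carrier, bits)]
    return stego + carrier[len(bits):]

def reveal(stego, n_pixels):
    """Read LSBs, regroup into words, RSA-decrypt to recover the covert image."""
    bits = "".join(str(px & 1) for px in stego[:n_pixels * BITS])
    words = [int(bits[i:i + BITS], 2) for i in range(0, len(bits), BITS)]
    return [pow(c, D, N) for c in words]

covert = [12, 200, 34, 255, 0, 77]          # a tiny "covert image"
carrier = list(range(256)) * 2              # a tiny "carrier image"
stego = hide(covert, carrier)
assert reveal(stego, len(covert)) == covert  # exact round trip, as reported
```

The round trip is lossless, matching the abstract's claim that the reconstructed and original covert images are identical; only the carrier's LSBs change.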

  12. Data-driven drug safety signal detection methods in pharmacovigilance using electronic primary care records: A population based study

    Directory of Open Access Journals (Sweden)

    Shang-Ming Zhou

    2017-04-01

Data-driven analytic methods are a valuable aid to signal detection of adverse drug events (ADEs) from large electronic health records for drug safety monitoring. This study finds that the methods can detect known ADEs and so could potentially be used to detect unknown ADEs.

  13. Research on image complexity evaluation method based on color information

    Science.gov (United States)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

In order to evaluate the complexity of a color image more effectively and to explore the connection between image complexity and image information, this paper presents a method for computing image complexity based on color information. The theoretical analysis first classifies complexity subjectively into three levels: low complexity, medium complexity and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features and agrees well with human visual perception of complexity, so it has reference value for assessing color image complexity.
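The abstract does not give the authors' color feature model, but the general idea can be sketched with a simple stand-in: the Shannon entropy of a quantized color histogram, mapped onto the three subjective levels. The quantization step and the level thresholds below are hypothetical.

```python
import math

# Sketch of a color-based complexity score: entropy (in bits) of the color
# distribution after coarse RGB quantization. A stand-in for the paper's
# color feature model; thresholds for the three levels are assumptions.

def color_complexity(pixels, step=64):
    """Entropy of the color histogram after quantizing each channel by `step`."""
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in hist.values())

def complexity_level(score):
    # hypothetical cutoffs separating low / medium / high complexity
    return "low" if score < 1.0 else ("medium" if score < 3.0 else "high")

flat = [(200, 30, 30)] * 256                               # one color only
mixed = [(i, 255 - i, (7 * i) % 256) for i in range(256)]  # many colors
assert color_complexity(flat) == 0.0
assert color_complexity(flat) < color_complexity(mixed)
assert complexity_level(color_complexity(flat)) == "low"
```

A single-color image scores zero entropy ("low"), while a richly colored one scores several bits, reproducing the intended low/medium/high ordering.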

  14. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image into luminance and chrominance images, which are then denoised by a common filtering framework. The framework processes a noisy pixel by first excluding the neighborhood pixels that deviate significantly from the (vector) median and then using the remaining neighborhood pixels to restore the current pixel. Within the framework, the strength of the chrominance denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms several representative denoising methods in terms of both objective measures and visual evaluation.
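The common filtering framework described above can be sketched for a single channel. This is a simplified reading, not the authors' implementation: the brightness-dependent chrominance threshold is reduced to one hypothetical scaling factor.

```python
import numpy as np

# Sketch of the framework: for each pixel, neighbors deviating strongly from
# the 3x3 window median are excluded, and the remaining neighbors restore the
# pixel. The deviation threshold and brightness coupling are assumptions.

def framework_filter(img, deviation_thresh):
    out = img.astype(float).copy()
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2].astype(float).ravel()
            med = np.median(win)
            kept = win[np.abs(win - med) <= deviation_thresh]  # drop outliers
            out[r, c] = kept.mean()
    return out

def denoise_chrominance(chroma, brightness, base_thresh=30.0):
    # chrominance noise grows with brightness, so scale the threshold
    scale = 0.5 + brightness.mean() / 255.0  # hypothetical coupling
    return framework_filter(chroma, base_thresh * scale)

clean = np.full((16, 16), 100.0)
noisy = clean.copy()
noisy[4, 4] = noisy[8, 9] = 255.0            # impulse ("granular") noise
restored = framework_filter(noisy, 30.0)
chroma_restored = denoise_chrominance(noisy, clean)
assert chroma_restored.shape == noisy.shape
assert ((restored - clean) ** 2).mean() < ((noisy - clean) ** 2).mean()
```

Because the impulse pixels sit far from their window medians, they are excluded from their own restoration and replaced by the mean of their unaffected neighbors.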

  15. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is significantly affected by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can help wind farm operators carry out prompt and cost-effective maintenance of yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The proposed method detects the fault by analyzing the power performance under different yaw angles. The SCADA data are partitioned into bins according to both wind speed and yaw angle in order to evaluate the power performance in depth. An indicator is proposed for power performance evaluation under each yaw angle, and the yaw angle with the largest indicator is taken as the yaw angle measurement error. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies on several actual wind farms demonstrate the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results can help wind farm operators make prompt adjustments when a large yaw angle measurement error exists.
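The binning-and-indicator logic can be sketched on synthetic SCADA data. The power model (power falling off as cos³ of the yaw misalignment), the bin widths, and the 5-degree alarm threshold below are all assumptions, not the paper's values.

```python
import numpy as np

# Sketch of the detection idea: bin samples by wind speed and yaw angle,
# compute a power-performance indicator per yaw bin, and take the yaw angle
# with the largest indicator as the zero-point shift estimate.

def detect_shift(wind, yaw, power, yaw_bins, threshold=5.0):
    indicators = []
    for y in yaw_bins:
        mask = yaw == y
        # normalize by wind^3 so samples from different speed bins compare
        indicators.append(np.mean(power[mask] / wind[mask] ** 3))
    shift = yaw_bins[int(np.argmax(indicators))]
    return shift, abs(shift) > threshold       # (estimate, alarm flag)

true_shift = 8.0                               # degrees of zero-point shift
yaw_bins = np.arange(-12, 13, 2.0)
wind = np.repeat(np.arange(5.0, 11.0), len(yaw_bins))
yaw = np.tile(yaw_bins, 6)
# the turbine captures most power when the *measured* yaw equals the shift
power = wind ** 3 * np.cos(np.radians(yaw - true_shift)) ** 3

shift, alarm = detect_shift(wind, yaw, power, yaw_bins)
assert shift == true_shift and alarm
```

The indicator peaks at the yaw bin that compensates the hidden sensor offset, so the estimate lands on the true 8-degree shift and, being above the threshold, raises the alarm.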

  16. LightCDD: Application of a Capability-Driven Development Method for Start-ups Development

    Directory of Open Access Journals (Sweden)

    Hasan Koç

    2017-04-01

Full Text Available Novice innovators and entrepreneurs face the risk of designing naive business models. In fact, lack of viability in business models is perceived to be a major threat to start-up success. Both the literature and the responses we gathered from experts in incubation present evidence of this problem. The LightCDD method helps entrepreneurs in the analysis, design and specification of start-ups that are context-aware and adaptive to contextual changes and evolution. In this article we describe the LightCDD method, a context-aware enterprise modeling method that is tailored for business model generation. LightCDD applies a lightweight Capability-Driven Development (CDD) methodology: it reduces the set of modeling constructs and guidelines to facilitate its adoption by entrepreneurs, yet keeps it expressive enough for their purposes and, at the same time, compatible with the CDD methodology. We provide a booklet with the LightCDD method for start-up development. The feasibility of the LightCDD method is validated by applying it to one start-up development case. From a practitioner viewpoint (entrepreneurs and experts in incubation), it is important to provide integrative modeling perspectives to specify business ideas, but it is vital to keep them light; LightCDD takes a step forward in this direction. From a researcher point of view, the LightCDD booklet facilitates the application of LightCDD to different start-up development cases. The feasibility validation has produced important feedback for further empirical validation exercises, in which it will be necessary to study the scalability and sensitivity of LightCDD.

  17. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

For nutrition practitioners and researchers, assessing the dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering the development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods and their benefits and challenges, followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The studies presented illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regard to age of user, degree of error and cost.

  18. Separation method of heavy-ion particle image from gamma-ray mixed images using an imaging plate

    CERN Document Server

    Yamadera, A; Ohuchi, H; Nakamura, T; Fukumura, A

    1999-01-01

We have developed a method of separating alpha-ray and gamma-ray images using the imaging plate (IP). The IP, from which the first image was read out by an image reader, was annealed at 50 deg. C for 2 h in a drying oven, and a second image was then read out by the image reader. It was found that the annealing ratio, k, defined as the ratio of the photo-stimulated luminescence (PSL) density at the first measurement to that at the second measurement, differs between alpha rays and gamma rays. By subtracting the second image multiplied by a factor of k from the first image, the alpha-ray image was separated from the mixed alpha- and gamma-ray image. This method was applied to identify the images of high-energy helium, carbon and neon particles using the heavy-ion medical accelerator, HIMAC. (author)
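The subtraction can be checked numerically. If the first read-out is alpha + gamma and annealing divides each component by its own annealing ratio, then subtracting k_gamma times the second image cancels the gamma contribution exactly, leaving the alpha image scaled by (1 - k_gamma/k_alpha). The ratio values below are illustrative, not the ones measured in the paper.

```python
import numpy as np

# Numerical sketch of the IP subtraction. Annealing ratios are illustrative;
# alpha PSL is assumed to fade faster than gamma PSL on annealing.

k_alpha, k_gamma = 5.0, 2.0

alpha = np.array([[30.0, 0.0], [0.0, 0.0]])   # alpha-only component
gamma = np.full((2, 2), 10.0)                 # uniform gamma background

first = alpha + gamma                         # first read-out (mixed image)
second = alpha / k_alpha + gamma / k_gamma    # read-out after annealing

separated = first - k_gamma * second          # gamma contribution cancels
expected = alpha * (1.0 - k_gamma / k_alpha)  # alpha survives, rescaled
assert np.allclose(separated, expected)
assert separated[1, 1] == 0.0                 # gamma-only pixels go to zero
```

Pixels exposed only to gamma rays subtract to exactly zero, while alpha pixels keep a known fraction of their signal, which is enough to separate the two images.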

  19. Quantum dynamic imaging theoretical and numerical methods

    CERN Document Server

    Ivanov, Misha

    2011-01-01

Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femto (10^-15 s) time regime for atomic motion to the atto (10^-18 s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femto and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differ...

  20. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by the alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
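The alternating scheme at the heart of such models can be sketched in miniature: with the dictionary unknown, alternate a least-squares coefficient update (followed by a light soft-threshold to promote sparsity) with a least-squares dictionary update. This is a simplified stand-in for the paper's algorithm; the dimensions, threshold, and iteration count are all assumptions, and the compressive measurement step is omitted.

```python
import numpy as np

# Minimal alternating-minimization sketch for Y = D X with D (dictionary)
# and X (sparse coefficients) both unknown. Illustrative only.

rng = np.random.default_rng(0)
D_true = rng.standard_normal((20, 5))
X_true = rng.standard_normal((5, 30))
Y = D_true @ X_true                      # observed signals

D = rng.standard_normal((20, 5))         # random initial dictionary
X = np.zeros((5, 30))

def soft(v, t):
    """Soft-threshold operator promoting sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

err_init = np.linalg.norm(Y - D @ X)
for _ in range(30):
    X = soft(np.linalg.lstsq(D, Y, rcond=None)[0], 0.01)  # coefficient step
    D = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T        # dictionary step
    err_final = np.linalg.norm(Y - D @ X)

assert err_final < 0.5 * err_init        # alternating updates shrink the fit
```

Each step solves one factor exactly while holding the other fixed, so the residual falls rapidly even though neither the dictionary nor the coefficients were known in advance, which is the essence of the blind setting.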

  1. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in the macroscopic regions and then analyzed thoroughly in the microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.
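The microscopic stage, Sobel edge detection followed by a least-squares fit, can be sketched on a synthetic weld image. The image, gradient threshold, and straight-line model below are illustrative; the fractal region partitioning is not reproduced here.

```python
import numpy as np

# Sketch: Sobel gradients locate edge pixels, then a least-squares (LSM) fit
# turns them into a curve (here a straight vertical weld line).

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def sobel_edges(img, thresh):
    h, w = img.shape
    pts = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = img[r - 1:r + 2, c - 1:c + 2]
            gx = float((win * SOBEL_X).sum())
            gy = float((win * SOBEL_X.T).sum())
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                pts.append((r, c))
    return pts

img = np.zeros((16, 16))
img[:, 8:] = 255.0            # vertical step edge between columns 7 and 8
pts = sobel_edges(img, 500.0)

rows = np.array([p[0] for p in pts], float)
cols = np.array([p[1] for p in pts], float)
slope, intercept = np.polyfit(rows, cols, 1)  # LSM fit: column = f(row)
assert abs(slope) < 1e-6                      # fitted weld line is vertical
assert 7.0 <= intercept <= 8.0                # and sits at the true edge
```

The Sobel operator flags pixels on both sides of the intensity step, and the least-squares fit averages them into a single sub-pixel edge location, mirroring the edge identification and curve fitting described in the abstract.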

  2. Blind Methods for Detecting Image Fakery

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2010-01-01

    Roč. 25, č. 4 (2010), s. 18-24 ISSN 0885-8985 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : Image forensics * Image Fakery * Forgery detection * Authentication Subject RIV: BD - Theory of Information Impact factor: 0.179, year: 2010 http://library.utia.cas.cz/separaty/2010/ZOI/saic-0343316.pdf

  3. Speckle imaging using the principle value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an ''optimal'' filtering approach. This method is based on a nonlinear integral equation which is solved by principle value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures
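Reading the "principle value decomposition" as a principal-value (truncated-SVD) regularized inversion, the core idea can be sketched on a toy ill-conditioned imaging equation y = A x + n: keeping only the dominant singular values suppresses the noise amplification a naive inverse would produce. The matrix, noise, and truncation rank below are toy assumptions, not the report's integral-equation setup.

```python
import numpy as np

# Toy sketch: invert an ill-conditioned system by truncating small singular
# values instead of applying the full (noise-amplifying) inverse.

U0 = np.eye(3)
s0 = np.array([1.0, 0.5, 1e-6])       # one nearly-degenerate "mode"
A = U0 @ np.diag(s0) @ np.eye(3)

x_true = np.array([1.0, 2.0, 0.0])
noise = np.array([0.0, 0.0, 1e-3])    # noise in the weak singular direction
y = A @ x_true + noise

x_naive = np.linalg.solve(A, y)       # naive inverse blows the noise up

def truncated_svd_solve(A, y, keep):
    """Principal-value inverse: keep only the `keep` largest singular values."""
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.array([1.0 / v if i < keep else 0.0 for i, v in enumerate(s)])
    return Vt.T @ (s_inv * (U.T @ y))

x_trunc = truncated_svd_solve(A, y, keep=2)
assert np.linalg.norm(x_naive - x_true) > 100.0  # noise amplified ~1000x
assert np.linalg.norm(x_trunc - x_true) < 1e-2   # principal modes recovered
```

Discarding the near-zero singular value trades a small bias for an enormous variance reduction, which is the regularizing behavior that makes this kind of "optimal filtering" restoration stable.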

  4. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, specific guidance for making sample size decisions is lacking. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so, balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
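The sample size logic can be sketched from first principles. Since the estimate is N̂ = M/P, the relative standard error of N̂ equals that of P, so the survey size n must make z·sqrt(DE·P(1-P)/n)/P small enough, where DE is the design effect. The numbers below (multiplier M, expected P, design effect, precision target) are illustrative, not the Harare study's values.

```python
import math

# Sketch of a multiplier-method sample size calculation. Solving
# z * sqrt(DE * P(1-P)/n) / P <= r for n gives the formula below.

def required_n(p, design_effect, rel_precision, z=1.96):
    """RDS survey size for a target relative precision of the size estimate."""
    return math.ceil(design_effect * (1.0 - p) / p * (z / rel_precision) ** 2)

# e.g. 5000 unique objects distributed, expecting 40% of the survey to
# report receiving one, design effect 2, target +/-10% relative precision:
M = 5000
n = required_n(p=0.40, design_effect=2.0, rel_precision=0.10)
point_estimate = M / 0.40

# smaller P or a larger design effect demands a larger survey, as noted above
assert required_n(0.2, 2.0, 0.10) > n
assert required_n(0.4, 3.0, 0.10) > n
assert round(point_estimate) == 12500
```

The dependence on (1-P)/P makes the paper's advice concrete: raising P, by longer reference periods or distributing more objects, directly shrinks the required sample for a given precision.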

  5. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for the calculations and the quantification performance on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which interpolation method is best for resizing whole slide images so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving the whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
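The survey's evaluation protocol, resize and resize back, then measure the round-trip error, can be sketched in one dimension, with a sine wave standing in for an image scan line. The signal, scale factor, and the two methods compared (nearest-neighbour vs. linear) are illustrative choices, not the survey's nine methods.

```python
import numpy as np

# 1-D sketch of the round-trip comparison: downsample a smooth signal, then
# reconstruct it with two interpolation methods and compare the errors.

x_fine = np.linspace(0.0, 2.0 * np.pi, 257)
signal = np.sin(x_fine)

x_coarse = x_fine[::8]                 # "downscaled" version (every 8th sample)
samples = signal[::8]

linear = np.interp(x_fine, x_coarse, samples)          # linear interpolation

step = x_coarse[1] - x_coarse[0]                       # uniform coarse spacing
nearest_idx = np.clip(np.round(x_fine / step).astype(int), 0, samples.size - 1)
nearest = samples[nearest_idx]                         # nearest-neighbour

mse_linear = float(((linear - signal) ** 2).mean())
mse_nearest = float(((nearest - signal) ** 2).mean())
assert mse_linear < mse_nearest        # linear wins on smooth content
```

On smooth content, linear interpolation's round-trip error is far below the staircase error of nearest-neighbour, which is exactly the kind of task-dependent ranking the survey formalizes for whole slide images.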

  6. Hypothesis-driven methods to augment human cognition by optimizing cortical oscillations

    Directory of Open Access Journals (Sweden)

    Jörn M. Horschig

    2014-06-01

    Full Text Available Cortical oscillations have been shown to represent fundamental functions of a working brain, e.g. communication, stimulus binding, error monitoring, and inhibition, and are directly linked to behavior. Recent studies intervening with these oscillations have demonstrated effective modulation of both the oscillations and behavior. In this review, we collect evidence in favor of how hypothesis-driven methods can be used to augment cognition by optimizing cortical oscillations. We elaborate their potential usefulness for three target groups: healthy elderly, patients with attention deficit/hyperactivity disorder, and healthy young adults. We discuss the relevance of neuronal oscillations in each group and show how each of them can benefit from the manipulation of functionally-related oscillations. Further, we describe methods for manipulation of neuronal oscillations including direct brain stimulation as well as indirect task alterations. We also discuss practical considerations about the proposed techniques. In conclusion, we propose that insights from neuroscience should guide techniques to augment human cognition, which in turn can provide a better understanding of how the human brain works.

  7. Hydrogen production methods efficiency coupled to an advanced high temperature accelerator driven system

    International Nuclear Information System (INIS)

    Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira

    2017-01-01

The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier, and this hydrogen must be produced in volumes that the currently employed methods cannot provide. In this work, two high temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of its advantages in terms of transmutation and safety. For the conceptual design of the high temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a strong influence on the process efficiency. A detailed flowsheet of the high temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using chemical process simulation software: Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water splitting cycle was also developed. The overall efficiency of this process, calculated by performing an energy balance, was 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within values considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)

  8. Hydrogen production methods efficiency coupled to an advanced high temperature accelerator driven system

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Fernández, Carlos García, E-mail: danielgonro@gmail.com, E-mail: mmhamada@ipen.br [Instituto Superior de Tecnologías y Ciencias aplicadas (InSTEC), La Habana (Cuba)

    2017-07-01

The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier, and this hydrogen must be produced in volumes that the currently employed methods cannot provide. In this work, two high temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of its advantages in terms of transmutation and safety. For the conceptual design of the high temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a strong influence on the process efficiency. A detailed flowsheet of the high temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using chemical process simulation software: Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water splitting cycle was also developed. The overall efficiency of this process, calculated by performing an energy balance, was 22.56%. The efficiency, hydrogen production rate and energy consumption of the proposed models are within values considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)
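The reported figures imply the scale of the coupled plant, which a back-of-the-envelope check makes explicit. This assumes hydrogen's lower heating value of 120 MJ/kg; the abstract does not state which heating value underlies its efficiency definition, so the derived thermal input is an estimate.

```python
# Sanity check of the reported numbers, assuming LHV(H2) = 120 MJ/kg.

H2_RATE = 0.1627        # kg/s, reported hydrogen production rate
LHV_H2 = 120.0          # MJ/kg, assumed lower heating value of hydrogen
EFFICIENCY = 0.3451     # reported overall efficiency of the HTE process

h2_power = H2_RATE * LHV_H2             # chemical power in the product, MW
thermal_input = h2_power / EFFICIENCY   # implied input drawn from TADSEA, MW

assert abs(h2_power - 19.52) < 0.01     # roughly 19.5 MW of hydrogen
assert abs(thermal_input - 56.58) < 0.05  # roughly 57 MW of plant input
```

So the optimized flowsheet corresponds to roughly 19.5 MW of hydrogen chemical power drawn from an input of about 57 MW, consistent with a reactor-scale heat source.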

  9. On a selection method of imaging condition in scintigraphy

    International Nuclear Information System (INIS)

    Ikeda, Hozumi; Kishimoto, Kenji; Shimonishi, Yoshihiro; Ohmura, Masahiro; Kosakai, Kazuhisa; Ochi, Hironobu

    1992-01-01

Selection of imaging conditions in scintigraphy was evaluated using the analytic hierarchy process. First, a selection method was derived by considering image quality and imaging time. The influence on image quality was assumed to depend on changes in system resolution, count density, image size, and image density, and the influence on imaging time on changes in system sensitivity and data acquisition time. A phantom study was performed for paired comparison of these selection factors: Rollo phantom images were acquired while varying count density, image size, and image density. Image quality was scored by visual evaluation, comparing pairs of images for the clearer cold lesion on the scintigrams. Imaging time was expressed by values relative to the change in count density; system resolution and system sensitivity were held constant in this study. The analytic hierarchy process was then applied to the selection of imaging conditions using these values. We conclude that the selection of imaging conditions can be analyzed quantitatively using the analytic hierarchy process, and that this analysis supports theoretical consideration of imaging technique. (author)
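The analytic hierarchy process step can be sketched: a pairwise comparison matrix of the selection factors is reduced to priority weights via its principal eigenvector. The comparison values below are hypothetical judgments, not the phantom-study scores from the paper.

```python
import numpy as np

# Minimal AHP sketch: priority weights from the principal eigenvector of a
# positive reciprocal pairwise comparison matrix (power method).

def ahp_weights(pairwise, iters=100):
    w = np.ones(pairwise.shape[0])
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()          # keep the weights normalized to 1
    return w

# factors: count density, image size, image density (hypothetical judgments:
# count density moderately preferred over size, strongly over density)
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

w = ahp_weights(A)
assert abs(w.sum() - 1.0) < 1e-9
assert w[0] > w[1] > w[2]     # priorities follow the stated dominance
```

The resulting weights rank the factors consistently with the pairwise judgments, which is how the paired phantom comparisons are turned into a quantitative selection of imaging conditions.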

  10. WaveSeq: a novel data-driven method of detecting histone modification enrichments using wavelets.

    Directory of Open Access Journals (Sweden)

    Apratim Mitra

    Full Text Available BACKGROUND: Chromatin immunoprecipitation followed by next-generation sequencing is a genome-wide analysis technique that can be used to detect various epigenetic phenomena such as, transcription factor binding sites and histone modifications. Histone modification profiles can be either punctate or diffuse which makes it difficult to distinguish regions of enrichment from background noise. With the discovery of histone marks having a wide variety of enrichment patterns, there is an urgent need for analysis methods that are robust to various data characteristics and capable of detecting a broad range of enrichment patterns. RESULTS: To address these challenges we propose WaveSeq, a novel data-driven method of detecting regions of significant enrichment in ChIP-Seq data. Our approach utilizes the wavelet transform, is free of distributional assumptions and is robust to diverse data characteristics such as low signal-to-noise ratios and broad enrichment patterns. Using publicly available datasets we showed that WaveSeq compares favorably with other published methods, exhibiting high sensitivity and precision for both punctate and diffuse enrichment regions even in the absence of a control data set. The application of our algorithm to a complex histone modification data set helped make novel functional discoveries which further underlined its utility in such an experimental setup. CONCLUSIONS: WaveSeq is a highly sensitive method capable of accurate identification of enriched regions in a broad range of data sets. WaveSeq can detect both narrow and broad peaks with a high degree of accuracy even in low signal-to-noise ratio data sets. WaveSeq is also suited for application in complex experimental scenarios, helping make biologically relevant functional discoveries.
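The wavelet idea behind such enrichment callers can be illustrated on a synthetic read-count track: transform, shrink the fine-scale detail coefficients, reconstruct, and call bins above a cutoff. This one-level Haar sketch with hand-picked thresholds is only an illustration of the principle; WaveSeq itself is considerably more elaborate.

```python
import numpy as np

# Illustrative wavelet enrichment calling: one-level Haar transform,
# detail thresholding, reconstruction, then a simple enrichment cutoff.
# Track, threshold, and cutoff are assumptions.

def haar_denoise(x, detail_thresh):
    approx = (x[0::2] + x[1::2]) / 2.0
    detail = (x[0::2] - x[1::2]) / 2.0
    detail[np.abs(detail) < detail_thresh] = 0.0   # kill small fluctuations
    y = np.empty_like(x)
    y[0::2], y[1::2] = approx + detail, approx - detail
    return y

counts = np.full(256, 5.0)
counts[1::2] += 1.0                # background "noise" wiggles
counts[100:121] = 50.0             # a broad enriched region

smooth = haar_denoise(counts, detail_thresh=2.0)
called = np.where(smooth > smooth.mean() + 2 * smooth.std())[0]

assert called.size > 0
assert called.min() >= 99 and called.max() <= 121   # calls stay on the peak
```

Thresholding in the wavelet domain flattens the bin-to-bin noise without distribution assumptions, so the enriched region stands out cleanly against the background, the property that lets such methods handle both punctate and diffuse marks.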

  11. Methods of filtering the graph images of the functions

    Directory of Open Access Journals (Sweden)

    Олександр Григорович Бурса

    2017-06-01

Full Text Available The theoretical aspects of cleaning raster images of scanned graphs of functions from digital, chromatic and luminance distortions using computer graphics techniques are considered. The basic types of distortion characteristic of graph images of functions are stated. To suppress the distortions, several methods are suggested that provide high quality in the resulting images while preserving their topological features. The paper describes the techniques developed and improved by the authors: a method of cleaning the image of distortions by iterative contrasting, based on a step-by-step 1% increase of image contrast in the graph; a method of restoring distorted small entities, based on thinning the known contrast-increase filter matrix (the allowable dilution radius of the convolution kernel that preserves the graph lines has been established); and a technique integrating the contrast-based noise reduction method and the small-entity restoration method with the known σ-filter. Each method in the complex is theoretically substantiated. The developed methods treat graph images both as whole images (global processing) and as fragments (local processing). Metrics for assessing the quality of the resulting image under global and local processing have been chosen, the choice has been substantiated, and the corresponding formulas are given. The proposed complex of methods for cleaning graph images of functions from grayscale distortions is adaptive to the form of the image carrier and to the level and distribution of distortion in the image. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness.
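The iterative-contrasting idea can be sketched directly: raise the contrast about a mid-gray point by 1% per step, clipping to [0, 255], so faint background distortions saturate to white while the dark graph line saturates to black. The step count and mid-point below are assumptions.

```python
import numpy as np

# Sketch of iterative contrasting: repeated 1% contrast gain with clipping
# drives light scanning artifacts to white and the graph line to black.

def iterative_contrast(img, steps=100, mid=128.0, gain=1.01):
    out = img.astype(float)
    for _ in range(steps):
        out = np.clip((out - mid) * gain + mid, 0.0, 255.0)
    return out

page = np.full((8, 32), 200.0)     # light-gray scanning artifacts
page[4, :] = 50.0                  # the dark graph line itself

cleaned = iterative_contrast(page)
assert np.all(cleaned[0] == 255.0)  # background driven to white
assert np.all(cleaned[4] == 0.0)    # graph line preserved as black
```

Each 1% step moves every pixel away from mid-gray by a fixed fraction of its distance, so after enough iterations the image is binarized with the graph's topology intact, which is why the method pairs well with the small-entity restoration step.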

  12. Wavelet imaging cleaning method for atmospheric Cherenkov telescopes

    Science.gov (United States)

    Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.

    2002-07-01

    We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the use of wavelets to identify noise pixels in images of gamma-ray- and hadron-induced air showers. This method selects more signal pixels containing Cherenkov photons than traditional image processing techniques, while being equally efficient at rejecting pixels containing noise alone. The inclusion of more signal pixels in an image of an air shower allows a more accurate reconstruction, especially at lower gamma-ray energies, which produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are used to show the efficacy of the method for extracting a gamma-ray signal from the background of hadron-generated images.
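
As a minimal illustration of using wavelet coefficients to separate signal pixels from noise pixels, the sketch below applies a one-level Haar transform and a robust threshold; the specific thresholding rule and parameters are assumptions of this example, not the authors' algorithm:

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform (unnormalised averaging form).
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation band
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail band
    return ll, hh

def signal_pixel_mask(img, k=3.0):
    # Flag 2x2 pixel blocks whose approximation coefficient stands out
    # from a noise floor estimated robustly from the diagonal band.
    ll, hh = haar2d(img)
    sigma = np.median(np.abs(hh)) / 0.6745 + 1e-12
    mask = ll > np.median(ll) + k * sigma
    # Expand the block-level mask back to the full pixel grid.
    return np.kron(mask, np.ones((2, 2))).astype(bool)
```

Pixels outside the mask would be treated as noise and zeroed before shower-parameter reconstruction; the image is assumed to have even dimensions.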

  13. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of the noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The most widely used method is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed that the training-based methods performed best for the images and the range of noise levels considered.
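
The baseline estimator the proposed methods are compared against can be written compactly. In the form commonly attributed to Donoho, the noise standard deviation is estimated as median(|HH|)/0.6745, where HH is the diagonal detail band of a one-level orthonormal wavelet transform. A sketch with a Haar transform, assuming a 2D float image with even dimensions:

```python
import numpy as np

def mad_noise_sigma(img):
    # Diagonal detail band (HH) of a one-level orthonormal Haar transform:
    # differences across row pairs, then across column pairs.
    d_rows = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)
    hh = (d_rows[:, 0::2] - d_rows[:, 1::2]) / np.sqrt(2.0)
    # For Gaussian noise, median(|HH|) = 0.6745 * sigma.
    return np.median(np.abs(hh)) / 0.6745
```

The rationale is that smooth image content contributes little to HH, so the band is dominated by noise whose scale the median estimates robustly.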

  14. The column architecture -- A novel architecture for event driven 2D pixel imagers

    International Nuclear Information System (INIS)

    Millaud, J.; Nygren, D.

    1996-01-01

    The authors describe an electronic architecture for two-dimensional pixel arrays that permits very large increases in rate capability for event- or data-driven applications relative to conventional x-y architectures. The column architecture also permits more efficient use of silicon area in applications requiring local buffering, frameless data acquisition, and it avoids entirely the problem of ambiguities that may arise in conventional approaches. Two examples of active implementation are described: high energy physics and protein crystallography

  15. Distributed MIMO-ISAR Sub-image Fusion Method

    Directory of Open Access Journals (Sweden)

    Gu Wenkun

    2017-02-01

    Full Text Available The fast fluctuation of a maneuvering target's radar cross-section often affects the imaging performance stability of traditional monostatic Inverse Synthetic Aperture Radar (ISAR). To address this problem, we propose an imaging method based on the fusion of sub-images from frequency-diversity-distributed Multiple-Input Multiple-Output Inverse Synthetic Aperture Radar (MIMO-ISAR). First, we establish the analytic expression of a two-dimensional ISAR sub-image acquired by different channels of distributed MIMO-ISAR. Then, we derive the distance and azimuth distortion factors of the images acquired by the different channels. By compensating for the distortion of the ISAR images, we ultimately realize distributed MIMO-ISAR fusion imaging. Simulations verify the validity of this imaging method.

  16. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

    Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract the layer topology data for CAD tools. In this paper, vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed which allows common vectorization algorithms to be used to achieve the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the selection of the initial raster image skeleton filter was assessed. (Article in Lithuanian)

  17. A SAR image registration method based on SIFT algorithm

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, so that noise is not amplified in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, the approach not only maintains the richness of features but also reduces the noise in the image. Simulation results show that the proposed algorithm achieves a better matching effect.
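
Wallis filtering normalises image brightness and contrast toward target statistics; in practice it is applied over local windows, but the global form below shows the core rescaling (the target mean and standard deviation are illustrative values, not from the paper):

```python
import numpy as np

def wallis_global(img, target_mean=127.0, target_std=40.0):
    # Rescale so the output has the requested mean and standard deviation.
    # Local-window Wallis variants apply this per neighbourhood, which
    # flattens uneven illumination before feature extraction.
    m, s = img.mean(), img.std() + 1e-12
    return (img - m) * (target_std / s) + target_mean
```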

  18. Anabolic steroid use and body image psychopathology in men: Delineating between appearance- versus performance-driven motivations.

    Science.gov (United States)

    Murray, Stuart B; Griffiths, Scott; Mond, Jonathan M; Kean, Joseph; Blashill, Aaron J

    2016-08-01

    Anabolic androgenic steroid (AAS) use has been robustly associated with negative body image and with eating- and muscularity-oriented psychopathology. However, with AAS being increasingly utilized for both appearance- and athletic-performance-related purposes, we investigated whether comorbid body image psychopathology varies as a function of motivation for use. Self-reported motivation for current and initial AAS use was recorded amongst 122 AAS-using males, alongside measures of current disordered eating and muscle dysmorphia psychopathology. Those reporting AAS use for appearance purposes reported greater overall eating disorder psychopathology, F(2, 118)=7.45, p=0.001, ηp(2)=0.11, and greater muscle dysmorphia psychopathology, F(2, 118)=7.22, indicating elevated body image psychopathology amongst appearance-motivated users. Men whose AAS use is driven primarily by appearance-related concerns may be a particularly dysfunctional subgroup. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. A method for fast automated microscope image stitching.

    Science.gov (United States)

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology for producing a panorama or larger image by combining several images with overlapping areas. In much biomedical research, image stitching is highly desirable for acquiring a panoramic image that represents large areas of certain structures or whole sections while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) is proposed to extract features from the images to be stitched quickly and with high repeatability. Second, the histogram equalization (HE) method is employed to preprocess the images, enhancing their contrast so that more features can be extracted. Third, the rough overlapping zones of the preprocessed images are calculated by phase correlation, and the improved SURF is used to extract image features in the rough overlapping areas. Fourth, the features are matched and the transformation parameters estimated; the images are then blended seamlessly. Finally, this procedure is applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images, and that our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to the registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
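
The phase-correlation step used to find the rough overlap can be sketched with FFTs: for two images related by a circular shift, the inverse transform of the normalised cross-power spectrum peaks at the translation. A simplified sketch, ignoring sub-pixel refinement and assuming equal-size inputs:

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Normalised cross-power spectrum; for img_a == np.roll(img_b, t),
    # its inverse FFT is (ideally) a delta located at t.
    cross = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

The normalisation whitens the spectrum, so the estimate depends only on phase differences and is robust to global brightness changes between the tiles.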

  20. Computer driven optical keratometer and method of evaluating the shape of the cornea

    Science.gov (United States)

    Baroth, Edmund C. (Inventor); Mouneimme, Samih A. (Inventor)

    1994-01-01

    An apparatus and method for measuring the shape of the cornea utilize only one reticle to generate a pattern of rings projected onto the surface of a subject's eye. The reflected pattern is focused onto an imaging device such as a video camera, and a computer compares the reflected pattern with a reference pattern stored in the computer's memory. The differences between the reflected and stored patterns are used to calculate the deformation of the cornea, which may be useful for pre- and post-operative evaluation of the eye by surgeons.

  1. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  2. Ultrasound Imaging Methods for Breast Cancer Detection

    NARCIS (Netherlands)

    Ozmen, N.

    2014-01-01

    The main focus of this thesis is on modeling acoustic wavefield propagation and implementing imaging algorithms for breast cancer detection using ultrasound. As a starting point, we use an integral equation formulation, which can be used to solve both the forward and inverse problems. This thesis

  3. Mixed convection in inclined lid driven cavity by Lattice Boltzmann Method and heat flux boundary condition

    International Nuclear Information System (INIS)

    D'Orazio, A; Karimipour, A; Nezhad, A H; Shirani, E

    2014-01-01

    Laminar mixed convective heat transfer in a two-dimensional rectangular inclined driven cavity is studied numerically by means of a double-population thermal Lattice Boltzmann method. Heat flux enters the cavity through the top moving lid and leaves the system through the bottom wall; the side walls are adiabatic. The counter-slip internal energy density boundary condition, able to simulate an imposed non-zero heat flux at the wall, is applied in order to demonstrate that it can also be used effectively to simulate heat transfer phenomena at moving walls. Results are analyzed over a range of Richardson numbers and tilting angles of the enclosure, encompassing the dominating-forced-convection, mixed-convection, and dominating-natural-convection flow regimes. As expected, the heat transfer rate increases with the inclination angle, but this effect is significant only at higher Richardson numbers, when buoyancy forces dominate the problem; for a horizontal cavity, the average Nusselt number decreases with increasing Richardson number because of the stratified field configuration.

  4. Analytical method of CIM to PIM transformation in Model Driven Architecture (MDA)

    Directory of Open Access Journals (Sweden)

    Martin Kardos

    2010-06-01

    Full Text Available Information system models at higher levels of abstraction have become a daily routine in many software companies. The concept of Model Driven Architecture (MDA), published by the standardization body OMG in 2001, has become a framework for the creation of software applications and information systems. MDA specifies four levels of abstraction: the top three levels are created as graphical models and the lowest one as an implementation code model. Much MDA research focuses on the lower levels and the transformations between them. The top level of abstraction, the Computation Independent Model (CIM), and its transformation to the next level, the Platform Independent Model (PIM), is not such an extensive research topic. Considering the great importance and usability of this level in practical information system development, our research activity is focused on this highest level of abstraction, the CIM, and its possible transformation to the lower PIM level. In this article we present a possible solution for CIM modeling and an analytic method for its transformation to PIM. Keywords: transformation, MDA, CIM, PIM, UML, DFD.

  5. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or when the group fears that making its population public would bring social stigma, we say the population is hidden. Such populations are difficult to approach with standard survey methodology because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of earlier methods such as snowball sampling is respondent-driven sampling (RDS), developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondents. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this study, the bias of RDS chain-referral sampling tends to diminish as the sample gets bigger, and the process stabilizes as the waves progress. Therefore, the final sample can be completely independent of the initial seeds if a sufficient sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key-informant sampling and ethnographic surveys, and it should be applied to various cases domestically as well.
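
The chain-referral recruitment underlying RDS, with each respondent passing a few coupons to peers, can be illustrated with a toy simulation on a known network. The network, coupon count, and seed choice below are invented for illustration and are not the study's simulation:

```python
import random

def rds_sample(network, seeds, waves, coupons=3, rng=None):
    # Simulate respondent-driven sampling: in each wave, every current
    # recruit passes up to `coupons` coupons to not-yet-sampled peers.
    rng = rng or random.Random(42)
    sampled, current = list(seeds), list(seeds)
    seen = set(seeds)
    for _ in range(waves):
        nxt = []
        for person in current:
            peers = [p for p in network[person] if p not in seen]
            rng.shuffle(peers)
            for p in peers[:coupons]:
                seen.add(p)
                nxt.append(p)
                sampled.append(p)
        current = nxt
    return sampled
```

Running this on networks with different seed choices is one way to observe the convergence property the abstract describes: after enough waves, the composition of later waves no longer reflects the seeds.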

  6. Human body region enhancement method based on Kinect infrared imaging

    Science.gov (United States)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized. Firstly, for the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Secondly, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, Laplacian Pyramid decomposition is adopted to further enhance the contour-improved human body region, while the background area without the human body region is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method can effectively enhance the human body region of such infrared images.
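
Laplacian pyramid decomposition stores, at each level, the detail lost by downsampling, so detail bands can be amplified before reconstruction. A simplified sketch with 2x2 mean downsampling and nearest-neighbour upsampling (real implementations use Gaussian filtering; power-of-two image sizes are assumed here):

```python
import numpy as np

def laplacian_pyramid(img, levels=3):
    # Each entry holds the detail removed by one downsampling step;
    # the last entry is the coarsest approximation.
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        down = 0.25 * (cur[0::2, 0::2] + cur[1::2, 0::2]
                       + cur[0::2, 1::2] + cur[1::2, 1::2])
        up = np.kron(down, np.ones((2, 2)))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    # Invert the decomposition: upsample and add back each detail band.
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = np.kron(cur, np.ones((2, 2))) + lap
    return cur
```

Because the same upsampling operator is used in both directions, this toy pyramid reconstructs the input exactly; enhancement would scale the detail bands before calling `reconstruct`.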

  7. Imaging systems and methods for obtaining and using biometric information

    Science.gov (United States)

    McMakin, Douglas L [Richland, WA; Kennedy, Mike O [Richland, WA

    2010-11-30

    Disclosed herein are exemplary embodiments of imaging systems and methods of using such systems. In one exemplary embodiment, one or more direct images of the body of a clothed subject are received, and a motion signature is determined from the one or more images. In this embodiment, the one or more images show movement of the body of the subject over time, and the motion signature is associated with the movement of the subject's body. In certain implementations, the subject can be identified based at least in part on the motion signature. Imaging systems for performing any of the disclosed methods are also disclosed herein. Furthermore, the disclosed imaging, rendering, and analysis methods can be implemented, at least in part, as one or more computer-readable media comprising computer-executable instructions for causing a computer to perform the respective methods.

  8. Imaging method of brain surface anatomy structures using conventional T2-weighted MR images

    International Nuclear Information System (INIS)

    Hatanaka, Masahiko; Machida, Yoshio; Yoshida, Tadatoki; Katada, Kazuhiro.

    1992-01-01

    As a non-invasive technique for visualizing the brain surface structure by MRI, surface anatomy scanning (SAS) and the multislice SAS method have been developed. Both techniques require additional MRI scanning to obtain images of the brain surface. In this paper, we report an alternative method for obtaining the brain surface image from conventional T2-weighted multislice images without any additional scanning. A power calculation of the image pixel values, which is incorporated in the routine processing, is applied in order to enhance the cerebrospinal fluid (CSF) contrast. We consider this method a practical approach to imaging the surface anatomy of the brain. (author)

  9. Brain diagnosis with imaging methods: Psychical changes made visible

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The First International Symposium on Imaging Methods in Psychiatry, held in May 1988 in Wuerzburg, showed very impressively that imaging methods are advancing not only in medical diagnostics but also in psychiatric diagnostics, where they have already proved to be a valuable tool. (orig./MG) [de

  10. A portable measurement system for subcriticality measurements by the Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; Ragan, G.E.; Blakeman, E.D.

    1987-01-01

    A portable measurement system consisting of a personal computer used as a Fourier analyzer and three detection channels (with associated electronics that provide the signals to analog-to-digital (A/D) converters) has been assembled to measure subcriticality by the 252Cf-source-driven neutron noise analysis method. 8 refs

  11. Respondent driven sampling: determinants of recruitment and a method to improve point estimation.

    Directory of Open Access Journals (Sweden)

    Nicky McCreesh

    Full Text Available Respondent-driven sampling (RDS) is a variant of a link-tracing design intended for generating unbiased estimates of the composition of hidden populations; it typically involves giving participants several coupons to recruit their peers into the study. RDS may generate biased estimates if coupons are distributed non-randomly or if potential recruits present for interview non-randomly. We explore whether biases detected in an RDS study were due to either of these mechanisms, and propose and apply weights to reduce bias due to non-random presentation for interview. Using data from the total population, and from the population to whom recruiters offered their coupons, we explored how age and socioeconomic status were associated with being offered a coupon and, if offered a coupon, with presenting for interview. Population proportions were estimated by weighting by the assumed inverse probabilities of being offered a coupon (as in existing RDS methods) and also of presenting for interview if offered a coupon, by age and socioeconomic status group. Younger men were under-recruited primarily because they were less likely to be offered coupons. The under-recruitment of higher-socioeconomic-status men was due in part to their being less likely to present for interview. Consistent with these findings, weighting for non-random presentation for interview by age and socioeconomic status group greatly improved the estimate of the proportion of men in the lowest socioeconomic group, reducing the root-mean-squared error of RDS estimates of socioeconomic status by 38%, but had little effect on estimates for age. The weighting also improved estimates for tribe and religion (reducing root-mean-squared errors by 19-29%), but had little effect for sexual activity or HIV status. Data collected from recruiters on the characteristics of men to whom they offered coupons may be used to reduce bias in RDS studies. Further evaluation of this new method is required.
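
The weighting scheme described above, inverse probability of being offered a coupon times inverse probability of presenting for interview given an offer, can be sketched as a Hájek-style weighted proportion. The record format and the probabilities in the example are illustrative assumptions, not the study's data:

```python
def weighted_proportion(records):
    # Each record: (is_in_group, p_offer, p_present).
    # Weight each respondent by 1 / (P(offered) * P(present | offered))
    # and take the weighted share of the group of interest.
    num = den = 0.0
    for in_group, p_offer, p_present in records:
        w = 1.0 / (p_offer * p_present)
        num += w * (1.0 if in_group else 0.0)
        den += w
    return num / den
```

In the toy check below, the under-offered group appears in only a third of the sample, yet the inverse-probability weights recover its true 50% population share.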

  12. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    Science.gov (United States)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
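
The linear sequential-loop filter ordering described for μAVS2 amounts to folding a user-chosen list of filters over each frame. The two filters below are illustrative stand-ins, not the system's actual modules:

```python
import numpy as np

def run_pipeline(frame, filters):
    # Apply user-selected filters in a linear sequential loop: each
    # filter consumes the previous filter's output, so memory use stays
    # at one frame regardless of pipeline length.
    out = frame
    for f in filters:
        out = f(out)
    return out

def invert(frame):
    # Illustrative filter: photographic negative of an 8-bit frame.
    return 255 - frame

def threshold(frame, t=128):
    # Illustrative filter: binarise against a fixed level.
    return np.where(frame >= t, 255, 0).astype(frame.dtype)
```

Reordering the list (e.g. thresholding before inverting) changes the perceptual result, which is exactly the kind of user-driven customisation the abstract describes.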

  13. Radiopharmaceutical chelates and method of external imaging

    International Nuclear Information System (INIS)

    1976-01-01

    The preparation of the following chemicals is described: chelates of technetium-99m, cobalt-57, gallium-67, gallium-68, indium-111 or indium-113m with a substituted iminodiacetic acid or an 8-hydroxyquinoline, useful as radiopharmaceutical external imaging agents. The compounds described are suitable for intravenous injection, have excellent in vivo stability and are good organ seekers. Tin(II) chloride or other tin(II) compounds are used as chelating agents

  14. Soft Shadow Removal and Image Evaluation Methods

    OpenAIRE

    Gryka, M.

    2016-01-01

    High-level image manipulation techniques are in increasing demand as they allow users to intuitively edit photographs to achieve desired effects quickly. As opposed to low-level manipulations, which provide complete freedom, but also require specialized skills and significant effort, high-level editing operations, such as removing objects (inpainting), relighting and material editing, need to respect semantic constraints. As such they shift the burden from the user to the algorithm to only al...

  15. An FPGA-based heterogeneous image fusion system design method

    Science.gov (United States)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as grey-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description, and Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable image quality of heterogeneous image fusion can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
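
The three pixel-level fusion rules compared in the study can be expressed directly. The sketch below assumes co-registered float grey-scale frames of equal shape; it mirrors in software what the FPGA modules compute per pixel:

```python
import numpy as np

def fuse(visible, infrared, method="weighted", w=0.5):
    # Pixel-level fusion rules: grey-scale weighted averaging,
    # maximum selection, and minimum selection.
    if method == "weighted":
        return w * visible + (1.0 - w) * infrared
    if method == "max":
        return np.maximum(visible, infrared)
    if method == "min":
        return np.minimum(visible, infrared)
    raise ValueError(f"unknown fusion method: {method}")
```

Maximum selection tends to preserve hot targets from the infrared channel, while weighted averaging trades target salience for visible-band texture, which is why the choice of rule is application-dependent.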

  16. Towards a novel laser-driven method of exotic nuclei extraction−acceleration for fundamental physics and technology

    Energy Technology Data Exchange (ETDEWEB)

    Nishiuchi, M., E-mail: sergei@jaea.go.jp; Sakaki, H.; Esirkepov, T. Zh. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Nishio, K. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Pikuz, T. A.; Faenov, A. Ya. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Skobelev, I. Yu. [Russian Academy of Sciences, Joint Institute for High Temperature (Russian Federation); Orlandi, R. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Pirozhkov, A. S.; Sagisaka, A.; Ogura, K.; Kanasaki, M.; Kiriyama, H.; Fukuda, Y. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Koura, H. [Japan Atomic Energy Agency, Advanced Science Research Center (Japan); Kando, M. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); Yamauchi, T. [Graduate School of Maritime Sciences (Japan); Watanabe, Y. [Kyushu University, Interdisciplinary Graduate School of Engineering Sciences (Japan); Bulanov, S. V., E-mail: svbulanov@gmail.com; Kondo, K. [Japan Atomic Energy Agency, Kansai Photon Science Institute (Japan); and others

    2016-04-15

    A combination of a petawatt laser and nuclear physics techniques can crucially facilitate the measurement of exotic nuclei properties. With numerical simulations and laser-driven experiments we show prospects for the Laser-driven Exotic Nuclei extraction–acceleration method proposed in [M. Nishiuchi et al., Phys, Plasmas 22, 033107 (2015)]: a femtosecond petawatt laser, irradiating a target bombarded by an external ion beam, extracts from the target and accelerates to few GeV highly charged short-lived heavy exotic nuclei created in the target via nuclear reactions.

  17. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  18. Analysis of live cell images: Methods, tools and opportunities.

    Science.gov (United States)

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities offered by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  19. Whither RDS? An investigation of Respondent Driven Sampling as a method of recruiting mainstream marijuana users

    Directory of Open Access Journals (Sweden)

    Cousineau Marie-Marthe

    2010-07-01

    Full Text Available Abstract Background An important challenge in conducting social research of specific relevance to harm reduction programs is locating hidden populations of consumers of substances like cannabis who typically report few adverse or unwanted consequences of their use. Much of the deviant, pathologized perception of drug users is historically derived from, and empirically supported by, a research emphasis on gaining ready access to users in drug treatment or in prison populations with a higher incidence of problems of dependence and misuse. Because they are less visible, responsible recreational users of illicit drugs have been more difficult to study. Methods This article investigates Respondent Driven Sampling (RDS) as a method of recruiting experienced marijuana users representative of users in the general population. Based on sampling conducted in a multi-city study (Halifax, Montreal, Toronto, and Vancouver), and compared to samples gathered using other research methods, we assess the strengths and weaknesses of RDS recruitment as a means of gaining access to illicit substance users who experience few harmful consequences of their use. Demographic characteristics of the sample in Toronto are compared with those of users in a recent household survey and a pilot study of Toronto, where the latter utilized nonrandom self-selection of respondents. Results A modified approach to RDS was necessary to attain the target sample size in all four cities (i.e., 40 'users' from each site). The final sample in Toronto was largely similar, however, to marijuana users in a random household survey carried out in the same city. Whereas well-educated, married, whites and females in the survey were all somewhat overrepresented, the two samples, overall, were more alike than different with respect to economic status and employment. Furthermore, comparison with a self-selected sample suggests that (even modified) RDS recruitment is a cost-effective way of

  20. Characterisation of deuterium spectra from laser driven multi-species sources by employing differentially filtered image plate detectors in Thomson spectrometers

    International Nuclear Information System (INIS)

    Alejo, A.; Kar, S.; Ahmed, H.; Doria, D.; Green, A.; Jung, D.; Lewis, C. L. S.; Nersisyan, G.; Krygier, A. G.; Freeman, R. R.; Clarke, R.; Green, J. S.; Notley, M.; Fernandez, J.; Fuchs, J.; Kleinschmidt, A.; Roth, M.; Morrison, J. T.; Najmudin, Z.; Nakamura, H.

    2014-01-01

    A novel method for characterising the full spectrum of deuteron ions emitted by laser driven multi-species ion sources is discussed. The procedure is based on using differential filtering over the detector of a Thomson parabola ion spectrometer, which enables discrimination of deuterium ions from heavier ion species with the same charge-to-mass ratio (such as C6+, O8+, etc.). Commonly used Fuji Image plates were used as detectors in the spectrometer, whose absolute response to deuterium ions over a wide range of energies was calibrated by using slotted CR-39 nuclear track detectors. A typical deuterium ion spectrum diagnosed in a recent experimental campaign is presented, which was produced from a thin deuterated plastic foil target irradiated by a high power laser.

  1. Characterisation of deuterium spectra from laser driven multi-species sources by employing differentially filtered image plate detectors in Thomson spectrometers

    Science.gov (United States)

    Alejo, A.; Kar, S.; Ahmed, H.; Krygier, A. G.; Doria, D.; Clarke, R.; Fernandez, J.; Freeman, R. R.; Fuchs, J.; Green, A.; Green, J. S.; Jung, D.; Kleinschmidt, A.; Lewis, C. L. S.; Morrison, J. T.; Najmudin, Z.; Nakamura, H.; Nersisyan, G.; Norreys, P.; Notley, M.; Oliver, M.; Roth, M.; Ruiz, J. A.; Vassura, L.; Zepf, M.; Borghesi, M.

    2014-09-01

    A novel method for characterising the full spectrum of deuteron ions emitted by laser driven multi-species ion sources is discussed. The procedure is based on using differential filtering over the detector of a Thomson parabola ion spectrometer, which enables discrimination of deuterium ions from heavier ion species with the same charge-to-mass ratio (such as C6+, O8+, etc.). Commonly used Fuji Image plates were used as detectors in the spectrometer, whose absolute response to deuterium ions over a wide range of energies was calibrated by using slotted CR-39 nuclear track detectors. A typical deuterium ion spectrum diagnosed in a recent experimental campaign is presented, which was produced from a thin deuterated plastic foil target irradiated by a high power laser.

  2. Characterisation of deuterium spectra from laser driven multi-species sources by employing differentially filtered image plate detectors in Thomson spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    Alejo, A.; Kar, S., E-mail: s.kar@qub.ac.uk; Ahmed, H.; Doria, D.; Green, A.; Jung, D.; Lewis, C. L. S.; Nersisyan, G. [Centre for Plasma Physics, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN (United Kingdom); Krygier, A. G.; Freeman, R. R. [Department of Physics, The Ohio State University, Columbus, Ohio 43210 (United States); Clarke, R.; Green, J. S.; Notley, M. [Central Laser Facility, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0QX (United Kingdom); Fernandez, J. [Central Laser Facility, Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0QX (United Kingdom); Instituto de Fusión Nuclear, Universidad Politécnica de Madrid, 28006 Madrid (Spain); Fuchs, J. [LULI, École Polytechnique, CNRS, CEA, UPMC, 91128 Palaiseau (France); Kleinschmidt, A.; Roth, M. [Institut für Kernphysik, Technische Universität Darmstadt, Schloßgartenstrasse 9, D-64289 Darmstadt (Germany); Morrison, J. T. [Propulsion Systems Directorate, Air Force Research Lab, Wright Patterson Air Force Base, Ohio 45433 (United States); Najmudin, Z.; Nakamura, H. [Blackett Laboratory, Department of Physics, Imperial College, London SW7 2AZ (United Kingdom); and others

    2014-09-15

    A novel method for characterising the full spectrum of deuteron ions emitted by laser driven multi-species ion sources is discussed. The procedure is based on using differential filtering over the detector of a Thomson parabola ion spectrometer, which enables discrimination of deuterium ions from heavier ion species with the same charge-to-mass ratio (such as C6+, O8+, etc.). Commonly used Fuji Image plates were used as detectors in the spectrometer, whose absolute response to deuterium ions over a wide range of energies was calibrated by using slotted CR-39 nuclear track detectors. A typical deuterium ion spectrum diagnosed in a recent experimental campaign is presented, which was produced from a thin deuterated plastic foil target irradiated by a high power laser.

  3. Quantitative Analysis of Range Image Patches by NEB Method

    Directory of Open Access Journals (Sweden)

    Wang Wen

    2017-01-01

    Full Text Available In this paper we analyze sampled high dimensional data from a range image database with the NEB method. We select a large random sample of log-valued, high-contrast, normalized 8×8 range image patches from the Brown database. We build a density estimator and establish 1-dimensional cell complexes from the range image patch data. We find topological properties of 8×8 range image patches and show that two types of subsets of 8×8 range image patches can be modelled as a circle.

  4. New Finger Biometric Method Using Near Infrared Imaging

    Science.gov (United States)

    Lee, Eui Chul; Jung, Hyunwoo; Kim, Daeyeoul

    2011-01-01

    In this paper, we propose a new finger biometric method. Infrared finger images are first captured, and then feature extraction is performed using a modified Gaussian high-pass filter through binarization, local binary pattern (LBP), and local derivative pattern (LDP) methods. Infrared finger images include the multimodal features of finger veins and finger geometries. Instead of extracting each feature using different methods, the modified Gaussian high-pass filter is fully convolved. Therefore, the extracted binary patterns of finger images include the multimodal features of veins and finger geometries. Experimental results show that the proposed method has an error rate of 0.13%. PMID:22163741
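
    The LBP step named above thresholds each pixel's 8 neighbours against the centre pixel to form an 8-bit code. The following is a minimal generic LBP sketch, not the paper's modified Gaussian high-pass pipeline, and the demo array is hypothetical:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern (LBP) of a 2-D uint8 array.

    Each interior pixel gets an 8-bit code: bit k is set when the k-th
    neighbour (clockwise from top-left) is >= the centre pixel.
    """
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                     # centre pixels (border dropped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code

demo = np.array([[10, 10, 10],
                 [10, 50, 10],
                 [10, 10, 10]], dtype=np.uint8)
# centre brighter than every neighbour -> all bits zero
print(lbp_image(demo))  # -> [[0]]
```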

  5. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free to use solutions that are currently available come bundled up as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy to use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
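
    The K-means unsupervised classification the authors test can be sketched in plain NumPy on an (n_pixels, n_bands) array. The deterministic linspace initialization and the synthetic two-cluster data are illustrative choices, not details from the paper:

```python
import numpy as np

def kmeans_classify(pixels, k, iters=20):
    """Plain K-means on an (n_pixels, n_bands) array.

    Centroids start spread along the data's min-max diagonal
    (a simple deterministic initialization for this sketch).
    """
    centroids = np.linspace(pixels.min(axis=0), pixels.max(axis=0), k)
    for _ in range(iters):
        # distance of every pixel to every centroid, then nearest assignment
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated synthetic "spectral" clusters, 4 bands each
pix = np.vstack([np.random.default_rng(1).normal(0.2, 0.01, (50, 4)),
                 np.random.default_rng(2).normal(0.8, 0.01, (50, 4))])
labels, cents = kmeans_classify(pix, k=2)
```

For real Landsat rasters the pixel matrix would come from reading each band with GDAL and stacking, as the abstract describes.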

  6. THE EFFECT OF IMAGE ENHANCEMENT METHODS DURING FEATURE DETECTION AND MATCHING OF THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    O. Akcay

    2017-05-01

    Full Text Available A successful image matching is essential for an accurate automatic photogrammetric process. Feature detection, extraction and matching algorithms perform very well on high resolution images. However, images from cameras equipped with low-resolution thermal sensors are problematic for the current algorithms. In this paper, some digital image processing techniques were applied to low-resolution images taken with an Optris PI 450 lightweight thermal camera (382 × 288 pixel optical resolution) to increase extraction and matching performance. Image enhancement methods that adjust low quality digital thermal images were used to produce images more suitable for detection and extraction. Three main digital image processing techniques were considered: histogram equalization, high-pass and low-pass filters, to increase the signal-to-noise ratio, sharpen the image, and remove noise, respectively. The pre-processed images were then evaluated using the current image detection and feature extraction methods Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF). The obtained results showed that some enhancement methods increased the number of extracted features and decreased blunder errors during image matching. Consequently, the effects of the different pre-processing techniques are compared in the paper.
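
    Of the three pre-processing techniques, histogram equalization is the simplest to sketch. This is a generic global equalization for an 8-bit image, not the authors' exact procedure, and the low-contrast test image is synthetic:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit single-channel image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # remap grey levels so the cumulative distribution becomes ~linear
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# low-contrast "thermal" image: values squeezed into [100, 120]
low = np.random.default_rng(0).integers(100, 121, (64, 64)).astype(np.uint8)
eq = equalize_hist(low)
print(low.min(), low.max(), "->", eq.min(), eq.max())
```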

  7. Multi-band Image Registration Method Based on Fourier Transform

    Institute of Scientific and Technical Information of China (English)

    庹红娅; 刘允才

    2004-01-01

    This paper presented a registration method based on the Fourier transform for multi-band images involving translation and small rotation. Although different band images differ a lot in intensity and features, they contain certain common information which can be exploited. A model was given in which the multi-band images have linear correlations in the least-squares sense. It is proved that the coefficients have no effect on the registration process if the two images are linearly correlated. Finally, the steps of the registration method were proposed. The experiments show that the model is reasonable and the results are satisfying.
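
    A standard Fourier-domain step for recovering pure translation is phase correlation; the sketch below illustrates that idea only (it is not the paper's linear-correlation model, and the test image is random):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation (dy, dx) between two same-size images."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    # normalized cross-power spectrum -> sharp delta at the shift
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    # peaks past the midpoint correspond to negative shifts (FFT wrap-around)
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, r.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))  # -> (5, -3), the applied shift
```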

  8. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    International Nuclear Information System (INIS)

    Ceder, M.

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on the measurement of the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of the Feynman-alpha method needs to be extended to such nonstationary sources. There are two ways of performing and evaluating such pulsed source experiments. One is to synchronise the detector time gate start with the beginning of an incoming pulse. The Feynman-alpha method has been elaborated for such a case recently. The other method can be called stochastic pulsing. It means that there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution to the Feynman-alpha formula from this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is completely based on the use of the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed form solutions could be obtained by both methods.

  9. Reactivity determination in accelerator driven nuclear reactors by statistics from neutron detectors (Feynman-Alpha Method)

    Energy Technology Data Exchange (ETDEWEB)

    Ceder, M

    2002-03-01

    The Feynman-alpha method is used in traditional nuclear reactors to determine the subcritical reactivity of a system. The method is based on the measurement of the mean number and the variance of detector counts for different measurement times. The measurement is performed while a steady-state neutron flux is maintained in the reactor by an external neutron source, as a rule a radioactive source. From a plot of the variance-to-mean ratio as a function of measurement time ('gate length'), the reactivity can be determined by fitting the measured curve to the analytical solution. A new situation arises in the planned accelerator driven systems (ADS). An ADS will be run in a subcritical mode, and the steady flux will be maintained by an accelerator based source. Such a source has statistical properties that are different from those of a steady radioactive source. As one example, in a currently running European Community project for ADS research, the MUSE project, the source will be a periodically pulsed neutron generator. The theory of the Feynman-alpha method needs to be extended to such nonstationary sources. There are two ways of performing and evaluating such pulsed source experiments. One is to synchronise the detector time gate start with the beginning of an incoming pulse. The Feynman-alpha method has been elaborated for such a case recently. The other method can be called stochastic pulsing. It means that there is no synchronisation between the detector time gate start and the source pulsing, i.e. the start of each measurement is chosen at a random time. The analytical solution to the Feynman-alpha formula from this latter method is the subject of this report. We have obtained an analytical Feynman-alpha formula for the case of stochastic pulsing by two different methods. One is completely based on the use of the symbolic algebra code Mathematica, whereas the other is based on complex function techniques. Closed form solutions could be obtained by both methods.
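
    The fitting step of the classic (stationary-source) Feynman-alpha method can be sketched as follows: the variance-to-mean curve Y(T) = var/mean - 1 is fitted against the analytic form Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T)). The alpha and Y_inf values and the noise level are illustrative, not taken from the report:

```python
import numpy as np

def feynman_y(T, alpha):
    """Shape of the stationary Feynman variance-to-mean curve, Y(T)/Y_inf."""
    return 1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T)

def fit_alpha(T, y, alphas):
    """Grid search over alpha; for each candidate, the amplitude Y_inf has
    a closed-form linear least-squares solution."""
    best = None
    for a in alphas:
        g = feynman_y(T, a)
        y_inf = (g @ y) / (g @ g)
        err = np.sum((y - y_inf * g) ** 2)
        if best is None or err < best[0]:
            best = (err, a, y_inf)
    return best[1], best[2]

T = np.linspace(1e-4, 5e-2, 40)                 # gate lengths [s]
true_alpha, true_yinf = 250.0, 0.8              # illustrative values
y = true_yinf * feynman_y(T, true_alpha)
y += np.random.default_rng(0).normal(0, 1e-3, y.size)   # measurement noise

alpha_fit, yinf_fit = fit_alpha(T, y, np.linspace(50, 500, 451))
print(alpha_fit, yinf_fit)
```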

  10. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results from experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
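
    The final RANSAC stage can be sketched for the simplest motion model, a pure translation between matched key points; the point sets, outlier fraction, and tolerance below are hypothetical:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC estimate of a pure translation from point matches with outliers.

    The minimal sample for a translation is a single match; the model with
    the largest inlier set wins, and its inliers are averaged.
    """
    rng = np.random.default_rng(seed)
    best_t, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))                 # minimal sample: one match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t

rng = np.random.default_rng(1)
src = rng.random((30, 2)) * 100
dst = src + np.array([12.0, -7.0])                 # true shift
dst[:6] = rng.random((6, 2)) * 100                 # 6 wrong matches
t_est = ransac_translation(src, dst)
print(t_est)
```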

  11. Development of motion image prediction method using principal component analysis

    International Nuclear Information System (INIS)

    Chhatkuli, Ritu Bhusal; Demachi, Kazuyuki; Kawai, Masaki; Sakakibara, Hiroshi; Kamiaka, Kazuma

    2012-01-01

    Respiratory motion limits the accuracy of the area irradiated during lung cancer radiation therapy. Many methods have been introduced to minimize the irradiation of healthy tissue due to lung tumor motion. The purpose of this research is to develop an algorithm for the improvement of image guided radiation therapy by the prediction of motion images. We predict the motion images by using principal component analysis (PCA) and the multi-channel singular spectral analysis (MSSA) method. The images/movies were successfully predicted and verified using the developed algorithm. With the proposed prediction method it is possible to forecast the tumor images over the next breathing period. The implementation of this method in real time is believed to be significant for a higher level of tumor tracking, including the detection of sudden abdominal changes during radiation therapy. (author)
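
    The PCA half of this idea, projecting a frame sequence onto principal components and extrapolating each component's time series one step ahead, can be sketched on a toy sequence whose dynamics are exactly linear (this is not the paper's MSSA method, and the data are synthetic):

```python
import numpy as np

def predict_next_frame(frames, n_components=2):
    """Project a frame sequence onto principal components (via SVD) and
    linearly extrapolate each component's score one time step ahead."""
    X = frames.reshape(len(frames), -1).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = Vt[:n_components]                  # spatial modes
    coeffs = (X - mean) @ comps.T              # per-frame scores
    t = np.arange(len(frames))
    next_scores = [np.polyval(np.polyfit(t, c, 1), len(frames))
                   for c in coeffs.T]
    return (mean + np.array(next_scores) @ comps).reshape(frames.shape[1:])

# toy sequence: fixed background plus a mode whose amplitude grows linearly
rng = np.random.default_rng(0)
base = rng.random((16, 16))
mode = rng.random((16, 16))
frames = np.stack([base + k * mode for k in range(6)])
pred = predict_next_frame(frames, n_components=1)
print(np.abs(pred - (base + 6 * mode)).max())   # extrapolation error
```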

  12. Management and Nonlinear Analysis of Disinfection System of Water Distribution Networks Using Data Driven Methods

    Directory of Open Access Journals (Sweden)

    Mohammad Zounemat-Kermani

    2018-03-01

    Full Text Available A chlorination unit is widely used to supply safe drinking water and remove pathogens from water distribution networks. The data-driven approach is one appropriate method for analyzing the performance of chlorine in a water supply network. In this study, a multi-layer perceptron neural network (MLP) with three training algorithms (gradient descent, conjugate gradient and BFGS) and a support vector machine (SVM) with an RBF kernel function were used to predict the concentration of residual chlorine in the water supply networks of Ahmadabad Dafeh and Ahruiyeh villages in Kerman Province. Daily data including discharge (flow), chlorine consumption and residual chlorine were employed from the beginning of 1391 Hijri until the end of 1393 Hijri (for 3 years). To assess the performance of the studied models, criteria such as Nash-Sutcliffe efficiency (NS), root mean square error (RMSE), mean absolute percentage error (MAPE) and correlation coefficient (CORR) were used, which in the best modeling situation were 0.9484, 0.0255, 1.081, and 0.974 respectively, resulting from the BFGS algorithm. The criteria indicated that the MLP model with the BFGS and conjugate gradient algorithms was better than all other models in 90 and 10 percent of cases respectively, while the MLP model based on the gradient descent algorithm and the SVM model were better in none of the cases. According to the results of this study, proper management of chlorine concentration can be implemented by using predicted values of residual chlorine in the water supply network. Thus, the decreased performance of the perceptron network and support vector machine in the water supply network of Ahruiyeh in comparison to Ahmadabad Dafeh can be inferred to stem from improper management of chlorination.
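
    The four evaluation criteria used in the study are straightforward to compute; the observed/simulated residual-chlorine values below are illustrative placeholders, not the paper's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - SSE / variance of observations (1 is a perfect match)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((obs - sim) / obs)) * 100.0

def corr(obs, sim):
    return np.corrcoef(obs, sim)[0, 1]

obs = np.array([0.52, 0.48, 0.55, 0.60, 0.50])   # residual chlorine [mg/L], illustrative
sim = np.array([0.50, 0.49, 0.56, 0.58, 0.52])
print(nash_sutcliffe(obs, sim), rmse(obs, sim), mape(obs, sim), corr(obs, sim))
```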

  13. Iterative methods for dose reduction and image enhancement in tomography

    Science.gov (United States)

    Miao, Jianwei; Fahimian, Benjamin Pooya

    2012-09-18

    A system and method for creating a three dimensional cross sectional image of an object by the reconstruction of its projections that have been iteratively refined through modification in object space and Fourier space is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction since the invention can produce an image of a desired quality with a fewer number projections than seen with conventional methods.

  14. Evaluating sediment transport in flood-driven ephemeral tributaries using direct and acoustic methods.

    Science.gov (United States)

    Stark, K.

    2017-12-01

    One common source of uncertainty in sediment transport modeling of large semi-arid rivers is sediment influx delivered by ephemeral, flood-driven tributaries. Large variations in sediment delivery are associated with these regimes due to the highly variable nature of flows within them. While there are many sediment transport equations, they are typically developed for perennial streams and can be inaccurate for ephemeral channels. Discrete, manual sampling is labor intensive and requires personnel to be on site during flooding. In addition, flooding within these tributaries typically last on the order of hours, making it difficult to be present during an event. To better understand these regimes, automated systems are needed to continuously sample bedload and suspended load. In preparation for the pending installation of an automated site on the Arroyo de los Piños in New Mexico, manual sediment and flow samples have been collected over the summer monsoon season of 2017, in spite of the logistical challenges. These data include suspended and bedload sediment samples at the basin outlet, and stage and precipitation data from throughout the basin. Data indicate a complex system; flow is generated primarily in areas of exposed bedrock in the center and higher elevations of the watershed. Bedload samples show a large coarse-grained fraction, with 50% >2 mm and 25% >6 mm, which is compatible with acoustic measuring techniques. These data will be used to inform future site operations, which will combine direct sediment measurement from Reid-type slot samplers and non-invasive acoustic measuring methods. Bedload will be indirectly monitored using pipe-style microphones, plate-style geophones, channel hydrophones, and seismometers. These instruments record vibrations and acoustic signals from bedload impacts and movement. Indirect methods for measuring of bedload have never been extensively evaluated in ephemeral channels in the southwest United States. Once calibrated

  15. Thresholding methods for PET imaging: A review

    International Nuclear Information System (INIS)

    Dewalle-Vignion, A.S.; Betrouni, N.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; Dewalle-Vignion, A.S.; Hossein-Foucher, C.; Huglo, D.; Vermandel, M.; El Abiad, A.

    2010-01-01

    This work deals with positron emission tomography segmentation methods for tumor volume determination. We present a state of the art of techniques based on fixed or adaptive thresholds. Methods found in the literature are analysed objectively with respect to their methodology, advantages and limitations. Finally, a comparative study is presented. (authors)
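
    A minimal sketch of the thresholding idea on a synthetic 1-D uptake profile. The 42% fraction of peak uptake and the background correction are common choices in the PET literature, not prescriptions from this review:

```python
import numpy as np

def threshold_segment(img, fraction=0.42, background=0.0):
    """Segment an uptake image at a fixed fraction of the peak value,
    with a simple background correction (one common adaptive variant)."""
    peak = img.max()
    thr = background + fraction * (peak - background)
    return img >= thr

# synthetic uptake profile: Gaussian "tumor" on a flat background
x = np.linspace(-40, 40, 81)                 # 1 mm spacing
uptake = 1.0 + 9.0 * np.exp(-(x / 10.0) ** 2)
mask = threshold_segment(uptake, fraction=0.42, background=1.0)
print(mask.sum())  # -> 19 voxels above threshold (|x| <= 9)
```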

  16. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    Science.gov (United States)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.

  17. A laser driven pulsed X-ray backscatter technique for enhanced penetrative imaging.

    Science.gov (United States)

    Deas, R M; Wilson, L A; Rusby, D; Alejo, A; Allott, R; Black, P P; Black, S E; Borghesi, M; Brenner, C M; Bryant, J; Clarke, R J; Collier, J C; Edwards, B; Foster, P; Greenhalgh, J; Hernandez-Gomez, C; Kar, S; Lockley, D; Moss, R M; Najmudin, Z; Pattathil, R; Symes, D; Whittle, M D; Wood, J C; McKenna, P; Neely, D

    2015-01-01

    X-ray backscatter imaging can be used for a wide range of imaging applications, in particular for industrial inspection and portal security. Currently, the application of this imaging technique to the detection of landmines is limited due to the surrounding sand or soil strongly attenuating the 10s to 100s of keV X-rays required for backscatter imaging. Here, we introduce a new approach involving a 140 MeV short-pulse (< 100 fs) electron beam generated by laser wakefield acceleration to probe the sample, which produces Bremsstrahlung X-rays within the sample enabling greater depths to be imaged. A variety of detector and scintillator configurations are examined, with the best time response seen from an absorptive coated BaF2 scintillator with a bandpass filter to remove the slow scintillation emission components. An X-ray backscatter image of an array of different density and atomic number items is demonstrated. The use of a compact laser wakefield accelerator to generate the electron source, combined with the rapid development of more compact, efficient and higher repetition rate high power laser systems will make this system feasible for applications in the field. Content includes material subject to Dstl (c) Crown copyright (2014). Licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gsi.gov.uk.

  18. Knowledge-driven information mining in remote-sensing image archives

    Science.gov (United States)

    Datcu, M.; Seidel, K.; D'Elia, S.; Marchetti, P. G.

    2002-05-01

    Users in all domains require information or information-related services that are focused, concise, reliable, low cost and timely and which are provided in forms and formats compatible with the user's own activities. In the current Earth Observation (EO) scenario, the archiving centres generally only offer data, images and other "low level" products. The user's needs are being only partially satisfied by a number of, usually small, value-adding companies applying time-consuming (mostly manual) and expensive processes relying on the knowledge of experts to extract information from those data or images.

  19. Integration of Architectural and Cytologic Driven Image Algorithms for Prostate Adenocarcinoma Identification

    Directory of Open Access Journals (Sweden)

    Jason Hipp

    2012-01-01

    Full Text Available Introduction: The advent of digital slides offers new opportunities within the practice of pathology, such as the use of image analysis techniques to facilitate computer aided diagnosis (CAD) solutions. Use of CAD holds promise to enable new levels of decision support and allow for additional layers of quality assurance and consistency in rendered diagnoses. However, the development and testing of prostate cancer CAD solutions requires a ground truth map of the cancer to enable the generation of receiver operator characteristic (ROC) curves. This requires a pathologist to annotate, or paint, each of the malignant glands in prostate cancer with image editor software, a time-consuming and exhaustive process.

  20. Generalized Row-Action Methods for Tomographic Imaging

    DEFF Research Database (Denmark)

    Andersen, Martin Skovgaard; Hansen, Per Christian

    2014-01-01

    Row-action methods play an important role in tomographic image reconstruction. Many such methods can be viewed as incremental gradient methods for minimizing a sum of a large number of convex functions, and despite their relatively poor global rate of convergence, these methods often exhibit fast initial convergence, which is desirable in applications where a low-accuracy solution is acceptable. In this paper, we propose relaxed variants of a class of incremental proximal gradient methods, and these variants generalize many existing row-action methods for tomographic imaging. Moreover, they allow
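
    A classic member of this row-action family is the (relaxed) Kaczmarz iteration, which sweeps through the rows of the system Ax = b and projects the iterate toward each row's hyperplane. This is a generic instance for illustration, not the relaxed proximal variants proposed in the paper:

```python
import numpy as np

def relaxed_kaczmarz(A, b, sweeps=200, relax=1.0):
    """Cyclic Kaczmarz iteration with a relaxation parameter in (0, 2).

    Each step moves the iterate a fraction `relax` of the way onto the
    hyperplane {x : a_i . x = b_i} defined by row i.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 4))
x_true = rng.normal(size=4)
b = A @ x_true                       # consistent overdetermined system
x = relaxed_kaczmarz(A, b)
print(np.linalg.norm(x - x_true))    # residual error after 200 sweeps
```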

  1. Image based method for aberration measurement of lithographic tools

    Science.gov (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberration of lithographic tools is important as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  2. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form the boundaries of different objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing imagery. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive-scale segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For the different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
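
    The NDVI computation and a crude similarity-threshold selection can be sketched as follows; the reflectance values and the 0.1 threshold are hypothetical, and this is only a stand-in for (not an implementation of) the paper's iterative scale-selection algorithm:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index, in [-1, 1]."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids 0/0

def ndvi_similarity_mask(ndvi_img, seed_rc, threshold=0.1):
    """All pixels whose NDVI is within `threshold` of a seed pixel's NDVI,
    a crude illustration of NDVI-similarity-driven region selection."""
    return np.abs(ndvi_img - ndvi_img[seed_rc]) <= threshold

# tiny scene: three "vegetation" pixels, three "bare soil" pixels
red = np.array([[0.1, 0.1, 0.5], [0.1, 0.5, 0.5]])
nir = np.array([[0.6, 0.6, 0.4], [0.6, 0.4, 0.4]])
v = ndvi(red, nir)
mask = ndvi_similarity_mask(v, (0, 0))   # seed on a vegetation pixel
print(np.round(v, 3))
print(mask)
```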

  3. An efficient method for facial component detection in thermal images

    Science.gov (United States)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
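    The three steps above (face thresholding, center-of-mass search region, integral projections) can be sketched on a toy image; all image values and thresholds below are invented, and the morphological filtering step is omitted:

```python
import numpy as np

# Toy 8-bit "thermal image": a warm face block on a cooler background.
img = np.zeros((40, 40)); img[8:32, 12:28] = 180.0
img[12:14, 14:18] = 220.0; img[12:14, 22:26] = 220.0   # two warm periorbital spots

face = img > 100                                   # step 1: threshold face from background
ys, xs = np.nonzero(face)
cy, cx = int(ys.mean()), int(xs.mean())            # step 2: center of mass
roi = img[max(cy - 12, 0):cy, :]                   # search region above the center

hot = roi > 200                                    # per-image temperature threshold
col_proj = hot.sum(axis=0)                         # step 3: integral projection over columns
peaks = np.nonzero(col_proj == col_proj.max())[0]  # candidate periorbital columns
print(cy, cx, peaks.min(), peaks.max())
```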

  4. New diffusion imaging method with a single acquisition sequence

    International Nuclear Information System (INIS)

    Melki, Ph.S.; Bittoun, J.; Lefevre, J.E.

    1987-01-01

    The apparent diffusion coefficient (ADC) is related to the molecular diffusion coefficient and to physiologic information: microcirculation in the capillary network, incoherent slow flow, and restricted diffusion. The authors present a new MR imaging sequence that yields computed ADC images in a single 9-minute acquisition on a 1.5-T imager (GE Signa). Compared to the previous method, this sequence is at least two times faster and thus can be used as a routine examination to supplement T1-, T2-, and density-weighted images. The method was assessed by measurement of molecular diffusion in liquids, and the first clinical images obtained in neurologic diseases demonstrate its efficiency for clinical investigation. The possibility of separately imaging diffusion and perfusion is supported by an algorithm
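    The ADC behind such images follows from the Stejskal-Tanner relation S_b = S_0 exp(-b·ADC); with two acquisitions at different b-values, the coefficient follows directly. The b-value and signal levels below are illustrative, not taken from the paper:

```python
import numpy as np

b_value = 1000.0          # s/mm^2, a typical diffusion weighting (assumed)
adc_true = 2.0e-3         # mm^2/s, on the order of free water at body temperature

s0 = 100.0                             # signal without diffusion weighting
sb = s0 * np.exp(-b_value * adc_true)  # diffusion-weighted signal

adc = np.log(s0 / sb) / b_value        # per-pixel ADC estimate
print(adc)                             # -> 0.002
```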

  5. Profiling pleural effusion cells by a diffraction imaging method

    Science.gov (United States)

    Al-Qaysi, Safaa; Hong, Heng; Wen, Yuhua; Lu, Jun Q.; Feng, Yuanming; Hu, Xin-Hua

    2018-02-01

    Assay of cells in pleural effusion (PE) is an important means of disease diagnosis. Conventional cytology of effusion samples, however, has low sensitivity and depends heavily on the expertise of cytopathologists. We applied a polarization diffraction imaging flow cytometry method to effusion cells to investigate their features. Diffraction imaging was performed on 6,000 to 12,000 cells from each effusion cell sample of three patients. After prescreening to remove images of cellular debris and aggregated non-cellular particles, image textures were extracted with a gray level co-occurrence matrix (GLCM) algorithm. The distribution of the imaged cells in the GLCM parameter space was analyzed with a Gaussian mixture model (GMM) to determine the number of clusters among the effusion cells. These results yield insight into the textural features of diffraction images and the related cellular morphology in effusion samples, and can be used toward the development of a label-free method for effusion cell assay.
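    The GLCM texture-extraction step can be sketched in plain NumPy for one pixel offset (toy patterns; the GMM clustering stage and the paper's actual feature set are omitted):

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=4):
    """Contrast and homogeneity from a symmetric, normalized GLCM (one offset)."""
    h, w = img.shape
    a = img[:h - dy, :w - dx].ravel()   # reference pixels
    b = img[dy:, dx:].ravel()           # neighbors at offset (dy, dx)
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1.0)           # count co-occurring gray-level pairs
    p = (m + m.T) / (m + m.T).sum()     # symmetric joint probabilities
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity

smooth = np.zeros((8, 8), dtype=int)    # one flat gray level
stripes = np.tile([0, 3], (8, 4))       # alternating levels -> high contrast
print(glcm_features(smooth), glcm_features(stripes))
```

A flat patch gives zero contrast and homogeneity 1; the striped patch, whose horizontal neighbors always differ by 3 levels, gives contrast 9.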

  6. Imaging methods for detection of infectious foci

    International Nuclear Information System (INIS)

    Couret, I.; Rossi, M.; Weinemann, P.; Moretti, J.L.

    1993-01-01

    Several tracers can be used for imaging infection. None is a worthwhile agent for all infectious foci, but each one has preferential applications, depending on its uptake mechanism at the infectious and/or inflammatory focus. Autologous leucocytes labeled in vitro with indium-111 (In-111) or with technetium-99m hexamethylpropyleneamine oxime (Tc-99m HMPAO) have been applied with success in the detection of peripheral bone infection, focal vascular graft infection and inflammatory bowel disease. Labeling with In-111 is of interest in chronic bone infection, while labeling with Tc-99m HMPAO offers better dosimetry and imaging. The value of leucocytes labeled in vivo with a Tc-99m-labeled monoclonal antigranulocyte antibody anti-NCA 95 (BW 250/183) was demonstrated for the same principal types of infectious foci as in vitro labeled leucocytes. Sites of chronic infection in the spine and the pelvis, whether active or healed, appear as photopenic defects on both in vitro labeled leucocyte and Tc-99m monoclonal antigranulocyte antibody (BW 250/183) scintigraphies. Gallium-67 results showed high sensitivity but low specificity; this tracer demonstrated good performance in delineating foci of infectious spondylitis. In-111- and Tc-99m-labeled polyclonal human immunoglobulin (HIG) has been applied with success in the assessment of various infectious foci, particularly in chronic sepsis. Like labeled leucocytes, labeled HIG showed cold defects in infectious sepsis of the spine. Research in nuclear medicine is very active in the development of more specific tracers of infection, mainly involving Tc-99m- or In-111-labeled chemotactic peptides, antigranulocyte antibody fragments, antibiotic derivatives and interleukins. (authors). 70 refs

  7. The best printing methods to print satellite images

    OpenAIRE

    G.A. Yousif; R.Sh. Mohamed

    2011-01-01

    Printing systems in general operate with a color gamut that is limited compared with that of satellite images. A satellite image is built from very small cells named pixels, each representing a picture element and the unit of color when the image is displayed on a screen; in print this unit becomes smaller and is called a screen point. This unit has a different size and shape from one printing method to another, depending on the output resolution, tools and material...

  8. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  9. Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method

    Directory of Open Access Journals (Sweden)

    Mardiansyah Ahmad Zafrullah

    2018-01-01

    Full Text Available Swipe sensors are one of many biometric authentication sensor types widely applied to embedded devices. The sensor produces an overlap in every pixel block of the image, so the image requires reconstruction before feature extraction. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing power. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of shift data. The experiments were performed using 36 images generated by a swipe sensor with a 128 x 8 pixel sensing area, where each image has an overlap in each block. The results reveal that computation speed can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
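    A minimal sketch of the predictive idea, assuming the shift is estimated by matching overlapping rows and that the previous shift seeds the search window (all sizes and data below are invented, not the paper's):

```python
import numpy as np

def best_shift(prev, frame, guess, window=2, max_shift=7):
    """Estimate how many new rows `frame` adds, testing only shifts near the
    previous one (the 'predictive' part); overlap rows must match `prev`."""
    h = frame.shape[0]
    candidates = [s for s in range(1, max_shift + 1) if abs(s - guess) <= window]
    best, best_err = guess, np.inf
    for s in candidates:
        overlap = h - s                            # rows shared with what is built so far
        err = np.mean((prev[-overlap:] - frame[:overlap]) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

rng = np.random.default_rng(1)
full = rng.integers(0, 256, size=(30, 8)).astype(float)   # the "true" finger image
true_shifts = [3, 3, 2, 3, 4]                             # rows advanced per frame
frames, pos = [full[0:8]], 0
for s in true_shifts:
    pos += s
    frames.append(full[pos:pos + 8])                      # 8-row frames with overlap

recon, guess = frames[0].copy(), 3
for f in frames[1:]:
    s = best_shift(recon, f, guess)
    recon = np.vstack([recon, f[-s:]])                    # append only the new rows
    guess = s                                             # predict the next shift
print(recon.shape)
```

Restricting candidates to a window around the previous shift is what cuts the computation relative to an exhaustive search.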

  10. Texture recognition of medical images with the ICM method

    International Nuclear Information System (INIS)

    Kinser, Jason M.; Wang Guisong

    2004-01-01

    The Integrated Cortical Model (ICM) is based upon several models of the mammalian visual cortex and produces pulse images over several iterations. These pulse images tend to isolate segments, edges, and textures that are inherent in the input image. To create a texture recognition engine, the pulse spectra of individual pixels are collected and used to develop a recognition library. Recognition is performed by comparing the pulse spectra of unclassified image regions with those of known regions. Because signatures are smaller than images, signature-based computation is quite efficient and parasites can be recognized quickly. The precision of this method depends on how representative the signatures are and on the classification. Our experimental results support the theoretical findings and show the prospects for practical applications of the ICM-based method. The advantage of the ICM method is its use of signatures to represent objects: ICM can extract the internal features of objects and represent them with signatures, and signature classification is critical for the precision of recognition

  11. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for skin areas of a human foot and face. The full source code of the developed application is provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices, which can be used for image reconstruction with no distortion of the point response in the field of view and with flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
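    A maximum-likelihood (MLEM) reconstruction that uses the system matrix without inverting it can be sketched on a toy problem; the geometry, sizes and iteration count below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny system matrix: 12 detector pairs viewing 6 image pixels (made-up geometry).
A = rng.uniform(0.1, 1.0, size=(12, 6))
x_true = np.array([0.0, 2.0, 5.0, 1.0, 0.0, 3.0])
y = A @ x_true                                  # noise-free measured counts

x = np.ones(6)                                  # flat non-negative initial image
sens = A.sum(axis=0)                            # sensitivity image (column sums)
for _ in range(500):                            # MLEM: x <- (x / sens) * A^T (y / Ax)
    x = x / sens * (A.T @ (y / (A @ x + 1e-12)))

print(np.round(x, 2))
```

Each multiplicative update keeps the image non-negative and increases the Poisson likelihood; no matrix inverse is ever formed.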

  13. Method and apparatus for improving the alignment of radiographic images

    International Nuclear Information System (INIS)

    Schuller, P.D.; Hatcher, D.C.; Caelli, T.M.; Eggert, F.M.; Yuzyk, J.

    1991-01-01

    This invention relates generally to the field of radiology, and has to do particularly with a method and apparatus for improving the alignment of radiographic images taken at different times of the same tissue structure, so that the images can be sequentially shown in aligned condition, whereby changes in the structure can be noted. (author). 10 figs

  14. Method for analysis of failure of material employing imaging

    Energy Technology Data Exchange (ETDEWEB)

    Vinegar, H.J.; Wellington, S.L.; de Waal, J.A.

    1989-12-05

    This patent describes a method for determining at least one preselected property of a sample of material employing an imaging apparatus. It comprises: imaging the sample during the application of known preselected forces to the sample, and determining density in the sample responsive to the preselected forces.

  15. An attenuation correction method for PET/CT images

    International Nuclear Information System (INIS)

    Ue, Hidenori; Yamazaki, Tomohiro; Haneishi, Hideaki

    2006-01-01

    In PET/CT systems, accurate attenuation correction can be achieved by creating an attenuation map from an X-ray CT image. Respiratory-gated PET acquisition, in turn, is an effective method for avoiding motion blurring of the thoracic and abdominal organs caused by respiratory motion. In PET/CT systems employing respiratory-gated PET, using an X-ray CT image acquired during breath-holding for attenuation correction may strongly affect the voxel values, especially in regions with substantial respiratory motion. In this report, we propose an attenuation correction method in which, first, a set of respiratory-gated PET images is reconstructed without attenuation correction; second, the motion of each phase PET image relative to the PET image in the phase matching the CT acquisition timing is estimated by the previously proposed method; third, the CT image corresponding to each respiratory phase is generated from the original CT image by deformation according to the motion vector maps; and finally, attenuation correction and reconstruction are performed using these CT images. The effectiveness of the proposed method was evaluated using 4D-NCAT phantoms, and good stability of the voxel values near the diaphragm was observed. (author)

  16. Method and Apparatus for Computed Imaging Backscatter Radiography

    Science.gov (United States)

    Shedlock, Daniel (Inventor); Meng, Christopher (Inventor); Sabri, Nissia (Inventor); Dugan, Edward T. (Inventor); Jacobs, Alan M. (Inventor)

    2013-01-01

    Systems and methods of x-ray backscatter radiography are provided. A single-sided, non-destructive imaging technique utilizing x-ray radiation to image subsurface features is disclosed, capable of scanning a region using a fan beam aperture and gathering data using rotational motion.

  17. Image segmentation with a finite element method

    DEFF Research Database (Denmark)

    Bourdin, Blaise

    1999-01-01

    regularization results make it possible to envisage a finite element resolution method. First, the Mumford-Shah functional is introduced and some existing results are quoted. Then, a discrete formulation of the Mumford-Shah problem is proposed and its $\Gamma$-convergence is proved. Finally, some...

  18. Method and apparatus for producing tomographic images

    International Nuclear Information System (INIS)

    Annis, M.

    1989-01-01

    A device useful in producing a tomographic image of a selected slice of an object to be examined is described, comprising: a source of penetrating radiation; sweep means for forming energy from the source into a pencil beam and repeatedly sweeping the pencil beam over a line in space to define a sweep plane; first means for supporting an object to be examined so that the pencil beam intersects the object along a path passing through the object and the selected slice; line collimating means for filtering radiation scattered by the object, the line collimating means having a field of view which intersects the sweep plane in a bounded line so that the line collimating means passes only radiation scattered by elementary volumes of the object lying along the bounded line, the line collimating means including a plurality of channels, each substantially planar in form, to collectively define the field of view, the channels oriented so that the pencil beam sweeps along the bounded line as a function of time; and radiation detector means responsive to radiation passed by the line collimating means

  19. Method for imaging pulmonary arterial hypoplasia

    International Nuclear Information System (INIS)

    Triantafillou, M.

    2000-01-01

    Full text: Pulmonary hypoplasia represents an incomplete development of the lung, resulting in a reduction of distended lung volume. This is associated with a small or absent number of airway divisions, alveoli, arteries and veins. Unilateral pulmonary hypoplasia is often asymptomatic and may appear as a hypodense lung on a chest X-ray. Computed Tomography (CT) scanning shows anatomical detail and proximal vessels. Magnetic Resonance Imaging (MRI) shows no more detail than the CT scan has already demonstrated. It is also difficult to visualise collateral vessels from systemic and/or bronchial vessels on both these modalities. Pulmonary angiography would give the definitive answer, but it is time consuming and carries significant procedural risks. There are high costs associated with these modalities. A Nuclear Medicine ventilation/perfusion (V/Q) scan performed on these patients demonstrates diminished ventilation due to reduced lung volume and absence of perfusion in the hypoplastic lung. To date, we have performed V/Q lung scans on two children in our department. Both cases demonstrated diminished ventilation with no perfusion to the hypoplastic lung. Though the gold standard is pulmonary angiography, V/Q scanning is cost effective, less time consuming and a non-invasive procedure that can be performed as an outpatient. It is accurate, as it demonstrates absent lung perfusion, confirming that the patient has pulmonary arterial hypoplasia. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  20. IMPROVING THE QUALITY OF NEAR-INFRARED IMAGING OF IN VIVO BLOOD VESSELS USING IMAGE FUSION METHODS

    DEFF Research Database (Denmark)

    Jensen, Andreas Kryger; Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2009-01-01

    We investigate methods for improving the visual quality of in vivo images of blood vessels in the human forearm. Using a near-infrared light source and a dual CCD chip camera system capable of capturing images at visible and near-infrared spectra, we evaluate three fusion methods in terms of their capability of enhancing the blood vessels while preserving the spectral signature of the original color image. Furthermore, we investigate the possibility of removing hair in the images using a fusion rule based on the "a trous" stationary wavelet decomposition. The method with the best overall performance, with both speed and quality in mind, is the Intensity Injection method. Using the developed system and the methods presented in this article, it is possible to create images of high visual quality with highly emphasized blood vessels.
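    Assuming "intensity injection" denotes replacing the color image's luminance with the NIR intensity while preserving each pixel's chromaticity (an interpretation for illustration, not necessarily the paper's exact rule), a toy sketch:

```python
import numpy as np

# Toy RGB visual image and a NIR image in which vessels appear dark.
rgb = np.full((4, 4, 3), 0.6)
nir = np.full((4, 4), 0.9)
nir[1:3, 1:3] = 0.2                      # "vessel" pixels absorb NIR light

# Intensity injection (illustrative): scale each RGB pixel so its luminance
# equals the NIR intensity, preserving the color ratios.
lum = rgb.mean(axis=2)
fused = rgb * (nir / lum)[..., None]
print(fused[1, 1].round(3), fused[0, 0].round(3))
```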

  1. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote sensing and security areas.
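    A toy correlation ghost-imaging simulation with a bucket-signal threshold; the percentile rule here is an illustrative stand-in for the paper's modified compressive sensing algorithm, which is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
obj = np.zeros((8, 8)); obj[2:6, 3:5] = 1.0          # transmissive object

n = 20000
patterns = rng.random((n, 8, 8))                     # random speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))           # ideal bucket-detector signal
noisy = bucket + rng.normal(0.0, 2.0, n)             # additive detector noise

def gi(b, p):
    """Conventional correlation ghost imaging: <(B - <B>) I(x, y)>."""
    return np.tensordot(b - b.mean(), p, axes=1) / len(b)

def gi_thresh(b, p, q=60):
    """Illustrative modification: keep only frames whose bucket value exceeds a
    percentile, discarding frames dominated by the noise background."""
    keep = b > np.percentile(b, q)
    return gi(b[keep], p[keep])

def cnr(img):
    sig, bg = img[obj > 0], img[obj == 0]
    return (sig.mean() - bg.mean()) / bg.std()

img_plain, img_mod = gi(noisy, patterns), gi_thresh(noisy, patterns)
print(round(cnr(img_plain), 1), round(cnr(img_mod), 1))
```

Whether the threshold helps in this toy setting depends on the noise level; the point is only the mechanics of thresholding the bucket signal before correlation.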

  2. Fluorine-labeled Dasatinib Nanoformulations as Targeted Molecular Imaging Probes in a PDGFB-driven Murine Glioblastoma Model

    Directory of Open Access Journals (Sweden)

    Miriam Benezra

    2012-12-01

    Full Text Available Dasatinib, a new-generation Src and platelet-derived growth factor receptor (PDGFR) inhibitor, is currently under evaluation in high-grade glioma clinical trials. To achieve optimum physicochemical and/or biologic properties, alternative drug delivery vehicles may be needed. We used a novel fluorinated dasatinib derivative (F-SKI249380), in combination with nanocarrier vehicles and metabolic imaging tools (microPET), to evaluate drug delivery and uptake in a platelet-derived growth factor B (PDGFB)-driven genetically engineered mouse model (GEMM) of high-grade glioma. We assessed the survival benefit of dasatinib on the basis of measured tumor volumes. Using brain tumor cells derived from PDGFB-driven gliomas, dose-dependent uptake and time-dependent inhibitory effects of F-SKI249380 on biologic activity were investigated and compared with the parent drug. PDGFR receptor status and tumor-specific targeting were non-invasively evaluated in vivo using 18F-SKI249380 and 18F-SKI249380-containing micellar and liposomal nanoformulations. A statistically significant survival benefit was found using dasatinib (95 mg/kg) versus saline vehicle (P < .001) in tumor volume-matched GEMM pairs. Competitive binding and treatment assays revealed comparable biologic properties for F-SKI249380 and the parent drug. In vivo, significantly higher tumor uptake was observed for 18F-SKI249380-containing micelle formulations [4.9 percent of the injected dose per gram tissue (%ID/g); P = .002] compared to control values (1.6 %ID/g). Saturation studies using excess cold dasatinib showed marked reduction of tumor uptake values to levels in normal brain (1.5 %ID/g), consistent with in vivo binding specificity. Using 18F-SKI249380-containing micelles as radiotracers to estimate therapeutic dosing requirements, we calculated intratumoral drug concentrations (24-60 nM) that were comparable to in vitro 50% inhibitory concentration values. 18F-SKI249380 is a PDGFR-selective tracer, which

  3. A method of fast mosaic for massive UAV images

    Science.gov (United States)

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use, so users can obtain massive image data with them. However, processing that image data takes a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields require a quick response, which is hard to achieve with massive image data. To address the high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; belts, and the relations between belts and images, are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to present the result data. To verify the feasibility of this method, a fast mosaic system for massive UAV images was developed, which is fully automated; no manual interaction is needed after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.

  4. Hiding a Covert Digital Image by Assembling the RSA Encryption Method and the Binary Encoding Method

    OpenAIRE

    Kuang Tsan Lin; Sheng Lih Yeh

    2014-01-01

    The Rivest-Shamir-Adleman (RSA) encryption method and the binary encoding method are assembled to form a hybrid hiding method that hides a covert digital image in a dot-matrix holographic image. First, the RSA encryption method is used to transform the covert image into an RSA encryption data string. Then, all the elements of the RSA encryption data string are converted into binary data. Finally, the binary data are encoded into the dot-matrix holographic image. The pixels of the dot-matri...

  5. Method and algorithm for image processing

    Science.gov (United States)

    He, George G.; Moon, Brain D.

    2003-12-16

    The present invention is a modified Radon transform. It is similar to the traditional Radon transform for the extraction of line parameters, and similar to the traditional slant stack for the intensity summation of pixels away from a given pixel, for example over ray paths that span 360 degrees at a given grid point in the time and offset domain. However, the present invention differs from these methods in that the intensity and direction of a composite intensity for each pixel are maintained separately instead of being combined after the transformation. An advantage of this approach is the elimination of the work required to extract the line parameters in the transformed domain. The advantage of the modified Radon transform method is amplified when many lines are present in the imagery or when the lines are just short segments, both of which occur in actual imagery.
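    A sketch of the per-pixel idea, keeping the best directional sum and its direction separately rather than combining them; the segment length, angle set and nearest-neighbour sampling below are simplifications, not the patented algorithm:

```python
import numpy as np

def directional_sums(img, angles, length=7):
    """For each pixel, sum intensity along short line segments at several angles,
    storing the best sum and its direction in separate arrays."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    offsets = np.arange(-(length // 2), length // 2 + 1)
    best = np.full((h, w), -np.inf)
    direction = np.zeros((h, w))
    for a in angles:
        dy, dx = np.sin(a), np.cos(a)
        total = np.zeros((h, w))
        for t in offsets:                      # nearest-neighbour ray sampling
            yy = np.clip(np.round(ys + t * dy).astype(int), 0, h - 1)
            xx = np.clip(np.round(xs + t * dx).astype(int), 0, w - 1)
            total += img[yy, xx]
        mask = total > best                    # keep the strongest direction per pixel
        best[mask], direction[mask] = total[mask], a
    return best, direction

img = np.zeros((15, 15)); img[7, :] = 1.0      # one horizontal line
angles = np.linspace(0, np.pi, 8, endpoint=False)
best, direction = directional_sums(img, angles)
print(best[7, 7], direction[7, 7])
```

On the horizontal-line test image, a pixel on the line attains its maximum sum at angle 0, so no peak extraction in a transformed domain is needed.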

  6. Calculation of the neutron importance and weighted neutron generation time using MCNIC method in accelerator driven subcritical reactors

    Energy Technology Data Exchange (ETDEWEB)

    Hassanzadeh, M. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Feghhi, S.A.H., E-mail: a_feghhi@sbu.ac.ir [Department of Radiation Application, Shahid Beheshti University, G.C., Tehran, Islamic Republic of Iran (Iran, Islamic Republic of); Khalafi, H. [Nuclear Science and Technology Research Institute, AEOI, Tehran, Islamic Republic of Iran (Iran, Islamic Republic of)

    2013-09-15

    Highlights: • All reactor kinetic parameters are importance-weighted quantities. • The MCNIC method has been developed for calculating neutron importance in ADSRs. • The mean generation time has been calculated in spallation-driven systems. -- Abstract: The difference between the non-weighted neutron generation time (Λ) and the weighted one (Λ†) can be quite significant, depending on the type of the system. In the present work, we focus on developing the MCNIC method for calculation of the neutron importance (Φ†) and the importance-weighted neutron generation time (Λ†) in accelerator driven systems (ADS). Two hypothetical spallation-source-driven systems, one bare and one graphite-reflected, are considered as illustrative examples. The results of this method have been compared with those obtained by the MCNPX code. According to the results, the relative difference between Λ and Λ† is 36% in the bare and 24,840% in the reflected example, respectively. The difference is quite significant in reflected systems and increases with reflector thickness. In conclusion, this method may give better estimates of kinetic parameters than the MCNPX code because it uses the neutron importance function.
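    The importance-weighted generation time quoted above is conventionally defined with adjoint-flux weighting; the following is the standard textbook form, stated for orientation (angle brackets denote integration over phase space, v the neutron speed, F the fission production operator):

```latex
\Lambda^{\dagger} \;=\; \frac{\left\langle \Phi^{\dagger},\, v^{-1}\,\Phi \right\rangle}
                             {\left\langle \Phi^{\dagger},\, F\,\Phi \right\rangle},
\qquad
\Lambda \;=\; \frac{\left\langle v^{-1}\,\Phi \right\rangle}{\left\langle F\,\Phi \right\rangle}
```

Dropping the adjoint weight Φ† in numerator and denominator recovers the non-weighted Λ, which is why the two can differ so strongly in reflected systems where Φ† varies sharply in space and energy.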

  7. Calculation of the neutron importance and weighted neutron generation time using MCNIC method in accelerator driven subcritical reactors

    International Nuclear Information System (INIS)

    Hassanzadeh, M.; Feghhi, S.A.H.; Khalafi, H.

    2013-01-01

    Highlights: • All reactor kinetic parameters are importance-weighted quantities. • The MCNIC method has been developed for calculating neutron importance in ADSRs. • The mean generation time has been calculated in spallation-driven systems. -- Abstract: The difference between the non-weighted neutron generation time (Λ) and the weighted one (Λ†) can be quite significant, depending on the type of the system. In the present work, we focus on developing the MCNIC method for calculation of the neutron importance (Φ†) and the importance-weighted neutron generation time (Λ†) in accelerator driven systems (ADS). Two hypothetical spallation-source-driven systems, one bare and one graphite-reflected, are considered as illustrative examples. The results of this method have been compared with those obtained by the MCNPX code. According to the results, the relative difference between Λ and Λ† is 36% in the bare and 24,840% in the reflected example, respectively. The difference is quite significant in reflected systems and increases with reflector thickness. In conclusion, this method may give better estimates of kinetic parameters than the MCNPX code because it uses the neutron importance function

  8. A Method for Improving the Progressive Image Coding Algorithms

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2014-12-01

    Full Text Available This article presents a method for increasing the performance of the progressive coding algorithms for the subbands of images, by representing the coefficients with a code that reduces the truncation error.

  9. Development of digital image correlation method to analyse crack ...

    Indian Academy of Sciences (India)

    samples were performed to verify the performance of the digital image correlation method. ... development cannot be measured accurately.

  10. Quantitative Methods for Molecular Diagnostic and Therapeutic Imaging

    OpenAIRE

    Li, Quanzheng

    2013-01-01

    This theme issue provides an overview on the basic quantitative methods, an in-depth discussion on the cutting-edge quantitative analysis approaches as well as their applications for both static and dynamic molecular diagnostic and therapeutic imaging.

  11. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated with the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
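    For a quadratic energy the midpoint rule is a discrete gradient, so the guaranteed energy decrease can be seen exactly, for any step size. A sketch on a 1D denoising-type energy (sizes, parameters and data are invented):

```python
import numpy as np

# Quadratic denoising energy E(x) = 1/2||x - y||^2 + (lam/2)||Dx||^2, with D the
# forward-difference operator. For quadratics the midpoint rule satisfies the
# discrete-gradient identity E(x') - E(x) = grad_bar(x, x') . (x' - x) exactly.
n, lam, tau = 50, 5.0, 2.0
rng = np.random.default_rng(3)
y = np.sin(np.linspace(0, np.pi, n)) + 0.3 * rng.normal(size=n)   # noisy signal

D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)     # forward differences
A = np.eye(n) + lam * D.T @ D                    # E(x) = 1/2 x^T A x - y^T x + const
E = lambda x: 0.5 * x @ A @ x - y @ x

x = np.zeros(n)
energies = [E(x)]
# Discrete-gradient step: (x' - x)/tau = -(A(x + x')/2 - y), one linear solve.
M1, M2 = np.eye(n) + 0.5 * tau * A, np.eye(n) - 0.5 * tau * A
for _ in range(30):
    x = np.linalg.solve(M1, M2 @ x + tau * y)
    energies.append(E(x))

print(bool((np.diff(energies) <= 1e-12).all()))  # energy decreases monotonically
```

The identity gives E(x') - E(x) = -||x' - x||²/τ ≤ 0, so the decrease holds unconditionally in τ, which is the property the paper exploits for non-quadratic imaging energies as well.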

  12. Double Minimum Variance Beamforming Method to Enhance Photoacoustic Imaging

    OpenAIRE

    Paridar, Roya; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    One of the common algorithms used to reconstruct photoacoustic (PA) images is the non-adaptive Delay-and-Sum (DAS) beamformer. However, the quality of the reconstructed PA images obtained by DAS is not satisfactory due to its high level of sidelobes and wide mainlobe. In contrast, adaptive beamformers, such as minimum variance (MV), result in an improved image compared to DAS. In this paper, a novel beamforming method, called Double MV (D-MV), is proposed to enhance the image quality compared to...
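The non-adaptive DAS baseline the abstract starts from can be sketched in a few lines: for every image pixel, each channel is sampled at the one-way acoustic delay from pixel to sensor and the samples are summed across the aperture. A minimal sketch with hypothetical array geometry, sound speed, and sampling rate (this is the baseline only, not the authors' D-MV method):

```python
import numpy as np

def das_beamform(rf, sensor_x, grid_x, grid_z, c, fs):
    """Delay-and-sum: for each pixel, sample every channel at the one-way
    photoacoustic delay from pixel to sensor and sum across the aperture."""
    n_sensors, n_samp = rf.shape
    chan = np.arange(n_sensors)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.sqrt((x - sensor_x)**2 + z**2) / c        # one-way delay [s]
            idx = np.clip(np.round(d * fs).astype(int), 0, n_samp - 1)
            img[iz, ix] = rf[chan, idx].sum()                # plain unweighted sum
    return img
```

An adaptive beamformer such as MV would replace the plain sum with data-dependent apodization weights computed from the channel covariance at each pixel, which is what narrows the mainlobe and suppresses sidelobes.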

  13. Beam imaging sensor and method for using same

    Energy Technology Data Exchange (ETDEWEB)

    McAninch, Michael D.; Root, Jeffrey J.

    2017-01-03

    The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.

  14. Quantitative methods for the analysis of electron microscope images

    DEFF Research Database (Denmark)

    Skands, Peter Ulrik Vallø

    1996-01-01

    The topic of this thesis is a general introduction to quantitative methods for the analysis of digital microscope images. The images presented have primarily been acquired from scanning electron microscopes (SEM) and interferometer microscopes (IFM). The topic is approached through several examples. … foundation of the thesis falls in the areas of: 1) Mathematical Morphology; 2) Distance transforms and applications; and 3) Fractal geometry. Image analysis opens, in general, the possibility of quantitative and statistically well-founded measurement of digital microscope images. Herein also lie the conditions…

  15. System and method for image registration of multiple video streams

    Science.gov (United States)

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  16. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before searching for correspondences, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edges are detected on both the point cloud and the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.

  17. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    Full Text Available The purpose of this work is the study and modification of reference objective methods for image quality assessment; the ultimate goal is a modified formal assessment that corresponds more closely to subjective expert estimates (MOS). In reviewing the formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations we chose two of the most successful algorithms, PQS and MSSSIM, for further analysis in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms that are of great importance in practical implementation but are insufficiently covered in publications by other authors. In the implemented modification of the PQS algorithm, the Kirsch edge detector was replaced by the Canny edge detector. Further experiments were carried out according to the methodology of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective image quality assessment is applicable only to monochrome images). The images were obtained with a thermal imaging surveillance system, and the experimental results proved the effectiveness of this modification, which had not previously been described in the specialized literature on formal image quality assessment methods. The method described in the publication can be applied in various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article; this will be used in solving problems of identification and automatic control.

  18. Solution of the square lid-driven cavity flow of a Bingham plastic using the finite volume method

    OpenAIRE

    Syrakos, Alexandros; Georgiou, Georgios C.; Alexandrou, Andreas N.

    2016-01-01

    We investigate the performance of the finite volume method in solving viscoplastic flows. The creeping square lid-driven cavity flow of a Bingham plastic is chosen as the test case and the constitutive equation is regularised as proposed by Papanastasiou [J. Rheol. 31 (1987) 385-404]. It is shown that the convergence rate of the standard SIMPLE pressure-correction algorithm, which is used to solve the algebraic equation system that is produced by the finite volume discretisation, severely det...
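The Papanastasiou regularisation cited in the abstract replaces the non-smooth Bingham constitutive law with a single smooth effective viscosity valid at all shear rates, η(γ̇) = μ + τ_y(1 − e^(−mγ̇))/γ̇, which tends to μ + mτ_y as γ̇ → 0. A small sketch of that function (parameter values hypothetical, for illustration only):

```python
import numpy as np

def papanastasiou_viscosity(gamma_dot, mu, tau_y, m):
    """Regularised Bingham effective viscosity (Papanastasiou 1987):
    eta = mu + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot,
    with the smooth limit mu + m * tau_y as gamma_dot -> 0."""
    g = np.asarray(gamma_dot, dtype=float)
    safe = np.maximum(g, 1e-12)  # avoid 0/0 at vanishing shear rate
    return np.where(g > 1e-12,
                    mu + tau_y * (1.0 - np.exp(-m * g)) / safe,
                    mu + m * tau_y)
```

At high shear rates the shear stress η(γ̇)·γ̇ approaches the ideal Bingham line μγ̇ + τ_y, while near zero shear the material behaves as a very viscous fluid instead of a rigid solid — which is exactly what makes the regularised problem tractable for a standard finite volume solver.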

  19. Method and apparatus to image biological interactions in plants

    Science.gov (United States)

    Weisenberger, Andrew; Bonito, Gregory M.; Reid, Chantal D.; Smith, Mark Frederick

    2015-12-22

    A method to dynamically image the actual translocation of molecular compounds of interest in a plant root, root system, and rhizosphere without disturbing the root or the soil. The technique makes use of radioactive isotopes as tracers to label molecules of interest and to image their distribution in the plant and/or soil. The method allows for the study and imaging of various biological and biochemical interactions in the rhizosphere of a plant, including, but not limited to, mycorrhizal associations in such regions.

  20. Neutron imaging with the short-pulse laser driven neutron source at the Trident laser facility

    Czech Academy of Sciences Publication Activity Database

    Guler, N.; Volegov, P.; Favalli, A.; Merrill, F.E.; Falk, Kateřina; Jung, D.; Tybo, J.L.; Wilde, C.H.; Croft, S.; Danly, C.; Deppert, O.; Devlin, M.; Fernandez, J.; Gautier, D.C.; Geissel, M.; Haight, R.; Hamilton, C.E.; Hegelich, B.M.; Henzlova, D.; Johnson, R. P.; Schaumann, G.; Schoenberg, K.; Schollmeier, M.; Shimada, T.; Swinhoe, M.T.; Taddeucci, T.; Wender, S.A.; Wurden, G.A.; Roth, M.

    2016-01-01

    Roč. 120, č. 15 (2016), s. 1-12, č. článku 154901. ISSN 0021-8979 R&D Projects: GA MŠk EF15_008/0000162 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : inertial confinement fusion * ion-beams * plasma interactions * reconstruction * acceleration * dynamics * targets * images Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.068, year: 2016

  1. A new optimal seam method for seamless image stitching

    Science.gov (United States)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method is proposed to stitch images with overlapping areas more seamlessly. Because the traditional gradient-domain optimal seam method measures color difference poorly and the fusion algorithm is time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are applied individually. The proposed method outperforms the traditional gradient-domain optimal seam in eliminating the stitching seam and is more efficient than the multi-band blending algorithm.
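The abstract does not give the paper's HSV energy function, but the "seek optimal stitching path" step is typically a dynamic-programming search for the path of minimum cumulative energy through the overlap region. A generic seam-carving-style sketch (the per-pixel cost map here is hypothetical, not the authors' energy):

```python
import numpy as np

def min_cost_seam(energy):
    """Find the vertical path of minimum cumulative energy through a 2D cost
    map by dynamic programming; each step moves down one row and at most one
    column left or right."""
    h, w = energy.shape
    cum = energy.astype(float)
    for r in range(1, h):
        left  = np.r_[np.inf, cum[r - 1, :-1]]   # come from upper-left
        mid   = cum[r - 1]                       # come from directly above
        right = np.r_[cum[r - 1, 1:], np.inf]    # come from upper-right
        cum[r] += np.minimum(np.minimum(left, mid), right)
    # backtrack from the cheapest endpoint on the bottom row
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cum[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cum[r, lo:hi]))
    return seam
```

In a stitching pipeline the cost map would encode the per-pixel difference between the two overlapping images (in HSV space, per the paper), so the seam threads through regions where the images agree best.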

  2. Efficient, symmetry-driven SIMD access patterns for 3D PET image reconstruction applicable for CPUs and GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Scheins, J.J.; Garcia Lucio, L.F.; Herzog, H.; Shah, N.J. [Forschungszentrum Juelich GmbH (Germany). Inst. of Neuroscience and Medicine (INM-4)

    2011-07-01

    Fully 3D PET image reconstruction remains a challenging computational task due to the tremendous number of registered Lines-of-Response (LORs). Typically, billions of geometrical weights have to be repeatedly calculated and evaluated for iterative algorithms. In this context, the reconstruction software PRESTO (PET REconstruction Software TOolkit) provides accurate geometrical weighting schemes for the forward and backward projections, e.g. Volume-of-Intersection, while using all measured LORs separately. PRESTO exploits redundancies to realise a strongly compressed, memory-resident system matrix, so that the time needed to calculate matrix weights no longer influences the reconstruction time. Very high compression factors (>300) are achieved by using unconventional non-Cartesian voxel patterns. However, in the original implementation the addressing of matrix weights, projection values and voxel values happens in disfavoured memory access patterns, causing severe computational inefficiencies due to the limited memory bandwidth of CPUs. In this work, the image data and projection data in memory, as well as the order of mathematical operations, have been completely reorganised to provide an optimal fit for the Single Instruction Multiple Data (SIMD) approach. This reorganisation is directly driven by the induced symmetries of PRESTO. A global speedup factor of 15 has been achieved for the CPU-based implementation while obtaining identical results. In addition, a GPU-based implementation using CUDA on Nvidia TESLA C1060/S1070 hardware provides a further speedup factor of 4 compared to single-core CPU processing. (orig.)

  3. Efficient, symmetry-driven SIMD access patterns for 3D PET image reconstruction applicable for CPUs and GPUs

    International Nuclear Information System (INIS)

    Scheins, J.J.; Garcia Lucio, L.F.; Herzog, H.; Shah, N.J.

    2011-01-01

    Fully 3D PET image reconstruction remains a challenging computational task due to the tremendous number of registered Lines-of-Response (LORs). Typically, billions of geometrical weights have to be repeatedly calculated and evaluated for iterative algorithms. In this context, the reconstruction software PRESTO (PET REconstruction Software TOolkit) provides accurate geometrical weighting schemes for the forward and backward projections, e.g. Volume-of-Intersection, while using all measured LORs separately. PRESTO exploits redundancies to realise a strongly compressed, memory-resident system matrix, so that the time needed to calculate matrix weights no longer influences the reconstruction time. Very high compression factors (>300) are achieved by using unconventional non-Cartesian voxel patterns. However, in the original implementation the addressing of matrix weights, projection values and voxel values happens in disfavoured memory access patterns, causing severe computational inefficiencies due to the limited memory bandwidth of CPUs. In this work, the image data and projection data in memory, as well as the order of mathematical operations, have been completely reorganised to provide an optimal fit for the Single Instruction Multiple Data (SIMD) approach. This reorganisation is directly driven by the induced symmetries of PRESTO. A global speedup factor of 15 has been achieved for the CPU-based implementation while obtaining identical results. In addition, a GPU-based implementation using CUDA on Nvidia TESLA C1060/S1070 hardware provides a further speedup factor of 4 compared to single-core CPU processing. (orig.)

  4. Defining the value of magnetic resonance imaging in prostate brachytherapy using time-driven activity-based costing.

    Science.gov (United States)

    Thaker, Nikhil G; Orio, Peter F; Potters, Louis

    Magnetic resonance imaging (MRI) simulation and planning for prostate brachytherapy (PBT) may deliver potential clinical benefits but at an unknown cost to the provider and healthcare system. Time-driven activity-based costing (TDABC) is an innovative bottom-up costing tool in healthcare that can be used to measure the actual consumption of resources required over the full cycle of care. TDABC analysis was conducted to compare patient-level costs for an MRI-based versus traditional PBT workflow. TDABC cost was only 1% higher for the MRI-based workflow, and utilization of MRI allowed for cost shifting from other imaging modalities, such as CT and ultrasound, to MRI during the PBT process. Future initiatives will be required to follow the costs of care over longer periods of time to determine if improvements in outcomes and toxicities with an MRI-based approach lead to lower resource utilization and spending over the long-term. Understanding provider costs will become important as healthcare reform transitions to value-based purchasing and other alternative payment models. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
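The TDABC calculation underlying such comparisons reduces to two numbers per resource: a capacity cost rate (total resource cost divided by practical capacity) and the minutes each process step consumes. A toy sketch with entirely hypothetical figures (not the study's data):

```python
def capacity_cost_rate(total_cost, practical_capacity_min):
    # TDABC: cost of supplying one minute of resource capacity
    return total_cost / practical_capacity_min

def cycle_cost(rate_per_min, step_minutes):
    # cost of one care cycle = rate x total minutes consumed across its steps
    return rate_per_min * sum(step_minutes)

# hypothetical MRI suite: $600,000/year cost, 100,000 practical minutes/year
rate = capacity_cost_rate(600_000, 100_000)    # 6.0 $/min
workflow_cost = cycle_cost(rate, [30, 45, 15])  # 540.0 $ for a 90-minute cycle
```

The study's finding that the MRI-based workflow cost only 1% more comes from exactly this kind of bottom-up accumulation: MRI minutes are more expensive per unit, but they displace CT and ultrasound minutes elsewhere in the cycle.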

  5. Methods for processing and analysis functional and anatomical brain images: computerized tomography, emission tomography and nuclear resonance imaging

    International Nuclear Information System (INIS)

    Mazoyer, B.M.

    1988-01-01

    The various methods for brain image processing and analysis are presented and compared. The following topics are developed: the physical basis of brain image comparison (nature and formation of signals intrinsic performance of the methods image characteristics); mathematical methods for image processing and analysis (filtering, functional parameter extraction, morphological analysis, robotics and artificial intelligence); methods for anatomical localization (neuro-anatomy atlas, proportional stereotaxic atlas, numerized atlas); methodology of cerebral image superposition (normalization, retiming); image networks [fr

  6. A distortion correction method for image intensifier and electronic portal images used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ioannidis, G T; Geramani, K N; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Uzunoglu, N [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)

    1999-12-31

    At most radiation therapy departments, a simulator and an `on-line` verification system for the treated volume, in the form of an electronic portal imaging device (EPID), are available. Networking and digital handling (saving, archiving, etc.) of the image information are necessary in image processing procedures in order to evaluate verification and simulation recordings on the computer screen. Distortion correction is, on the other hand, a prerequisite for quantitative comparison of the two image modalities. Another limiting factor for quantitative assertions is the fact that irradiation fields in radiotherapy are usually bigger than the field of view of an image intensifier, so several segments of the irradiation field must be acquired; using pattern recognition techniques, these segments can be composed into a single image. In this paper a distortion correction method is presented. The method is based upon a well-defined grid which is superimposed on the image during the registration process. The video signal from the image intensifier is acquired and processed, and the grid is recognised using image processing techniques. Ideally, if all grid points are recognised, various methods can be applied to correct the distortion; in practice, however, this is not the case, because overlapping structures (bones, etc.) mean that not all of the grid points can be recognised. Mathematical models from graph theory are applied to reconstruct the whole grid. The deviation of the grid point positions from their nominal values is then used to calculate correction coefficients. This method (well-defined grid, grid recognition, correction factors) can also be applied to verification images from the EPID or to other image modalities, making quantitative comparison in radiation treatment possible. The distortion correction method and its application to simulator images are presented. (authors)

  7. Three-dimensional imaging of vortex structure in a ferroelectric nanoparticle driven by an electric field.

    Science.gov (United States)

    Karpov, D; Liu, Z; Rolo, T Dos Santos; Harder, R; Balachandran, P V; Xue, D; Lookman, T; Fohtung, E

    2017-08-17

    Topological defects of spontaneous polarization are extensively studied as templates for unique physical phenomena and in the design of reconfigurable electronic devices. Experimental investigations of the complex topologies of polarization have been limited to surface phenomena, which has restricted the probing of the dynamic volumetric domain morphology in operando. Here, we utilize Bragg coherent diffractive imaging of a single BaTiO3 nanoparticle in a composite polymer/ferroelectric capacitor to study the behavior of a three-dimensional vortex formed due to competing interactions involving ferroelectric domains. Our investigation of the structural phase transitions under the influence of an external electric field shows a mobile vortex core exhibiting a reversible hysteretic transformation path. We also study the toroidal moment of the vortex under the action of the field. Our results open avenues for the study of the structure and evolution of polar vortices and other topological structures in operando in functional materials under cross field configurations. Imaging of topological states of matter such as vortex configurations has generally been limited to 2D surface effects. Here Karpov et al. study the volumetric structure and dynamics of a vortex core mediated by electric-field induced structural phase transition in a ferroelectric BaTiO3 nanoparticle.

  8. Visible-light-driven dynamic cancer therapy and imaging using graphitic carbon nitride nanoparticles.

    Science.gov (United States)

    Heo, Nam Su; Lee, Sun Uk; Rethinasabapathy, Muruganantham; Lee, Eun Zoo; Cho, Hye-Jin; Oh, Seo Yeong; Choe, Sang Rak; Kim, Yeonho; Hong, Won G; Krishnan, Giribabu; Hong, Won Hi; Jeon, Tae-Joon; Jun, Young-Si; Kim, Hae Jin; Huh, Yun Suk

    2018-09-01

    Organic graphitic carbon nitride nanoparticles (NP-g-CN), less than 30 nm in size, were synthesized and evaluated for photodynamic therapy (PDT) and cell imaging applications. NP-g-CN particles were prepared through an intercalation process using a rod-like melamine-cyanuric acid adduct (MCA) as the molecular precursor and a eutectic mixture of LiCl-KCl (45:55 wt%) as the reaction medium for polycondensation. The nano-dimensional NP-g-CN penetrated malignant tumor cells with minimal hindrance and effectively generated reactive oxygen species (ROS) under visible light irradiation, which could ablate cancer cells. When excited by visible light irradiation (λ > 420 nm), NP-g-CN introduced to HeLa and COS-7 cells generated a significant amount of ROS and killed the cancerous cells selectively. The cytotoxicity of NP-g-CN was manipulated by altering the light irradiation, and the NP-g-CN caused more damage to cancer cells than to normal cells at low concentrations. As a potential non-toxic organic nanomaterial, the synthesized NP-g-CN are more biocompatible and less cytotoxic than toxic inorganic materials. The combined effects of high ROS generation efficacy under visible light irradiation, low toxicity, and biocompatibility highlight the potential of NP-g-CN for PDT and imaging without further modification. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Scenario driven data modelling: a method for integrating diverse sources of data and data streams

    Science.gov (United States)

    2011-01-01

    Background Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting

  10. Cost Analysis by Applying Time-Driven Activity Based Costing Method in Container Terminals

    OpenAIRE

    Yaşar, R. Şebnem

    2017-01-01

    Container transportation, which can also be called as “industrialization of maritime transportation”, gained significant ground in the world trade by offering numerous technical and economic advantages, and accordingly the container terminals have grown up in importance. Increased competition between container terminals puts pressure on the ports to reduce costs and increase operational productivity. To have the right cost information constitutes a prerequisite for cost reduction. Time-Driven...

  11. Model Driven Development of Web Application with SPACE Method and Tool-suit

    OpenAIRE

    Rehana, Jinat

    2010-01-01

    Enterprise-level software development using traditional software engineering approaches with third-generation programming languages is becoming a more challenging and cumbersome task with the increased complexity of products, shortened development cycles and heightened expectations of quality. MDD (Model Driven Development) has been regarded for several years as an exciting, almost magical development approach in the software industry. The idea behind MDD is the separation of the business logic of a system ...

  12. Image Registration Using Single Cluster PHD Methods

    Science.gov (United States)

    Campbell, M.; Schlangen, I.; Delande, E.; Clark, D.

    Cadets in the Department of Physics at the United States Air Force Academy are using the technique of slitless spectroscopy to analyze the spectra from geostationary satellites during glint season. The equinox periods of the year are particularly favorable for earth-based observers to detect specular reflections off satellites (glints), which have been observed in the past using broadband photometry techniques. Three seasons of glints were observed and analyzed for multiple satellites, as measured across the visible spectrum using a diffraction grating on the Academy’s 16-inch, f/8.2 telescope. It is clear from the results that the glint maximum wavelength decreases relative to the time periods before and after the glint, and that the spectral reflectance during the glint is less like a blackbody. These results are consistent with the presumption that solar panels are the predominant source of specular reflection. The glint spectra are also quantitatively compared to different blackbody curves and the solar spectrum by means of absolute differences and standard deviations. Our initial analysis appears to indicate a potential method of determining relative power capacity.

  13. New adaptive sampling method in particle image velocimetry

    International Nuclear Information System (INIS)

    Yu, Kaikai; Xu, Jinglei; Tang, Lan; Mo, Jianwei

    2015-01-01

    This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)
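The note's idea of redistributing sampling points "on the basis of the spring force generated by the sampling points" can be sketched in 1D: link neighbouring points by springs whose rest length shrinks where seeding density is high, then relax the system. Everything below (the density model, spring constant, and iteration count) is a hypothetical illustration, not the authors' scheme:

```python
import numpy as np

def relax_sampling_points(x, density, iters=300, k=0.3):
    """Relax 1D sampling points under a spring analogy: neighbouring points are
    joined by springs with rest length proportional to 1/density, so samples
    concentrate where seeding density is high. Domain endpoints stay fixed."""
    x = np.sort(np.asarray(x, dtype=float))
    for _ in range(iters):
        mid = 0.5 * (x[:-1] + x[1:])
        inv_d = 1.0 / density(mid)                      # density at gap midpoints
        rest = (x[-1] - x[0]) * inv_d / np.sum(inv_d)   # rest lengths span the domain
        ext = np.diff(x) - rest                         # spring extensions
        force = np.zeros_like(x)
        force[:-1] += k * ext    # a stretched spring pulls its left point right
        force[1:]  -= k * ext    # ... and its right point left
        force[0] = force[-1] = 0.0
        x = np.sort(x + force)
    return x
```

At equilibrium the gaps are proportional to 1/density, i.e. the local sampling rate follows the seeding density, which is the qualitative behaviour the note reports for its 2D interrogation windows.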

  14. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced first, although their interpolation effects need further improvement. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have a relatively better interpolating performance. Among the ordinary interpolation methods, the symmetrical cubic kernel interpolations demonstrate a strong advantage on the whole, especially the symmetrical cubic B-spline interpolation, although they are very time-consuming. As for the general partial volume interpolation methods, the symmetrical interpolations show a certain superiority in the total error of image self-registration, but considering processing efficiency, the asymmetrical interpolations are better.
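The ordering the authors report — cubic kernels beating simpler ones on smooth content, at higher cost — is easy to reproduce in 1D by comparing a linear kernel against Keys' cubic convolution kernel (the interpolating cubic with a = −0.5; note this is cubic convolution, not the B-spline variant the paper favours). A sketch on a hypothetical smooth test signal:

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel; a = -0.5 gives the Catmull-Rom spline."""
    x = np.abs(x)
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2)*x[m1]**3 - (a + 3)*x[m1]**2 + 1
    out[m2] = a*x[m2]**3 - 5*a*x[m2]**2 + 8*a*x[m2] - 4*a
    return out

def linear_kernel(x):
    return np.maximum(0.0, 1.0 - np.abs(x))

def resample(samples, t, kernel, support):
    """Interpolate unit-spaced samples at fractional positions t by summing
    kernel-weighted neighbours within the kernel's support."""
    base = np.floor(t).astype(int)
    vals = np.zeros_like(t, dtype=float)
    for k in range(-support + 1, support + 1):
        idx = np.clip(base + k, 0, len(samples) - 1)
        vals += samples[idx] * kernel(t - (base + k))
    return vals
```

On a smooth signal the 4-tap cubic is noticeably more accurate than 2-tap linear interpolation, at twice the taps per output sample — a miniature version of the accuracy-versus-runtime trade-off the paper measures.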

  15. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and coherent point drift points set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) points set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibrations correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved to be able to deal with the images provided by this camera frequently characterized by low contrast and a high level of blurring and noise.

  16. Method for estimating modulation transfer function from sample images.

    Science.gov (United States)

    Saiga, Rino; Takeuchi, Akihisa; Uesugi, Kentaro; Terada, Yasuko; Suzuki, Yoshio; Mizutani, Ryuta

    2018-02-01

    The modulation transfer function (MTF) represents the frequency domain response of imaging modalities. Here, we report a method for estimating the MTF from sample images. Test images were generated from a number of images, including those taken with an electron microscope and with an observation satellite. These original images were convolved with point spread functions (PSFs) including those of circular apertures. The resultant test images were subjected to a Fourier transformation. The logarithm of the squared norm of the Fourier transform was plotted against the squared distance from the origin. Linear correlations were observed in the logarithmic plots, indicating that the PSF of the test images can be approximated with a Gaussian. The MTF was then calculated from the Gaussian-approximated PSF. The obtained MTF closely coincided with the MTF predicted from the original PSF. The MTF of an x-ray microtomographic section of a fly brain was also estimated with this method. The obtained MTF showed good agreement with the MTF determined from an edge profile of an aluminum test object. We suggest that this approach is an alternative way of estimating the MTF, independently of the image type. Copyright © 2017 Elsevier Ltd. All rights reserved.
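
The procedure above can be sketched numerically. The example below is a minimal illustration under the assumption that the sample image is white noise blurred by a Gaussian PSF: the log of the squared Fourier magnitude is fitted linearly against squared spatial frequency, the slope gives the Gaussian PSF width, and the MTF follows. The image and PSF width are synthetic, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_true = 256, 2.0

# White-noise "sample image" blurred with a Gaussian PSF (Fourier-domain product).
img = rng.standard_normal((n, n))
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
k2 = fx**2 + fy**2                                  # squared frequency (cycles/pixel)^2
mtf_true = np.exp(-2 * np.pi**2 * sigma_true**2 * k2)
blurred = np.fft.ifft2(np.fft.fft2(img) * mtf_true).real

# Log squared FT norm vs squared frequency: for a Gaussian PSF the plot is
# linear with slope -4 * pi^2 * sigma^2.
power = np.abs(np.fft.fft2(blurred))**2
mask = (k2 > 0) & (k2 < 0.05)                       # skip DC and the noisy tail
slope, _ = np.polyfit(k2[mask].ravel(), np.log(power[mask].ravel()), 1)
sigma_est = np.sqrt(-slope / (4 * np.pi**2))
mtf_est = np.exp(-2 * np.pi**2 * sigma_est**2 * k2)  # MTF of the fitted Gaussian PSF
```

The recovered width closely matches the true PSF width, mirroring the paper's observation that the estimated MTF coincides with the MTF predicted from the original PSF.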

  17. Comparison between two braking control methods integrating energy recovery for a two-wheel front driven electric vehicle

    International Nuclear Information System (INIS)

    Itani, Khaled; De Bernardinis, Alexandre; Khatir, Zoubir; Jammal, Ahmad

    2016-01-01

    Highlights: • Comparison between two braking methods for an EV maximizing the energy recovery. • Wheel slip-ratio control based on robust sliding mode and ECE R13 control methods. • Regenerative braking control strategy. • Energy recovery of a HESS with respect to road surface type and road condition. - Abstract: This paper presents a comparison between two braking methods for a two-wheel front-driven electric vehicle, maximizing the energy recovered by the Hybrid Energy Storage System. The first method controls the wheel slip ratio during braking using a robust sliding-mode controller. The second method is based on ECE R13H constraints for an M1 passenger vehicle. The vehicle model used for simulation is a simplified five-degrees-of-freedom model, driven by two 30 kW permanent magnet synchronous motors (PMSMs) that recover energy during braking phases. Several simulations of extreme braking conditions were performed and compared on various road surface types using Matlab/Simulink®. For an initial speed of 80 km/h, the results show that the difference in energy recovery efficiency between the two braking control methods favors the ECE-constraint control method, varying from 3.7% for a high-friction road surface to 11.2% for a medium-friction road surface. For a low-friction road surface, the difference reaches 6.6%, for reasons discussed in the paper. Deceleration stability is also discussed in detail.

  18. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application in medical image analysis: it spatially matches a series of mutually overlapping images and builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-slicing pseudo-color enhancement was first used to map gray levels to pseudo-color and to extract SIFT features from the images. Then, using the normalized cross-correlation (NCC) similarity measure, the RANSAC (Random Sample Consensus) method was used to exclude false feature points and complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using multilevel wavelet decomposition. Experiments show that the method effectively improves the precision and automation of medical image mosaicking, and provides an effective technical approach for automatic medical image mosaicking.
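
The RANSAC match-filtering step described above can be sketched in isolation. The toy below, an assumption-laden simplification of the paper's pipeline (no SIFT, no NCC, no wavelet fusion), estimates only a pure translation between matched keypoints while rejecting mismatched pairs; all coordinates are synthetic.

```python
import numpy as np

def ransac_translation(p, q, n_iter=200, tol=2.0, seed=0):
    """RANSAC estimate of the translation mapping keypoints p -> q,
    robust to mismatched (outlier) pairs. Minimal sample = one pair."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = q[0] - p[0], 0
    for _ in range(n_iter):
        i = rng.integers(len(p))
        t = q[i] - p[i]                 # candidate translation from one pair
        inliers = np.sum(np.linalg.norm(p + t - q, axis=1) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    mask = np.linalg.norm(p + best_t - q, axis=1) < tol
    return (q[mask] - p[mask]).mean(axis=0)   # refit on the consensus set
```

A full mosaic would estimate a homography the same way (minimal sample of four pairs) before blending.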

  19. Study of the Influence of Age in 18F-FDG PET Images Using a Data-Driven Approach and Its Evaluation in Alzheimer’s Disease

    Directory of Open Access Journals (Sweden)

    Jiehui Jiang

    2018-01-01

    Full Text Available Objectives. The 18F-FDG PET scan is one of the most frequently used neuroimaging scans. However, the influence of age has proven to be the greatest interfering factor in many clinical dementia diagnoses when analyzing 18F-FDG PET images, since radiologists encounter difficulties when deciding whether the abnormalities in specific regions correlate with normal aging, disease, or both. In the present paper, the authors aimed to define specific brain regions and determine an age-correction mathematical model. Methods. A data-driven approach was used based on 255 healthy subjects. Results. The inferior frontal gyrus, the left medial part and the left medial orbital part of the superior frontal gyrus, the right insula, the left anterior cingulate, the left median cingulate and paracingulate gyri, and the bilateral superior temporal gyri were found to have a strong negative correlation with age. For evaluation, an age-correction model was applied to 262 healthy subjects and 50 AD subjects selected from the ADNI database, and partial correlations between mean SUVR and three clinical results were computed before and after age correction. Conclusion. All correlation coefficients were significantly improved after the age correction. The proposed model was effective in the age correction of both healthy and AD subjects.
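
The core of such an age-correction model is a regression of regional uptake on age in healthy subjects, whose fitted slope is then removed from every scan. The sketch below assumes a simple linear model and uses synthetic numbers (not ADNI data); the reference age of 60 is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training cohort: mean SUVR in an age-sensitive region
# declines roughly linearly with age (values are synthetic).
age = rng.uniform(40, 85, 255)
suvr = 1.60 - 0.008 * (age - 60.0) + rng.normal(0.0, 0.02, 255)

# Fit the age effect on healthy subjects ...
slope, intercept = np.polyfit(age, suvr, 1)
# ... and remove it, referencing every subject to a common age of 60.
suvr_corrected = suvr - slope * (age - 60.0)
```

After correction the residual uptake no longer tracks age, so remaining deviations are easier to attribute to disease.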

  20. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed the diffusion tensors, with the orientation of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately according to the situation. Finally, we obtained the tensor at the interpolation point by calculating a weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the tensor determinant, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
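
The general spectral idea, interpolating tensor orientation and eigenvalues separately, can be sketched as below. This is a hedged illustration, not the authors' improved algorithm: orientation is slerped on the quaternion sphere via SciPy, eigenvalues are interpolated log-linearly (which keeps the determinant monotone), and the eigenvector ordering/sign correspondence between the two tensors is glossed over.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interp_tensor(D0, D1, t):
    """Interpolate two symmetric positive-definite diffusion tensors at t in [0,1]."""
    w0, V0 = np.linalg.eigh(D0)            # ascending eigenvalues, orthonormal frames
    w1, V1 = np.linalg.eigh(D1)
    # Make the frames right-handed so they are valid rotations (quaternions).
    if np.linalg.det(V0) < 0: V0[:, 0] *= -1
    if np.linalg.det(V1) < 0: V1[:, 0] *= -1
    # Orientation: spherical linear interpolation between the two frames.
    R = Slerp([0.0, 1.0], Rotation.from_matrix([V0, V1]))(t).as_matrix()
    # Size: log-linear eigenvalue interpolation -> monotone determinant.
    w = np.exp((1 - t) * np.log(w0) + t * np.log(w1))
    return R @ np.diag(w) @ R.T
```

At the endpoints the original tensors are reproduced exactly, and the interpolated determinant stays between the two endpoint determinants.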

  1. Non-image-forming light driven functions are preserved in a mouse model of autosomal dominant optic atrophy.

    Directory of Open Access Journals (Sweden)

    Georgia Perganta

    Full Text Available Autosomal dominant optic atrophy (ADOA is a slowly progressive optic neuropathy that has been associated with mutations of the OPA1 gene. In patients, the disease primarily affects the retinal ganglion cells (RGCs and causes optic nerve atrophy and visual loss. A subset of RGCs are intrinsically photosensitive, express the photopigment melanopsin and drive non-image-forming (NIF visual functions including light driven circadian and sleep behaviours and the pupil light reflex. Given the RGC pathology in ADOA, disruption of NIF functions might be predicted. Interestingly in ADOA patients the pupil light reflex was preserved, although NIF behavioural outputs were not examined. The B6; C3-Opa1(Q285STOP mouse model of ADOA displays optic nerve abnormalities, RGC dendropathy and functional visual disruption. We performed a comprehensive assessment of light driven NIF functions in this mouse model using wheel running activity monitoring, videotracking and pupillometry. Opa1 mutant mice entrained their activity rhythm to the external light/dark cycle, suppressed their activity in response to acute light exposure at night, generated circadian phase shift responses to 480 nm and 525 nm pulses, demonstrated immobility-defined sleep induction following exposure to a brief light pulse at night and exhibited an intensity dependent pupil light reflex. There were no significant differences in any parameter tested relative to wildtype littermate controls. Furthermore, there was no significant difference in the number of melanopsin-expressing RGCs, cell morphology or melanopsin transcript levels between genotypes. Taken together, these findings suggest the preservation of NIF functions in Opa1 mutants. The results provide support to growing evidence that the melanopsin-expressing RGCs are protected in mitochondrial optic neuropathies.

  2. EXPRESS METHOD OF BARCODE GENERATION FROM FACIAL IMAGES

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-03-01

    Full Text Available In this paper, a method for generating standard linear barcodes from facial images is proposed. The method is based on the histogram of facial image brightness, averaging of the histogram over a limited number of intervals, quantization of the results into the range of decimal digits from 0 to 9, and table conversion into the final barcode. The proposed solution is computationally low-cost and does not require specialized image-processing software, which allows facial barcodes to be generated on mobile systems; the proposed method can therefore be regarded as an express method. Tests on the Face94 and CUHK Face Sketch FERET databases showed that the proposed method is a new solution for real-world practice and ensures the stability of the generated barcodes under changes of scale, pose and mirroring of a facial image, as well as changes of facial expression and shadows on faces from local lighting. Because the standard barcode is generated directly from the facial image, it contains subjective information about a person's face.
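
The histogram-average-quantize chain described above can be sketched as follows. This is a minimal guess at the procedure, not the authors' exact tables: the number of intervals (16) and the min-max quantization to 0..9 are illustrative assumptions, and the final conversion of digits into printable bar widths is omitted.

```python
import numpy as np

def face_barcode(gray_img, n_bars=16):
    """Sketch of the histogram-based express barcode: brightness histogram,
    averaged over a limited number of intervals, quantized to digits 0-9.
    (Encoding the digits into an EAN-style printable barcode is omitted.)"""
    hist, _ = np.histogram(gray_img, bins=256, range=(0, 256))
    bars = hist.reshape(n_bars, -1).mean(axis=1)   # average per interval
    span = np.ptp(bars)
    digits = np.round(9 * (bars - bars.min()) / (span if span else 1)).astype(int)
    return "".join(map(str, digits))
```

Because only a global histogram is used, the digit string is unchanged under mirroring and approximately stable under rescaling, which is consistent with the robustness the paper reports.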

  3. Minimization of energy consumption in HVAC systems with data-driven models and an interior-point method

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Xu, Guanglin; Zhang, Zijun

    2014-01-01

    Highlights: • We study the energy saving of HVAC systems with a data-driven approach. • We conduct an in-depth analysis of the topology of the developed neural-network-based HVAC model. • We apply the interior-point method to solve a neural-network-based HVAC optimization model. • Uncertain building occupancy is incorporated in the minimization of HVAC energy consumption. • A significant potential for saving HVAC energy is discovered. - Abstract: In this paper, a data-driven approach is applied to minimize the energy consumption of a heating, ventilating, and air conditioning (HVAC) system while maintaining the thermal comfort of a building with an uncertain occupancy level. The uncertainty in the arrival and departure rates of occupants is modeled by Poisson and uniform distributions, respectively. The internal heating gain is calculated from the stochastic process of building occupancy. Based on the observed and simulated data, a multilayer perceptron algorithm is employed to model and simulate the HVAC system. The data-driven models accurately predict future performance of the HVAC system based on the control settings and the observed historical information. An optimization model is formulated and solved with the interior-point method, and the optimization results are compared with the results produced by the simulation models.
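
The optimization step, minimizing predicted energy subject to a comfort constraint, can be illustrated with SciPy's `trust-constr` solver (an interior-point-style constrained method). The quadratic "energy" and linear "zone temperature" functions below are toy stand-ins for the paper's trained multilayer perceptron; the control variables and comfort band are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy surrogate for the learned HVAC model: energy use and predicted zone
# temperature as functions of two controls (e.g. supply-air temperature
# offset and fan speed). The real model would be a neural network.
energy = lambda x: 2.0 * x[0]**2 + 1.5 * x[1]**2 + x[0] * x[1]
zone_temp = lambda x: 22.0 + 0.8 * x[0] - 0.3 * x[1]

comfort = NonlinearConstraint(zone_temp, 21.0, 24.0)   # comfort band, deg C
res = minimize(energy, x0=[1.0, 1.0], method="trust-constr",
               bounds=[(-3.0, 3.0), (0.0, 3.0)], constraints=[comfort])
```

The solver drives energy toward its constrained minimum while keeping the predicted temperature inside the comfort band, which is exactly the structure of the paper's formulation.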

  4. Spectral analysis of mammographic images using a multitaper method

    International Nuclear Information System (INIS)

    Wu Gang; Mainprize, James G.; Yaffe, Martin J.

    2012-01-01

    Purpose: Power spectral analysis in radiographic images is conventionally performed using a windowed overlapping averaging periodogram. This study describes an alternative approach using a multitaper technique and compares its performance with that of the standard method. This tool will be valuable in power spectrum estimation of images, whose content deviates significantly from uniform white noise. The performance of the multitaper approach will be evaluated in terms of spectral stability, variance reduction, bias, and frequency precision. The ultimate goal is the development of a useful tool for image quality assurance. Methods: A multitaper approach uses successive data windows of increasing order. This mitigates spectral leakage allowing one to calculate a reduced-variance power spectrum. The multitaper approach will be compared with the conventional power spectrum method in several typical situations, including the noise power spectra (NPS) measurements of simulated projection images of a uniform phantom, NPS measurement of real detector images of a uniform phantom for two clinical digital mammography systems, and the estimation of the anatomic noise in mammographic images (simulated images and clinical mammograms). Results: Examination of spectrum variance versus frequency resolution and bias indicates that the multitaper approach is superior to the conventional single taper methods in the prevention of spectrum leakage and variance reduction. More than four times finer frequency precision can be achieved with equivalent or less variance and bias. Conclusions: Without any shortening of the image data length, the bias is smaller and the frequency resolution is higher with the multitaper method, and the need to compromise in the choice of regions of interest size to balance between the reduction of variance and the loss of frequency resolution is largely eliminated.
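
The multitaper idea, averaging several orthogonal-taper eigenspectra to cut variance, can be shown in a few lines with SciPy's Slepian (DPSS) windows. The signal here is synthetic 1-D white noise rather than a mammographic ROI, and NW = 4 with 7 tapers is a common illustrative choice, not the paper's setting.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(4)
x = rng.standard_normal(1024)            # white noise: flat true spectrum

# K orthogonal Slepian (DPSS) tapers with time-bandwidth product NW = 4.
tapers = dpss(len(x), NW=4, Kmax=7)      # shape (7, 1024), unit-energy tapers
# Average the K eigenspectra -> reduced-variance multitaper PSD estimate.
mt_psd = np.mean(np.abs(np.fft.rfft(tapers * x, axis=1))**2, axis=0)
# Single-taper (plain periodogram) estimate for comparison.
raw_psd = np.abs(np.fft.rfft(x))**2 / len(x)
```

For a flat spectrum the multitaper estimate's variance across frequency is roughly K times smaller than the periodogram's, the variance-reduction property the study exploits.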

  5. Reconstruction of CT images by the Bayes- back projection method

    CERN Document Server

    Haruyama, M; Takase, M; Tobita, H

    2002-01-01

    In the course of research on quantitative assay for non-destructive measurement of radioactive waste, we have developed a unique program, based on Bayesian theory, for the reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT usually employs the filtered back projection method. The new image reconstruction program reported here is based on the Bayesian back projection method, and it iteratively improves the image at every measurement step. That is, the method can promptly display a cross-section image corresponding to each angled projection as the data are measured, so an improved cross-section view can be observed in almost real time as each projection is incorporated. From the basic theory of the Bayesian back projection method, it can be applied to CT of the 1st, 2nd, and 3rd generations. This report deals with a reconstruction program of cross-section images in the CT of ...

  6. Multi-crack imaging using nonclassical nonlinear acoustic method

    International Nuclear Information System (INIS)

    Zhang Lue; Zhang Ying; Liu Xiao-Zhou; Gong Xiu-Fen

    2014-01-01

    Solid materials with cracks exhibit nonclassical nonlinear acoustic behavior. Micro-defects in solid materials can be detected by the nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. When defects lie in a viscoelastic solid material at different distances from one another, the nonlinear and hysteretic stress-strain relation is established with the Preisach-Mayergoyz (PM) model in the crack zone. Pulse inversion (PI) and TR methods are used in numerical simulation, and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, maximum-value imaging with a time window is introduced to analyze how defects affect each other and how the fake one occurs. Furthermore, the NEWS-TR-NEWS method is put forward to improve the NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by the locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. The NEWS-TR-NEWS method is proved to be effective in distinguishing real defects from false-positive ones. Moreover, it is also helpful for detecting cracks that are weaker than others during the imaging procedure.

  7. Multi-crack imaging using nonclassical nonlinear acoustic method

    Science.gov (United States)

    Zhang, Lue; Zhang, Ying; Liu, Xiao-Zhou; Gong, Xiu-Fen

    2014-10-01

    Solid materials with cracks exhibit nonclassical nonlinear acoustic behavior. Micro-defects in solid materials can be detected by the nonlinear elastic wave spectroscopy (NEWS) method with a time-reversal (TR) mirror. When defects lie in a viscoelastic solid material at different distances from one another, the nonlinear and hysteretic stress-strain relation is established with the Preisach-Mayergoyz (PM) model in the crack zone. Pulse inversion (PI) and TR methods are used in numerical simulation, and defect locations can be determined from images obtained by the maximum value. Since false-positive defects might appear and degrade the imaging when the defects are located quite closely, maximum-value imaging with a time window is introduced to analyze how defects affect each other and how the fake one occurs. Furthermore, the NEWS-TR-NEWS method is put forward to improve the NEWS-TR scheme, with another forward propagation (NEWS) added to the existing phases (NEWS and TR). In the added phase, scanner locations are determined by the locations of all defects imaged in previous phases, so that whether an imaged defect is real can be deduced. The NEWS-TR-NEWS method is proved to be effective in distinguishing real defects from false-positive ones. Moreover, it is also helpful for detecting cracks that are weaker than others during the imaging procedure.

  8. A Quick and Affine Invariance Matching Method for Oblique Images

    Directory of Open Access Journals (Sweden)

    XIAO Xiongwu

    2015-04-01

    Full Text Available This paper proposes a quick, affine-invariant matching method for oblique images. It calculates an initial affine matrix by making full use of the two estimated camera-axis orientation parameters of an oblique image, recovers a rectified image by applying the inverse affine transform, and then matches the rectified images with the SIFT method. We used the nearest neighbor distance ratio (NNDR), normalized cross-correlation (NCC) measure constraints and a consistency check to obtain coarse matches, then used the RANSAC method to calculate the fundamental matrix and the homography matrix. The matches that were inliers of the homography estimation were retained, and the average of their principal-direction differences was calculated. During the matching process, we obtained initial matching features with the nearest-neighbor (NN) strategy, then used epipolar constraints, homography constraints, NCC measure constraints and a consistency check of the initial matches' principal-direction differences against that average to eliminate false matches. Experiments conducted on three pairs of typical oblique images demonstrate that our method takes about the same time as SIFT to match a pair of oblique images, yielding plenty of evenly distributed corresponding points and an extremely low mismatching rate.

  9. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When resampling an image to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display. Interpolation fits a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained with Digora, CDR, and scanning of Ektaspeed Plus periapical radiographs of a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method provides the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, (7) gray segment expansion. The resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient values. The obtained results were as follows. 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora and film scan (P 0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better edge sharpness was obtained with the gray segment expansion method.
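
Three of the compared kernels (nearest neighbor, linear, cubic spline) are directly available through SciPy's spline-order parameter. The sketch below is only an illustration of the resampling step; the tiny ramp image is synthetic, and the study's other kernels (facet model, cubic convolution, gray segment expansion) are not standard SciPy options.

```python
import numpy as np
from scipy.ndimage import zoom

# A small synthetic patch standing in for a periapical image region;
# real radiographs would be read from file.
img = np.arange(64, dtype=float).reshape(8, 8)

# Resample to 4x as many points with three of the compared kernels.
nearest = zoom(img, 4, order=0)   # nearest neighbor: fastest, blocky
bilinear = zoom(img, 4, order=1)  # linear: smoother, some blurring
cubic = zoom(img, 4, order=3)     # cubic spline: among the best in the study
```

Nearest-neighbor output contains only original pixel values (hence the blocking artifacts), while the spline orders synthesize intermediate values.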

  10. A new method of the light irradiation image by the computed radiography (imaging plate) system

    International Nuclear Information System (INIS)

    Aiba, Susumu; Nishi, Katsuki.

    1997-01-01

    There are two methods for obtaining gamma-ray irradiation images for medical diagnosis. One is to use a scintillation camera designed for gamma rays; the other is to use the photostimulable luminescence of an imaging plate (IP) system, read out by secondary excitation, designed for X-rays. From the standpoint of the spatial resolution of the overall medical image obtained with gamma rays, the first can acquire the image in a short time but gives poor image quality, while the second gives good image quality but requires a long acquisition time because the plate is insensitive to gamma rays. We report an improvement of the IP's weak point by our proposed method, supported by our clinical and quantitative analysis data, using the highly efficient IP (ST-III). We improved the imaging time (from 30 minutes to 20 minutes) and the processing time (from 33-50 minutes to 27 minutes) relative to the former method on an organism. We strongly believe that our convenient improved method and our clinical quantitative analysis data can contribute to wide application, as well as to improved quality of clinical diagnosis using gamma rays. (author)

  11. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
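
The generic kernel-method idea is to represent the image as x = K @ alpha, where the kernel matrix K is built from a high-count composite image, so the forward model becomes y = P K alpha. The sketch below builds a dense Gaussian kernel matrix on a tiny image purely to show that structure; the paper's HYPR kernel is constructed differently, and in practice K is kept sparse via nearest neighbors.

```python
import numpy as np

def gaussian_kernel_matrix(composite, sigma=0.5):
    """Pixel-by-pixel kernel matrix K from a high-count composite image,
    so a low-count frame can be represented as x = K @ alpha. Dense and
    tiny here for illustration only (generic kernel method, not HYPR)."""
    f = composite.reshape(-1, 1)                   # one intensity feature per pixel
    d2 = (f - f.T)**2                              # squared feature distances
    K = np.exp(-d2 / (2.0 * sigma**2))             # Gaussian similarity weights
    return K / K.sum(axis=1, keepdims=True)        # row-normalize
```

Pixels with similar composite-image intensity share weight, which is how high-count prior information regularizes the low-count reconstruction without an explicit penalty term.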

  12. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
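
The Ogata-Banks solution adopted as the data model above has a standard closed form for a constant-concentration source at x = 0 in an initially clean 1-D medium. The implementation below follows that textbook form; the parameter values in the usage test are arbitrary illustrations, not the EIT site's fitted values.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advection-dispersion solution for steady flow: constant
    concentration c0 injected at x = 0 into an initially clean medium
    with velocity v and dispersion coefficient D."""
    a = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / a)
                       + np.exp(v * x / D) * erfc((x + v * t) / a))
```

At long times the concentration at a fixed point approaches the injected value c0, and at any instant it decreases with distance downstream, the breakthrough behavior the monitoring data are fitted against.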

  13. Bias in calculated keff from subcritical measurements by the 252Cf-source-driven noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; Valentine, T.E.

    1995-01-01

    The development of MCNP-DSP, which allows direct calculation of the measured time and frequency analysis parameters from subcritical measurements using the 252 Cf-source-driven noise analysis method, permits the validation of calculational methods for criticality safety with in-plant subcritical measurements. In addition, a method of obtaining the bias in the calculations, which is essential to the criticality safety specialist, is illustrated using the results of measurements with 17.771-cm-diam, enriched (93.15), unreflected, and unmoderated uranium metal cylinders. For these uranium metal cylinders the bias obtained using MCNP-DSP and ENDF/B-V cross-section data increased in magnitude with subcriticality. For a critical experiment [height (h) = 12.629 cm], it was -0.0061 ± 0.0003. For a 10.16-cm-high cylinder (k ∼ 0.93), it was -0.0060 ± 0.0016, and for a more subcritical cylinder (h = 8.13 cm, k ∼ 0.85), the bias was -0.0137 ± 0.0037, more than a factor of 2 larger in magnitude. This method allows the nuclear criticality safety specialist to establish the bias in calculational methods for criticality safety from in-plant subcritical measurements by the 252 Cf-source-driven noise analysis method.

  14. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
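
PBM3D itself is too involved for a short sketch, but the quantity that noise corrupts, the degree of linear polarization, is computed directly from four polarizer-channel images via the linear Stokes parameters. The function below is a standard formulation, not code from the paper; channel names are the usual 0/45/90/135-degree convention.

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Degree of linear polarization from four polarizer-channel images
    (0/45/90/135 deg), via the linear Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # +45 vs -45 degree component
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
```

Because the degree of polarization is a nonlinear ratio of noisy differences, per-channel noise is amplified in it, which is why denoising before this computation improves accuracy.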

  15. Metal artifact reduction method using metal streaks image subtraction

    International Nuclear Information System (INIS)

    Pua, Rizza D.; Cho, Seung Ryong

    2014-01-01

    Many studies have been dedicated to metal artifact reduction (MAR); however, the methods are successful to varying degrees depending on the situation. Sinogram in-painting, filtering, and iterative methods are some of the major categories of MAR, each with its own merits and weaknesses. Combinations of these methods, or hybrid methods, have also been developed to exploit the different benefits of two techniques and minimize their unfavorable results. Our method focuses on the in-painting approach and on a hybrid MAR described by Xia et al. Although the in-painting scheme is an effective technique for reducing the primary metal artifacts, a major drawback is the reintroduction of new artifacts, which can be caused by an inaccurate interpolation process. Furthermore, combining the segmented metal image with the corrected non-metal image in the final step of a conventional in-painting approach causes an issue of incorrect metal pixel values. Our proposed method begins with a sinogram in-painting approach and ends with an image-based metal artifact reduction scheme. This work provides a simple, yet effective solution for reducing metal artifacts while recovering the original metal pixel information. The proposed method demonstrated its effectiveness in a simulation setting, showing image quality comparable to the standard MAR but quantitatively more accurate than the standard MAR.

  16. Combination of acoustical radiosity and the image source method

    DEFF Research Database (Denmark)

    Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho

    2013-01-01

    A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part...

  17. Multi-spectral lifetime imaging: methods and applications

    NARCIS (Netherlands)

    Fereidouni, F.

    2013-01-01

    The aim of this PhD project is to further develop multispectral lifetime imaging hardware and analysis methods. The hardware system, Lambda-Tau, generates a considerable amount of data at high speed. To fully exploit the power of this new hardware, fast and reliable data analysis methods are

  18. Response-driven imaging biomarkers for predicting radiation necrosis of the brain

    International Nuclear Information System (INIS)

    Nazem-Zadeh, Mohammad-Reza; Chapman, Christopher H; Lawrence, Theodore S; Ten Haken, Randall K; Tsien, Christina I; Cao, Yue; Chenevert, Thomas

    2014-01-01

    Radiation necrosis is an uncommon but severe adverse effect of brain radiation therapy (RT). Current predictive models based on radiation dose have limited accuracy. We aimed to identify early individual response biomarkers based upon diffusion tensor (DT) imaging and incorporated them into a response model for prediction of radiation necrosis. Twenty-nine patients with glioblastoma received six weeks of intensity modulated RT and concurrent temozolomide. Patients underwent DT-MRI scans before treatment, at three weeks during RT, and one, three, and six months after RT. Cases with radiation necrosis were classified based on generalized equivalent uniform dose (gEUD) of whole brain and DT index early changes in the corpus callosum and its substructures. Significant covariates were used to develop normal tissue complication probability models using binary logistic regression. Seven patients developed radiation necrosis. Percentage changes of radial diffusivity (RD) in the splenium at three weeks during RT and at six months after RT differed significantly between the patients with and without necrosis (p = 0.05 and p = 0.01). Percentage change of RD at three weeks during RT in the 30 Gy dose–volume of the splenium and brain gEUD combined yielded the best-fit logistic regression model. Our findings indicate that early individual response during the course of RT, assessed by radial diffusivity, has the potential to aid the prediction of delayed radiation necrosis, which could provide guidance in dose-escalation trials. (paper)
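
    At its core, a normal tissue complication probability model of this kind is a binary logistic regression on a few covariates (e.g. brain gEUD and an early RD change). As a rough illustration of the idea, not of the authors' fitted model, here is a minimal numpy sketch that fits a logistic model by batch gradient descent; the data and variable names are entirely hypothetical:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    # Batch gradient descent on the binary cross-entropy loss.
    X1 = np.hstack([np.ones((len(X), 1)), X])  # prepend intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        w -= lr * X1.T @ (p - y) / len(y)
    return w

def predict_prob(w, X):
    # Predicted complication probability for each covariate row.
    X1 = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-X1 @ w))
```

With a single covariate the fitted curve is the familiar sigmoid dose-response shape.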

  19. Data-driven sampling method for building 3D anatomical models from serial histology

    Science.gov (United States)

    Salunke, Snehal Ulhas; Ablove, Tova; Danforth, Theresa; Tomaszewski, John; Doyle, Scott

    2017-03-01

    In this work, we investigate the effect of slice sampling on 3D models of tissue architecture using serial histopathology. We present a method for using a single fully-sectioned tissue block as pilot data, whereby we build a fully-realized 3D model and then determine the optimal set of slices needed to reconstruct the salient features of the model objects under biological investigation. In our work, we are interested in the 3D reconstruction of microvessel architecture in the trigone region between the vagina and the bladder. This region serves as a potential avenue for drug delivery to treat bladder infection. We collect and co-register 23 serial sections of CD31-stained tissue images (6 μm thick sections), from which four microvessels are selected for analysis. To build each model, we perform semi-automatic segmentation of the microvessels. Subsampled meshes are then created by removing slices from the stack, interpolating the missing data, and reconstructing the mesh. We calculate the Hausdorff distance between the full and subsampled meshes to determine the optimal sampling rate for the modeled structures. In our application, we found that a sampling rate of 50% (corresponding to just 12 slices) was sufficient to recreate the structure of the microvessels without significant deviation from the fully rendered mesh. This pipeline effectively minimizes the number of histopathology slides required for 3D model reconstruction, and can be utilized to either (1) reduce the overall costs of a project, or (2) enable additional analysis on the intermediate slides.
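
    The subsampling criterion above hinges on the Hausdorff distance between the full and subsampled meshes. Treating each mesh simply as a set of vertices, a minimal numpy sketch of the symmetric Hausdorff distance (an illustration, not the authors' pipeline) might look like:

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between point sets a (N, D) and b (M, D):
    # the largest distance from any point in one set to its nearest
    # neighbor in the other set.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large meshes a KD-tree nearest-neighbor query would replace the dense pairwise distance matrix, but the definition is the same.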

  20. Diffusion-weighted magnetic resonance imaging of extraocular muscles in patients with Grave's ophthalmopathy using turbo field echo with diffusion-sensitized driven-equilibrium preparation.

    Science.gov (United States)

    Hiwatashi, A; Togao, O; Yamashita, K; Kikuchi, K; Momosaka, D; Honda, H

    2018-03-20

    The purpose of this study was to correlate diffusivity of extraocular muscles, measured by three-dimensional turbo field echo (3DTFE) magnetic resonance (MR) imaging using diffusion-sensitized driven-equilibrium preparation, with their size and activity in patients with Grave's ophthalmopathy. Twenty-three patients with Grave's ophthalmopathy were included. There were 17 women and 6 men with a mean age of 55.8±12.6 (SD) years (range: 26-83 years). 3DTFE with diffusion-sensitized driven-equilibrium MR images were obtained with b-values of 0 and 500 s/mm². The apparent diffusion coefficient (ADC) of extraocular muscles was measured on coronal reformatted MR images. Signal intensities of extraocular muscles on conventional MR images were compared to those of normal-appearing white matter, and cross-sectional areas of the muscles were also measured. The clinical activity score was also evaluated. Statistical analyses were performed with Pearson correlation and Mann-Whitney U tests. On 3DTFE with diffusion-sensitized driven-equilibrium preparation, the mean ADC of the extraocular muscles was 2.23±0.37 (SD)×10⁻³ mm²/s (range: 1.70×10⁻³ to 3.11×10⁻³ mm²/s). There was a statistically significant moderate correlation between ADC and the size of the muscles (r=0.61). There were no statistically significant correlations between ADC and signal intensity on conventional MR or the clinical activity score. The 3DTFE with diffusion-sensitized driven-equilibrium preparation technique allows quantifying diffusivity of extraocular muscles in patients with Grave's ophthalmopathy. The diffusivity of the extraocular muscles on 3DTFE with diffusion-sensitized driven-equilibrium preparation MR images moderately correlates with their size. Copyright © 2018. Published by Elsevier Masson SAS.

  1. An effective method on pornographic images realtime recognition

    Science.gov (United States)

    Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui

    2013-03-01

    In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library, and a decision tree is trained on these features to produce rules that classify unknown images. In an experiment based on more than twenty thousand images, the precision reached 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s; this shows that the method generalizes well. Within these steps, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space; this model lowers the false detection rate of skin detection. A new method, called sequence region labeling, computes features on binary connected areas faster and with less memory than other, recursive methods.
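
    As a rough illustration of chroma-based skin detection, the numpy sketch below converts RGB to YCbCr chroma (BT.601 coefficients) and applies a simple rectangular Cb/Cr gate. The paper's irregular polygon region would replace this rectangle, so the thresholds here are common textbook values, not the authors':

```python
import numpy as np

def skin_mask(img_rgb):
    # RGB -> YCbCr chroma (BT.601), then a rectangular Cb/Cr gate.
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Widely used rectangular skin region in the Cb/Cr plane.
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
```

Replacing the rectangle test with a point-in-polygon test over an irregular region is what tightens the false detection rate in the paper's model.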

  2. New method for laser driven ion acceleration with isolated, mass-limited targets

    International Nuclear Information System (INIS)

    Paasch-Colberg, T.; Sokollik, T.; Gorling, K.; Eichmann, U.; Steinke, S.; Schnuerer, M.; Nickles, P.V.; Andreev, A.; Sandner, W.

    2011-01-01

    A new technique to investigate laser driven ion acceleration with fully isolated, mass-limited glass spheres with diameters down to 8 μm is presented. A Paul trap was used to prepare a levitating glass sphere for the interaction with a laser pulse of relativistic intensity. Narrow-bandwidth energy spectra of protons and oxygen ions have been observed and were attributed to specific acceleration field dynamics in the case of the spherical target geometry. A general limiting mechanism has been found that explains the experimentally observed ion energies for the mass-limited target.

  3. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Full Text Available Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP, which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
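
    For reference, the classical (homogeneous) atmospheric scattering model that the paper generalizes is I = J·t + A·(1 − t), where J is the scene radiance, A the atmospheric light, and t the transmission. Inverting it is a one-liner once A and t are estimated; this numpy sketch assumes both are already known and is not the paper's improved model or ASP estimator:

```python
import numpy as np

def dehaze(I, A, t, t0=0.1):
    # Invert the scattering model I = J*t + A*(1 - t) to recover J.
    # t is clamped below by t0 to avoid amplifying noise where t -> 0.
    t = np.maximum(t, t0)
    return (I - A) / t + A
```

In practice the hard part is estimating A and t; the average saturation prior described above is one statistic proposed for that estimation.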

  4. Method of Poisson's ratio imaging within a material part

    Science.gov (United States)

    Roth, Don J. (Inventor)

    1996-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data are produced using a longitudinal wave transducer and shear wave data are produced using a shear wave transducer. The respective data are then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
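
    For an isotropic material, Poisson's ratio follows pointwise from the measured longitudinal velocity v_l and shear velocity v_s via the standard elasticity relation ν = (v_l² − 2v_s²) / (2(v_l² − v_s²)). A minimal sketch of that conversion (an illustration of the relation, not the patented imaging method):

```python
def poissons_ratio(vl, vs):
    # Poisson's ratio from longitudinal (vl) and shear (vs) wave speeds,
    # valid for an isotropic elastic solid.
    vl2, vs2 = vl * vl, vs * vs
    return (vl2 - 2.0 * vs2) / (2.0 * (vl2 - vs2))
```

Applying this at every scan position of the two ultrasonic velocity maps yields the Poisson's ratio image to be displayed.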

  5. Interpretation of the method of images in estimating superconducting levitation

    International Nuclear Information System (INIS)

    Perez-Diaz, Jose Luis; Garcia-Prada, Juan Carlos

    2007-01-01

    Among different papers devoted to superconducting levitation of a permanent magnet over a superconductor using the method of images, there is a discrepancy of a factor of two when estimating the lift force. This is not a minor matter but an interesting fundamental question that contributes to understanding the physical phenomena of 'imaging' on a superconductor surface. We solve it, make clear the physical behavior underlying it, and suggest the reinterpretation of some previous experiments

  6. A developed unsharp masking method for images contrast enhancement

    International Nuclear Information System (INIS)

    Zaafouri, A.; Sayadi, M.; Fnaiech, F.

    2011-01-01

    In this paper, we propose a developed unsharp masking process for image contrast enhancement. The main idea is to enhance the dark and bright areas in the same way, which matches the response of the human visual system well. Then, in order to reduce the noise effect, a mean-weighted high-pass filter is used for edge extraction. The proposed method gives satisfactory results for a wide range of low-contrast images compared with other known approaches.
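
    The core of any unsharp masking scheme is to add back a high-pass residual of the image. The numpy sketch below uses a plain 3×3 mean filter to form that residual, rather than the mean-weighted high-pass filter proposed in the paper, so it is a baseline illustration only:

```python
import numpy as np

def box_blur3(img):
    # 3x3 mean filter with edge replication.
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def unsharp(img, amount=1.0):
    # Classic unsharp mask: original + amount * (original - blurred),
    # clipped back to the 8-bit display range.
    img = img.astype(float)
    return np.clip(img + amount * (img - box_blur3(img)), 0, 255)
```

Flat regions are left unchanged (the residual is zero there), while edges are steepened; the clipping step is what the dark/bright symmetric treatment in the paper refines.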

  7. Bin mode estimation methods for Compton camera imaging

    International Nuclear Information System (INIS)

    Ikeda, S.; Odaka, H.; Uemura, M.; Takahashi, T.; Watanabe, S.; Takeda, S.

    2014-01-01

    We study the image reconstruction problem of a Compton camera which consists of semiconductor detectors. The image reconstruction is formulated as a statistical estimation problem. We employ a bin-mode estimation (BME) and extend an existing framework to a Compton camera with multiple scatterers and absorbers. Two estimation algorithms are proposed: an accelerated EM algorithm for the maximum likelihood estimation (MLE) and a modified EM algorithm for the maximum a posteriori (MAP) estimation. Numerical simulations demonstrate the potential of the proposed methods
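
    For context, the classical EM/MLEM update that such accelerated variants build on multiplies the current estimate by back-projected ratios of measured to predicted counts. A minimal numpy sketch for a generic nonnegative system matrix A (not the authors' Compton camera model or bin-mode estimator):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    # Multiplicative EM updates for Poisson maximum-likelihood
    # reconstruction: x <- x * A^T(y / Ax) / A^T 1.
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The update preserves nonnegativity automatically, which is one reason EM-type iterations are popular in emission imaging; MAP variants add a penalty term to each update.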

  8. Cross-relaxation imaging: methods, challenges and applications

    International Nuclear Information System (INIS)

    Stikov, Nikola

    2010-01-01

    An overview of quantitative magnetization transfer (qMT) is given, with focus on cross relaxation imaging (CRI) as a fast method for quantifying the proportion of protons bound to complex macromolecules in tissue. The procedure for generating CRI maps is outlined, showing examples in the human brain and knee, and discussing the caveats and challenges in generating precise and accurate CRI maps. Finally, several applications of CRI for imaging tissue microstructure are presented.(Author)

  9. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that gather data, such as images and voice, from their surroundings. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. The available methods for sub-region selection and conversion are also proposed.

  10. An Efficient Evolutionary Based Method For Image Segmentation

    OpenAIRE

    Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad

    2017-01-01

    The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation which is a model inspired from human behavior. Based on this model, a four layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form centers of finals segments by merging similar primary regions. In the t...

  11. Sharpening methods for images captured through Bayer matrix

    Science.gov (United States)

    Kalevo, Ossi; Rantanen, Henry, Jr.

    2003-05-01

    Image resolution and sharpness are essential criteria for a human observer when estimating image quality. Typically, cheap, small-sized, low-resolution CMOS camera sensors do not provide sharp enough images, at least when compared to high-end digital cameras. A sharpening function can be used to increase the subjective sharpness seen by the observer. In this paper, a few methods for sharpening images captured by CMOS imaging sensors through a color filter array (CFA) are compared. Sharpening also easily increases the visibility of noise, pixel cross-talk, and interpolation artifacts; necessary arrangements to avoid amplifying these unwanted phenomena are discussed. By applying sharpening only to the green component, the processing power requirements can be clearly reduced. By adjusting the red and blue component sharpness according to the green component sharpening, the creation of false colors is greatly reduced. A direction-search sharpening method can be used to reduce the amplification of the artifacts caused by CFA interpolation (CFAI). The comparison of the presented methods is based mainly on subjective image quality. The processing power and memory requirements are also considered.

  12. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
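
    The final PCA step can be illustrated on its own: treating each pixel's spectrum as a sample, project onto the k leading spectral eigenvectors of the band covariance. A minimal numpy sketch, independent of the wavelet and subspace stages described above:

```python
import numpy as np

def pca_bands(cube, k):
    # Project a (H, W, B) hyperspectral cube onto its k leading
    # spectral principal components.
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)   # one row per pixel spectrum
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / max(len(Xc) - 1, 1)   # B x B band covariance
    _, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    comps = vecs[:, ::-1][:, :k]            # k leading eigenvectors
    scores = (Xc @ comps).reshape(H, W, k)
    return scores, comps, mean
```

Keeping the scores, components, and mean is enough to reconstruct an approximation of the cube, which is where the compression gain comes from when k is much smaller than B.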

  13. On the pinned field image binarization for signature generation in image ownership verification method

    Directory of Open Access Journals (Sweden)

    Chang Hsuan

    2011-01-01

    Full Text Available The issue of pinned field image binarization for signature generation in the ownership verification of a protected image is investigated. The pinned field captures the texture information of the protected image and can be employed to enhance watermark robustness. In the proposed method, four optimization schemes are utilized to determine the threshold values for transforming the pinned field into a binary feature image, which is then utilized to generate an effective signature image. Experimental results show that the utilization of optimization schemes can significantly improve the signature robustness over the previous method (Lee and Chang, Opt. Eng. 49(9), 097005, 2010). Considering both the watermark retrieval rate and the computation speed, the genetic algorithm is strongly recommended. In addition, compared with Chang and Lin's scheme (J. Syst. Softw. 81(7), 1118-1129, 2008), the proposed scheme also has better performance.

  14. Fast method of constructing image correlations to build a free network based on image multivocabulary trees

    Science.gov (United States)

    Zhan, Zongqian; Wang, Xin; Wei, Minglu

    2015-05-01

    In image-based three-dimensional (3-D) reconstruction, one topic of growing importance is how to quickly obtain a 3-D model from a large number of images. The retrieval of the correct and relevant images for the model poses a considerable technological challenge. The "image vocabulary tree" has been proposed as a method to search for similar images. However, a significant drawback of this approach is identified in its low time efficiency and barely satisfactory classification result. The method proposed is inspired by, and improves upon, some recent methods. Specifically, vocabulary quality is considered and multivocabulary trees are designed to improve the classification result. A marked improvement was, indeed, observed in our evaluation of the proposed method. To improve time efficiency, graphics processing unit (GPU) computer unified device architecture parallel computation is applied in the multivocabulary trees. The results of the experiments showed that the GPU was three to four times more efficient than the enumeration matching and CPU methods when the number of images is large. This paper presents a reliable reference method for the rapid construction of a free network to be used for the computing of 3-D information.

  15. Splitting methods in communication, imaging, science, and engineering

    CERN Document Server

    Osher, Stanley; Yin, Wotao

    2016-01-01

    This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. Therefore, the book presents very versatile aspects of splitting methods and their applications, motivating the cross-fertilization of ideas.

  16. Domain decomposition methods for solving an image problem

    Energy Technology Data Exchange (ETDEWEB)

    Tsui, W.K.; Tong, C.S. [Hong Kong Baptist College (Hong Kong)

    1994-12-31

    The domain decomposition method is a technique to break up a problem so that the ensuing sub-problems can be solved on a parallel computer. In order to improve the convergence rate of the capacitance systems, preconditioned conjugate gradient methods are commonly used. In the last decade, most of the efficient preconditioners have been based on elliptic partial differential equations, making them particularly well suited to solving elliptic problems. In this paper, the authors apply the so-called covering preconditioner, which is based on information about the operator under investigation and is therefore suitable for various kinds of applications. Specifically, they apply the preconditioned domain decomposition method to an image restoration problem: extracting an original image that has been degraded by a known convolution process and additive Gaussian noise.
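
    The preconditioned conjugate gradient iteration referred to above can be sketched with a simple diagonal (Jacobi) preconditioner standing in for the covering preconditioner, which is specific to the paper. A minimal numpy version for a symmetric positive definite system:

```python
import numpy as np

def pcg(A, b, M_diag, tol=1e-10, maxit=500):
    # Conjugate gradient with a diagonal (Jacobi) preconditioner:
    # M_diag holds diag(A), and z = M^{-1} r is an elementwise divide.
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / M_diag
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / M_diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Swapping in a stronger preconditioner only changes the `z = ...` lines; the rest of the iteration, and hence the parallel structure, is unchanged.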

  17. A novel optical gating method for laser gated imaging

    Science.gov (United States)

    Ginat, Ran; Schneider, Ron; Zohar, Eyal; Nesher, Ofer

    2013-06-01

    For the past 15 years, Elbit Systems has been developing time-resolved active laser-gated imaging (LGI) systems for various applications. Traditional LGI systems are based on highly sensitive gated sensors synchronized to pulsed laser sources. Elbit's proprietary multi-pulse-per-frame method, which is implemented in LGI systems, significantly improves imaging quality. A significant characteristic of LGI is its ability to penetrate a disturbing medium, such as rain, haze, and some fog types. Current LGI systems are based on image intensifier (II) sensors, limiting the system in spectral response, image quality, reliability, and cost. A novel proprietary optical gating module was developed at Elbit, untying the dependency of the LGI system on the II. The optical gating module is not bound to the radiance wavelength and is positioned between the system optics and the sensor. This optical gating method supports the use of conventional solid state sensors. By selecting the appropriate solid state sensor, the new LGI systems can operate at any desired wavelength. In this paper we present the new gating method's characteristics and performance, and its advantages over the II gating method. The use of gated imaging systems is described in a variety of applications, including results from the latest field experiments.

  18. Feature extraction from mammographic images using fast marching methods

    International Nuclear Information System (INIS)

    Bottigli, U.; Golosio, B.

    2002-01-01

    Features extraction from medical images represents a fundamental step for shape recognition and diagnostic support. The present work faces the problem of the detection of large features, such as massive lesions and organ contours, from mammographic images. The regions of interest are often characterized by an average grayness intensity that is different from the surrounding. In most cases, however, the desired features cannot be extracted by simple gray level thresholding, because of image noise and non-uniform density of the surrounding tissue. In this work, edge detection is achieved through the fast marching method (Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999), which is based on the theory of interface evolution. Starting from a seed point in the shape of interest, a front is generated which evolves according to an appropriate speed function. Such function is expressed in terms of geometric properties of the evolving interface and of image properties, and should become zero when the front reaches the desired boundary. Some examples of application of such method to mammographic images from the CALMA database (Nucl. Instr. and Meth. A 460 (2001) 107) are presented here and discussed
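
    The front evolution described above assigns each pixel an arrival time T satisfying |∇T| = 1/F for a speed function F that vanishes near the desired boundary. As a rough stand-in for a true fast marching solver, a Dijkstra-style propagation on a 4-connected grid gives a first-order approximation of those arrival times; the speed map here is hypothetical, not derived from mammographic image properties:

```python
import heapq
import numpy as np

def arrival_times(speed, seed):
    # Dijkstra front propagation on a 4-connected grid: a first-order
    # approximation to fast-marching arrival times, with each step
    # costing 1/speed at the pixel being entered.
    h, w = speed.shape
    T = np.full((h, w), np.inf)
    T[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nt = t + 1.0 / speed[ni, nj]
                if nt < T[ni, nj]:
                    T[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return T
```

In the feature-extraction setting, F would be built from image gradients so that the front from a seed inside a lesion slows to a halt at its boundary; thresholding T then yields the segmented region.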

  19. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    Full Text Available To deal with the difficulty of extracting target outlines precisely, caused by neglecting the variation of target scattering characteristics during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. Firstly, several important aspects that affect target feature extraction and SAR image quality are analyzed, including curved orbit, stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed as well. Based on this analysis, the mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency change rule of the target scattering characteristic. Moreover, a fusion imaging strategy and method under high-resolution and ultra-large observation angle range conditions are put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  20. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    Science.gov (United States)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption, and steganography theories. The image is encrypted using a symmetric algorithm; then, the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bit steganographic scheme. The analysis results show that while enjoying the faster computation, our method performs close to optimal in terms of accuracy.
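
    The final hiding step, embedding the encrypted key into the least significant bits of the ciphered image, can be sketched independently of the AES/RSA stages. A minimal numpy example in which the payload bits and image are arbitrary placeholders:

```python
import numpy as np

def embed_lsb(img, bits):
    # Overwrite the least-significant bit of the first len(bits) pixels
    # with the payload bits; each pixel value changes by at most 1.
    out = img.flatten().copy()
    bits = np.asarray(bits, dtype=np.uint8)
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits
    return out.reshape(img.shape)

def extract_lsb(img, n):
    # Read back the first n embedded bits.
    return img.flatten()[:n] & 1
```

Because only the lowest bit plane of a ciphered image is touched, the embedding is visually and statistically inconspicuous, which is why LSB schemes pair naturally with encryption.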

  1. Data-driven modeling and predictive control for boiler-turbine unit using fuzzy clustering and subspace methods.

    Science.gov (United States)

    Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y

    2014-05-01

    This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operation region and develop the structure of the fuzzy model. Then by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of the both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Research on a Hierarchical Dynamic Automatic Voltage Control System Based on the Discrete Event-Driven Method

    Directory of Open Access Journals (Sweden)

    Yong Min

    2013-06-01

    Full Text Available In this paper, concepts and methods of hybrid control systems are adopted to establish a hierarchical dynamic automatic voltage control (HD-AVC system, realizing the dynamic voltage stability of power grids. An HD-AVC system model consisting of three layers is built based on the hybrid control method and discrete event-driven mechanism. In the Top Layer, discrete events are designed to drive the corresponding control block so as to avoid solving complex multiple objective functions, the power system’s characteristic matrix is formed and the minimum amplitude eigenvalue (MAE is calculated through linearized differential-algebraic equations. MAE is applied to judge the system’s voltage stability and security and construct discrete events. The Middle Layer is responsible for management and operation, which is also driven by discrete events. Control values of the control buses are calculated based on the characteristics of power systems and the sensitivity method. Then control values generate control strategies through the interface block. In the Bottom Layer, various control devices receive and implement the control commands from the Middle Layer. In this way, a closed-loop power system voltage control is achieved. Computer simulations verify the validity and accuracy of the HD-AVC system, and verify that the proposed HD-AVC system is more effective than normal voltage control methods.

  3. Ortho Image and DTM Generation with Intelligent Methods

    Science.gov (United States)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Nowadays, artificial intelligence algorithms are being considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimizing image processing programs such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for orthophoto generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 with rational functions and 2D & 3D polynomials was tested. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimization of rational functions and 2D & 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the GA and the rational function method for the Worldview-2 image was 0.930 pixels. As further artificial intelligence optimization methods, neural networks were used. Using a perceptron network on the Worldview-2 image, a result of 0.84 pixels was obtained with 4 neurons in the middle layer. The final conclusion was that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than usual ones. Finally, artificial intelligence methods, such as genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing the weighting methods based on inverse

  4. A systematic desaturation method for images from the Atmospheric Imaging Assembly in the Solar Dynamics Observatory.

    Science.gov (United States)

    Torre, Gabriele; Schwartz, Richard; Piana, Michele; Massone, Anna Maria; Benvenuto, Federico

    2016-05-01

    The fine spatial resolution of the SDO AIA CCDs is often destroyed by the charge in saturated pixels overflowing into a swath of neighboring cells during fast-rising solar flares. Automated exposure control can only mitigate this issue to a degree, and it has other deleterious effects. Our method addresses the desaturation problem for AIA images as an image reconstruction problem in which the information content of the diffraction fringes, generated by the interaction between the incoming radiation and the hardware of the spacecraft, is exploited to recover the true image intensities within the primary saturated core of the image. This methodology takes advantage of well defined techniques such as cross-correlation and the Expectation Maximization method to invert the direct relation between the diffraction fringe intensities and the true flux intensities. This talk provides a complete overview of the structure of the method, along with reliability tests obtained by applying it to synthetic and real data.
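
    For Poisson-distributed image data, the Expectation Maximization inversion mentioned above takes the form of the Richardson-Lucy update. The 1-D sketch below is a toy, not the authors' pipeline: the "flare" profile and diffraction-like kernel are invented, and the cross-correlation stage is omitted.

```python
import numpy as np

# Toy 1-D "flare" profile and a symmetric diffraction-like kernel
# (both invented for illustration).
true = np.zeros(64)
true[30:34] = 100.0
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

def blur(v):
    return np.convolve(v, psf, mode="same")

observed = blur(true)

# Expectation-Maximization (Richardson-Lucy) update for Poisson data:
#   x <- x * H^T(y / H x) / H^T(1)
# For a symmetric kernel, H^T is the same convolution as H.
x = np.ones_like(observed)
norm = blur(np.ones_like(observed))
for _ in range(200):
    x = x * blur(observed / np.maximum(blur(x), 1e-12)) / norm
```

    The update preserves non-negativity and monotonically improves the Poisson likelihood, which is why it suits photon-counting detectors.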

  5. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  6. Studying depression using imaging and machine learning methods

    Directory of Open Access Journals (Sweden)

    Meenal J. Patel

    2016-01-01

    Full Text Available Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  7. Studying depression using imaging and machine learning methods.

    Science.gov (United States)

    Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J

    2016-01-01

    Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.

  8. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Victoria, James; Dempsey, James [ViewRay Incorporated, Inc., Oakwood Village, Ohio 44146 (United States); Ruan, Su [Laboratoire LITIS (EA 4108), Equipe Quantif, University of Rouen, Rouen 76183 (France); Anastasio, Mark [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States)

    2016-08-15

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity

  9. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    International Nuclear Information System (INIS)

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark

    2016-01-01

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity

  10. Quantitative Nuclear Medicine Imaging: Concepts, Requirements and Methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-01-15

    The absolute quantification of radionuclide distribution has been a goal since the early days of nuclear medicine. Nevertheless, the apparent complexity and sometimes limited accuracy of these methods have prevented them from being widely used in important applications such as targeted radionuclide therapy or kinetic analysis. The intricacy of the effects degrading nuclear medicine images and the lack of availability of adequate methods to compensate for these effects have frequently been seen as insurmountable obstacles in the use of quantitative nuclear medicine in clinical institutions. In the last few decades, several research groups have consistently devoted their efforts to the filling of these gaps. As a result, many efficient methods are now available that make quantification a clinical reality, provided appropriate compensation tools are used. Despite these efforts, many clinical institutions still lack the knowledge and tools to adequately measure and estimate the accumulated activities in the human body, thereby using potentially outdated protocols and procedures. The purpose of the present publication is to review the current state of the art of image quantification and to provide medical physicists and other related professionals facing quantification tasks with a solid background of tools and methods. It describes and analyses the physical effects that degrade image quality and affect the accuracy of quantification, and describes methods to compensate for them in planar, single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. The fast paced development of the computational infrastructure, both hardware and software, has made drastic changes in the ways image quantification is now performed. The measuring equipment has evolved from the simple blind probes to planar and three dimensional imaging, supported by SPECT, PET and hybrid equipment. Methods of iterative reconstruction have been developed to allow for

  11. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set in BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
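
    The BRIEF-style fixed test set and Hamming matching underlying TPBRIEF can be sketched roughly as follows; the patch size, number of test pairs, and noise level are arbitrary choices for illustration, not the paper's configuration (and the tangent-plane projection is omitted).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed test set: 128 random point pairs inside a 16x16 patch.
pairs = rng.integers(0, 16, size=(128, 4))  # (x1, y1, x2, y2) per binary test

def describe(patch):
    """BRIEF-style descriptor: one bit per pairwise intensity comparison."""
    return np.array([patch[y1, x1] < patch[y2, x2]
                     for x1, y1, x2, y2 in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Match score: number of differing bits."""
    return int(np.count_nonzero(d1 != d2))

patch = rng.random((16, 16))
noisy = patch + rng.normal(0.0, 0.01, patch.shape)  # nearly identical view
other = rng.random((16, 16))                        # unrelated patch

d, dn, do = describe(patch), describe(noisy), describe(other)
print("same-point distance:", hamming(d, dn), "unrelated:", hamming(d, do))
```

    The binary comparisons make the descriptor cheap to build and the Hamming distance cheap to evaluate, which is what makes these descriptors attractive for real-time matching.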

  12. MR imaging methods for assessing fetal brain development.

    Science.gov (United States)

    Rutherford, Mary; Jiang, Shuzhou; Allsop, Joanna; Perkins, Lucinda; Srinivasan, Latha; Hayat, Tayyib; Kumar, Sailesh; Hajnal, Jo

    2008-05-01

    Fetal magnetic resonance imaging provides an ideal tool for investigating growth and development of the brain in vivo. Current imaging methods have been hampered by fetal motion, but recent advances in image acquisition can produce high signal-to-noise, high-resolution 3-dimensional datasets suitable for objective quantification by state-of-the-art post-acquisition computer programs. Continuing development of imaging techniques will allow a unique insight into the developing brain, more specifically the processes of cell migration, axonal pathway formation, and cortical maturation. Accurate quantification of these developmental processes in the normal fetus will allow us to identify subtle deviations from normal during the second and third trimester of pregnancy, either in the compromised fetus or in infants born prematurely.

  13. A proposed assessment method for image of regional educational institutions

    Directory of Open Access Journals (Sweden)

    Kataeva Natalya

    2017-01-01

    Full Text Available The market of educational services under current Russian economic conditions comprises a huge variety of educational institutions. This market is already experiencing significant influence from the demographic situation in Russia, which means that higher education institutions are forced to compete fiercely for school graduates. Increased competition in the educational market forces universities to find new methods of non-price competition, both to attract potential students and throughout their own educational and economic activities. The commercialization of education places universities in the same plane as commercial companies, which treat a positive perception of image and reputation as a competitive advantage; this approach is equally applicable in the strategic and current activities of higher education institutions to ensure the competitiveness of their educational services and of the institution as a whole. Nevertheless, due to the lack of evidence-based proposals in this area, there is a need for scientific research to justify the organizational and methodological aspects of using image as a factor in the competitiveness of a higher education institution. In theory and in practice there are different methods and ways of evaluating a company's image. The article provides a comparative assessment of existing methods for valuing corporate image and presents the author's method of estimating the image of higher education institutions based on the key influencing factors. The method has been tested on the Vyatka State Agricultural Academy (Russia). The results also indicate the strengths and weaknesses of the institution, highlight ways of improving, and help adjust efforts toward image improvement.

  14. Guidance for Methods Descriptions Used in Preclinical Imaging Papers

    Directory of Open Access Journals (Sweden)

    David Stout

    2013-10-01

    Full Text Available Preclinical molecular imaging is a rapidly growing field, where new imaging systems, methods, and biological findings are constantly being developed or discovered. Imaging systems and the associated software usually have multiple options for generating data, which is often overlooked but is essential when reporting the methods used to create and analyze data. Similarly, the ways in which animals are housed, handled, and treated to create physiologically based data must be well described in order that the findings be relevant, useful, and reproducible. There are frequently new developments for metabolic imaging methods. Thus, specific reporting requirements are difficult to establish; however, it remains essential to adequately report how the data have been collected, processed, and analyzed. To assist with future manuscript submissions, this article aims to provide guidelines of what details to report for several of the most common imaging modalities. Examples are provided in an attempt to give comprehensive, succinct descriptions of the essential items to report about the experimental process.

  15. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy.

    Science.gov (United States)

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa

    2016-08-01

    For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28%  ±  1

  16. Novel axolotl cardiac function analysis method using magnetic resonance imaging.

    Directory of Open Access Journals (Sweden)

    Pedro Gomes Sanches

    Full Text Available The salamander axolotl is capable of complete regeneration of amputated heart tissue. However, non-invasive imaging tools for assessing its cardiac function were so far not employed. In this study, cardiac magnetic resonance imaging is introduced as a non-invasive technique to image heart function of axolotls. Three axolotls were imaged with magnetic resonance imaging using a retrospectively gated Fast Low Angle Shot cine sequence. Within one scanning session the axolotl heart was imaged three times in all planes, consecutively. Heart rate, ejection fraction, stroke volume and cardiac output were calculated using three techniques: (1) combined long-axis, (2) short-axis series, and (3) ultrasound (control for heart rate only). All values are presented as mean ± standard deviation. Heart rate (beats per minute) among different animals was 32.2±6.0 (long axis), 30.4±5.5 (short axis) and 32.7±4.9 (ultrasound) and statistically similar regardless of the imaging method (p > 0.05). Ejection fraction (%) was 59.6±10.8 (long axis) and 48.1±11.3 (short axis), which differed significantly (p = 0.019). Stroke volume (μl/beat) was 133.7±33.7 (long axis) and 93.2±31.2 (short axis), which also differed significantly (p = 0.015). Calculations were consistent among the animals and over three repeated measurements. The heart rate varied depending on depth of anaesthesia. We described a new method for defining and imaging the anatomical planes of the axolotl heart and propose that one of our techniques (long axis analysis) may prove useful in defining cardiac function in regenerating axolotl hearts.
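
    The cardiac indices above follow from standard definitions: stroke volume SV = EDV - ESV, ejection fraction EF = SV / EDV, and cardiac output CO = SV x heart rate. A small worked example, with illustrative volumes rather than the paper's measurements:

```python
# Standard cardiac indices from end-diastolic/end-systolic volumes.
# The numbers below are hypothetical, not the axolotl data from the paper.
edv_ul = 220.0   # end-diastolic volume (microlitres)
esv_ul = 90.0    # end-systolic volume (microlitres)
hr_bpm = 32.0    # heart rate (beats per minute)

stroke_volume = edv_ul - esv_ul                       # microlitres per beat
ejection_fraction = 100.0 * stroke_volume / edv_ul    # percent
cardiac_output = stroke_volume * hr_bpm / 1000.0      # millilitres per minute

print(stroke_volume, round(ejection_fraction, 1), cardiac_output)
# -> 130.0 59.1 4.16
```

    The long-axis vs. short-axis discrepancy reported above enters through the volume estimates (EDV, ESV), which depend on how slices are summed or how the chamber is geometrically modelled.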

  17. Informatics methods to enable sharing of quantitative imaging research data.

    Science.gov (United States)

    Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L

    2012-11-01

    The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing data and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D images of brain of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality, brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the nonoverlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering. Registration is achieved by minimising the K-means variance of the segmentation induced in the other image. Similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image with voxel dimensions of 2x2x6 mm was misaligned. The registered image shows visually accurate registration. The average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary in its application to intersubject registration, due to large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required.
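
    The fine-alignment objective — the K-means variance that one image's segmentation induces in the other — can be sketched in 1-D. Everything below is a toy assumption (piecewise-constant "images", a scalar shift as the only transform, four clusters), not the authors' 3D nine-parameter implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "images": piecewise-constant signal plus noise; b is a shifted copy.
a = np.repeat([10.0, 50.0, 25.0, 80.0], 32) + rng.normal(0.0, 1.0, 128)
b = np.roll(a, 5)  # misaligned by 5 samples

def kmeans_labels(v, k=4, iters=20):
    """Basic 1-D K-means on intensities; returns a cluster label per sample."""
    centers = np.linspace(v.min(), v.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels

labels = kmeans_labels(a)  # segmentation of the reference image

def induced_variance(shift):
    """Variance of b (shifted back) within the segmentation of a."""
    bs = np.roll(b, -shift)
    return sum(bs[labels == j].var() for j in range(4) if np.any(labels == j))

# Registration: search the shift that minimises the induced variance.
scores = {s: induced_variance(s) for s in range(-8, 9)}
best_shift = min(scores, key=scores.get)
print("recovered shift:", best_shift)
```

    When the images are aligned, each segment of the reference covers a homogeneous region of the other image, so the induced variance collapses to the noise level; misalignment mixes regions across segment boundaries and inflates it.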

  19. Neutron imaging integrated circuit and method for detecting neutrons

    Science.gov (United States)

    Nagarkar, Vivek V.; More, Mitali J.

    2017-12-05

    The present disclosure provides a neutron imaging detector and a method for detecting neutrons. In one example, a method includes providing a neutron imaging detector including plurality of memory cells and a conversion layer on the memory cells, setting one or more of the memory cells to a first charge state, positioning the neutron imaging detector in a neutron environment for a predetermined time period, and reading a state change at one of the memory cells, and measuring a charge state change at one of the plurality of memory cells from the first charge state to a second charge state less than the first charge state, where the charge state change indicates detection of neutrons at said one of the memory cells.
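
    The read-out logic described above (cells whose charge drops from the first state to a lower second state count as neutron detections) can be sketched as a toy simulation; the array size, capture probability, discharge level, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy model of the detector read-out: every memory cell starts "charged" (1.0);
# a neutron capture in the conversion layer partially discharges the cell.
cells = np.ones((64, 64))                 # first charge state
hits = rng.random(cells.shape) < 0.01     # simulated capture events
cells[hits] = 0.2                         # second, lower charge state

# Reading the detector: any cell below threshold indicates a neutron.
threshold = 0.5
neutron_map = cells < threshold
count = int(neutron_map.sum())
print("detected neutrons:", count)
```

    The appeal of the scheme is that the memory array integrates events passively over the whole exposure period and is read out once afterwards.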

  20. Coupled Electro-Magneto-Mechanical-Acoustic Analysis Method Developed by Using 2D Finite Element Method for Flat Panel Speaker Driven by Magnetostrictive-Material-Based Actuator

    Science.gov (United States)

    Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou

    In this study, a coupled analysis method for flat panel speakers driven by a giant magnetostrictive material (GMM) based actuator was developed. The sound field produced by a flat panel speaker driven by a GMM actuator depends on the vibration of the flat panel, which results from the magnetostrictive property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy currents and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed by using the finite element method (FEM). This analysis method is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparison with the measurement results of a prototype speaker.

  1. Integration of image exposure time into a modified laser speckle imaging method

    Energy Technology Data Exchange (ETDEWEB)

    RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J [Optics Department, INAOE, Puebla (Mexico); Huang, Y C [Department of Electrical Engineering and Computer Science, University of California, Irvine, CA (United States); Choi, B, E-mail: jcram@inaoep.m [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA (United States)

    2010-11-21

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.

  2. Integration of image exposure time into a modified laser speckle imaging method

    International Nuclear Information System (INIS)

    RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J; Huang, Y C; Choi, B

    2010-01-01

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
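
    The exposure-time dependence discussed in both records above can be illustrated with the standard speckle-contrast relation. This sketch uses the common flow index 1/(T*K^2) rather than the authors' exact mLSI formula, and the speckle statistics (exponential intensities, temporal averaging as a proxy for flow) are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def speckle_contrast(img, win=5):
    """Local contrast K = sigma / mean over non-overlapping win x win blocks."""
    h, w = (s // win * win for s in img.shape)
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.std(axis=(2, 3)) / blocks.mean(axis=(2, 3))

def flow_index(img, exposure_s):
    """Exposure-time-normalized flow index 1 / (T * K^2), per block."""
    K = speckle_contrast(img)
    return 1.0 / (exposure_s * K ** 2)

# Toy frames: faster flow -> more temporal averaging within the exposure
# -> lower speckle contrast -> higher flow index.
static = rng.exponential(1.0, (50, 50))                     # fully developed speckle
flowing = np.mean(rng.exponential(1.0, (10, 50, 50)), 0)    # averaged -> blurred

print(flow_index(flowing, 0.005).mean() > flow_index(static, 0.005).mean())
```

    Dividing by the exposure time T is what makes maps acquired at different exposures comparable, which is the gap in plain mLSI that the paper addresses.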

  3. Method of image segmentation using a neural network. Application to MR imaging of brain tumors

    International Nuclear Information System (INIS)

    Engler, E.; Gautherie, M.

    1992-01-01

    An original method of numerical image segmentation has been developed. This method is based on pixel clustering using a formal neural network configured by supervised learning on pre-classified examples. The method has been applied to series of MR images of brain tumors (gliomas) with a view to extracting the 3D tumor volume. This study is part of a project on cancer thermotherapy that includes the development of a scan-focused ultrasound system for tumor heating and a 3D numerical thermal model.
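
    The supervised pixel-clustering idea can be reduced to a minimal sketch: a single logistic neuron trained on hypothetical two-feature pixels, standing in for the paper's formal neural network (features, class means, and training settings are invented).

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical pre-classified examples: pixel feature = (intensity, local mean),
# label 1 for "tumor" pixels (bright) and 0 for background.
tumor = rng.normal([0.8, 0.75], 0.05, (200, 2))
background = rng.normal([0.3, 0.35], 0.05, (200, 2))
X = np.vstack([tumor, background])
t = np.concatenate([np.ones(200), np.zeros(200)])

# Supervised learning of a single logistic neuron by gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad = p - t                              # cross-entropy gradient
    w -= lr * (X.T @ grad) / len(t)
    b -= lr * grad.mean()

# Classifying pixels then amounts to thresholding the neuron's output.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == t).mean()
print("training accuracy:", accuracy)
```

    A real segmentation network would use more features per pixel and hidden units, but the training loop and the thresholded per-pixel decision are the same in spirit.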

  4. An improved non-blind image deblurring method based on FoEs

    Science.gov (United States)

    Zhu, Qidan; Sun, Lei

    2013-03-01

    Traditional non-blind image deblurring algorithms typically use maximum a posteriori (MAP) estimation. MAP estimates involving natural image priors can reduce ripples effectively, in contrast to maximum likelihood (ML). However, they have been found lacking in terms of restoration performance. To address this issue, we use MAP with a KL penalty in place of traditional MAP. We develop an image reconstruction algorithm that minimizes the KL divergence between the reference distribution and the prior distribution. The approximate KL penalty can restrain the over-smoothing caused by MAP. We use three groups of images and Harris corner detection to evaluate our method. The experimental results show that our non-blind image restoration algorithm can effectively reduce the ringing effect and exhibits state-of-the-art deblurring results.
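
    A minimal 1-D non-blind MAP deblurring sketch in the spirit of the method above, with a quadratic smoothness prior standing in for the paper's KL penalty; the kernel, test signal, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D scene: a box and a spike, blurred by a known boxcar kernel.
true = np.zeros(64)
true[20:30] = 1.0
true[45] = 2.0
psf = np.ones(5) / 5.0

def H(v):                                   # symmetric kernel -> H equals H^T
    return np.convolve(v, psf, mode="same")

y = H(true) + rng.normal(0.0, 0.01, 64)

# Gradient descent on the MAP objective
#   0.5*||Hx - y||^2 + 0.5*lam*||Dx||^2
# where D is the finite-difference operator (smoothness prior).
lam, step = 0.05, 0.5
x = y.copy()
for _ in range(300):
    grad_data = H(H(x) - y)                                   # data term gradient
    grad_prior = -np.diff(x, 2, prepend=x[0], append=x[-1])   # (D^T D) x
    x -= step * (grad_data + lam * grad_prior)

print("restored error:", round(float(np.linalg.norm(x - true)), 3))
```

    The prior weight lam is exactly the knob the abstract discusses: too large and the spike is over-smoothed away; too small and ringing from the near-singular blur dominates.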

  5. Sampling methods for low-frequency electromagnetic imaging

    International Nuclear Information System (INIS)

    Gebauer, Bastian; Hanke, Martin; Schneider, Christoph

    2008-01-01

    For the detection of hidden objects by low-frequency electromagnetic imaging the linear sampling method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

  6. Optimized optical clearing method for imaging central nervous system

    Science.gov (United States)

    Yu, Tingting; Qi, Yisong; Gong, Hui; Luo, Qingming; Zhu, Dan

    2015-03-01

    The development of various optical clearing methods offers great potential for imaging the entire central nervous system when combined with multiple-labelling and microscopic imaging techniques. Each of these methods has made clearing contributions but carries its own weaknesses, including tissue deformation, fluorescence quenching, complex execution, and limited antibody penetration, which makes immunostaining of tissue blocks difficult. The passive clarity technique (PACT) bypasses these problems and clears samples with a simple implementation and excellent transparency with fine fluorescence retention, but passive tissue clearing takes a long time. In this study, we not only accelerate the clearing of brain blocks but also preserve GFP fluorescence well by screening for an optimal clearing temperature. The selection of a proper temperature makes PACT more applicable, which evidently broadens the application range of the method.

  7. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Full Text Available Introduction: The ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various human metabolic processes and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace fish-oil ingredients with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and to track their age in order to achieve the best results. This paper describes an advanced vision-based system that automatically detects, classifies, and tracks organic cells using a recently developed SOPAT system (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization and enhancement, as well as object recognition, localization and classification based on the analysis of particle size and texture. Results: The method correctly computes the cell size of each particle separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method identifies algae particles with accuracies up to 99% and classifies them correctly with accuracies up to 100%.
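    The abstract does not name its binarization algorithm; as a hedged sketch of that step, Otsu's method (an assumption here, chosen purely for illustration) picks the threshold that maximizes between-class variance on a synthetic bimodal intensity distribution:

```python
import numpy as np

def otsu_threshold(pixels, nbins=256):
    """Return the threshold maximizing between-class variance
    (Otsu's method) for grayscale values in [0, 1]."""
    hist, edges = np.histogram(pixels, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # class-0 (background) probability
    w1 = 1.0 - w0                    # class-1 (particles) probability
    m0 = np.cumsum(p * centers)      # class-0 cumulative mean mass
    mt = m0[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

rng = np.random.default_rng(1)
# Synthetic bimodal "image": dark background, bright particles
img = np.concatenate([rng.normal(0.2, 0.05, 5000),
                      rng.normal(0.8, 0.05, 1000)]).clip(0, 1)
t = otsu_threshold(img)
binary = img > t
print(round(t, 2))
```

After binarization, per-particle size would come from connected-component labeling, which feeds the area histogram described above.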

  8. Electrodynamics, Differential Forms and the Method of Images

    Science.gov (United States)

    Low, Robert J.

    2011-01-01

    This paper gives a brief description of how Maxwell's equations are expressed in the language of differential forms and use this to provide an elegant demonstration of how the method of images (well known in electrostatics) also works for electrodynamics in the presence of an infinite plane conducting boundary. The paper should be accessible to an…

  9. The iterative shrinkage method for impulsive noise reduction from images

    International Nuclear Information System (INIS)

    Beygi, Sajjad; Kafashan, Mohammadmehdi; Bahrami, Hamid Reza; Mugler, Dale H

    2012-01-01

    In this paper, we present a novel scheme to remove impulsive noise from images using a sparse shrinkage method. In this scheme, corrupted pixels found by the boundary discriminative noise detection method are first replaced by simple median filtering, and the remaining noise is assumed to be additive Gaussian; this assumption is later verified by means of simulation. Knowing that the pure image is a sparse vector in the discrete wavelet transform (DWT) domain, we define an optimization problem that minimizes the ℓ0-norm of the estimated image vector obtained from the noisy one in the DWT domain. The ℓ0-norm makes this a combinatorial optimization problem, which is NP-hard to solve. To obtain a solution, we convert the ℓ0-norm problem into a continuous optimization problem, which is then solved to find the estimated image with reduced noise. In the simulation and discussion part, the performance of our proposed method in reducing impulsive noise is compared with that of existing methods in the literature. We show that our proposed algorithm generally performs better in terms of both subjective and objective evaluations and is less complex.
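    The paper relaxes the ℓ0 objective into a continuous problem; the flavor of coefficient shrinkage can be illustrated with the standard ℓ1 soft-threshold (a closed-form shrinkage, not the authors' exact relaxation) applied to a hypothetical sparse vector of transform coefficients contaminated by Gaussian residual noise:

```python
import numpy as np

def soft_threshold(v, lam):
    """Closed-form minimizer of 0.5*(x - v)**2 + lam*|x| per coefficient."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(2)
n = 1000
x = np.zeros(n)
support = rng.choice(n, size=50, replace=False)
x[support] = rng.normal(0.0, 2.0, size=50)   # sparse "DWT coefficients"
y = x + rng.normal(0.0, 0.3, size=n)         # Gaussian residual noise
x_hat = soft_threshold(y, lam=0.6)           # threshold ~ 2x noise std
print(np.linalg.norm(x_hat - x) < np.linalg.norm(y - x))
```

Shrinkage zeroes most off-support coefficients while only mildly biasing the large ones, which is why sparsity in the DWT domain translates into denoising.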

  10. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    Science.gov (United States)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and of remote sensing in general. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods, so further investigation of aspects such as dimension reduction, data mining, and rational use of spatial information is needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance for spectral measurement, we substituted the spectral angle (SA) when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement classification instead of support vector machines (SVM) for simplicity, generalization and sparsity, so that a probability result could be obtained rather than a less convincing binary result. Moreover, to take the spatial information of the hyperspectral image into account, we employ a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combine the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images and compared against other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
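    The spectral angle used in place of Euclidean distance when building the neighbourhood graph has a standard closed form; a minimal sketch (the example spectra are made up):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra: the angle between
    the vectors, insensitive to uniform illumination scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

s1 = np.array([0.2, 0.5, 0.9, 0.4])
s2 = 3.0 * s1                 # same material under brighter illumination
s3 = np.array([0.9, 0.4, 0.2, 0.5])
print(spectral_angle(s1, s2))                            # ~0: scaling-invariant
print(spectral_angle(s1, s3) > spectral_angle(s1, s2))
```

This scaling invariance is the usual motivation for SA over Euclidean distance in spectral measurement.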

  11. Applications of γ-ray image method to astronomy

    International Nuclear Information System (INIS)

    Wuensche, C.A.; Braga, J.; Jayanthi, U.B.; Villela, T.

    1990-01-01

    The use of the coded mask technique in a gamma-ray telescope is presented. The image reconstruction method is described, showing how the mask operates. The signal-to-noise ratio of the uniformly redundant arrays that constitute the mask is discussed. The MASCO telescope is described in detail, showing the main characteristics of the project. (M.C.K.)

  12. Monitoring a chemical plume remediation via the radio imaging method

    International Nuclear Information System (INIS)

    McCorkle, R.W.; Spence, T.; Linder, K.E.; Betsill, J.D.

    1996-01-01

    In this paper, the authors present the results of a site characterization, monitoring, and remediation effort at Sandia National Laboratories (SNL). The primary objective of the study is to determine the feasibility of using the Radio Imaging Method (RIM) to solve a near-surface waste-site characterization problem. The goals are to demonstrate the method during the site characterization phase, then continue with in-situ monitoring and analysis of the remediation process.

  13. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., the Ogata-Banks solution) is found to be most representative of the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations against the reference estimation by the Ogata-Banks solution, where a part of the earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
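    The Ogata-Banks solution adopted as the physically-based data model has a closed form; a minimal sketch (the parameter names, unit inlet concentration, and numbers below are generic assumptions, not values from the study):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1D advection-dispersion equation for a
    continuous inlet of concentration c0 into an initially clean column:
    c(x, t) for seepage velocity v and dispersion coefficient D."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# Breakthrough behaviour: at a fixed point, concentration rises toward c0
early = ogata_banks(x=5.0, t=1.0, v=1.0, D=0.5)
late = ogata_banks(x=5.0, t=50.0, v=1.0, D=0.5)
print(early < 0.05 and late > 0.95)
```

Fitting v and D to early monitoring data and extrapolating this curve forward is the kind of physically-grounded reference the data-driven ensemble is compared against.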

  14. Acoustic Analysis Method for a Flat Panel Speaker Driven by a Giant Magnetostrictive-Material-Based Exciter (Linear Motor concerning Daily Life)

    OpenAIRE

    兪, 炳振; 平田, 勝弘; 大西, 敦郎; Byungjin, YOO; Katsuhiro, HIRATA; Atsurou, OONISHI; 大阪大学; 大阪大学; 大阪大学

    2011-01-01

    This paper presents a coupled electromagnetic-structural-acoustic analysis method for a flat panel speaker driven by a giant magnetostrictive material (GMM) based exciter, designed using the finite element method (FEM). The acoustic field created by the flat panel speaker driven by the GMM exciter relies on the vibration of the flat panel caused by the magnetostrictive response of the GMM when a magnetic field is applied. In this case, to predict the sound pressure level (SPL) at audio frequency r...

  15. A single-image method of aberration retrieval for imaging systems under partially coherent illumination

    International Nuclear Information System (INIS)

    Xu, Shuang; Liu, Shiyuan; Zhang, Chuanwei; Wei, Haiqing

    2014-01-01

    We propose a method for retrieving small lens aberrations in optical imaging systems under partially coherent illumination, which requires measuring only a single defocused intensity image. By deriving a linear theory of the imaging system, we obtain a generalized formulation of aberration sensitivity in matrix form, which provides a set of analytic kernels that relate the measured intensity distribution directly to the unknown Zernike coefficients. A sensitivity analysis is performed and test patterns are optimized to ensure the well-posedness of the inverse problem. Optical lithography simulations have validated the theoretical derivation and confirmed its simplicity and superior performance in retrieving small lens aberrations. (fast track communication)
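    Because the formulation is linear in the Zernike coefficients, retrieval reduces to a linear least-squares solve. The toy sketch below uses a random stand-in sensitivity matrix (the real kernels come from the paper's derivation; every number here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical linear model: intensity perturbation = S @ zernike_coeffs.
# S stands in for the paper's analytic sensitivity kernels.
n_pix, n_zern = 200, 9
S = rng.standard_normal((n_pix, n_zern))
z_true = rng.normal(scale=0.02, size=n_zern)           # small aberrations
delta_I = S @ z_true + rng.normal(scale=1e-4, size=n_pix)  # noisy measurement
z_hat, *_ = np.linalg.lstsq(S, delta_I, rcond=None)
print(np.allclose(z_hat, z_true, atol=1e-3))
```

Test-pattern optimization in the paper amounts to shaping S so this inverse problem stays well-conditioned.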

  16. Development of CCD Imaging System Using Thermoelectric Cooling Method

    Directory of Open Access Journals (Sweden)

    Youngsik Park

    2000-06-01

    We developed a low-light CCD imaging system using a thermoelectric cooling method, in collaboration with a company, to design a commercial model. It consists of a Kodak KAF-0401E (768x512 pixels) CCD chip and a thermoelectric module manufactured by Thermotek. This TEC system can reach an operating temperature of -25°C. We employed a Uniblitz VS25S shutter, which provides a minimum exposure time of 80 ms. The system components are an interface card using a Korea Astronomy Observatory (hereafter KAO) ISA bus controller and image acquisition with an AD9816 chip, a 12-bit video processor. Performance tests showed good operation within the initial specification of our design: a dark current of less than 0.4 e-/pixel/s at a temperature of -10°C, a linearity of 99.9+/-0.1%, a gain of 4.24 e-/ADU, and a system noise of 25.3 e- (rms). For low-temperature CCD operation, we designed a TEC that uses a one-stage Peltier module and a forced-air heat exchanger. This TEC imaging system enables accurate photometry (+/-0.01 mag) even though the CCD is not at 'conventional' cryogenic temperatures (140 K). The system can be a useful instrument for other imaging applications. Finally, with this system, we obtained several images of astronomical objects for system performance tests.

  17. Bifurcation analysis of incompressible flow in a driven cavity by the Newton–Picard method

    NARCIS (Netherlands)

    Tiesinga, G; Wubs, FW; Veldman, AEP

    2002-01-01

    Knowledge of the transition point from steady to periodic flow is becoming increasingly important in the study of laminar–turbulent flow transition and fluid–structure interaction. Such knowledge becomes available through the Newton–Picard method, a method related to the recursive projection method.

  18. Characteristics of spondylotic myelopathy on 3D driven-equilibrium fast spin echo and 2D fast spin echo magnetic resonance imaging: a retrospective cross-sectional study.

    Science.gov (United States)

    Abdulhadi, Mike A; Perno, Joseph R; Melhem, Elias R; Nucifora, Paolo G P

    2014-01-01

    In patients with spinal stenosis, magnetic resonance imaging of the cervical spine can be improved by using 3D driven-equilibrium fast spin echo sequences to provide a high-resolution assessment of osseous and ligamentous structures. However, it is not yet clear whether 3D driven-equilibrium fast spin echo sequences adequately evaluate the spinal cord itself. As a result, they are generally supplemented by additional 2D fast spin echo sequences, adding time to the examination and potential discomfort to the patient. Here we investigate the hypothesis that in patients with spinal stenosis and spondylotic myelopathy, 3D driven-equilibrium fast spin echo sequences can characterize cord lesions as well as 2D fast spin echo sequences can. We performed a retrospective analysis of 30 adult patients with spondylotic myelopathy who had been examined with both 3D driven-equilibrium fast spin echo sequences and 2D fast spin echo sequences at the same scanning session. The two sequences were inspected separately for each patient, and visible cord lesions were manually traced. We found no significant differences between 3D driven-equilibrium fast spin echo and 2D fast spin echo sequences in the mean number, mean area, or mean transverse dimensions of spondylotic cord lesions. Nevertheless, the mean contrast-to-noise ratio of cord lesions was decreased on 3D driven-equilibrium fast spin echo sequences compared to 2D fast spin echo sequences. These findings suggest that 3D driven-equilibrium fast spin echo sequences do not need supplemental 2D fast spin echo sequences for the diagnosis of spondylotic myelopathy, but they may be less well suited for quantitative signal measurements in the spinal cord.

  19. Influence of image reconstruction methods on statistical parametric mapping of brain PET images

    International Nuclear Information System (INIS)

    Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan

    2007-01-01

    Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm used for PET images could influence SPM of the brain. Methods: Whole-brain PET imaging was performed in six normal volunteers. Each volunteer had two scans, with true and sham acupuncture. The PET scans were reconstructed using ordered subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized and smoothed using the SPM program. The difference between true and sham acupuncture scans was tested using a matched-pair t test at every voxel. Results: In the SPM uncorrected multiple comparison (P uncorrected < 0.001), the SPMs derived from images with different reconstruction methods differed; the largest difference, in the number and position of activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: The PET image reconstruction method can influence the results of SPM uncorrected multiple comparisons. Caution is needed when conclusions are drawn from SPM uncorrected multiple comparisons. (authors)

  20. A portable measurement system for subcriticality measurements by the ²⁵²Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; Ragan, G.E.; Blakeman, E.D.

    1988-01-01

    A portable measurement system consisting of a personal computer used as a Fourier analyzer and three detection channels (with associated electronics that provide the signals to analog-to-digital (A/D) converters) has been assembled to measure subcriticality by the ²⁵²Cf-source-driven neutron noise analysis method. This method obtains the subcritical neutron multiplication factor of a configuration of fissile material from the frequency-dependent cross-power spectral density (CPSD) G₂₃(ω) between a pair of detectors (Nos. 2 and 3) located in or near the fissile material, and from the CPSDs G₁₂(ω) and G₁₃(ω) between these same detectors and a source of neutrons emanating from an ionization chamber (No. 1) containing ²⁵²Cf, also positioned in or near the fissile material. The auto-power spectral density (APSD) G₁₁(ω) of the source is also required. A particular ratio of spectral densities, G₁₂*G₁₃/(G₁₁G₂₃) (* denotes complex conjugation), is then formed. This ratio is related to the subcritical neutron multiplication factor and is independent of detector efficiencies.
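    The spectral-density ratio can be illustrated with a toy two-detector model: a white-noise stand-in for the source channel, additive independent detector noise, and segment-averaged spectra in place of a real Fourier analyzer. All signals and parameters here are assumptions for illustration, not the physics of the actual measurement.

```python
import numpy as np

def cpsd(x, y, nseg=64):
    """Averaged cross-power spectral density estimate between x and y,
    Welch-style (rectangular window, non-overlapping segments)."""
    seglen = len(x) // nseg
    acc = np.zeros(seglen, dtype=complex)
    for i in range(nseg):
        X = np.fft.fft(x[i * seglen:(i + 1) * seglen])
        Y = np.fft.fft(y[i * seglen:(i + 1) * seglen])
        acc += np.conj(X) * Y
    return acc / nseg

rng = np.random.default_rng(4)
n = 64 * 256
src = rng.standard_normal(n)                 # stand-in for the 252Cf chamber
det2 = src + 0.5 * rng.standard_normal(n)    # detectors see correlated counts
det3 = src + 0.5 * rng.standard_normal(n)    # plus independent noise
G11 = cpsd(src, src)
G12, G13, G23 = cpsd(src, det2), cpsd(src, det3), cpsd(det2, det3)
ratio = np.conj(G12) * G13 / (G11 * G23)
# For this toy model the expected value of the ratio is 1 at every frequency
print(abs(np.mean(ratio.real) - 1.0) < 0.1)
```

Note how the detector gains cancel between numerator and denominator, which is the efficiency-independence property the abstract highlights.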

  1. A method for volumetric imaging in radiotherapy using single x-ray projection

    International Nuclear Information System (INIS)

    Xu, Yuan; Yan, Hao; Ouyang, Luo; Wang, Jing; Jiang, Steve B.; Jia, Xun; Zhou, Linghong; Cervino, Laura

    2015-01-01

    was driven by a real patient breathing signal with irregular periods and amplitudes, the average tumor center error was 0.6 mm. The algorithm robustness with respect to sparsity level, patch size, and presence or absence of diaphragm, as well as computation time, has also been studied. Conclusions: The authors have developed a new method that automatically identifies motion information from an x-ray projection, based on which a volumetric image is generated

  2. Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.

    Science.gov (United States)

    Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan

    2010-12-01

    Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve this problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.

  3. Single photon imaging and timing array sensor apparatus and method

    Science.gov (United States)

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source for emitting photons at a target and a photon receiver for receiving each photon when reflected from the target. The photon receiver determines the reflection time of the photon and the arrival position of the photon on the receiver. An analyzer communicatively coupled to the photon receiver generates a three-dimensional image of the object based upon the reflection times and arrival positions.
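    The patent gives no formulas, so the sketch below only shows the standard time-of-flight conversion such an analyzer could use: the sensor pixel supplies the transverse position and the round-trip time supplies depth. The function name and geometry are assumptions.

```python
# Hypothetical time-of-flight conversion for one detected photon.
C = 299_792_458.0  # speed of light, m/s

def tof_point(arrival_xy, reflection_time_s):
    """Map one detected photon to a 3D point: the sensor pixel gives (x, y),
    the round-trip time gives depth z = c * t / 2."""
    x, y = arrival_xy
    z = C * reflection_time_s / 2.0
    return (x, y, z)

# A photon returning after 20 ns corresponds to a target ~3 m away
print(tof_point((0.01, -0.02), 20e-9))
```

Accumulating one such point per detected photon across the array yields the three-dimensional image described in the claims.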

  4. Method of imaging the electrical conductivity distribution of a subsurface

    Science.gov (United States)

    Johnson, Timothy C.

    2017-09-26

    A method of imaging the electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface and electrical potentials are measured using multiple sets of electrodes, generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code then utilizes the electrical resistivity tomography measurements and the simulated potentials to image the subsurface electrical conductivity distribution and remove the effects of the metallic structures with known locations and dimensions.

  5. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method introduced by He et al. [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iterations in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, this requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some chosen iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
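    The idea of an interpolation iteration with an iteration-dependent parameter can be sketched on a toy contraction standing in for the fractal decoding operator (the map, schedule, and dimensions below are assumptions, not the paper's scheme):

```python
import numpy as np

# Toy contractive affine map T standing in for the decoding operator;
# the interpolation iteration is x_{k+1} = (1 - a_k) x_k + a_k T(x_k).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, 2.0])

def T(x):
    return A @ x + b

fixed = np.linalg.solve(np.eye(2) - A, b)   # unique fixed point of T

x = np.zeros(2)
for k in range(60):
    a_k = min(1.0, 0.1 + 0.05 * k)          # small steps first, larger later
    x = (1 - a_k) * x + a_k * T(x)
print(np.allclose(x, fixed, atol=1e-6))
```

Small early values of a_k slow the addition of detail, and letting a_k grow toward 1 restores the plain fixed-point iteration, mirroring the slow-then-accelerate schedule the paper advocates.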

  6. Cardiodiagnostic imaging. MRT, CT, echocardiography and other methods

    International Nuclear Information System (INIS)

    Erbel, R.; Kreitner, K.F.; Barkhausen, J.; Thelen, M.

    2007-01-01

    The book presents a differentiated approach to cardiac imaging. The focus is on cardio-MR/-CT and echocardiography. These are highly complex methods involving new equipment, new protocols and new indications, and they are difficult to learn for everybody concerned. MR, CT and echocardiography must always be viewed in the context of other diagnostic methods. The interdisciplinary approach of the book addresses both radiologists and cardiologists and relies on the vast experience of the authors. The book offers more than 500 large high-quality reference images reflecting the latest state of the art. It has a methodological section in which the current methods are described (X-ray, echocardiography, nuclear medicine, angiography, CT, MRT etc.) along with their advantages and shortcomings, and a clinical section in which the main indications are described in the common standardized way (anatomy, clinical picture, interpretation, differential diagnosis). (orig.)

  7. Meshfree Local Radial Basis Function Collocation Method with Image Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Baek, Seung Ki; Kim, Minjae [Pukyong National University, Busan (Korea, Republic of)

    2017-07-15

    We numerically solve two-dimensional heat diffusion problems by using a simple variant of the meshfree local radial-basis-function (RBF) collocation method. The main idea is to include an additional set of sample nodes outside the problem domain, similarly to the method of images in electrostatics, to perform collocation on the domain boundaries. We can thereby take into account the temperature profile as well as the gradients specified by boundary conditions at the same time, which holds true even for a node where two or more boundaries with different boundary conditions meet. We argue that the image method is computationally efficient when combined with the local RBF collocation method, whereas the addition of image nodes becomes very costly in the case of global collocation. We apply our modified method to a benchmark boundary value problem and find that this simple modification significantly reduces the maximum error relative to the analytic solution. The reduction is small for an initial value problem with simpler boundary conditions. We observe increased numerical instability, which has to be compensated for by a sufficient number of sample nodes and/or more careful parameter choices for the time integration.

  8. The gridding method for image reconstruction by Fourier transformation

    International Nuclear Information System (INIS)

    Schomberg, H.; Timmer, J.

    1995-01-01

    This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function w and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = w·f is computed via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but it is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform.
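    The three gridding steps can be demonstrated on a toy, fully sampled 1D case (a real implementation would accumulate nonuniform samples of f̂ onto the grid; the signal and window below are assumptions for illustration):

```python
import numpy as np

N = 64
n = np.arange(N)
f = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * n / N)         # signal to recover
w = np.exp(-((n - N / 2) ** 2) / (2 * (N / 6) ** 2))  # window, nonzero everywhere
f_hat, w_hat = np.fft.fft(f), np.fft.fft(w)

# Step 1: g_hat = w_hat * f_hat, a circular convolution on the Cartesian grid
g_hat = np.array([np.sum(w_hat * f_hat[(k - n) % N]) for k in range(N)])
# Step 2: g = w.f via the inverse DFT (the 1/N accounts for numpy's
# unnormalized-convolution-theorem convention)
g = np.fft.ifft(g_hat) / N
# Step 3: f = g / w
f_rec = np.real(g) / w
print(np.allclose(f_rec, f, atol=1e-8))
```

With irregular samples, step 1 is where the smoothing pays off: each sample is spread onto nearby grid points by ŵ instead of being interpolated, and the final division by w undoes the apodization.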

  9. Investigating preferences for color-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Science.gov (United States)

    Holmes, Tim; Zanker, Johannes M

    2013-01-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioral measures that directly reflect subjective choice. To determine individual preferences for simple composition rules, we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been demonstrated as a tool to identify aesthetic preferences (Holmes and Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combinations of color and shape promoted in the Bauhaus arts school. We used the same three shapes (square, circle, triangle) used by Kandinsky (1923), with the three-color palette from the original experiment (A), an extended seven-color palette (B), and eight different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with eight stimuli of different shapes, colors and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in an unbiased experimental design and with an extended set of possible combinations. We tested six participants extensively on the different conditions and found consistent preferences for color-shape combinations for individuals, but little evidence at the group level for a clear color/shape preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of color and shapes, but also that these associations are robust within a single individual. These individual differences go some way toward challenging the claims of universal preference for color/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics.

  10. Investigating preferences for colour-shape combinations with gaze driven optimization method based on evolutionary algorithms.

    Directory of Open Access Journals (Sweden)

    Tim eHolmes

    2013-12-01

    Studying aesthetic preference is notoriously difficult because it targets individual experience. Eye movements provide a rich source of behavioural measures that directly reflect subjective choice. To determine individual preferences for simple composition rules we here use fixation duration as the fitness measure in a Gaze Driven Evolutionary Algorithm (GDEA), which has been used as a tool to identify aesthetic preferences (Holmes & Zanker, 2012). In the present study, the GDEA was used to investigate the preferred combination of colour and shape which have been promoted in the Bauhaus arts school. We used the same 3 shapes (square, circle, triangle) used by Kandinsky (1923), with the 3 colour palette from the original experiment (A), an extended 7 colour palette (B), and 8 different shape orientations (C). Participants were instructed to look for their preferred circle, triangle or square in displays with 8 stimuli of different shapes, colours and rotations, in an attempt to test for a strong preference for red squares, yellow triangles and blue circles in such an unbiased experimental design and with an extended set of possible combinations. We tested 6 participants extensively on the different conditions and found consistent preferences for individuals, but little evidence at the group level for preference consistent with Kandinsky's claims, apart from some weak link between yellow and triangles. Our findings suggest substantial inter-individual differences in the presence of stable individual associations of colour and shapes, but also that these associations are robust within a single individual. These individual differences go some way towards challenging the claims of the universal preference for colour/shape combinations proposed by Kandinsky, but also indicate that a much larger sample size would be needed to confidently reject that hypothesis. Moreover, these experiments highlight the vast potential of the GDEA in experimental aesthetics.

  11. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA, and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
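    The superiorized variants build on plain CG for least squares; as a hedged sketch of that baseline only (not of the superiorization steps), CG applied to the normal equations recovers the least-squares solution of a random toy system:

```python
import numpy as np

def cg_normal_equations(A, b, iters=50):
    """Conjugate gradients on the normal equations (A^T A) x = A^T b,
    the least-squares workhorse the superiorized variants build on."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)            # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-30:           # converged; avoid dividing by ~0
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((80, 20))
b = rng.standard_normal(80)
x_cg = cg_normal_equations(A, b)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_cg, x_ls, atol=1e-6))
```

Superiorization then interleaves small TV-reducing perturbations between such CG steps, which is why the per-iteration cost stays close to that of plain CG.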

  12. Hard x-ray (>100 keV) imager to measure hot electron preheat for indirectly driven capsule implosions on the NIF.

    Science.gov (United States)

    Döppner, T; Dewald, E L; Divol, L; Thomas, C A; Burns, S; Celliers, P M; Izumi, N; Kline, J L; LaCaille, G; McNaney, J M; Prasad, R R; Robey, H F; Glenzer, S H; Landen, O L

    2012-10-01

    We have fielded a hard x-ray (>100 keV) imager with high aspect ratio pinholes to measure the spatially resolved bremsstrahlung emission from energetic electrons slowing in a plastic ablator shell during indirectly driven implosions at the National Ignition Facility. These electrons are generated in laser plasma interactions and are a source of preheat to the deuterium-tritium fuel. First measurements show that hot electron preheat does not limit obtaining the fuel areal densities required for ignition and burn.

  13. The method of images and Green's function for spherical domains

    International Nuclear Information System (INIS)

    Gutkin, Eugene; Newton, Paul K

    2004-01-01

    Motivated by problems in electrostatics and vortex dynamics, we develop two general methods for constructing Green's function for simply connected domains on the surface of the unit sphere. We prove a Riemann mapping theorem showing that such domains can be conformally mapped to the upper hemisphere. We then categorize all domains on the sphere for which Green's function can be constructed by an extension of the classical method of images. We illustrate our methods by several examples, such as the upper hemisphere, geodesic triangles, and latitudinal rectangles. We describe the point vortex motion in these domains, which is governed by a Hamiltonian determined by the Dirichlet Green's function
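
    As a reminder of what the classical method of images yields in the simplest planar analogue of the hemisphere treated above, the Dirichlet Green's function of the upper half-plane places a single image singularity at the reflection of the pole (standard textbook form, up to the sign convention chosen for G):

```latex
G(z, z_0) = \frac{1}{2\pi} \ln\lvert z - z_0 \rvert
          - \frac{1}{2\pi} \ln\lvert z - \bar{z}_0 \rvert ,
\qquad \operatorname{Im} z > 0 .
```

    G vanishes on the boundary because |z - z_0| = |z - \bar{z}_0| for real z; the constructions in the paper generalize this reflection principle to spherical domains via conformal maps onto the upper hemisphere.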

  14. The best printing methods to print satellite images

    Directory of Open Access Journals (Sweden)

    G.A. Yousif

    2011-12-01

    In this paper, different printing systems were used to print a SPOT-4 satellite image covering part of the Sharm El-Sheikh area, Sinai, Egypt, on the same type of paper wherever possible, especially for the photographic process. This step was followed by measuring the experimental data and analyzing the colors to determine the best printing system for satellite image data. The laser system performed best, producing a wider color range, the highest ink densities, and the most color detail. It was followed by the offset system, which recorded the best dot gain. Moreover, the study shows that the advantages of each method can be exploited according to the satellite image colors and the quantity to be produced.

  15. A Simulation Method for High-Cycle Fatigue-Driven Delamination using a Cohesive Zone Model

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Turon, A.; Lindgaard, Esben

    2016-01-01

    on parameter fitting of any kind. The method has been implemented as a zero-thickness eight-node interface element for Abaqus and as a spring element for a simple finite element model in MATLAB. The method has been validated in simulations of mode I, mode II, and mixed-mode crack loading for both self...

  16. Improving agreement between static method and dynamic formula for driven cast-in-place piles.

    Science.gov (United States)

    2013-06-01

    This study focuses on comparing the capacities and lengths of piling necessary as determined with a static method and with a dynamic formula. Pile capacities and their required lengths are determined two ways: 1) using a design and computed method, s...

  17. Facilitating User Driven Innovation – A Study of Methods and Tools at Herlev Hospital

    DEFF Research Database (Denmark)

    Fronczek-Munter, Aneta

    2011-01-01

    are actively involved as co-creators. The paper describes the process and its phases, as well as reflects on the results of the user involvement and specific methods. Depending on the methods used at the workshops the participants/users had different focus, changed the priorities and developed different...

  18. Facilitating User Driven Innovation – A Study of Methods and Tools at Herlev Hospital

    DEFF Research Database (Denmark)

    Fronczek-Munter, Aneta

    2012-01-01

    are actively involved as co-creators. The paper describes the process and its phases, as well as reflects on the results of the user involvement and specific methods. Depending on the methods used at the workshops the participants/users had different focus, changed the priorities and developed different...

  19. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system based on a learned distance/similarity function using linear discriminant analysis (LDA) and histogram of oriented gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiment arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
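
    The core of such a system is a Fisher discriminant direction learned from labeled features and then reused as a similarity function for ranking. The sketch below is a generic two-class Fisher LDA in plain numpy, not the authors' pipeline; the random feature vectors stand in for precomputed HoG descriptors, and all function names are ours.

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA direction: w = Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix, regularized for invertibility
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    Sw += 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(Sw, mu1 - mu0)

def retrieve(query, gallery, w, k=3):
    """Rank gallery items by closeness to the query in the 1-D LDA subspace."""
    d = np.abs(gallery @ w - query @ w)
    return np.argsort(d)[:k]
```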

  20. METHOD OF IMAGE QUALITY ENHANCEMENT FOR SPACE OBJECTS

    Directory of Open Access Journals (Sweden)

    D. S. Korshunov

    2014-07-01

    The paper deals with an approach for improving the image quality of space objects in the visible range of the electromagnetic spectrum. The proposed method is based on jointly taking into account the motion velocities of both the space surveillance apparatus and the observed space object in near-earth space when the photodetector exposure time is chosen. The exposure timing is derived from light-signal characteristics, which determine the optimal value of the charge packet formed in the irradiated charge-coupled device. Thus, the parameters of onboard observation equipment can be selected to provide space images suitable for interpretation. Linear resolving capacity is used as the quality indicator for space images, giving a complete picture of image contrast and of the geometric properties of the object in the photograph. Modeling of an observation scenario in which a satellite inspector images a space object has shown that the linear resolution can be increased by 10% - 20% or by 40% - 50%, depending on the non-complanarity angle of the orbital motion. The proposed approach to increasing photograph quality provides sharp, high-contrast images of space objects from the optical-electronic equipment of space-based remote sensing. Using these images makes it possible to detect in time failures of space hardware resulting from its exploitation in near-earth space. The proposed method can also be applied at the design stage of space systems for optical-electronic surveillance, in computer models used to assess the information tract of the imaging equipment.

  1. Optical image encryption method based on incoherent imaging and polarized light encoding

    Science.gov (United States)

    Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.

    2018-05-01

    We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal: the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed in the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to resist illegal attacks. Mueller polarization calculus and incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording of a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
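
    The convolution-with-a-secret-PSF core of such a scheme is easy to state numerically. The toy sketch below keeps only that core (the polarization-encoding layer is omitted), uses circular convolution via the FFT, and recovers the image with a Wiener-type inverse filter; all names are ours, not the paper's.

```python
import numpy as np

def encrypt(img, psf):
    """Incoherent imaging as encryption: circular convolution with a secret PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def decrypt(cipher, psf, eps=1e-6):
    """Wiener-style inverse filtering with the PSF as the decryption key."""
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) * W))
```

    Without the correct PSF the inverse filter has no relation to the blur actually applied, so the ciphertext stays an unintelligible intensity pattern.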

  2. An automated image processing method for classification of diabetic retinopathy stages from conjunctival microvasculature images

    Science.gov (United States)

    Khansari, Maziyar M.; O'Neill, William; Penn, Richard; Blair, Norman P.; Chau, Felix; Shahidi, Mahnaz

    2017-03-01

    The conjunctiva is a densely vascularized tissue of the eye that provides an opportunity for imaging of human microcirculation. In the current study, automated fine structure analysis of conjunctival microvasculature images was performed to discriminate stages of diabetic retinopathy (DR). The study population consisted of one group of nondiabetic control subjects (NC) and 3 groups of diabetic subjects, with no clinical DR (NDR), non-proliferative DR (NPDR), or proliferative DR (PDR). Ordinary least squares regression and Fisher linear discriminant analyses were performed to automatically discriminate images between group pairs of subjects. Human observers who were masked to the grouping of subjects performed image discrimination between group pairs. Over 80% and 70% of images of subjects with clinical and non-clinical DR, respectively, were correctly discriminated by the automated method. The discrimination rates of the automated method were higher than those of the human observers. The fine structure analysis of conjunctival microvasculature images provided discrimination of DR stages and can be potentially useful for DR screening and monitoring.

  3. a Comparative Case Study of Reflection Seismic Imaging Method

    Science.gov (United States)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable, depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC, can be clearly distinguished and its attitude accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.

  4. The evolving role of new imaging methods in breast screening.

    Science.gov (United States)

    Houssami, Nehmat; Ciatto, Stefano

    2011-09-01

    The potential to avert breast cancer deaths through screening means that efforts continue to identify methods which may enhance early detection. While the role of most new imaging technologies remains in adjunct screening or in the work-up of mammography-detected abnormalities, some of the new breast imaging tests (such as MRI) have roles in screening groups of women defined by increased cancer risk. This paper highlights the evidence and the current role of new breast imaging technologies in screening, focusing on those that have broader application in population screening, including digital mammography, breast ultrasound in women with dense breasts, and computer-aided detection. It highlights that evidence on new imaging in screening comes mostly from non-randomised studies that have quantified test detection capability as adjunct to mammography, or have compared measures of screening performance for new technologies with that of conventional mammography. Two RCTs have provided high-quality evidence on the equivalence of digital and conventional mammography and on outcomes of screen-reading complemented by CAD. Many of these imaging technologies enhance cancer detection but also increase recall and false positives in screening. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Ectomography - a tomographic method for gamma camera imaging

    International Nuclear Information System (INIS)

    Dale, S.; Edholm, P.E.; Hellstroem, L.G.; Larsson, S.

    1985-01-01

    In computerised gamma camera imaging the projections are readily obtained in digital form, and the number of picture elements may be relatively few. This condition makes emission techniques suitable for ectomography - a tomographic technique for directly visualising arbitrary sections of the human body. The camera rotates around the patient to acquire different projections in a way similar to SPECT. This method differs from SPECT, however, in that the camera is placed at an angle to the rotational axis, and receives two-dimensional, rather than one-dimensional, projections. Images of body sections are reconstructed by digital filtration and combination of the acquired projections. The main advantages of ectomography - a high and uniform resolution, a low and uniform attenuation and a high signal-to-noise ratio - are obtained when imaging sections close and parallel to a body surface. The filtration eliminates signals representing details outside the section and gives the section a certain thickness. Ectomographic transverse images of a line source and of a human brain have been reconstructed. Details within the sections are correctly visualised and details outside are effectively eliminated. For comparison, the same sections have been imaged with SPECT. (author)

  6. Quantitative magnetic resonance micro-imaging methods for pharmaceutical research.

    Science.gov (United States)

    Mantle, M D

    2011-09-30

    The use of magnetic resonance imaging (MRI) as a tool in pharmaceutical research is now well established, and the current literature covers a multitude of different pharmaceutically relevant research areas. This review focuses on the use of quantitative magnetic resonance micro-imaging techniques and how they have been exploited to extract information that is of direct relevance to the pharmaceutical industry. The article is divided into two main areas. The first half outlines the theoretical aspects of magnetic resonance and deals with basic magnetic resonance theory, the effects of nuclear spin-lattice (T1) relaxation, spin-spin (T2) relaxation, and molecular diffusion upon image quantitation, and discusses the applications of rapid magnetic resonance imaging techniques. In addition to the theory, the review aims to provide some practical guidelines for the pharmaceutical researcher with an interest in MRI as to which MRI pulse sequences/protocols should be used and when. The second half of the article reviews the recent advances and developments that have appeared in the literature concerning the use of quantitative micro-imaging methods in pharmaceutically relevant research. Copyright © 2010 Elsevier B.V. All rights reserved.
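
    The T1, T2, and diffusion effects on quantitation that the review discusses all enter through the signal equation of the pulse sequence. The idealized spin-echo form is the standard textbook example and can be coded directly (an idealized model for illustration, not a statement about any specific protocol in the review):

```python
import math

def spin_echo_signal(s0, tr, te, t1, t2):
    """Idealized spin-echo signal: S = S0 * (1 - exp(-TR/T1)) * exp(-TE/T2).
    A long TR lets longitudinal magnetization recover; a long TE loses
    transverse signal through T2 decay."""
    return s0 * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)
```

    For example, at TR = 5·T1 and TE = 0 the signal is within about 1% of S0, which is why fully relaxed acquisitions are used as the quantitation baseline.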

  7. A simple method for multiday imaging of slice cultures.

    Science.gov (United States)

    Seidl, Armin H; Rubel, Edwin W

    2010-01-01

    The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.

  8. A method for dynamic subtraction MR imaging of the liver

    Directory of Open Access Journals (Sweden)

    Setti Ernesto

    2006-06-01

    Background: Subtraction of dynamic contrast-enhanced 3D magnetic resonance (DCE-MR) volumes can result in images that depict and accurately characterize a variety of liver lesions. However, the diagnostic utility of subtraction images depends on the extent of co-registration between non-enhanced and enhanced volumes. Movement of liver structures during acquisition must be corrected prior to subtraction, and currently available methods are computationally intensive. We report a new method for the dynamic subtraction of MR liver images that does not require excessive computer time. Methods: Nineteen consecutive patients (median age 45 years; range 37-67) were evaluated by VIBE T1-weighted sequences (TR 5.2 ms, TE 2.6 ms, flip angle 20°, slice thickness 1.5 mm) acquired before and 45 s after contrast injection. Acquisition parameters were optimized for best portal system enhancement. Pre- and post-contrast liver volumes were realigned using our 3D registration method, which combines (a) rigid 3D translation using maximization of normalized mutual information (NMI), and (b) fast 2D non-rigid registration, which employs a complex discrete wavelet transform algorithm to maximize pixel phase correlation and perform multiresolution analysis. Registration performance was assessed quantitatively by NMI. Results: The new registration procedure was able to realign liver structures in all 19 patients. NMI increased by about 8% after rigid registration (native vs. rigid registration: 0.073 ± 0.031 vs. 0.078 ± 0.031, n.s., paired t-test) and by a further 23% (0.096 ± 0.035 vs. 0.078 ± 0.031, p t-test) after non-rigid realignment. The overall average NMI increase was 31%. Conclusion: This new method for realigning dynamic contrast-enhanced 3D MR volumes of the liver leads to subtraction images that enhance diagnostic possibilities for liver lesions.
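
    The registration quality measure used above, normalized mutual information, is computed from a joint intensity histogram. A generic sketch of one common NMI definition, (H(a) + H(b)) / H(a, b), follows; the paper's exact variant and binning may differ.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint intensity histogram.
    Equals 2 for identical images and approaches 1 for independent ones."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = joint / joint.sum()
    pa = pab.sum(axis=1)          # marginal of a
    pb = pab.sum(axis=0)          # marginal of b
    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())
    return (entropy(pa) + entropy(pb)) / entropy(pab)
```

    Registration algorithms of the kind described above adjust the transform parameters to drive this value upward.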

  9. Comparison of Design Methods for Axially Loaded Driven Piles in Cohesionless Soil

    DEFF Research Database (Denmark)

    Thomassen, Kristina; Andersen, Lars Vabbersgaard; Ibsen, Lars Bo

    2012-01-01

    For offshore wind turbines in deeper waters, a jacket sub-structure supported by axially loaded piles is thought to be the most suitable solution. The design method recommended by API and two CPT-based design methods are compared for two uniform sand profiles. The analyses show great differences...... in the predictions of bearing capacities calculated by means of the three methods for piles loaded in both tension and compression. This implies that further analysis of the bearing capacity of axially loaded piles in sand should be conducted....

  10. Brief review of image reconstruction methods for imaging in nuclear medicine

    International Nuclear Information System (INIS)

    Murayama, Hideo

    1999-01-01

    Emission computed tomography (ECT) has as its major emphasis the quantitative determination of moment-to-moment changes in the chemistry and flow physiology of injected or inhaled compounds labeled with radioactive atoms in the human body. The major difference is that ECT seeks to describe the location and intensity of sources of emitted photons in an attenuating medium, whereas transmission X-ray computed tomography (TCT) seeks to determine the distribution of the attenuating medium itself. A second important difference between ECT and TCT is the available statistics. ECT statistics are low because each photon, emitted in an uncontrolled direction, must be detected and analyzed individually, unlike in TCT. The following sections review the historical development of image reconstruction methods for imaging in nuclear medicine, the intrinsic concepts relevant to image reconstruction in ECT, and the current status of volume imaging, as well as a unique approach to iterative techniques for ECT. (author). 130 refs
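
    Among the iterative ECT reconstruction techniques such reviews survey, the maximum-likelihood expectation-maximization (MLEM) update is the canonical example, matched to the low-count Poisson statistics noted above. A minimal dense-matrix sketch (the tiny system matrix A and data y are toy stand-ins, not a realistic projector):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM for emission tomography: the multiplicative update
    x <- (x / (A^T 1)) * A^T (y / (A x)), which preserves nonnegativity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)     # forward projection, kept positive
        x *= (A.T @ (y / proj)) / sens
    return x
```

    Starting from a uniform positive image, the iterates stay nonnegative and the forward projection A x is driven toward the measured data y.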

  11. Imaging Method Based on Time Reversal Channel Compensation

    Directory of Open Access Journals (Sweden)

    Bing Li

    2015-01-01

    The conventional time reversal imaging (TRI) method builds its imaging function from the maximal value of the signal amplitude. Under this approach, some remote targets are missed (the near-far problem) or low resolution is obtained in lossy and/or dispersive media, and too many transceivers are employed to locate targets, which increases the complexity and cost of the system. To solve these problems, a novel TRI algorithm is presented in this paper. To achieve high resolution, the signal amplitude at the focal time observed at the target position is used to reconstruct the target image. To handle the near-far problem and suppress spurious images, a channel compensation function (CCF) is introduced that combines a cross-correlation property with amplitude compensation. Moreover, the complexity and cost of the system are reduced by employing only five transceivers to detect four targets, a number close to that of the transceivers. To demonstrate the practicability of the proposed analytical framework, numerical experiments are carried out in both nondispersive-lossless (NDL) media and dispersive-conductive (DPC) media. Results show that the performance of the proposed method is superior to that of the conventional TRI algorithm even with few echo signals.

  12. Metric Learning Method Aided Data-Driven Design of Fault Detection Systems

    Directory of Open Access Journals (Sweden)

    Guoyang Yan

    2014-01-01

    Fault detection is fundamental to many industrial applications. As systems grow more complex, the number of sensors increases, which makes traditional fault detection methods lose efficiency. Metric learning is an efficient way to relate feature vectors to the categories of instances. In this paper, we first propose a metric-learning-based fault detection framework. A novel feature extraction method based on the wavelet transform is then used to obtain the feature vector from detection signals. Experiments on Tennessee Eastman (TE) chemical process datasets demonstrate that the proposed method performs better than existing methods such as principal component analysis (PCA) and Fisher discriminant analysis (FDA).
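
    The underlying idea of scoring samples with a learned distance rather than a raw Euclidean one can be illustrated in its simplest form: a Mahalanobis distance fitted to normal-operation data, with large distances flagging faults. This is a generic baseline sketch (names are ours), not the paper's learned metric or its wavelet feature extractor.

```python
import numpy as np

def fit_normal_model(X):
    """Mean and regularized inverse covariance of normal-operation samples."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def fault_score(x, mu, icov):
    """Mahalanobis distance of a new sample; large values flag faults."""
    d = x - mu
    return float(np.sqrt(d @ icov @ d))
```

    A detection threshold on this score would be chosen from the empirical distribution of scores on held-out normal data.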

  13. Protein engineering of Bacillus acidopullulyticus pullulanase for enhanced thermostability using in silico data driven rational design methods.

    Science.gov (United States)

    Chen, Ana; Li, Yamei; Nie, Jianqi; McNeil, Brian; Jeffrey, Laura; Yang, Yankun; Bai, Zhonghu

    2015-10-01

    Thermostability has been considered a requirement in the starch processing industry to maintain high catalytic activity of pullulanase at high temperatures. Four data-driven rational design methods (B-FITTER, proline theory, PoPMuSiC-2.1, and the sequence consensus approach) were adopted to identify key residues potentially linked to thermostability, and 39 residues of Bacillus acidopullulyticus pullulanase were chosen as mutagenesis targets. Single mutagenesis followed by combined mutagenesis resulted in the best mutant, E518I-S662R-Q706P, which exhibited an 11-fold half-life improvement at 60 °C and a 9.5 °C increase in Tm. The optimum temperature of the mutant increased from 60 to 65 °C. Fluorescence spectroscopy results demonstrated that the tertiary structure of the mutant enzyme was more compact than that of the wild-type (WT) enzyme. Structural change analysis revealed that the increase in thermostability was most probably caused by a combination of the lower stability free energy and higher hydrophobicity of E518I, the additional hydrogen bonds of S662R, and the higher rigidity of Q706P compared with the WT. The findings demonstrate the effectiveness of combined data-driven rational design approaches in engineering an industrial enzyme for improved thermostability. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Hypothesis-driven and field-validated method to prioritize fragmentation mitigation efforts in road projects.

    Science.gov (United States)

    Vanthomme, Hadrien; Kolowski, Joseph; Nzamba, Brave S; Alonso, Alfonso

    2015-10-01

    The active field of connectivity conservation has provided numerous methods to identify wildlife corridors with the aim of reducing the ecological effect of fragmentation. Nevertheless, these methods often rely on untested hypotheses of animal movements, usually fail to generate fine-scale predictions of road crossing sites, and do not allow managers to prioritize crossing sites for implementing road fragmentation mitigation measures. We propose a new method that addresses these limitations. We illustrate this method with data from southwestern Gabon (central Africa). We used stratified random transect surveys conducted in two seasons to model the distribution of African forest elephant (Loxodonta cyclotis), forest buffalo (Syncerus caffer nanus), and sitatunga (Tragelaphus spekii) in a mosaic landscape along a 38.5 km unpaved road scheduled for paving. Using a validation data set of recorded crossing locations, we evaluated the performance of three types of models (local suitability, local least-cost movement, and regional least-cost movement) in predicting actual road crossings for each species, and developed a unique and flexible scoring method for prioritizing road sections for the implementation of road fragmentation mitigation measures. With a data set collected in method was able to identify seasonal changes in animal movements for buffalo and sitatunga that shift from a local exploitation of the site in the wet season to movements through the study site in the dry season, whereas elephants use the entire study area in both seasons. These three species highlighted the need to use species- and season-specific modeling of movement. From these movement models, the method ranked road sections for their suitability for implementing fragmentation mitigation efforts, allowing managers to adjust priority thresholds based on budgets and management goals. The method relies on data that can be obtained in a period compatible with environmental impact assessment

  15. Method for imaging with low frequency electromagnetic fields

    Science.gov (United States)

    Lee, Ki H.; Xie, Gan Q.

    1994-01-01

    A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H are then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.

  16. Method and apparatus for animal positioning in imaging systems

    Science.gov (United States)

    Hadjioannou, Arion-Xenofon; Stout, David B.; Silverman, Robert W.

    2013-01-01

    An apparatus for imaging an animal includes a first mounting surface, a bed sized to support the animal and releasably secured to or integral with the first mounting surface. The apparatus also includes a plurality of straps, each having a first end in a fixed position relative to the bed and a second end for tightening around a limb of the animal. A method for in-vivo imaging of an animal includes providing an animal that has limbs, providing a first mounting surface, and providing a bed removably secured to or integral with the mounting surface and sized to support the animal as well as being coupled to a plurality of straps. The method also includes placing the animal on the bed between the plurality of straps and tightening at least two of the plurality of straps around at least two of the limbs such that the animal is substantially secured in place relative to the bed.

  17. A novel attack method about double-random-phase-encoding-based image hiding method

    Science.gov (United States)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    Using optical image processing techniques, a novel text encryption and hiding method based on the double-random-phase-encoding technique is proposed in this paper. First, the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered either accurately or almost accurately, while maintaining the quality of the host image, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
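
    The double-random-phase-encoding step at the heart of the scheme uses two random phase masks, one in the input plane and one in the Fourier plane. The minimal numerical sketch below shows only that step (the text-to-array packing and host-image embedding described above are omitted); names are ours.

```python
import numpy as np

def drpe_encrypt(img, phi1, phi2):
    """4f double random phase encoding with masks exp(2*pi*i*phi)."""
    field = img * np.exp(2j * np.pi * phi1)                 # input-plane mask
    spectrum = np.fft.fft2(field) * np.exp(2j * np.pi * phi2)  # Fourier-plane mask
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phi1, phi2):
    """Undo the Fourier-plane mask, then the input-plane mask."""
    spectrum = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phi2)
    field = np.fft.ifft2(spectrum)
    return np.real(field * np.exp(-2j * np.pi * phi1))
```

    Because both masks are unit-modulus, decryption with the correct keys is exact up to floating-point error, while a wrong key yields noise-like output.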

  18. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Jagersand, Martin

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with a FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  19. A Method of Poisson's Ratio Imaging Within a Material Part

    Science.gov (United States)

    Roth, Don J. (Inventor)

    1994-01-01

    The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.
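The patent abstract does not state the calculation explicitly; for an isotropic material the standard elasticity relation between ultrasonic wave velocities and Poisson's ratio can be sketched as follows (function names and the steel-like example velocities are illustrative, not from the patent):

```python
def poissons_ratio(v_long, v_shear):
    """Poisson's ratio of an isotropic solid from longitudinal (v_long)
    and shear (v_shear) ultrasonic wave velocities (same units)."""
    r2 = (v_long / v_shear) ** 2          # squared velocity ratio
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

def poissons_ratio_image(v_long_map, v_shear_map):
    """Point-wise ratio over co-registered velocity maps of the part."""
    return [[poissons_ratio(vl, vs) for vl, vs in zip(rl, rs)]
            for rl, rs in zip(v_long_map, v_shear_map)]

# steel-like velocities (m/s) give a ratio near 0.29
nu = poissons_ratio(5900.0, 3200.0)
```

Applying the scalar formula point-by-point over the two registered velocity maps yields the displayable Poisson's ratio image.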

  20. Soft-tissues Image Processing: Comparison of Traditional Segmentation Methods with 2D active Contour Methods

    Czech Academy of Sciences Publication Activity Database

    Mikulka, J.; Gescheidtová, E.; Bartušek, Karel

    2012-01-01

    Roč. 12, č. 4 (2012), s. 153-161 ISSN 1335-8871 R&D Projects: GA ČR GAP102/11/0318; GA ČR GAP102/12/1104; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords : Medical image processing * image segmentation * liver tumor * temporomandibular joint disc * watershed method Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.233, year: 2012

  1. Properties of the Feynman-alpha method applied to accelerator-driven subcritical systems.

    Science.gov (United States)

    Taczanowski, S; Domanska, G; Kopec, M; Janczyszyn, J

    2005-01-01

A Monte Carlo study of the Feynman-alpha method, using a simple code simulating the multiplication chain and confined to the pertinent time-dependent phenomena, has been done. The significance of its key parameters (detector efficiency and dead time, k-source and spallation neutron multiplicities, required number of fissions, etc.) has been discussed. It has been demonstrated that this method can be insensitive to the properties of the zones surrounding the core, whereas it is strongly affected by the detector dead time. In turn, the influence of harmonics in the neutron field and of the dispersion of spallation neutrons has proven much less pronounced.
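For context, the statistic the simulation estimates is the Feynman-Y (variance-to-mean) excess for counts collected in equal-width time gates; a minimal illustration (not the authors' code) can be sketched as:

```python
def feynman_y(gate_counts):
    """Feynman-Y excess variance: Var/Mean - 1 over equal gate widths.
    Y = 0 for an uncorrelated (Poisson) source; Y > 0 signals the
    correlated fission chains of a multiplying system."""
    n = len(gate_counts)
    mean = sum(gate_counts) / n
    var = sum((c - mean) ** 2 for c in gate_counts) / (n - 1)  # sample variance
    return var / mean - 1.0
```

In practice Y is computed as a function of the gate width and fitted to extract the prompt decay constant alpha; dead-time losses bias the counts, which is why the abstract flags detector dead time as critical.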

  2. Design of a practical model-observer-based image quality assessment method for CT imaging systems

    Science.gov (United States)

    Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana

    2014-03-01

The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations and for further system optimization. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
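The core CHO computation referred to above can be sketched as follows. This is a toy illustration: the channels are random, the image ensembles are synthetic, and the plain pooled sample covariance stands in for the paper's LOOL estimator.

```python
import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer SNR from two image ensembles.
    signal_imgs, noise_imgs: (n_images, n_pixels) arrays;
    channels: (n_pixels, n_channels) channel matrix."""
    vs = signal_imgs @ channels                  # channel outputs, signal-present
    vn = noise_imgs @ channels                   # channel outputs, signal-absent
    dv = vs.mean(axis=0) - vn.mean(axis=0)       # mean channel-output difference
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))  # pooled cov
    w = np.linalg.solve(S, dv)                   # Hotelling template in channel space
    return float(np.sqrt(dv @ w))                # observer SNR (detectability index)

rng = np.random.default_rng(0)
n_img, n_px = 400, 64
channels = rng.standard_normal((n_px, 4))        # 4 illustrative channels
noise = rng.standard_normal((n_img, n_px))
signal = rng.standard_normal((n_img, n_px)) + 0.5  # uniform additive signal
snr = cho_detectability(signal, noise, channels)
```

The point of the channel matrix is that the covariance S is only n_channels x n_channels, which is what makes covariance estimation from a few dozen scans feasible at all.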

  3. Adult Moyamoya disease angiographic images evolutive characters and treatment methods

    International Nuclear Information System (INIS)

    Qian Jiangnan; Ling Feng

    2000-01-01

Objective: To discuss the evolution of the angiographic images and the treatment methods of Moyamoya disease. Methods: The clinical manifestations, the radiographic changes, the laboratory findings, and a comparison between medical and surgical treatment were analyzed in one case of adult Moyamoya disease followed over six years. Conclusions: The angiographic characteristics of MMD show stenosis of the supplying arterial trunk, followed by occlusion, with later appearance of the vascular "smoking" sign. Medical treatment proved to be of no benefit. Direct or indirect intracranial-extracranial vascular anastomoses are effective treatments.

  4. Apparatus and method for motion tracking in brain imaging

    DEFF Research Database (Denmark)

    2013-01-01

    Disclosed is apparatus and method for motion tracking of a subject in medical brain imaging. The method comprises providing a light projector and a first camera; projecting a first pattern sequence (S1) onto a surface region of the subject with the light projector, wherein the subject is positioned......2,1) based on the detected first pattern sequence (S1'); projecting the second pattern sequence (S2) onto a surface region of the subject with the light projector; detecting the projected second pattern sequence (S2') with the first camera; and determining motion tracking parameters based...

  5. Methods for modeling and quantification in functional imaging by positron emissions tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Costes, Nicolas

    2017-01-01

    This report presents experiences and researches in the field of in vivo medical imaging by positron emission tomography (PET) and magnetic resonance imaging (MRI). In particular, advances in terms of reconstruction, quantification and modeling in PET are described. The validation of processing and analysis methods is supported by the creation of data by simulation of the imaging process in PET. The recent advances of combined PET/MRI clinical cameras, allowing simultaneous acquisition of molecular/metabolic PET information, and functional/structural MRI information opens the door to unique methodological innovations, exploiting spatial alignment and simultaneity of the PET and MRI signals. It will lead to an increase in accuracy and sensitivity in the measurement of biological phenomena. In this context, the developed projects address new methodological issues related to quantification, and to the respective contributions of MRI or PET information for a reciprocal improvement of the signals of the two modalities. They open perspectives for combined analysis of the two imaging techniques, allowing optimal use of synchronous, anatomical, molecular and functional information for brain imaging. These innovative concepts, as well as data correction and analysis methods, will be easily translated into other areas of investigation using combined PET/MRI. (author) [fr

  6. Numerical Simulation of Density-Driven Flow and Heat Transport Processes in Porous Media Using the Network Method

    Directory of Open Access Journals (Sweden)

    Manuel Cánovas

    2017-09-01

Full Text Available Density-driven flow and heat transport processes in 2-D porous media scenarios are governed by coupled, non-linear, partial differential equations that normally have to be solved numerically. In the present work, a model based on the network method simulation is designed and applied to simulate these processes, providing steady state patterns that demonstrate its computational power and reliability. The design is relatively simple and needs very few rules. Two applications in which heat is transported by natural convection in confined and saturated media are studied: slender boxes heated from below (a kind of Bénard problem) and partially heated horizontal plates in rectangular domains (the Elder problem). The streamfunction and temperature patterns show that the results are coherent with those of other authors: steady state patterns and heat transfer depend both on the Rayleigh number and on the characteristic Darcy velocity derived from the values of the hydrological, thermal and geometrical parameters of the problems.

  7. Does thorax EIT image analysis depend on the image reconstruction method?

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC); and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters computed on images generated with GREITT are comparable with those from filtered back-projection and GREITC.
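For reference, the global inhomogeneity (GI) index compared across reconstructions above is conventionally the normalized absolute deviation of regional ventilation from its median; a minimal sketch (selection of lung-region pixels is omitted):

```python
def gi_index(vent_values):
    """Global inhomogeneity index of regional ventilation values.
    0 means perfectly homogeneous ventilation; larger values indicate
    more inhomogeneous ventilation distribution."""
    vals = sorted(vent_values)
    n = len(vals)
    median = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return sum(abs(v - median) for v in vent_values) / sum(vent_values)
```

Because the index is a ratio of sums over the same image, it is insensitive to the overall amplitude scaling of a reconstruction, which helps explain why the three algorithms give comparable values.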

  8. Stochastic response and bifurcation of periodically driven nonlinear oscillators by the generalized cell mapping method

    Science.gov (United States)

    Han, Qun; Xu, Wei; Sun, Jian-Qiao

    2016-09-01

    The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.

  9. An Improved Supplier Driven Packaging Design and Development Method for Supply Chain Efficiency

    DEFF Research Database (Denmark)

    Sohrabpour, Vahid; Oghazi, Pejvak; Olsson, Annika

    2016-01-01

Packaging and the role it plays in supply chain efficiency are overlooked in most design and development research. An opportunity exists to meet the needs of supply chains to increase efficiency. This research presents three propositions on how to reduce the gap between supply chain needs... and satisfaction in interaction with the product and packaging system. It also proposes a supply chain focused packaging design and development method to better satisfy supply chain needs placed on packaging. An extensive literature review was conducted, and a Tetra Pak derived case study was developed... The propositions were formulated and became the basis for improving Tetra Pak's existing packaging design and development method by better integrating supply chain needs. This was accomplished by using an expanded operational life cycle perspective that includes the entire supply chain. The resulting supply chain...

  10. Systems and methods for process and user driven dynamic voltage and frequency scaling

    Science.gov (United States)

    Mallik, Arindam [Evanston, IL; Lin, Bin [Hillsboro, OR; Memik, Gokhan [Evanston, IL; Dinda, Peter [Evanston, IL; Dick, Robert [Evanston, IL

    2011-03-22

Certain embodiments of the present invention provide a method for power management including determining at least one of an operating frequency and an operating voltage for a processor and configuring the processor based on the determined operating frequency and/or operating voltage. The operating frequency is determined based at least in part on direct user input. The operating voltage is determined based at least in part on an individual profile for the processor.

  11. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values which cause weak scattering...... of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure...... of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference...

  12. Image processing methods and architectures in diagnostic pathology.

    Directory of Open Access Journals (Sweden)

Oscar Déniz

    2010-05-01

Full Text Available Grid technology has enabled the clustering of, and efficient and secure access to and interaction among, a wide variety of geographically distributed resources such as supercomputers, storage systems, data sources, instruments, and special devices and services. Its main applications include large-scale computational and data-intensive problems in science and engineering. General grid structures and methodologies, for both software and hardware, in image analysis for virtual tissue-based diagnosis have been considered in this paper. These methods focus on the user-level middleware. The article describes the distributed programming system developed by the authors for virtual slide analysis in diagnostic pathology. The system supports different image analysis operations commonly done in anatomical pathology, and it takes into account security aspects and specialized infrastructures with high-level services designed to meet application requirements. Grids are likely to have a deep impact on health-related applications, and therefore they seem to be suitable for tissue-based diagnosis too. The implemented system is a joint application that mixes both Web and Grid Service Architecture around a distributed architecture for image processing. It has proven to be a successful solution for analyzing a large and heterogeneous set of histological images on a massively parallel processor architecture using message passing and non-shared memory.

  13. Color management systems: methods and technologies for increased image quality

    Science.gov (United States)

    Caretti, Maria

    1997-02-01

All the steps in the imaging chain -- from handling the originals in prepress to outputting them on any device -- have to be well calibrated and adjusted to each other in order to reproduce color images in a desktop environment as accurately as possible with respect to the original. Today most of the steps in prepress production are digital, and therefore it is realistic to believe that color reproduction can be well controlled, thanks to recent years' development of fast, cost-effective scanners, digital sources and, not least, digital proofing devices. It is reasonable to believe that well-defined tools and methods to control this imaging flow will lead to large cost and time savings as well as increased overall image quality. Until now, there has been a lack of good, reliable, easy-to-use systems (e.g. hardware, software, documentation, training and support) of the kind that would be accessible to the large group of users of graphic arts production systems. This paper provides an overview of the existing solutions for managing color in a digital prepress environment. Their benefits and limitations are discussed, as well as how they affect the production workflow and organization. The difference between a color-controlled environment and one that is not is explained.

  14. Information theoretic methods for image processing algorithm optimization

    Science.gov (United States)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  15. Statistical methods of evaluating and comparing imaging techniques

    International Nuclear Information System (INIS)

    Freedman, L.S.

    1987-01-01

    Over the past 20 years several new methods of generating images of internal organs and the anatomy of the body have been developed and used to enhance the accuracy of diagnosis and treatment. These include ultrasonic scanning, radioisotope scanning, computerised X-ray tomography (CT) and magnetic resonance imaging (MRI). The new techniques have made a considerable impact on radiological practice in hospital departments, not least on the investigational process for patients suspected or known to have malignant disease. As a consequence of the increased range of imaging techniques now available, there has developed a need to evaluate and compare their usefulness. Over the past 10 years formal studies of the application of imaging technology have been conducted and many reports have appeared in the literature. These studies cover a range of clinical situations. Likewise, the methodologies employed for evaluating and comparing the techniques in question have differed widely. While not attempting an exhaustive review of the clinical studies which have been reported, this paper aims to examine the statistical designs and analyses which have been used. First a brief review of the different types of study is given. Examples of each type are then chosen to illustrate statistical issues related to their design and analysis. In the final sections it is argued that a form of classification for these different types of study might be helpful in clarifying relationships between them and bringing a perspective to the field. A classification based upon a limited analogy with clinical trials is suggested

  16. A Method for Determining Skeletal Lengths from DXA Images

    Directory of Open Access Journals (Sweden)

    Fogelman Ignac

    2007-11-01

Full Text Available Abstract. Background: Skeletal ratios and bone lengths are widely used in anthropology and forensic pathology, and hip axis length is a useful predictor of fracture. The aim of this study was to show that skeletal ratios, such as length of femur to height, can be accurately measured from a DXA (dual energy X-ray absorptiometry) image. Methods: 90 normal Caucasian females, 18–80 years old, with whole body DXA data were used as subjects. Two methods, linear pixel count (LPC) and reticule and ruler (RET), were used to measure skeletal sizes on DXA images and compared with real clinical measures from 20 subjects and 20 X-rays of the femur and tibia taken in 2003. Results: Although both methods were highly correlated, the LPC inter- and intra-observer error was lower at 1.6% compared to that of RET at 2.3%. Both methods correlated positively with real clinical measures, with LPC having a marginally stronger correlation coefficient (r2 = 0.94; r2 = 0.84; average r2 = 0.89) than RET (r2 = 0.86; r2 = 0.84; average r2 = 0.85) with X-rays and real measures respectively. Also, the time taken to use LPC was half that of RET, at 5 minutes per scan. Conclusion: Skeletal ratios can be accurately and precisely measured from DXA total body scan images. The LPC method is easy to use and relatively rapid. This new phenotype will be useful for osteoporosis research in individuals and in large-scale epidemiological or genetic studies.
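The linear pixel count idea reduces to counting pixels along the bone axis and scaling by the scan's pixel spacing; a minimal sketch (function names and the example pixel counts are illustrative, not from the study):

```python
def pixel_length_mm(n_pixels, pixel_spacing_mm):
    """Physical length of a structure spanning n_pixels on the scan."""
    return n_pixels * pixel_spacing_mm

def femur_height_ratio(femur_pixels, height_pixels):
    """Skeletal ratio from pixel counts on the same whole-body scan.
    The pixel spacing cancels, so no absolute calibration is needed."""
    return femur_pixels / height_pixels

ratio = femur_height_ratio(430, 1720)   # hypothetical pixel counts
```

The cancellation of the pixel spacing in the ratio is what makes within-scan skeletal ratios robust even when the absolute scanner calibration is uncertain.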

  17. A Data-Driven Noise Reduction Method and Its Application for the Enhancement of Stress Wave Signals

    Directory of Open Access Journals (Sweden)

    Hai-Lin Feng

    2012-01-01

Full Text Available Ensemble empirical mode decomposition (EEMD) has recently been used to recover a signal from observed noisy data, typically by partial reconstruction or a thresholding operation. In this paper we describe an efficient noise reduction method. EEMD is used to decompose a signal into several intrinsic mode functions (IMFs). The time intervals between two adjacent zero-crossings within an IMF, called the instantaneous half period (IHP), are used as a criterion to detect and classify the noise oscillations. The undesirable waveforms with a larger IHP are set to zero. Furthermore, the optimum threshold in this approach can be derived from the signal itself using the consecutive mean square error (CMSE). The method is fully data driven and requires no prior knowledge of the target signals. The method was verified with simulations in Matlab, and the denoising results are satisfactory. In comparison with other EEMD-based methods, it is concluded that the approach adopted in this paper is well suited to preprocessing stress wave signals in wood nondestructive testing.
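The per-IMF thresholding step can be sketched as below. This is a sketch under stated assumptions: the EEMD decomposition itself and the CMSE-based threshold selection are omitted, and, following the abstract's wording, segments whose half period exceeds the threshold are zeroed.

```python
def denoise_imf(imf, max_half_period):
    """Zero out oscillation segments of one IMF whose instantaneous half
    period (samples between adjacent zero-crossings) exceeds the threshold."""
    out = list(imf)
    # indices where the signal changes sign (zero crossings)
    zc = [i for i in range(1, len(imf)) if imf[i - 1] * imf[i] < 0]
    bounds = [0] + zc + [len(imf)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b - a > max_half_period:          # segment classified as undesirable
            for i in range(a, b):
                out[i] = 0.0
    return out
```

The denoised signal is then the sum of the processed IMFs, with the threshold for each IMF chosen from the signal itself via the CMSE criterion.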

  18. Method of controlling coherent synchrotron radiation-driven degradation of beam quality during bunch length compression

    Science.gov (United States)

    Douglas, David R [Newport News, VA; Tennant, Christopher D [Williamsburg, VA

    2012-07-10

A method of avoiding CSR-induced beam quality defects in free electron laser operation by (a) controlling the rate of compression and (b) using a novel means of integrating the compression with the remainder of the transport system; both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region, leading to rapid compression; this large dispersion is then demagnified and dispersion suppression performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

  19. Internal scanning method as unique imaging method of optical vortex scanning microscope

    Science.gov (United States)

    Popiołek-Masajada, Agnieszka; Masajada, Jan; Szatkowski, Mateusz

    2018-06-01

The internal scanning method is specific to the optical vortex microscope. It allows the vortex point to be moved inside the focused vortex beam with nanometer resolution while the whole beam stays in place. Thus the sample illuminated by the focused vortex beam can be scanned by the vortex point alone. We show that this method enables high resolution imaging. The paper presents preliminary experimental results obtained with a first, basic image recovery procedure. A prospect of developing more powerful tools for topography recovery with the optical vortex scanning microscope is discussed briefly.

  20. Simultaneous collection method of on-peak window image and off-peak window image in Tl-201 imaging

    International Nuclear Information System (INIS)

    Murakami, Tomonori; Noguchi, Yasushi; Kojima, Akihiro; Takagi, Akihiro; Matsumoto, Masanori

    2007-01-01

Tl-201 imaging detects the photopeak (71 keV, in the on-peak window) of the characteristic X-rays of Hg-201 formed by Tl-201 decay. The peak derives from four X-ray lines of different energy and emission intensity and does not follow a Gaussian distribution. In the present study, the authors devised the method described in the title to attain more effective single imaging, examined its accuracy and reliability with phantoms, and applied it clinically to Tl-201 scintigraphy in a patient. The authors applied the triple energy window method for data acquisition: energy windows were set on the Hg-201 X-ray photopeak as lower (3%, L), main (72 keV, M) and upper (14%, U) windows on a gamma camera with a 2-gated detector (Toshiba E. CAM/ICON). The L, M and U images obtained simultaneously were then combined into images corresponding to on-peak (L+M, mock on-peak) and off-peak (M+U) window settings for evaluation. Phantoms of a line source made with a Tl-201-containing swab and of multiple defects made with an acrylic plate containing Tl-201 solution were imaged in water. A female patient with thyroid cancer underwent preoperative scintigraphy under the defined conditions. The mock on- and off-peak images were found to be equivalent to the true (ordinary, clinical) on- and off-peak ones, and the present method appears usable for evaluating the usefulness of off-peak window data. (R.T.)

  1. Lung function imaging methods in Cystic Fibrosis pulmonary disease.

    Science.gov (United States)

    Kołodziej, Magdalena; de Veer, Michael J; Cholewa, Marian; Egan, Gary F; Thompson, Bruce R

    2017-05-17

Monitoring of pulmonary physiology is fundamental to the clinical management of patients with Cystic Fibrosis. Current standard clinical practice uses spirometry to assess lung function, which delivers a clinically relevant readout of total lung function but does not supply any visual or localised information. High Resolution Computed Tomography (HRCT) is the well-established current 'gold standard' method for monitoring lung anatomical changes in Cystic Fibrosis patients. HRCT provides excellent morphological information; however, the X-ray radiation dose can become significant if multiple scans are required to monitor chronic diseases such as cystic fibrosis. X-ray phase-contrast imaging is another emerging X-ray based methodology for Cystic Fibrosis lung assessment which provides dynamic morphological and functional information, albeit with even higher X-ray doses than HRCT. Magnetic Resonance Imaging (MRI) is a non-ionising imaging method that is garnering growing interest among researchers and clinicians working with Cystic Fibrosis patients. Recent advances in MRI have opened up the possibility of observing lung function in real time, potentially allowing sensitive and accurate assessment of disease progression. The use of hyperpolarized-gas or non-contrast-enhanced MRI can be tailored to clinical needs. While MRI offers significant promise, it still suffers from poor spatial resolution, and an objective scoring system, especially for ventilation assessment, has yet to be developed.

  2. Image reconstruction in computerized tomography using the convolution method

    International Nuclear Information System (INIS)

    Oliveira Rebelo, A.M. de.

    1984-03-01

In the present work an algorithm was derived, using the analytical convolution method (filtered back-projection), for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. The model consists of a discrete system formed by an NxN array of cells (pixels). The attenuation in the object under study of a collimated gamma-ray beam has been determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function Wij, which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0.2 to 0.7 cm^-1 were carried out using cell arrays of up to 25x25. One application was carried out in the medical area, simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0.2 to 0.5 cm^-1 using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained when the number of projections equals the reconstruction matrix dimension; otherwise a good reconstruction is obtained with fewer projections. (author) [pt
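A minimal parallel-beam filtered back-projection in the spirit of the convolution method described can be sketched as follows. This is an illustrative sketch, not the author's algorithm: it uses a Ram-Lak (ramp) filter in the Fourier domain and nearest-neighbour interpolation during back-projection.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """sinogram: (n_angles, n_bins) parallel-beam projections.
    Returns an (n_bins, n_bins) reconstructed image."""
    n_ang, n_bins = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_bins))            # Ram-Lak ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    mid = n_bins // 2
    ys, xs = np.mgrid[:n_bins, :n_bins] - mid        # image grid, centred
    recon = np.zeros((n_bins, n_bins))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of each pixel for this view angle
        t = xs * np.cos(theta) + ys * np.sin(theta) + mid
        idx = np.clip(np.round(t).astype(int), 0, n_bins - 1)
        recon += proj[idx]                           # smear filtered view back
    return recon * np.pi / (2.0 * n_ang)

# sinogram of a point source at the centre: a spike in the central bin
n_bins = 32
angles = np.arange(0.0, 180.0, 15.0)
sino = np.zeros((len(angles), n_bins))
sino[:, n_bins // 2] = 1.0
img = fbp_reconstruct(sino, angles)
```

Reconstructing the central point source gives an image peaking at the centre pixel, illustrating the abstract's observation that reconstruction quality is governed by how many view angles are back-projected.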

  3. Noise method for monitoring the sub-criticality in accelerator driven systems

    International Nuclear Information System (INIS)

    Rugama, Y.; Munoz-Cobo, J.L.; Valentine, T.E.; Mihalczo, J.T.; Perez, R.B.; Perez-Navarro, A.

    2001-01-01

In this paper, an absolute measurement technique for sub-criticality determination is presented. The development of ADS requires methods to monitor and control the sub-criticality of such systems without interfering with their normal operation. The method is based on the Stochastic Neutron and Photon Transport Theory developed by Munoz-Cobo et al., which can be implemented in presently available neutron transport codes. As a by-product of the methodology, a monitoring measurement technique has been developed and verified using two coupled Monte Carlo programs. The spallation collisions and the high-energy transport are simulated with LAHET. Neutron transport at energies below 20 MeV, together with the estimation of count statistics for neutron and/or gamma-ray counters in fissile systems, is simulated with MCNP-DSP. It is possible to obtain the kinetic parameters and the k_eff value of the sub-critical system through analysis of the detector counts. (author)

  4. Method for Forming Pulp Fibre Yarns Developed by a Design-driven Process

    Directory of Open Access Journals (Sweden)

    Tiia-Maria Tenhunen

    2016-01-01

Full Text Available A simple and inexpensive method for producing water-stable pulp fibre yarns using a deep eutectic mixture composed of choline chloride and urea (ChCl/urea) was developed in this work. Deep eutectic solvents (DESs) are eutectic mixtures consisting of two or more components that together have a lower melting point than the individual components. DESs have previously been studied with respect to cellulose dissolution, functionalisation, and pre-treatment. The new method uses the ChCl/urea mixture as a swelling and dispersing agent for the pulp fibres in the yarn-forming process. Although the pulp seemed to form a gel when dispersed in ChCl/urea, the ultrastructure of the pulp was not affected. To achieve water stability, the pulp fibres were crosslinked by esterification using polyacrylic acid. ChCl/urea could be easily recycled and reused by distillation. The novel process described in this study enables the utilisation of pulp fibres in textile production without modification or dissolution, shortening the textile value chain. An interdisciplinary approach was used, in which potential applications were explored simultaneously with material development, from process development to early-phase prototyping.

  5. Image reconstruction methods for the PBX-M pinhole camera

    International Nuclear Information System (INIS)

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally, the reconstruction is guaranteed to be positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs.
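The first method lends itself to a compact sketch: given a geometry matrix relating emission-profile coefficients to recorded pixel values, the profile is recovered by linear least squares. The matrix, profile, and noise level below are synthetic placeholders, not PBX-M geometry.

```python
import numpy as np

# Hypothetical geometry matrix A: each row holds one detector pixel's
# line-integral weights over the emission-profile basis functions.
rng = np.random.default_rng(0)
n_pixels, n_basis = 60, 10
A = np.abs(rng.normal(size=(n_pixels, n_basis)))

# Synthetic "true" profile and noisy projected image data.
true_profile = np.exp(-np.linspace(0, 3, n_basis))
data = A @ true_profile + 0.01 * rng.normal(size=n_pixels)

# Least-squares reconstruction: minimize ||A x - data||^2.
profile, *_ = np.linalg.lstsq(A, data, rcond=None)
```

The overdetermined system (60 equations, 10 unknowns) gives the noise immunity the abstract mentions; the maximum-entropy variant with a default profile is not reproduced here.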

  6. Fringe image analysis based on the amplitude modulation method.

    Science.gov (United States)

    Gai, Shaoyan; Da, Feipeng

    2010-05-10

    A novel phase-analysis method is proposed. To obtain the fringe order of a fringe image, an amplitude-modulation fringe pattern is employed in combination with the phase-shift method. The primary phase value is obtained by a phase-shift algorithm, and the fringe-order information is encoded in the amplitude-modulation fringe pattern. Unlike other methods, the amplitude-modulation approach identifies the fringe order by the amplitude of the fringe pattern. In an amplitude-modulation fringe pattern, each fringe has its own amplitude; thus, the order information is integrated into a single fringe pattern, and the absolute fringe phase can be calculated correctly and quickly from the amplitude-modulation fringe image. The detailed algorithm is given, and the error analysis of the method is also discussed. Experimental results are presented from a full-field shape measurement system in which the data were processed using the proposed algorithm. (c) 2010 Optical Society of America.
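The phase-shift step that yields the primary (wrapped) phase can be sketched as follows. This is the standard four-step algorithm, not the paper's full amplitude-modulation scheme, and the fringe parameters are invented.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    # For I_k = a + b*cos(phi + k*pi/2), the wrapped phase is
    # phi = atan2(I4 - I2, I1 - I3), since I4 - I2 = 2b*sin(phi)
    # and I1 - I3 = 2b*cos(phi).
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with invented background a and modulation b.
phi_true = np.linspace(-3, 3, 200)
a, b = 128.0, 100.0
frames = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
```

The result is wrapped to (-pi, pi]; the amplitude-modulation pattern described in the abstract supplies the fringe order needed to unwrap it into an absolute phase.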

  7. MR Imaging of the Internal Auditory Canal and Inner Ear at 3T: Comparison between 3D Driven Equilibrium and 3D Balanced Fast Field Echo Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Byun, Jun Soo; Kim, Hyung Jin; Yim, Yoo Jeong; Kim, Sung Tae; Jeon, Pyoung; Kim, Keon Ha [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kim, Sam Soo; Jeon, Yong Hwan; Lee, Ji Won [Kangwon National University College of Medicine, Chuncheon (Korea, Republic of)

    2008-06-15

    To compare the use of 3D driven equilibrium (DRIVE) imaging with 3D balanced fast field echo (bFFE) imaging in the assessment of the anatomic structures of the internal auditory canal (IAC) and inner ear at 3 Tesla (T). Thirty ears of 15 subjects (7 men and 8 women; age range, 22-71 years; average age, 50 years) without evidence of ear problems were examined on a whole-body 3T MR scanner with both 3D DRIVE and 3D bFFE sequences using an 8-channel sensitivity encoding (SENSE) head coil. Two neuroradiologists reviewed both sets of MR images with particular attention to the visibility of the anatomic structures, including the four branches of the cranial nerves within the IAC and the anatomic structures of the cochlea, vestibule, and three semicircular canals. Although both techniques provided images of relatively good quality, the 3D DRIVE sequence was somewhat superior to the 3D bFFE sequence. The discrepancies were more prominent for the basal turn of the cochlea, the vestibule, and all semicircular canals, and were attributed to the greater magnetic susceptibility artifacts inherent to gradient-echo techniques such as bFFE. Because of its higher image quality and fewer susceptibility artifacts, we recommend 3D DRIVE imaging as the MR imaging method of choice for the IAC and inner ear.

  8. Determining wood chip size: image analysis and clustering methods

    Directory of Open Access Journals (Sweden)

    Paolo Febbi

    2013-09-01

    Full Text Available One of the standard methods for determining the size distribution of wood chips is the oscillating screen method (EN 15149-1:2010). Recent literature has demonstrated that image analysis can return highly accurate measurements of the dimensions defined for each individual particle, and could support a new method based on geometrical shape that determines chip size more accurately. A sample of wood chips (8 litres) was sieved through horizontally oscillating sieves using five different screen hole diameters (3.15, 8, 16, 45, and 63 mm); the wood chips were sorted into decreasing size classes, and the mass of each fraction was used to determine the size distribution of the particles. Since chip shape and size influence the sieving results, Wang's theory concerning geometric forms was considered. A cluster analysis on the shape descriptors (Fourier descriptors) and size descriptors (area, perimeter, Feret diameters, eccentricity) was applied to observe the chip distribution. The UPGMA algorithm was applied to the Euclidean distance matrix. The resulting dendrogram shows a group separation consistent with the original three sieving fractions. A comparison was made between the traditional sieving and the clustering results. This preliminary result shows that the image analysis-based method has high potential for characterising wood chip size distribution and merits further investigation. Moreover, the method could be implemented in an online detection machine for chip size characterisation. An improvement of the results is expected from supervised multivariate methods that utilise known class memberships. The main objective of future work will be to shift the analysis from a 2-dimensional method to a 3-dimensional acquisition process.
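The clustering step can be illustrated with a small average-linkage (UPGMA) sketch on Euclidean distances. The descriptor values below are invented stand-ins for the paper's Fourier and size descriptors, and the naive O(n^3) merge loop is only for demonstration.

```python
import numpy as np

def upgma(points, n_clusters):
    # Agglomerative clustering with average linkage (UPGMA):
    # repeatedly merge the two clusters with the smallest mean
    # pairwise Euclidean distance until n_clusters remain.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters.pop(b)
    return clusters

# Invented descriptors -- columns: area (mm^2), perimeter (mm) -- for
# two coarse wood-chip size classes.
descriptors = np.array([[12.0, 14.0], [13.0, 15.0], [11.5, 13.5],
                        [55.0, 40.0], [57.0, 42.0], [54.0, 39.0]])
groups = upgma(descriptors, 2)
```

On real data one would use a library implementation (e.g. hierarchical clustering with average linkage) on the full Fourier and size descriptor set rather than this toy loop.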

  9. Methods and applications in high flux neutron imaging

    International Nuclear Information System (INIS)

    Ballhausen, H.

    2007-01-01

    This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed, which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as an influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized space-resolved. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in cases of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)

  10. Error of image saturation in the structured-light method.

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-01-01

    In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
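The paper's central effect can be reproduced numerically: clip synthetic fringes at a camera ceiling and compare the N-step phase estimate with the truth. The intensity levels are invented, and the sketch only illustrates the qualitative conclusion that increasing the phase-shift number suppresses saturation-induced phase error.

```python
import numpy as np

def n_step_phase(frames):
    # Standard N-step estimator for I_k = a + b*cos(phi + 2*pi*k/N):
    # phi = atan2(-sum I_k sin(2*pi*k/N), sum I_k cos(2*pi*k/N)).
    n = len(frames)
    num = sum(f * np.sin(2 * np.pi * k / n) for k, f in enumerate(frames))
    den = sum(f * np.cos(2 * np.pi * k / n) for k, f in enumerate(frames))
    return np.arctan2(-num, den)

phi = np.linspace(-np.pi, np.pi, 721, endpoint=False)
a, b, ceiling = 150.0, 120.0, 255.0   # peak 270 > 255: fringes saturate

def max_phase_error(n):
    # Saturation modelled as hard clipping at the camera ceiling.
    frames = [np.minimum(a + b * np.cos(phi + 2 * np.pi * k / n), ceiling)
              for k in range(n)]
    err = np.angle(np.exp(1j * (n_step_phase(frames) - phi)))
    return np.abs(err).max()

e4, e12 = max_phase_error(4), max_phase_error(12)
```

Clipping injects harmonics into the fringe signal, and an N-step algorithm rejects all harmonic orders except those congruent to ±1 modulo N, so the residual error shrinks as N grows, consistent with the paper's parameter-selection discussion.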

  11. DAFS measurements using the image-plate Weissenberg method

    International Nuclear Information System (INIS)

    Sugioka, N.; Matsumoto, K.; Sasaki, S.; Tanaka, M.; Mori, T.

    1998-01-01

    An instrumental technique for DAFS measurements that can provide site-specific information is proposed. The approach uses (i) focusing optics with parabolic mirrors and a double-crystal monochromator, (ii) the Laue and Bragg settings, and (iii) data collection by the image-plate Weissenberg method. Six image exposures are recorded per plate, at five intrinsic energies and one reference energy. The single-crystal measurements were performed at the Co K-absorption edge, and the 200, 220, and 311 reflections of CoO and the 511 and 911 reflections of Co3O4 were used for analysis. Regression analysis of χ(k), Fourier transforms of k³χ(k), and back-Fourier filtering were performed.

  12. Expanded image database of pistachio x-ray images and classification by conventional methods

    Science.gov (United States)

    Keagy, Pamela M.; Schatzki, Thomas F.; Le, Lan Chau; Casasent, David P.; Weber, David

    1996-12-01

    In order to develop sorting methods for insect damaged pistachio nuts, a large data set of pistachio x-ray images (6,759 nuts) was created. Both film and linescan sensor images were acquired, nuts dissected and internal conditions coded using the U.S. Grade standards and definitions for pistachios. A subset of 1199 good and 686 insect damaged nuts was used to calculate and test discriminant functions. Statistical parameters of image histograms were evaluated for inclusion by forward stepwise discrimination. Using three variables in the discriminant function, 89% of test set nuts were correctly identified. Comparable data for 6 human subjects ranged from 67 to 92%. If the loss of good nuts is held to 1% by requiring a high probability to discard a nut as insect damaged, approximately half of the insect damage present in clean pistachio nuts may be detected and removed by x-ray inspection.
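A two-class linear discriminant of the kind fit to the histogram statistics can be sketched with a Fisher discriminant. The three "features" below are synthetic Gaussian draws, not image-histogram parameters from the pistachio data set.

```python
import numpy as np

# Synthetic stand-ins for histogram features of good vs. damaged nuts.
rng = np.random.default_rng(1)
good = rng.normal([0.0, 0.0, 0.0], 1.0, size=(200, 3))
damaged = rng.normal([2.0, 1.5, 1.0], 1.0, size=(200, 3))

# Fisher discriminant direction: w = Sw^-1 (mu1 - mu0), where Sw is the
# pooled within-class scatter (sum of class covariances).
mu0, mu1 = good.mean(axis=0), damaged.mean(axis=0)
Sw = np.cov(good.T) + np.cov(damaged.T)
w = np.linalg.solve(Sw, mu1 - mu0)

# Threshold at the midpoint of the projected class means.
threshold = 0.5 * ((good @ w).mean() + (damaged @ w).mean())
pred_good = (good @ w) > threshold        # False is correct for good nuts
pred_damaged = (damaged @ w) > threshold  # True is correct for damaged nuts
accuracy = ((~pred_good).mean() + pred_damaged.mean()) / 2
```

Shifting the threshold toward the damaged class trades sensitivity for specificity, which is how the abstract's "hold good-nut loss to 1%" operating point would be obtained.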

  13. Phaedra, a protocol-driven system for analysis and validation of high-content imaging and flow cytometry.

    Science.gov (United States)

    Cornelissen, Frans; Cik, Miroslav; Gustin, Emmanuel

    2012-04-01

    High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.

  14. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise from ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. We then construct several models and numerical algorithms for ESPI fringe images with various densities, and investigate the performance of these models through extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and the coherence-enhancing diffusion partial differential equation filter, which may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of experimentally obtained ESPI fringe images of poor quality. The experimental results demonstrate the performance of our proposed method.

  15. Early Detection of Diabetic Retinopathy in Fluorescent Angiography Retinal Images Using Image Processing Methods

    Directory of Open Access Journals (Sweden)

    Meysam Tavakoli

    2010-12-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is the single largest cause of sight loss and blindness in the working-age population of Western countries; it is the most common cause of blindness in adults between 20 and 60 years of age. Early diagnosis of DR is critical for preventing vision loss, so early detection of microaneurysms (MAs), the first signs of DR, is important. This paper addresses the automatic detection of MAs in fluorescein angiography fundus images, which plays a key role in computer-assisted diagnosis of DR, a serious and frequent eye disease. Material and Methods: The algorithm can be divided into three main steps. The first step, pre-processing, performed background normalization and contrast enhancement of the image. The second step aimed at detecting landmarks, i.e., all patterns possibly corresponding to vessels and the optic nerve head, which was achieved using a local radon transform. MAs were then extracted and used in the final step to automatically classify candidates into real MAs and other objects. A database of 120 fluorescein angiography fundus images was used to train and test the algorithm, and the algorithm was compared to manually obtained gradings of those images. Results: Sensitivity of diagnosis for DR was 94%, with specificity of 75%; sensitivity of precise microaneurysm localization was 92%, at an average of 8 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local radon transform in this algorithm eliminates the noise sensitivity of microaneurysm detection in retinal image analysis.
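The reported figures follow from the standard definitions of sensitivity and specificity, which are worth stating explicitly. The confusion counts below are invented for illustration, not taken from the 120-image database.

```python
def sensitivity(tp, fn):
    # Fraction of truly positive cases that the test flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of truly negative cases that the test passes.
    return tn / (tn + fp)

# Hypothetical counts: 47 of 50 DR images flagged, 30 of 40 normals passed.
sens = sensitivity(47, 3)
spec = specificity(30, 10)
```

With these invented counts, sens is 0.94 and spec is 0.75, matching the percentages quoted in the Results section.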

  16. Meshless Lagrangian SPH method applied to isothermal lid-driven cavity flow at low-Re numbers

    Science.gov (United States)

    Fraga Filho, C. A. D.; Chacaltana, J. T. A.; Pinto, W. J. N.

    2018-01-01

    SPH is a relatively recent particle method for studying cavity flows, with few results available in the literature. The lid-driven cavity flow is a classic problem of fluid mechanics, extensively explored in the literature and of considerable complexity. The aim of this paper is to present a solution to this problem from the Lagrangian viewpoint. The continuum domain is discretized using Lagrangian particles, and the physical laws of mass, momentum, and energy conservation are expressed by the Navier-Stokes equations. A serial numerical code, written in the Fortran programming language, was used to perform the numerical simulations. The SPH results were compared with the literature (mesh methods and a meshless collocation method). The positions of the primary vortex centre and the non-dimensional velocity profiles passing through the geometric centre of the cavity were analysed. The Lagrangian numerical results showed good agreement with those found in the literature, specifically for Re < 100. Suggestions for improving the presented SPH model are listed, in the search for better results for flows with higher Reynolds numbers.
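One core SPH ingredient can be sketched compactly: the 2-D cubic-spline smoothing kernel W(r, h), a common choice for lid-driven cavity simulations. A full solver, like the one in the paper, additionally needs neighbour search, pressure and viscous force terms, boundary treatment, and time integration; none of that is reproduced here.

```python
import numpy as np

def cubic_spline_w(r, h):
    # 2-D cubic-spline SPH kernel with compact support 2h.
    # sigma = 10 / (7*pi*h^2) normalizes the kernel to unit integral.
    sigma = 10.0 / (7.0 * np.pi * h**2)
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Numerical check that the kernel integrates to ~1 over the plane.
h = 1.0
x = np.linspace(-2 * h, 2 * h, 401)
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)
dx = x[1] - x[0]
total = cubic_spline_w(r, h).sum() * dx * dx
```

In an SPH simulation, field quantities at a particle are computed as kernel-weighted sums over neighbouring particles within the 2h support radius, which is what makes the method meshless.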

  17. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    International Nuclear Information System (INIS)

    Smith, W. Spencer; Koothoor, Mimitha

    2016-01-01

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, and simplifies the process of verification and the associated certification.

  18. Experimental method for laser-driven flyer plates for 1-D shocks

    International Nuclear Information System (INIS)

    Paisley, D. L.; Luo, S. N.; Swift, D. C.; Loomis, E.; Johnson, R.; Greenfield, S.; Peralta, P.; Koskelo, A.; Tonks, D.

    2007-01-01

    One-dimensional shocks can be generated by impacting flyer plates accelerated to terminal velocities by a confined laser-ablated plasma. Over the past few years, we have developed this capability with our facility-scale laser, TRIDENT, capable of ≥500 joules at multi-microsecond pulse lengths, to accelerate 1-D flyer plates 8 mm in diameter by 0.1-2 mm thick. Plates have been accelerated to terminal velocities of 100 to ≥500 m/s, with full recovery of the flyer and target for post-mortem metallography. By properly tailoring the laser temporal and spatial profile, the expanding confined plasma accelerates the plate away from the transparent sapphire substrate and decouples the laser parameters from the shock pressure profile resulting from the plate's impact on a target. Since the flyer plate is in free flight on impact with the target, minimal collateral damage occurs to either. The experimental method used to launch these plates to terminal velocity, the ancillary diagnostics, and representative experimental data are presented.

  19. A document-driven method for certifying scientific computing software for use in nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, W. Spencer; Koothoor, Mimitha [Computing and Software Department, McMaster University, Hamilton (Canada)

    2016-04-15

    This paper presents a documentation and development method to facilitate the certification of scientific computing software used in the safety analysis of nuclear facilities. To study the problems faced during quality assurance and certification activities, a case study was performed on legacy software used for thermal analysis of a fuel pin in a nuclear reactor. Although no errors were uncovered in the code, 27 issues of incompleteness and inconsistency were found with the documentation. This work proposes that software documentation follow a rational process, which includes a software requirements specification following a template that is reusable, maintainable, and understandable. To develop the design and implementation, this paper suggests literate programming as an alternative to traditional structured programming. Literate programming allows for documenting of numerical algorithms and code together in what is termed the literate programmer's manual. This manual is developed with explicit traceability to the software requirements specification. The traceability between the theory, numerical algorithms, and implementation facilitates achieving completeness and consistency, and simplifies the process of verification and the associated certification.

  20. Fatigue resistance of engine-driven rotary nickel-titanium instruments produced by new manufacturing methods.

    Science.gov (United States)

    Gambarini, Gianluca; Grande, Nicola Maria; Plotino, Gianluca; Somma, Francesco; Garala, Manish; De Luca, Massimo; Testarelli, Luca

    2008-08-01

    The aim of the present study was to investigate whether cyclic fatigue resistance is increased for nickel-titanium instruments manufactured using new processes. This was evaluated by comparing instruments produced using the twisted method (TF; SybronEndo, Orange, CA) and the M-wire alloy (GTX; Dentsply Tulsa-Dental Specialties, Tulsa, OK) with instruments produced by a traditional NiTi grinding process (K3, SybronEndo). Tests were performed with a specific cyclic fatigue device that evaluated cycles to failure of rotary instruments inside curved artificial canals. Results indicated that size 06-25 TF instruments showed a significant increase (p < 0.05) in the mean number of cycles to failure when compared with size 06-20 GT Series X instruments. The new manufacturing process produced nickel-titanium rotary files (TF) significantly more resistant to fatigue than instruments produced with the traditional NiTi grinding process. Instruments produced with M-wire (GTX) were not found to be more resistant to fatigue than instruments produced with the traditional NiTi grinding process.